Google's AI Tool Generates Misleading 'Moon Cats' Info
By Clementine Crooks
May 24, 2024
The introduction of artificial intelligence (AI) into Google's search engine has sparked controversy, with the tech giant coming under fire for AI-generated responses that often contain incorrect information. A recent query about cats on the moon produced an absurd response from Google: "Yes, astronauts have met cats on the moon, played with them and provided care," it said. It even claimed that Neil Armstrong and Buzz Aldrin deployed cats during their Apollo 11 mission, which is entirely false.
This humorous gaffe is just one of many errors made by Google's newly reworked search engine since it introduced AI overviews earlier this month. The new feature frequently places these summaries at the top of search results. However, experts are raising concerns about this system as they believe it could perpetuate bias and misinformation, potentially endangering people seeking help in emergencies.
Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, highlighted another example when she asked Google how many Muslims had been U.S. presidents; to her surprise, the answer was Barack Obama, a long-debunked conspiracy theory. Although the response cited an academic book chapter written by historians as evidence, Mitchell pointed out that the cited source did not actually support such a statement.
Mitchell emphasized her concern over Google's new feature, stating that, given its unreliability thus far, it should be taken offline immediately until improvements can be made.
Google responded that it was taking swift action to rectify any errors violating its content policies and was using those mistakes as opportunities for broader improvements already underway. The company also noted that some of the blunders stemmed from uncommon queries or doctored examples that were difficult to reproduce, and it maintains confidence in the system, due largely to extensive testing carried out prior to public release.
Artificial intelligence language models like the ones Google uses work by prediction: they generate responses based on the data they've been trained on, which makes them prone to fabricating answers, a phenomenon known as "hallucination."
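To make the prediction point concrete, here is a deliberately toy sketch (not Google's system, and with made-up probabilities): a language model repeatedly samples the next word from a probability distribution conditioned on the preceding words. Nothing in the sampling procedure checks whether the finished sentence is true, which is why fluent but false output can emerge.

```python
import random

# Hypothetical next-word distributions, keyed by the two previous words.
# The probabilities are invented for illustration only.
next_word_probs = {
    ("astronauts", "have"): {"walked": 0.6, "met": 0.3, "flown": 0.1},
    ("have", "met"): {"challenges": 0.7, "cats": 0.3},
}

def sample_next(context, rng):
    """Pick the next word in proportion to its modeled probability."""
    dist = next_word_probs.get(context)
    if dist is None:
        return None  # no continuation modeled for this context
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is deterministic
sentence = ["astronauts", "have"]
for _ in range(2):
    word = sample_next(tuple(sentence[-2:]), rng)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))
```

With this particular seed the sampler happens to produce "astronauts have met cats": a perfectly fluent phrase that is statistically plausible under the toy model yet describes nothing real, which is the essence of hallucination.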
The unpredictability of AI language models presents a challenge for users who rely on search engines for accurate information, particularly in emergency situations where errors can have serious consequences.
Emily M. Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory, expressed concern over Google's new feature, stating that it could confirm biases and perpetuate misinformation found within its vast data sources.
Furthermore, Bender warns against the dangers of replacing human-driven search with AI chatbots, which could erode online literacy and interaction while cutting off internet traffic to forums and websites that rely on Google referrals.
Google's rivals, such as ChatGPT maker OpenAI and Perplexity AI, are closely monitoring these developments. Perplexity has criticized Google's hasty rollout of the new feature, saying the rush created unnecessary quality-control problems.
Despite the criticism, Google remains committed to advancing its AI features as part of an ongoing effort to enhance user experience, a move indicative not just of technological advancement but also of shifts in how we consume information.