Liz Reid, head of Google Search, acknowledged in a blog post that the company’s search engine returned some “weird, inaccurate or unhelpful AI summaries” after the feature was rolled out across the U.S. The executive offered an explanation for some of Google’s stranger AI-generated responses, and the company announced that it has implemented safeguards to ensure the new feature returns more accurate and less meme-worthy results.
Reid defended Google, noting that some of the bad answers circulating as AI Overview screenshots are fake, such as the claim that it’s safe to leave dogs in cars. The screenshot showing an answer to the question “How many rocks should you eat?” is real, but Reid said Google only produced that answer because a website had published satirical content on the subject. “Before these screenshots went viral, very few people had asked Google that question,” Reid explained, which is why the company’s AI drew on that website.
The Google vice president also acknowledged that AI Overview told people to use glue to make cheese stick to pizza, based on content it cited from a forum. She said that while forums typically provide “trusted, first-hand information,” they can also lead to “less useful advice.” The executive did not address other AI Overview responses being circulated, but The Washington Post reported that the technology also told users that Barack Obama was a Muslim and that they should drink lots of urine to flush out kidney stones.
Reid said the company tested the feature thoroughly before launching it, but “it’s hard to beat millions of people using it on many new searches.” By examining sample responses from the past few weeks, Google was apparently able to identify patterns in which its AI technology performs poorly, and it implemented safeguards based on those observations. First, it adjusted its AI to better detect humorous or satirical content. It also updated its system to stop adding user-generated content, such as social media and forum posts, to summaries where it could give misleading or harmful advice. Finally, it “added trigger limits for queries where we found AI summaries to be less helpful” and stopped showing AI-generated replies for certain health topics.