A Google executive told a German newspaper that current forms of generative AI, such as ChatGPT, are unreliable and can "hallucinate" — confidently inventing information.
Prabhakar Raghavan, Senior Vice President and Head of Search at Google, made the remarks to the German weekly Welt am Sonntag.
“This is phrased in such a way that the machine provides a compelling, yet completely made-up, answer,” he said.
In fact, many ChatGPT users, including Apple co-founder Steve Wozniak, have observed that the AI often makes mistakes.
Errors in encoding and decoding between text and its internal representation are one proposed cause of these AI hallucinations.
Ted Chiang’s comment on ChatGPT’s “hallucinations”: “When a compression algorithm is designed to reconstruct text after 99% of the original text is discarded, a significant portion of what is produced is completely fabricated. You should expect to be…” https://t.co/7QP6zBgrd3
— Matt Bell (@mdbell79) February 9, 2023
It was unclear if Raghavan was referring to Google’s own forays into generative AI.
RELATED: Will Robots Replace Us? 4 Jobs Artificial Intelligence Can’t Beat (Yet!)
Last week, the company announced it was testing a chatbot called Bard. The technology is built on Google's LaMDA, a large language model comparable to the one underlying OpenAI's ChatGPT.
The demonstration in Paris was widely seen as a PR disaster, and investors were underwhelmed.
Google developers have been under intense pressure since the release of OpenAI’s ChatGPT, which took the world by storm and threatened Google’s core business.
“We clearly feel the urgency, but we also feel a great responsibility,” Raghavan told the newspaper. “We certainly don’t want to mislead the public.”