Here's your regular reminder that AI-powered chatbots still fudge facts and lie with all the confidence of a GPS system insisting the quickest route home runs through a lake.
My reminder comes courtesy of Nieman Lab, which ran an experiment to see whether ChatGPT would provide correct links to articles from news publications that are being paid millions of dollars by its maker. It turns out ChatGPT does not. Instead, it confidently makes up entire URLs. In AI circles this is called "hallucinating," a term that seems better suited to a real human being drunk on his own nonsense.
Nieman Lab's Andrew Deck asked the service for links to high-profile exclusives from 10 publishers with which OpenAI has signed deals worth millions of dollars, including the Associated Press, The Wall Street Journal, the Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox, and Politico. In response, ChatGPT returned fabricated URLs that led to 404 error pages. In other words, the system was working as designed: predicting the most plausible version of a story's URL rather than citing the correct one. Nieman Lab notes that Business Insider conducted a similar experiment earlier this month.
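To see why these hallucinated links look so convincing, consider that most publishers use predictable slug-style URLs. A purely hypothetical sketch of the pattern a model might have absorbed (the domain and headline below are made up for illustration):

```python
import re

def plausible_url(domain: str, headline: str) -> str:
    """Build a slug-style URL the way a pattern-matcher might:
    lowercase the headline, replace punctuation and spaces with
    hyphens. The result *looks* right, but nothing here checks
    that the page actually exists."""
    slug = re.sub(r"[^a-z0-9]+", "-", headline.lower()).strip("-")
    return f"https://{domain}/{slug}"

# A made-up headline: the generated link is well-formed
# but may well resolve to a 404 page.
print(plausible_url("www.example-news.com", "Inside the AI News Deal"))
# https://www.example-news.com/inside-the-ai-news-deal
```

Predicting the likeliest-looking URL and citing a real one are different tasks; the first fails silently with a 404 only when someone bothers to click.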
An OpenAI spokesperson told Nieman Lab that the company is building "an experience that blends conversational capabilities with up-to-date news content, ensuring proper attribution and linking to source material," and that this "enhanced experience" is still in development and not yet available in ChatGPT. The spokesperson declined to explain the fake URLs.
It is unclear when this new experience will arrive or how reliable it will be. In the meantime, publishers keep signing these Faustian bargains while the journalism industry struggles to figure out how to make money without relying on technology companies. Meanwhile, AI companies continue to train their models on content published by those who never signed such deals. Microsoft AI chief Mustafa Suleyman has said that any "freeware" available on the open internet is fair game for training AI models. At the time of writing, Microsoft was valued at $3.36 trillion.
There's a lesson here: if ChatGPT makes up URLs, it's also making up facts. That's how generative AI works: at its core, the technology is a fancier version of autocomplete, simply guessing the next plausible word in a sequence. It acts as if it "understands" what you say, but it doesn't. Recently, I watched a leading chatbot fail The New York Times spelling bee.
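The "fancier autocomplete" idea can be sketched in a few lines. This is a toy bigram counter, not how ChatGPT is actually built (real models are neural networks over tokens), but the principle is the same: emit the statistically plausible continuation, with truth never consulted.

```python
from collections import Counter, defaultdict

# "Train" on a tiny corpus: count which word follows which.
corpus = "the model guesses the next word the model guesses wrong".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent next word seen in training --
    plausible by construction, correct only by accident."""
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # "model" -- the commonest follower of "the"
```

Scale the corpus up to most of the internet and the guesses become eerily fluent, but the mechanism never changes: there is no lookup step where the output is checked against reality.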
If generative AI can't even win a spelling bee, we shouldn't be relying on it for facts.