In the age of generative AI, where chatbots can provide detailed answers to questions based on content retrieved from the internet, the lines between fair use and plagiarism, between routine web scraping and unethical summarization, are very blurred.
Perplexity AI is a startup that combines a search engine with a large language model to generate detailed answers rather than just links. Unlike OpenAI’s ChatGPT and Anthropic’s Claude, Perplexity does not train its own underlying AI model; rather, it uses open or commercially available models to turn information it collects from the internet into answers.

But a series of allegations in June suggested the startup’s approach borders on the unethical: Forbes accused Perplexity of plagiarizing one of its news articles in the startup’s beta Perplexity Pages feature, and Wired accused Perplexity of illicitly scraping its website and other sites.

Perplexity, which in April was reportedly raising $250 million at a valuation of nearly $3 billion, denies it has done anything wrong. The Nvidia- and Jeff Bezos-backed company says it respects publishers’ requests not to scrape their content and that it operates within the bounds of fair use under copyright law.
The situation is complicated. At its heart, there are nuances surrounding two concepts. The first is the Robots Exclusion Protocol, a standard that websites can use to indicate that they don’t want web crawlers to access or use their content. The second is fair use in copyright law, which establishes a legal framework that allows copyrighted material to be used without permission or payment under certain circumstances.
Surreptitiously scraping web content
Perplexity ignores the Robots Exclusion Protocol to surreptitiously scrape areas of websites that publishers don’t want bots to access, according to a June 19 article in Wired. Wired reported that it had observed Perplexity-linked machines doing this not only on its own news site, but also on other publications under parent company Condé Nast.

The report also cites developer Robb Knight, who conducted a similar experiment and came to the same conclusion.
Wired’s reporter and Knight tested their suspicions by asking Perplexity to summarize a set of URLs, then performing server-side observations of IP addresses associated with Perplexity visiting those sites. Perplexity then “summarized” the text of those URLs — except for a dummy website with limited content that Wired created for this purpose, in which case it just returned the raw text from the page.
This is where the nuances of robot exclusion protocols become important.
Web scraping is, technically, the automated collection of website content: software programs called crawlers comb the web and gather information from the pages they visit. Search engines like Google do this to include web pages in their search results. Other companies and researchers use crawlers to gather data from the internet for market analysis, academic research, and, as we’ve learned, training machine learning models.

Web scrapers that follow the Robots Exclusion Protocol first look for a “robots.txt” file at a site’s root to see what is and isn’t allowed. What’s disallowed today is typically scraping publisher sites to build large training datasets for AI. Search engines and AI companies, Perplexity included, have stated that they follow the protocol, but they’re not legally required to do so.
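The protocol itself is simple: a site publishes a plain-text robots.txt file at its root, and well-behaved crawlers consult it before fetching pages. A minimal sketch of how that check works, using Python’s standard library (the user-agent names and rules here are illustrative, not any real publisher’s file):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt such as a publisher might serve.
# "ExampleAIBot" stands in for an AI crawler's user-agent string.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /drafts/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI crawler is barred from the whole site...
print(parser.can_fetch("ExampleAIBot", "https://example.com/news/story"))  # False
# ...while other crawlers may fetch public pages, but not /drafts/.
print(parser.can_fetch("OtherBot", "https://example.com/news/story"))      # True
print(parser.can_fetch("OtherBot", "https://example.com/drafts/x"))        # False
```

Note that nothing enforces this: `can_fetch` only tells a crawler what the site has asked for, and a scraper that never performs the check sees no obstacle at all, which is exactly the behavior Wired alleges.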
Perplexity’s VP of operations Dmitry Shevelenko told TechCrunch that summarizing URLs is not the same as crawling. “Crawling is just sucking up information and adding it to your index,” Shevelenko said. He noted that Perplexity’s IPs only show up as visitors to “robots.txt banned” websites when users type the URL into a query, which “doesn’t meet the definition of crawling.”
“We are simply responding to a direct and specific request from a user to access the URL,” Shevelenko said.
In other words, when a user manually provides a URL to the AI, Perplexity’s AI doesn’t act as a web crawler, but as a tool to help retrieve and process the information the user requested, Perplexity said.
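The distinction Perplexity is drawing can be sketched in code. This is a toy illustration of the two behaviors, not Perplexity’s actual pipeline; the page contents, fetcher, and summarizer are all stand-ins:

```python
from urllib.robotparser import RobotFileParser

# Toy stand-ins for a real fetcher and summarizer (illustrative only).
PAGES = {
    "https://example.com/public": "public article text",
    "https://example.com/private": "paywalled scoop text",
}
fetch = PAGES.get
summarize = lambda text: text[:14] + "..."

robots = RobotFileParser()
robots.parse(["User-agent: *", "Disallow: /private"])

def crawl(urls):
    """Crawler behavior: check robots.txt, then add allowed pages to an index."""
    return {u: fetch(u) for u in urls if robots.can_fetch("ExampleBot", u)}

def answer_user_query(url):
    """The behavior Perplexity defends: fetch a single URL at a user's
    explicit request, summarize it, and keep no index entry."""
    return summarize(fetch(url))

index = crawl(PAGES)
print(sorted(index))                                     # only the public page
print(answer_user_query("https://example.com/private"))  # fetched and summarized anyway
```

Publishers’ objection is that, to the server being hit, the second code path is indistinguishable from the first, especially at scale.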
But for Wired and many other publishers, that’s a distinction without a difference, because hitting a URL, pulling information from it, and summarizing the text looks exactly like scraping, when done thousands of times a day.
(Wired also reported that Amazon Web Services, one of Perplexity’s cloud service providers, is investigating the startup for ignoring the robots.txt protocol to scrape web pages that users cited in their prompts. AWS told TechCrunch that Wired’s report was inaccurate and that it treats media inquiries like any other report alleging misuse of its services.)
Plagiarism or fair use?
Wired and Forbes have also accused Perplexity of plagiarism. Ironically, Wired leveled that accusation in the very article in which it accused the startup of surreptitiously scraping its web content.

Wired’s reporter wrote that Perplexity’s chatbot produced a six-paragraph, 287-word text that closely summarized the article’s conclusions and the evidence used to reach them, with one sentence reproduced exactly from the original article, which Wired says constitutes plagiarism. The Poynter Institute’s guidelines say that if an author (or AI) uses seven consecutive words from an original source work, it may be plagiarism.
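The seven-consecutive-words rule of thumb is easy to mechanize. Here is a simple sketch (my own illustration, not Poynter’s or Wired’s tooling) that checks whether two texts share any run of seven consecutive words:

```python
def shared_seven_word_run(a: str, b: str, n: int = 7) -> bool:
    """Return True if texts a and b share any n consecutive words (case-insensitive)."""
    def shingles(text):
        words = text.lower().split()
        # Every window of n consecutive words, as a set of tuples.
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return bool(shingles(a) & shingles(b))

original = "the startup scraped areas of websites that publishers do not want bots to access"
summary = "critics say it scraped areas of websites that publishers do not want indexed"
print(shared_seven_word_run(original, summary))  # True: they share "scraped areas of websites that publishers do"
```

Real plagiarism detectors are far more sophisticated (they normalize punctuation, handle paraphrase, and weigh rarity of phrases), but this word-shingle overlap is the core idea behind the heuristic.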
Forbes also accused Perplexity of plagiarism. In early June, the news site published an investigative report on how former Google CEO Eric Schmidt’s new venture is testing AI-enabled drones for military use and hiring aggressively. The next day, Forbes editor John Paczkowski posted on X that Perplexity had republished the scoop as part of its beta feature, Perplexity Pages.

Perplexity Pages is a new tool, currently available only to certain Perplexity subscribers, that the company promises will help users turn their research into “visually compelling, comprehensive content.” Examples of such content on the site, created by the startup’s employees, include articles like “A Beginner’s Guide to Drums” and “Steve Jobs: A Visionary CEO.”

“They plagiarized most of our reporting,” Paczkowski wrote, “citing us, and a few people who reblogged us, as sources in the most easily ignored way possible.”

Forbes reported that many of the posts curated by the Perplexity team are “strikingly similar to original articles from multiple publications, including Forbes, CNBC and Bloomberg.” Forbes said the posts had been viewed tens of thousands of times and that none of the publications were mentioned by name in the article text; instead, Perplexity’s articles cited them with “small, easily missed logos that link out to them.”
Forbes also said the article about Schmidt contained “substantially identical language” to Forbes’ scoop. The roundup also included an image created by Forbes’ design team, but which appears to have been slightly edited by Perplexity.
Perplexity CEO Aravind Srinivas told Forbes at the time that the company would cite sources more prominently going forward, but the fix isn’t perfect, because citations themselves face a technical problem: ChatGPT and other models hallucinate links, and since Perplexity uses OpenAI models, it is likely susceptible to the same hallucinations. Indeed, Wired reports that it has observed Perplexity hallucinating entire stories.
Besides pointing out Perplexity’s “rough edges,” Srinivas and company are largely asserting Perplexity’s right to use such content for summarization.
This is where the nuances of fair use come into play: Plagiarism is bad, but it’s not strictly illegal.
According to the United States Copyright Office, it is legal to use limited portions of a work, including quotations, for purposes such as commentary, criticism, news reporting, and scholarly reports. AI companies such as Perplexity argue that providing summaries of articles falls within the bounds of fair use.
“Nobody has a monopoly on the facts,” Shevelenko said. “Once the facts are published, they are available to everyone.”
Shevelenko likened Perplexity summarization to the way journalists often use information from other news sources to bolster their own reporting.
Mark McKenna, a law professor at the UCLA Institute for Technology, Law and Policy, told TechCrunch that this situation isn’t easy to sort out: In a fair use case, the court will likely consider whether the summary uses much of the original article’s language or just the ideas. They might also consider whether reading the summary is a substitute for reading the article.
“There’s no clear line,” McKenna said. A factual summary of an article or report “is using non-copyrightable parts of a work, just facts and ideas. But the more actual words and sentences included in the summary, the more it starts to look like a copy rather than just a summary.”

Unfortunately for publishers, unless Perplexity reproduces an article’s full wording (which it clearly does in some cases), its summaries may well be protected by fair use.
How Perplexity Protects Itself
AI companies like OpenAI have signed media deals with various news publishers, gaining access to current and archived content to train their algorithms. In return, OpenAI promises to surface news articles from those publishers in response to user queries in ChatGPT. (However, there are still kinks to work out, as Nieman Lab reported last week.)

Perplexity has held off on announcing media deals of its own, presumably waiting for the criticism against it to die down, but the company says it is “full steam ahead” on a series of advertising revenue-sharing deals with publishers.
Perplexity will start placing ads alongside query responses, and publishers whose content is cited in an answer will get a cut of the corresponding ad revenue. Shevelenko said Perplexity is also looking at giving publishers access to its technology so they can build Q&A experiences or embed related questions natively within their own sites or products.
But is this just a facade for organized IP theft? Perplexity isn’t the only chatbot that summarizes content so thoroughly that readers no longer feel the need to click through to the original source material.
And as these AI scrapers continue to steal publishers’ work and repurpose it for their own business, publishers will find it harder to earn advertising revenue, which ultimately means there will be less content to scrape. With no content to scrape, generative AI systems will turn to training on synthetic data, which can lead to a vicious cycle of producing biased and inaccurate content.