A Lawsuit Against Perplexity Calls Out Fake News Hallucinations


Perplexity did not respond to requests for comment.

In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

OpenAI is facing its own accusations of trademark dilution, though. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times had called red wine (in moderation) a “heart-healthy” food, when in fact it had not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

“Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT director of external communications Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

“It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate, by predicting words that sound correct in response to prompts, is always a kind of hallucination; sometimes the output is just more plausible-sounding than others.

“We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not.”

https://www.wired.com/story/dow-jones-new-york-post-sue-perplexity/