‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw | EUROtoday
Here’s a fun little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.
This is genuinely fun, and you can find plenty of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom meaning “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”
It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases rather than a bunch of random words strung together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.
As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. The first is that it’s ultimately a probability machine; while it may seem like a large-language-model-based system has thoughts or even feelings, at base it’s simply placing one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean, if they meant anything, which, again, they don’t.
“The prediction of the next word is based on its vast training data,” says Ziang Xiao, a computer scientist at Johns Hopkins University. “However, in many cases, the next coherent word does not lead us to the right answer.”
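The “probability machine” idea can be sketched with a toy model. This is a minimal illustration, not Google’s actual system: a bigram table built from a tiny invented corpus, where the model greedily appends whichever word most often followed the previous one in training. The continuation it produces is locally fluent but says nothing true — each step is just the statistically likeliest next word.

```python
from collections import Counter, defaultdict

# Tiny invented training corpus (purely illustrative).
corpus = "a loose dog barks a loose dog runs a loose dog barks loudly".split()

# Count which word follows which (a bigram frequency table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_phrase(start, length=5):
    """Greedily append the most frequent next word at each step."""
    words = start.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no known continuation: stop
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_phrase("a loose"))  # → a loose dog barks a loose dog
```

Each step picks the highest-count successor, so the output reads like plausible language while carrying no meaning — the same property that lets an LLM confidently “explain” a phrase nobody has ever said.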
The other factor is that AI aims to please; research has shown that chatbots often tell people what they want to hear. In this case, that means taking you at your word that “you can’t lick a badger twice” is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year.
“It’s extremely difficult for this system to account for every individual query or a user’s leading questions,” says Xiao. “This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, the error cascades.”
https://www.wired.com/story/google-ai-overviews-meaning/