“You can’t lick a badger twice”: Google failures highlight a major flaw in artificial intelligence


Here is a small distraction from your workday: head to Google, type in any made-up phrase, add the word “meaning,” and search. Google’s AI Overview will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find plenty of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom meaning that “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

All of which sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It is also wrong, at least in the sense that the Overview creates the impression that these are common phrases rather than a random jumble of words thrown together. And while it is silly that an AI Overview claims “never throw a poodle at a pig” is a proverb with a biblical derivation, it is also a tidy illustration of where generative AI still falls short.

As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power these results. Generative AI is a powerful tool with a variety of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. The first is that it is, ultimately, a probability machine: while it may seem as if a large-language-model-based system has thoughts or even feelings, at a base level it is simply placing one likely word after another, laying down the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which they don’t.

“The prediction of the next word is based on its vast training data,” says Ziang Xiao, a computer scientist at Johns Hopkins University. “However, in many cases, the next coherent word does not lead us to the right answer.”
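To make the “probability machine” idea concrete, here is a minimal toy sketch in Python. A hand-written table of next-word probabilities stands in for a trained model, and greedy decoding always picks the single most likely continuation. Every word and probability here is invented for illustration; a real LLM learns its distributions from vast training data and conditions on the whole prompt, but the basic mechanism of choosing one plausible word after another is the same in spirit.

```python
# Toy "probability machine": a hand-written table of next-word probabilities
# stands in for a trained language model. All values are invented for
# illustration; a real LLM learns these distributions from training data.
NEXT_WORD = {
    "<start>":   {"the": 0.6, "this": 0.4},
    "the":       {"phrase": 0.7, "saying": 0.3},
    "phrase":    {"means": 0.9, "suggests": 0.1},
    "means":     {"that": 1.0},
    "that":      {"something": 0.8, "someone": 0.2},
    "something": {"unlikely": 0.5, "risky": 0.5},
    "unlikely":  {"happened.": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Greedy decoding: always emit the single most probable next word."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD.get(word)
        if not choices:  # no known continuation: stop generating
            break
        word = max(choices, key=choices.get)
        output.append(word)
    return " ".join(output)

# The chain produces fluent-sounding text no matter what prompt preceded it;
# nothing in the mechanism checks whether the input phrase actually exists.
print(generate())  # -> "the phrase means that something unlikely happened."
```

The point of the toy is that fluency and truth are decoupled: the generator always has a “most likely” next word available, so it never lacks a confident-sounding answer, even when the honest answer would be that the phrase means nothing at all.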

The other factor is that AI aims to please. Research has shown that chatbots often tell people what they want to hear. In this case, that means taking you at your word that “you can’t lick a badger twice” is an accepted turn of phrase. In other contexts, it may mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year.

“It is very difficult for this system to account for every individual query or a user’s leading questions,” says Xiao. “This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, the errors cascade.”
