3 AI Disasters That Stood Out

The insights that come from data and machine learning algorithms can be invaluable, but the mistakes can be irreversible.
In 2017, The Economist declared that data had replaced oil as the world’s most valuable resource, a claim that has been widely repeated ever since. Organizations across every industry continue to invest heavily in data and analytics. But just as oil has a dark side, so do data and analytics.
According to the State of the CIO 2023 report published by CIO, 26% of IT leaders say machine learning (ML) and AI will drive their most significant IT investments. And while decisions based on ML algorithms can give organizations a competitive advantage, mistakes can be costly in terms of reputation, revenue, and even lives.
It’s important to understand your data and the information it conveys, but it’s equally important to know your tools and to keep your organization’s values at the forefront.
Here are a few recent high-profile AI blunders that illustrate what can go wrong.

McDonald’s terminates experiment over AI ordering blunder

After three years of working with IBM on AI-powered drive-thru ordering, McDonald’s announced in June 2024 that it was ending the program. The decision followed a series of videos on social media showing customers confused and frustrated as the AI misinterpreted their orders.
One TikTok video stood out in particular: two customers repeatedly plead with the AI to stop as it keeps adding McNuggets to their order, ultimately reaching 260. On June 13, 2024, McDonald’s announced in an internal memo, obtained by the trade publication Restaurant Business, that it would end its partnership with IBM and stop the tests.
McDonald’s had piloted the technology at more than 100 U.S. drive-thrus, but said it remains bullish on the future of voice ordering solutions.

Grok AI wrongly accuses NBA star of vandalism

In April 2024, Grok, the chatbot launched by Elon Musk’s xAI, falsely accused NBA star Klay Thompson on the X platform of smashing windows at multiple homes in Sacramento, California.
Some commentators speculated that Grok may have been “hallucinating” after absorbing posts about Thompson “throwing bricks” (a basketball term for badly missed shots) and incorrectly constructed the story. In his final game for the Golden State Warriors, Thompson turned in one of the worst playoff performances of his career in a crushing loss. He was then traded to the Dallas Mavericks.
Although Grok displays a disclaimer stating that “Grok is an early feature and errors may occur. Please verify its output,” the incident still raises questions about who should be held liable when an AI chatbot posts false and defamatory statements.

NYC AI chatbot encourages business owners to break the law

In March 2024, The Markup reported that MyCity, a Microsoft-powered chatbot, was giving entrepreneurs misinformation that could lead them to break the law.
Launched in October 2023, MyCity was designed to give New Yorkers information about starting and running a business, housing policies, and workers’ rights. But The Markup found serious problems: MyCity incorrectly claimed that business owners could take a cut of their workers’ tips, fire employees who complain of sexual harassment, and even serve food that had been bitten by rodents. It also falsely claimed that landlords could discriminate based on source of income.
After the story broke, New York City Mayor Eric Adams defended the project in the face of the criticism. For now, the chatbot remains online.
