Business mishaps in AI can have serious consequences for companies, so you need to test machine learning systems very carefully before bringing them to market. Otherwise, large investments in new technologies can end in a major financial and reputational disaster.
So how do you avoid business mishaps when implementing artificial intelligence? The first piece of advice may sound too simple to matter, but it works: “Pay attention to data quality!”
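In practice, that advice starts with inspecting the data before any model is trained. The Python sketch below is only an illustration of such a basic audit: the file name training_data.csv and the label column are assumed placeholder names, not references to any specific project mentioned in this article.

```python
import pandas as pd

# A minimal data-quality audit for a tabular training set.
# "training_data.csv" and the "label" column are placeholder names.
df = pd.read_csv("training_data.csv")

# 1. Missing values per column.
print(df.isna().sum())

# 2. Exact duplicate rows, which can silently overweight some examples.
print("duplicate rows:", df.duplicated().sum())

# 3. Label balance: a heavily skewed distribution is an early warning that
#    the model will likely underperform on the rare classes.
print(df["label"].value_counts(normalize=True))
```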
The second piece of advice concerns extensive testing in a closed environment before releasing the tool to the public. You should not only test the technical performance of the tool but also verify its:
Even the giants in the field of artificial intelligence are not following this advice today, releasing chatbots into the world labeled as “early experiments” or “research projects.” However, as the technology matures and laws governing the use of artificial intelligence come into effect, these issues will become more pressing.
The list of business mishaps related to the use of artificial intelligence begins with a case from 2015. That’s when the Google Photos app, which used an early version of artificial intelligence for image recognition (computer vision), incorrectly labeled photos of black people as photos of gorillas. This business mishap happened because the training data set used to teach the algorithm contained too few photos of black people.
What’s more, Google had a similar problem with a Nest smart home camera, which misidentified some dark-skinned people as animals. These incidents show that computer vision systems still struggle to recognize people with darker skin tones correctly.
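One practical takeaway from these cases is to evaluate a model’s accuracy separately for each demographic group, not only in aggregate. The sketch below assumes you already have per-image predictions, true labels, and a group annotation; the column names and values are purely illustrative.

```python
import pandas as pd

# One row per test image: the model's prediction, the true label, and a
# demographic group annotation (all values here are illustrative).
results = pd.DataFrame({
    "y_true": ["person", "dog", "person", "person", "cat", "person"],
    "y_pred": ["person", "dog", "animal", "person", "cat", "animal"],
    "group":  ["A", "A", "B", "A", "B", "B"],
})

# Accuracy per group: a large gap between groups is a red flag even when
# the overall accuracy looks acceptable.
per_group_accuracy = (
    results.assign(correct=results["y_true"] == results["y_pred"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
```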
In 2023, iTutor Group agreed to pay $365,000 to settle a lawsuit over the use of discriminatory recruiting software. The software was found to have automatically rejected female candidates over the age of 55 and male candidates over the age of 60, without considering their experience or qualifications.
Amazon suffered a similar business mishap. Back in 2014, the company was working on AI to support its hiring process. The system had trouble evaluating female candidates’ resumes fairly because it had learned from data made up mostly of documents submitted by men. As a result, Amazon abandoned the project of bringing artificial intelligence into its recruitment process.
These cases show that automating the recruitment process carries the risk of perpetuating bias and treating candidates unfairly.
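A rough but widely used check for exactly this risk is to compare the tool’s selection rates across groups, following the “four-fifths rule” from US hiring guidance. The sketch below assumes access to the screening tool’s decisions and a protected attribute for each candidate; the data is illustrative, not taken from any of the cases above.

```python
import pandas as pd

# One row per candidate: the screening tool's outcome (1 = advanced to the
# next stage, 0 = rejected) and a protected attribute (illustrative data).
decisions = pd.DataFrame({
    "advanced": [1, 0, 1, 1, 1, 0, 1, 0],
    "gender":   ["M", "F", "M", "M", "F", "F", "M", "F"],
})

selection_rates = decisions.groupby("gender")["advanced"].mean()
ratio = selection_rates.min() / selection_rates.max()

print(selection_rates)
print(f"selection-rate ratio: {ratio:.2f}")

# The four-fifths rule treats a ratio below 0.8 as a signal of possible
# adverse impact that should be investigated before the tool goes live.
if ratio < 0.8:
    print("Warning: possible adverse impact, review the model and its data.")
```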
A more recent case dates from 2023, when lawyer Steven A. Schwartz used ChatGPT to find past legal cases for a lawsuit against the airline Avianca. However, it turned out that at least six of the cases provided by the AI were fabricated: they contained incorrect names, case numbers, and citations.
This happened because Large Language Models (LLMs) hallucinate: they generate plausible-sounding answers when they cannot find the right facts. Their output therefore needs to be verified every time, a step Schwartz skipped. That is why the judge fined him $5,000 for “gross negligence”.
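The underlying safeguard is straightforward: treat every factual claim an LLM produces as unverified until it has been checked against an authoritative source. In the sketch below, fetch_case_from_court_database is a hypothetical placeholder for such a lookup; it is not a real API, and the citation used as input is made up.

```python
# A hedged sketch of verifying case citations returned by an LLM before use.
# fetch_case_from_court_database is a hypothetical placeholder, not a real
# API; a real implementation would query an authoritative legal database.

def fetch_case_from_court_database(citation: str):
    """Stub lookup: always returns None here, i.e. 'not found'."""
    return None

def unverified_citations(citations: list[str]) -> list[str]:
    """Return every citation that could not be confirmed by the lookup."""
    return [c for c in citations if fetch_case_from_court_database(c) is None]

# Purely made-up citation used as a placeholder input.
llm_citations = ["Example v. Example Airlines, 123 F.4th 456"]

# Anything in this list has to be checked by a human before it is filed.
print(unverified_citations(llm_citations))
```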
According to a 2021 class action lawsuit, the blood oxygen app on the Apple Watch does not work properly for people with darker skin tones. Apple asserts that it has tested the app on a “wide range of skin types and tones,” but critics say such devices are still not designed with dark-skinned people in mind.
Zillow, a real estate company, launched Zillow Offers, a program to buy homes and quickly resell them, in 2018. CNN reported that Zillow had bought 27,000 homes since the program’s launch in April 2018, but had only sold 17,000 by the end of September 2021. Zillow attributed the business mishap to its use of artificial intelligence: the algorithm incorrectly predicted home prices, causing Zillow to overpay for purchases. By unintentionally buying homes at prices higher than its own estimates of their future sale prices, the company recorded a $304 million loss. Although it immediately shut down the program, it had to lay off 25 percent of its staff.
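The kind of offline check that can catch this failure mode is a backtest: before a valuation model is allowed to drive purchases, evaluate it on homes that have already sold and look specifically for systematic overestimation, not just average error. The numbers below are illustrative only.

```python
import numpy as np

# Model valuations versus the prices the same homes actually sold for, on a
# held-out set of past transactions (all numbers are illustrative).
predicted = np.array([310_000, 450_000, 260_000, 520_000, 395_000])
actual    = np.array([295_000, 430_000, 265_000, 480_000, 370_000])

errors = predicted - actual
mape = np.mean(np.abs(errors) / actual) * 100
bias = np.mean(errors)  # positive bias means systematically overpaying

print(f"MAPE: {mape:.1f}%")
print(f"mean bias: ${bias:,.0f} per home")

# A model can show a tolerable average error and still lose money: if the
# bias is positive, every purchase is made slightly above resale value.
```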
In 2016, Microsoft released an experimental AI chatbot called Tay. It was supposed to learn by interacting with Twitter (now X) users. Within 16 hours, Tay “learned” to post offensive, racist, and sexist tweets. Twitter users deliberately provoked the bot, which lacked the safety mechanisms built into today’s chatbots, such as ChatGPT, Microsoft Copilot, and Google Bard. Microsoft quickly disabled the bot and apologized for the incident, but Tay remains one of Microsoft’s bigger business mishaps.
A chatbot mishap also happened to Google, which released a bot named Meena in 2020. Meta (formerly Facebook) also failed to avoid a similar mistake. In August 2022, it launched a new AI chatbot called BlenderBot 3, which was designed to chat with people and learn from those interactions.
Within days of its release, there were reports of the chatbot making offensive, racist, and factually incorrect statements in conversations. For example, it claimed that Donald Trump won the 2020 U.S. election, spread anti-Semitic conspiracy theories, and criticized Facebook.
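The safety mechanisms that Tay and the early BlenderBot lacked start with something very basic: a moderation step that screens every reply before it is published. The sketch below is deliberately simplified; real systems rely on trained moderation classifiers and human review rather than a keyword list, and generate_reply is a hypothetical placeholder for the bot’s text generator.

```python
# A deliberately simplified moderation gate that screens a chatbot's reply
# before it is published. Production systems use trained moderation models
# and human escalation; generate_reply is a hypothetical placeholder.

BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # placeholder entries

def generate_reply(prompt: str) -> str:
    """Stand-in for the chatbot's actual text generator."""
    return f"echo: {prompt}"

def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        # Refuse rather than publish a reply that fails moderation.
        return "Sorry, I can't respond to that."
    return reply

print(safe_reply("hello"))
```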
Author: Robert Whitney
JavaScript expert and instructor who coaches IT departments. His main goal is to up-level team productivity by teaching others how to effectively cooperate while coding.