Artificial Intelligence Mistakes

Bayram EKER
Aug 5, 2022


Breaking news: an AI-powered object detection engine has been able to identify the longest cow on earth.

(Never forget: without contextual understanding, data is just random information.)

1. Microsoft Tay

Conversations with chatbots have become increasingly lifelike and efficient. If you’ve ever interacted with a chatbot, then you’ve seen the power of AI in action. These computer systems employ natural language processing (NLP) to understand and recreate human language.

About six years ago, Microsoft decided to enter this space. Their chatbot, named Tay, debuted on Twitter on March 23, 2016.

While it sounded promising at first, something went very wrong. Basically, Twitter users preyed on the bot’s rudimentary NLP and found ways to target its design vulnerabilities, manipulating it to learn and repeat inappropriate sentiments.

It didn’t take long for Tay to start mimicking some of the remarks and phrases used on the social media platform, eventually making sexist, racist, and demeaning remarks toward other Twitter users.

In fewer than 24 hours, Microsoft turned it off for good.

2. Amazon’s AI recruiting tool showed bias against women

Amazon started building machine learning programs in 2014 to review job applicants’ resumes. However, the AI-based experimental hiring tool had a major flaw: it was biased against women.

The model was trained to assess applications by studying resumes submitted to the company over a span of 10 years. Because most of those resumes came from men, the system taught itself to favor male candidates. In practice, the AI downgraded resumes containing words such as "women's" (as in "women's chess club captain"), and graduates of two all-women's colleges were also ranked lower.
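To see how this kind of bias arises mechanically, here is a toy sketch. The resumes and the scoring rule below are entirely made up for illustration (this is not Amazon's model): a deliberately naive scorer trained on a skewed historical sample ends up penalizing the token "women's" simply because it only ever co-occurred with rejection in the training data.

```python
from collections import Counter

# Hypothetical historical dataset, heavily skewed toward male applicants.
# Label 1 = advanced to interview, 0 = rejected.
resumes = [
    ("software engineer chess club captain", 1),
    ("software engineer hackathon winner", 1),
    ("backend developer chess club captain", 1),
    ("data engineer hackathon winner", 1),
    ("software engineer women's chess club captain", 0),
    ("data engineer women's coding society", 0),
]

def word_scores(data):
    """Score each word by how much more often it appears in advanced
    resumes than in rejected ones (a simple count difference)."""
    pos, neg = Counter(), Counter()
    for text, label in data:
        (pos if label else neg).update(text.split())
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

scores = word_scores(resumes)
# "women's" appears only in rejected resumes here, so the naive
# model learns to treat it as a negative signal.
print(scores["women's"])  # → -2
```

The model never sees gender as a feature; it simply inherits the imbalance baked into its training data, which is exactly the failure mode reported at Amazon.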

By 2015, the company recognized the tool was not evaluating applicants for various roles in a gender-neutral way, and the project was eventually scrapped. The incident came to light in 2018 after Reuters reported it.

3. False facial recognition match leads to Black man’s arrest

In February 2019, Nijeer Parks, a 31-year-old Black man living in Paterson, New Jersey, was accused of shoplifting and trying to hit a police officer with a car in Woodbridge, New Jersey. Although he was 30 miles away at the time of the incident, the police identified him using facial recognition software.

Parks was later arrested on charges including aggravated assault, unlawful possession of weapons, shoplifting, and possession of marijuana, and spent 11 days in jail. According to a police report, the officers arrested Parks following a "high profile comparison" from a facial recognition scan of a fake ID left at the crime scene.

The case was dismissed in November 2019 for lack of evidence. Parks is now suing those involved in his arrest for violation of his civil rights, false arrest, and false imprisonment.

Facial recognition technology, which uses machine learning algorithms to identify a person based on their facial features, is known to have many flaws. In fact, a 2019 study found that facial recognition algorithms are “far less accurate” in identifying Black and Asian faces.

Parks is the third known person to be arrested due to false facial recognition matches. In all cases, the individuals wrongly identified were Black men.

Ultimately, while AI has grown in leaps and bounds in recent years, it is far from perfect. Going forward, it will be crucial to address its many vulnerabilities for it to truly emerge as a technological driving force for the world.

Common Mistakes Behind Artificial Intelligence Failure

1. Using the Wrong Data

In the rush to implement AI at an organization, business leaders often grab any data they can find, feed it into a machine learning application, and then wonder why it fails to generate insights.

For data to be actionable and AI-ready, it must be clean and accurate. Even the largest and most robust dataset in the world would be unusable if it’s outdated, incorrect, or incomplete.

Not only must the data be free of defects, but it must also be varied enough to reveal meaningful patterns.
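As a minimal sketch of what "AI-ready" screening might look like, the snippet below filters out incomplete, implausible, and stale records before they ever reach a model. The field names, thresholds, and sample records are assumptions chosen for illustration, not a prescribed schema.

```python
from datetime import date

# Hypothetical raw records; field names are illustrative only.
records = [
    {"id": 1, "revenue": 1200.0, "updated": date(2022, 7, 1)},
    {"id": 2, "revenue": None,   "updated": date(2022, 7, 3)},   # incomplete
    {"id": 3, "revenue": 950.0,  "updated": date(2019, 1, 15)},  # outdated
    {"id": 4, "revenue": -50.0,  "updated": date(2022, 7, 10)},  # incorrect
]

def is_ai_ready(rec, today=date(2022, 8, 5), max_age_days=365):
    """Keep only records that are complete, plausible, and recent."""
    if rec["revenue"] is None:                        # missing value
        return False
    if rec["revenue"] < 0:                            # implausible value
        return False
    if (today - rec["updated"]).days > max_age_days:  # stale record
        return False
    return True

clean = [r for r in records if is_ai_ready(r)]
print([r["id"] for r in clean])  # → [1]
```

Even this trivial gate removes three of four records; real pipelines layer many such checks, which is why data preparation dominates most AI projects.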

2. Using AI as a Quick Fix

Often, business problems exist because there’s an issue with an existing workflow or process. While AI may be able to help solve some of these roadblocks, it isn’t a band-aid.

More often, business process reengineering is required to truly understand and fix inefficiencies.

As you look for areas in which to use AI, remember that, by itself, AI can’t fix your operations. Furthermore, it can’t fix your operations overnight, even with a focus on process improvement.

3. Operating in a Silo

Sure, your data science team might be able to complete an AI project without any outside help. Yet, what would happen if the system configuration was misaligned with your business needs?

We recommend including your operations staff in the project from the very beginning. This includes process engineers, plant operators, and warehouse managers, among others. These are the people who understand the data and its business context.

4. Emphasizing Technology Over People

It’s true that AI is exciting technology. However, it’s easy to become so focused on technology that you forget there are real people using it.

An AI project can often stir up feelings of uneasiness within your departments, especially among employees who fear that the technology may lessen or even replace their role. Without a focus on organizational change management, these uneasy employees will be reluctant to learn and embrace new software.

Conclusion

In this article, we have explored some of the most notable AI failures and the common mistakes behind them, along with practical ways to avoid such obstacles. The earlier you resolve these challenges, the better your chances of making effective use of artificial intelligence technologies across all areas of your enterprise.
