Facebook's AI Bugs Out in an Ugly Way, Labels Black Men 'Primates'

Artificial intelligence has become such an integral part of our online experience that we barely think about it.

It gives marketers the ability to gather data about an individual’s activities, purchases, opinions and interests. That information is then used to predict what products and services will appeal to him or her.

This technology has come a long way, but it is far from perfect.

The Daily Mail released a video on Facebook last June that included clips of black men clashing with white civilians and police officers.

Facebook users who recently watched the video were alarmed when an automatic prompt asked them if they would like to “keep seeing videos about Primates,” according to The New York Times.

The outlet reported that there had been no references to monkeys in the video and that Facebook was at a loss as to why such a prompt would appear.

The company immediately disabled the “artificial intelligence-powered feature” responsible for the prompt.

“As we have said, while we have made improvements to our AI, we know it’s not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations,” Facebook spokeswoman Dani Lever said.

The company said the error was “unacceptable” and that it is conducting an investigation to “prevent this from happening again.”

This incident is not the first time a Big Tech company has been called out for faulty AI.

The Times cited a similar hiccup involving Google Photos in 2015. Several images of black people were labeled as “gorillas.” The company issued an apology and said it would fix the problem.

Two years later, Wired determined that all Google had done to address the issue was to censor the words “gorilla,” “chimp,” “chimpanzee” and “monkey” from searches.

According to the Times, AI is especially suspect when it comes to facial recognition technology.

In 2018, the outlet detailed a study on facial recognition conducted by a researcher at the MIT Media Lab. The project found that “when the person in the photo is a white man, the software is right 99 percent of the time.

“But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women.”

The Times added: “These disparate results … show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.”

Are real-world racial “biases” somehow seeping into AI? Or is it just a case of the system having more difficulty “seeing” darker images? I think we know the answer.

Regardless, it is a little concerning that Facebook, the master of the universe and the gatekeeper of what the public can and cannot see, uses AI that apparently can’t tell the difference between a black person and an ape.


Elizabeth writes commentary for The Western Journal and The Washington Examiner. Her articles have appeared on many websites, including MSN, RedState, Newsmax, The Federalist and RealClearPolitics. Please follow Elizabeth on Twitter or LinkedIn.