Facebook users who watched a video posted by a British newspaper featuring Black men received an automated prompt from the social media platform asking if they wanted to “keep seeing videos about Primates,” according to the New York Times.
The incident prompted the company to investigate and disable the artificial intelligence-powered recommendation feature that generated the message, the outlet reported Friday.
The report continued:
On Friday, Facebook apologized for what it called “an unacceptable error” and said it was looking into the recommendation feature to “prevent this from happening again.” The video, dated June 27, 2020, was by The Daily Mail and featured clips of Black men in altercations with white civilians and police officers. It had no connection to monkeys or primates.
Darci Groves, a former content design manager with Facebook, told the outlet a friend sent her a screenshot of the prompt.
“She then posted it to a product feedback forum for current and former Facebook employees. In response, a product manager for Facebook Watch, the company’s video service, called it ‘unacceptable’ and said the company was ‘looking into the root cause,'” according to the Times article.
Below is the Daily Mail video in question:
“I’m being harassed by a bunch of black men”: White man calls 911 on a group of friends celebrating a birthday. FULL STORY: http://dailym.ai/2Z7l1NS
Posted by Daily Mail on Saturday, June 27, 2020
“This was clearly an unacceptable error and we disabled the entire topic recommendation feature as soon as we realized this was happening so we could investigate the cause and prevent this from happening again,” Facebook spokesperson Dani Lever told USA Today in a statement.
“As we have said, while we have made improvements to our AI we know it’s not perfect and we have more progress to make,” she continued. “We apologize to anyone who may have seen these offensive recommendations.”
In May 2019, Facebook lifted its ban on a pro-life ad campaign in Ireland featuring an image of a human fetus, claiming it was mistaken in judging the picture to be “graphic or violent.”
“In this instance we made a mistake in applying a warning screen over the image used in The Iona Institute’s ad. We have removed the warning screen and apologise for any inconvenience caused,” the company said.
“The incident demonstrates that the AI tools the Masters of the Universe rely on to police their platforms are not yet up to the task,” Breitbart News reported in February.