Facebook has apologized after its artificial intelligence (AI) technology mislabeled a video of black men in altercations with white cops and civilians as "about primates."

Users who watched the video, which was published by the U.K.'s Daily Mail newspaper earlier this week, were then asked whether they wanted to "keep seeing videos about primates."

The social media giant has since disabled the topic recommendation feature and says it is investigating the cause of the error, though the video itself had been online for more than a year.

On Friday, a Facebook spokeswoman told The New York Times, which first reported the story, that the automated prompt was an "unacceptable error," and the company apologized to anyone who encountered the offensive suggestion.

"We apologize to anyone who may have seen these offensive recommendations," Facebook said in response to an AFP inquiry.

"We disabled the entire topic recommendation feature as soon as we realized this was happening so we could investigate the cause and prevent this from happening again."

Darci Groves, a former Facebook employee, tweeted about the gaffe on Thursday after a friend pointed out the mistake. She posted a screenshot of the video with the message "Keep seeing videos about Primates?" from Facebook.

Civil rights activists have criticized facial recognition software as inaccurate, particularly when it comes to people who are not white.

This isn't the first time Facebook has been criticized for serious technological flaws. When translated from Burmese to English, Chinese President Xi Jinping's name appeared on the platform as "Mr. S***hole" last year.

According to Reuters, the translation mishap appeared to be limited to Facebook and did not occur on Google.

The incident is the latest in a string of racial gaffes on the internet, allegedly caused by racial bias in automated systems. Recent studies have found that facial recognition software is biased against people of color and has a harder time recognizing them.

As a result of such errors, black people have faced discrimination and, in some cases, wrongful arrest.

Google had to apologize in 2015 after its Photos app incorrectly labeled black people as "gorillas." The following year, Microsoft issued an apology after its AI chatbot Tay began spewing racial obscenities and had to be taken offline.