Facebook users who watched a newspaper video featuring Black men were asked by an artificial-intelligence recommendation system whether they wanted to keep seeing "videos about primates". Facebook described it as "clearly an unacceptable error", disabled the system and launched an investigation.
It said: "We apologise to anyone who may have seen these offensive recommendations." It is the latest in a long-running series of errors that have raised concerns over racial bias in AI.
In 2015, Google's Photos app labelled pictures of Black people as "gorillas". The company said that it was appalled and genuinely sorry, though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla". In May, Twitter admitted racial biases in the way its "saliency algorithm" cropped previews of images.
Studies have also shown biases in the algorithms powering some facial-recognition systems. In 2020, Facebook announced a new "inclusive product council" - and a new equity team in Instagram - that would examine, among other things, whether its algorithms exhibited racial bias.
A Facebook representative said that the "primates" recommendation was an algorithmic error and that it did not reflect the content of the video.
"We disabled the entire topic-recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again. As we have said, while we have made improvements to our AI, we know it's not perfect and we have more progress to make."