AI is soaked in racism and misogyny

Sometimes, AI will be very upfront in its racism, encouraging the torture of Iranians and the surveillance of mosques, for example. It will crudely over-predict crime in US Black and Latino neighborhoods while under-predicting crime in white neighborhoods. These sorts of occurrences are not flaws in the AI design. They are features and benefits, designed by white programmers and sold to white police and white judges.
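The mechanism is simple enough to sketch in a few lines of code. Below is a toy simulation (every number and name in it is invented for illustration; this is not any vendor’s actual system) of the feedback loop at the heart of predictive policing: two neighborhoods with identical true crime rates, one of which starts with more records simply because it was patrolled more heavily in the past. Patrols follow the records, and new records can only come from patrols, so the skewed history reproduces itself indefinitely.

```python
# Toy sketch of a predictive-policing feedback loop. All figures are
# invented for illustration; this is not any real system.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05          # identical in both neighborhoods (assumed)
PATROLS_PER_DAY = 100
recorded = {"A": 60, "B": 40}   # history skewed by where police used to patrol

for day in range(365):
    snapshot = dict(recorded)   # allocate today's patrols on yesterday's data
    total = sum(snapshot.values())
    for hood in recorded:
        # "Predict" crime in proportion to past records; patrol accordingly.
        patrols = round(PATROLS_PER_DAY * snapshot[hood] / total)
        # Crime only enters the data where a patrol happens to be present.
        recorded[hood] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(patrols)
        )

print(recorded)  # "A" ends far ahead of "B", though behavior was identical
```

Nothing in the loop ever consults the true rate. Feed it a biased past and it manufactures a biased future, with the statistics to ‘prove’ it.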

That AI-controlled car is more likely to run over a darker-skinned person if it gets the chance, because darker-skinned people don’t exist in the white code. AI will eagerly crop people of color out of pictures, too, or change their faces into images of apes. When AI judged beauty competitions, white beauties won every time. When a range of AI tools was asked to produce a picture of a “normal woman”, all the images were thin and 98% had fair skin. “The biases are embedded deep in these systems, so it becomes ingrained and automatic,” Black artist Stephanie Dinkins stated.

Years and years after AI ‘confused’ images of Black people with gorillas, the problem had still not been fixed, because the ‘problem’ is not a problem at all. It is a design feature. It was working as designed by the white men, for the white men, or by white-man-surrogate designers working under white-man rules. “People think that all these choices are so data driven,” said Sasha Luccioni, a research scientist at the open-source AI start-up Hugging Face, “but it’s very few people making very subjective decisions.”

Data is political. What you collect. What you don’t collect. “Women who have a heart attack in the UK are 50 per cent more likely to be misdiagnosed than men,” author Caroline Criado Perez wrote. “They’re also more likely to die. And it’s basically because the vast majority of medical data we have collected historically and continue to collect today, including in cardiovascular research, has been in the male body. Male humans, male animals. Even male cells.” A report by Dr. Anne-Sophie Morand, an attorney-at-law, stated that “a lack of physiological indicators of heart attacks in women led to AI systems being 50% more likely to misdiagnose heart attacks in women compared to men.” Such behavior, of course, merely replicates what male doctors have done forever. Research has found that AI “could transform” such misdiagnosis for the better. That might happen for rich, white women. However, if AI starts making proper diagnoses that result in higher costs, it will be quickly sent back for retraining. AI must cut costs. The huge expense of running AI systems has to show a return, and the best way to cut costs is to target poor, marginalized, and helpless people, such as the old.
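The collection bias is mechanical, and a toy sketch shows it (the presentation rates, patient counts, and function names below are illustrative assumptions, not clinical data): a diagnostic rule fit to a male-dominated dataset keys on the majority presentation, and misses the women whose heart attacks present differently.

```python
# Toy sketch of collection bias. All rates and counts are invented
# for illustration; this is not clinical data or any real system.
import random

random.seed(0)

P_TYPICAL = {"M": 0.80, "F": 0.50}   # assumed symptom-presentation rates

def heart_attack_patient(sex):
    """A real heart-attack case: (sex, presents_with_typical_symptoms)."""
    return sex, random.random() < P_TYPICAL[sex]

# What you collect: 950 male cases, 50 female ones, echoing the
# male-dominated history of cardiovascular research.
train = [heart_attack_patient("M") for _ in range(950)]
train += [heart_attack_patient("F") for _ in range(50)]

# "Training": flag a heart attack only when the presentation matches
# what the majority of collected cases looked like.
majority_is_typical = sum(typ for _, typ in train) / len(train) > 0.5

def false_negative_rate(sex, n=10_000):
    """Share of real heart attacks the learned rule fails to flag."""
    cases = (heart_attack_patient(sex) for _ in range(n))
    missed = sum(not typ for _, typ in cases) if majority_is_typical else 0
    return missed / n

print(f"missed, men:   {false_negative_rate('M'):.0%}")   # around 20%
print(f"missed, women: {false_negative_rate('F'):.0%}")   # around 50%
```

The rule is not malfunctioning; it is faithful to the data it was given. What you collect decides who gets missed.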

In the USA, an AI health model with a 90% error rate was found to be overriding doctors’ judgments and forcing elderly people out of rehab and nursing homes, thus cutting costs. That’s not an error rate. That’s design. The World Health Organization has warned about AI ageism and how it could seriously impact healthcare for older people. An editorial in the prestigious journal Nature stated that “Debates on these issues are being starved of oxygen.” At the other end of the age scale, another AI system had an 83% error rate in diagnosing children. You don’t get 90% and 83% ‘error’ rates by accident. It’s design. It’s all part of the design.