AI is increasingly a feature of everyday life. But with models trained on often outdated data and a field still dominated by male researchers, AI is perpetuating sexist stereotypes even as its influence on society grows.
Well, most of humanity has been sexist since time out of mind. Even if AIs are trained on nothing but objective data, they can’t help but mirror our sexist society. They must be carefully programmed to show society as it could be, instead of how it is. At the same time, they have to keep the facts straight, like the number of fingers on a hand, for example.
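The "mirror our society" point can be sketched with a toy model. Below is a minimal bigram predictor trained on a deliberately skewed made-up corpus (the corpus and its skew are my invention for illustration, not from any real dataset): the model does nothing but count the data, yet its predictions reproduce the skew.

```python
from collections import Counter, defaultdict

# Tiny invented corpus with a deliberate occupational gender skew,
# standing in for real-world training data.
corpus = [
    "he is a doctor", "he is a doctor", "he is a nurse",
    "she is a nurse", "she is a nurse", "she is a doctor",
]

# A minimal "language model": count which word follows each context,
# then predict the most frequent continuation.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    follows[tuple(words[:-1])][words[-1]] += 1

def predict(context):
    # Return the most common word seen after this context.
    return follows[tuple(context.split())].most_common(1)[0][0]

print(predict("he is a"))   # "doctor" -- the skew in the data...
print(predict("she is a"))  # "nurse"  -- ...becomes the model's output.
```

Nothing here is "sexist code"; the bias lives entirely in the counts, which is exactly why curating the data (or correcting the outputs) is where the work has to happen.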
Even then, I’m sure we’ll encounter problems. AIs do not “think” in ways we understand.
“They must be carefully programmed to show society as it could be, instead of how it is.”
Must they? Which particular flavour of utopia should they choose?
I was only thinking of eliminating learned bigotry, but you raise a worthwhile question.
As you might expect, the answer is no, but the data it’s trained on probably is.
Relatedly, remember last year when Google told Gemini to make generated people racially diverse, so it started making black nazis, black popes, and Asian Vikings?
Not Google, but my favorite was “ethnaically anabigaus” (a garbled “ethnically ambiguous”) appearing randomly on generated pictures.
And this is why anyone who suggests that machine learning systems can be used to make social decisions of any kind should be laughed out of the room. Even when the system programs itself, its goals are set by people and it is trained on data generated and selected by people.
Aren’t these things trained quite heavily on stock photos? Because that would explain a lot of the gender bias.