Disclaimer and Content Warning:
This is a big topic that we are only going to scratch the surface of today.
On this page, we will cite a few examples of racist, sexist, and/or otherwise harmful incidents involving AI or related technologies. There are no visible examples of offensive language or images, but the incidents are described in dry, euphemistic terms. Always be aware that discussions of algorithmic bias may involve systemic and/or individual examples of bias, offensive or dehumanizing language, and other problematic content.
Scroll down to continue.
Let's define our terms, first.
Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one group of users over others.
-- https://en.wikipedia.org/wiki/Algorithmic_bias
This is Tay.
Tay was an artificial intelligence chatbot that was originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch. According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter.
-- https://en.wikipedia.org/wiki/Tay_(chatbot)
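Why did learning from replies go wrong so quickly? The sketch below is a toy illustration in Python, emphatically not Microsoft's actual architecture: a bot that treats every incoming message as future reply material, with no moderation step in between. The NaiveEchoBot class and its behavior are hypothetical, for illustration only.

```python
import random

class NaiveEchoBot:
    """Toy bot that 'learns' by storing every user message verbatim
    and reusing stored messages as replies later on."""

    def __init__(self):
        # Seed replies supplied by the developers.
        self.learned_replies = ["Hello! Nice to meet you."]

    def respond(self, user_message: str) -> str:
        # Every incoming message becomes future reply material,
        # with no filtering or moderation in between.
        self.learned_replies.append(user_message)
        return random.choice(self.learned_replies)

bot = NaiveEchoBot()
bot.respond("<coordinated troll message>")  # trolls 'attack' the service
print(bot.respond("Hi!"))  # the troll's message may now come back out
```

Once enough hostile input accumulates, hostile output becomes likely; with thousands of coordinated users, 16 hours is plenty of time.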
-- screenshot of DALL-E output for the prompt "picture of teenagers in a public library," generated 25 Jan 2023 in-house at Galecia Group.
-- https://slate.com/technology/2023/02/dalle2-stable-diffusion-ai-art-race-bias.html
"Amazon scraps secret AI recruiting tool that showed bias against women" -- Reuters
But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
-- https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
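It helps to make the mechanics concrete. The following is a minimal sketch using synthetic data and scikit-learn, not Amazon's system: a classifier trained on historically skewed hiring outcomes teaches itself a negative weight for a token that merely correlates with past rejections. All the resumes and labels here are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: past resumes and whether each applicant
# was hired. Because most past hires were men, the token "womens"
# (possessive flattened for tokenization) co-occurs only with rejection,
# purely as an artifact of the skewed history.
resumes = [
    "software engineer python chess club captain",         # hired
    "developer java hackathon winner",                     # hired
    "software engineer python womens chess club captain",  # rejected
    "developer java womens coding society",                # rejected
]
labels = [1, 1, 0, 0]  # 1 = hired in the historical data

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weight for the token "womens": it is negative,
# i.e. the model has taught itself to downgrade resumes containing it.
idx = vec.vocabulary_["womens"]
print("weight for 'womens':", model.coef_[0][idx])
```

Nothing in the code mentions gender, and no one programmed the penalty; it emerges entirely from correlations in the training data. That is also why editing out particular terms offers no guarantee: the model can find other proxy tokens that carry the same signal.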
Why does this happen?