Entities
Incident Stats
CSETv0 Taxonomy Classifications
Problem Nature
Specification
Physical System
Software only
Level of Autonomy
High
Nature of End User
Amateur
Public Sector Deployment
No
Data Inputs
images of people's faces
CSETv1 Taxonomy Classifications
Incident Number
49
Notes (special interest intangible harm)
Beauty.ai determined mostly white applicants to be most attractive among all contestants.
Special Interest Intangible Harm
yes
Date of Incident Year
2016
Date of Incident Month
08
Estimated Date
No
CSETv1_Annotator-1 Taxonomy Classifications
Incident Number
49
Notes (special interest intangible harm)
Beauty.ai determined mostly white applicants to be most attractive among all contestants.
Special Interest Intangible Harm
yes
Date of Incident Year
2016
Date of Incident Month
08
Estimated Date
No
CSETv1_Annotator-2 Taxonomy Classifications
Incident Number
49
Special Interest Intangible Harm
yes
Date of Incident Year
2016
Estimated Date
No
Multiple AI Interaction
no
Embedded
no
Incident Reports
Reports Timeline
Image: Flickr/Veronica Jauriqui
Beauty pageants have always been political. After all, what speaks more strongly to how we see each other than which physical traits we reward as beautiful, and which we code as ugly? It wasn't until 1983 tha…
The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners
The first international beauty contest judged by “machines” was supposed to use objecti…
Only a few winners were Asian and one had dark skin; most were white.
Just months after Microsoft's Tay artificial intelligence sent racist messages on Twitter, another AI seems to have followed suit.
More than 6,000 selfies of individuals w…
With more than 6,000 applicants from over 100 countries competing, the first international beauty contest judged entirely by artificial intelligence just came to an end. The results are a bit disheartening.
The team of judges, a five robot …
It’s not the first time artificial intelligence has been in the spotlight for apparent racism, but Beauty.AI’s recent competition results have caused controversy by clearly favouring light skin.
The competition, which ran online and was ope…
If you’re one who joins beauty pageants or merely watches them, what would you feel about a computer algorithm judging a person’s facial attributes? Perhaps we should ask those who actually volunteered to be contestants in a beauty contest …
An AI designed to do X will eventually fail to do X. Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric s…
It’s long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans’ emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way int…
In 2016, researchers from Boston University and Microsoft were working on artificial intelligence algorithms when they discovered racist and sexist tendencies in the technology underlying some of the most popular and critical services we us…
Similar Incidents
Gender Biases in Google Translate
TayBot