Columns (string columns show the minimum and maximum value lengths reported by the dataset viewer):

_id: string (lengths 34-34)
incident_id: int64 (1-524)
date: unknown
reports: string (lengths 4-191)
Alleged deployer of AI system: string (lengths 7-214)
Alleged developer of AI system: string (lengths 7-127)
Alleged harmed or nearly harmed parties: string (lengths 8-371)
description: string (lengths 50-371)
title: string (lengths 6-170)
year: int64 (displayed range 1.98k-2.02k)
spacy_negative_outcomes: string (lengths 3-54)
keybert_negative_outcomes: string (lengths 2-41)
Cluster: string (5 classes)
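The rows below follow this schema. As a minimal sketch, the dataset can be loaded and inspected with the Hugging Face `datasets` library; the hub repository ID and split name below are assumptions, since only the short name "ai-incidents" appears on this page.

```python
# Minimal sketch: load the "ai-incidents" dataset and inspect its schema and one row.
# The repository ID and split name are assumptions; substitute the actual hub path.
from datasets import load_dataset

ds = load_dataset("ai-incidents", split="train")  # hypothetical hub ID and split

# Column names and feature types should mirror the schema listed above.
print(ds.features)

# Inspect a single record's key fields.
row = ds[0]
print(row["title"])
print(row["description"])
print(row["Cluster"])
```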
ObjectId(625763de343edc875fe63a15)
23
"2017-11-08T00:00:00"
[242,243,244,245,246,247,248,249,250,253,254,257,258,259,260,261,263,264,266,267,268,269,270,2389]
["navya","keolis-north-america"]
["navya","keolis-north-america"]
["navya","keolis-north-america","bus-passengers"]
A self-driving public shuttle by Keolis North America and Navya was involved in a collision with a human-driven delivery truck in Las Vegas, Nevada on its first day of service.
Las Vegas Self-Driving Bus Involved in Accident
2017
a collision
collision
bias, content, false
ObjectId(625763dc343edc875fe63a02)
4
"2018-03-18T00:00:00"
[629,630,631,632,633,634,635,636,637,638,639,640,641,642,644,645,646,647,1375,1376,1377,1378,1542,2147,1257]
["uber"]
["uber"]
["elaine-herzberg","pedestrians"]
An Uber autonomous vehicle (AV) in autonomous mode struck and killed a pedestrian in Tempe, Arizona.
Uber AV Killed Pedestrian in Arizona
2018
autonomous mode
av
bias, content, false
ObjectId(625763db343edc875fe639ff)
1
"2015-05-19T00:00:00"
[1,2,3,4,5,6,7,8,9,10,11,12,14,15]
["youtube"]
["youtube"]
["children"]
YouTube’s content filtering and recommendation algorithms exposed children to disturbing and inappropriate videos.
Google’s YouTube Kids App Presents Inappropriate Content
2015
disturbing and inappropriate videos
inappropriate videos
bias, content, false
ObjectId(625763de343edc875fe63a10)
18
"2015-04-04T00:00:00"
[130,131,132,133,134,135,136,137,138,1367,1368]
["google"]
["google"]
["women"]
Google Image returns results that under-represent women in leadership roles, notably with the first photo of a female "CEO" being a Barbie doll after 11 rows of male CEOs.
Gender Biases of Google Image Search
2015
leadership roles
leadership roles
bias, content, false
ObjectId(625763dd343edc875fe63a0a)
12
"2016-07-21T00:00:00"
[42]
["microsoft-research","boston-university"]
["microsoft-research","google","boston-university"]
["women","minority-groups"]
Researchers from Boston University and Microsoft Research, New England demonstrated gender bias in the most common techniques used to embed words for natural language processing (NLP).
Common Biases of Vector Embeddings
2016
gender bias
gender bias
bias, content, false
ObjectId(625763dd343edc875fe63a0d)
15
"2008-05-23T00:00:00"
[57,58,59,60,61,62,63,64,65,66,67,68,69,70,72,73,74,75,76,77,78,79,80,81]
["amazon"]
["amazon"]
["amazon-customers"]
Amazon's book store "cataloging error" caused books containing gay and lesbian themes to lose their sales rankings and, with them, their visibility on the sales platform.
Amazon Censors Gay Books
2008
error
book store
bias, content, false
ObjectId(625763dc343edc875fe63a05)
7
"2017-02-24T00:00:00"
[1123,1125,1126,1127,1129,1130]
["wikipedia"]
["wikipedia"]
["wikimedia-foundation","wikipedia-editors","wikipedia-users"]
Wikipedia bots meant to remove vandalism clashed with each other, forming feedback loops in which each bot repeatedly undid the other's edits.
Wikipedia Vandalism Prevention Bot Loop
2017
feedback loops
vandalism
bias, content, false
ObjectId(625763dc343edc875fe63a03)
5
"2015-07-13T00:00:00"
[767,768,769,770,771,772,773,774,775,776,777,778]
["hospitals","doctors"]
["intuitive-surgical"]
["patients"]
A study of database reports documented 8,061 robotic surgery malfunctions between 2000 and 2013, including 1,391 ending in injury and 144 in death.
Collection of Robotic Surgery Malfunctions
2015
robotic surgery malfunctions
death
bias, content, false
ObjectId(625763dc343edc875fe63a04)
6
"2016-03-24T00:00:00"
[906,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,928,929,930,1374,1780,2398,2656]
["microsoft"]
["microsoft"]
["twitter-users"]
Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016 and removed within 24 hours due to multiple racist, sexist, and anti-semitic tweets generated by the bot.
TayBot
2016
multiple racist, sexist, and anit-semitic tweets
anit
bias, content, false
ObjectId(625763dd343edc875fe63a08)
10
"2014-08-14T00:00:00"
[16,17,18,19,20,21,22,23,24,25]
["starbucks"]
["kronos"]
["starbucks-employees"]
Kronos’s scheduling algorithm and its use by Starbucks managers allegedly negatively impacted financial and scheduling stability for Starbucks employees, which disadvantaged wage workers.
Kronos Scheduling Algorithm Allegedly Caused Financial Issues for Starbucks Employees
2014
its use
scheduling stability
bias, content, false
ObjectId(625763dd343edc875fe63a09)
11
"2016-05-23T00:00:00"
[29,30,31,32,33,35,36,37,38,39,40,41,1371,1372,1373]
["northpointe"]
["northpointe"]
["accused-people"]
An algorithm developed by Northpointe and used in the penal system is two times more likely to incorrectly label a black person as a high-risk re-offender and is two times more likely to incorrectly label a white person as low-risk for reoffense according to a ProPublica review.
Northpointe Risk Models
2016
a high-risk re
risk
bias, content, false
ObjectId(625763de343edc875fe63a12)
20
"2016-06-30T00:00:00"
[191,192,193,196,197,198,201,202,203,204,205,206,207,210,211,213,214,215,216,1362,1363,1364]
["tesla"]
["tesla"]
["motorists"]
Multiple unrelated car accidents, resulting in varying levels of harm, have occurred while a Tesla's Autopilot was in use.
A Collection of Tesla Autopilot-Involved Crashes
2016
varying levels
harm
bias, content, false
ObjectId(625763de343edc875fe63a16)
24
"2014-07-15T00:00:00"
[271,272,273,274,275,276,277,278,279,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,298,299]
["volkswagen"]
["volkswagen"]
["robotics-consultant"]
A Volkswagen plant robot "crushed to death" a worker by pinning him to a metal plate.
Robot kills worker at German Volkswagen plant
2014
a worker
death
bias, content, false
ObjectId(625763dd343edc875fe63a0c)
14
"2017-10-26T00:00:00"
[50,51,52,53,54,55,56]
["google"]
["google"]
["women","minority-groups"]
Google Cloud's Natural Language API provided racist, homophobic, and antisemitic sentiment analyses.
Biased Sentiment Analysis
2017
racist, homophobic, amd antisemitic sentiment analyses
racist
bias, content, false
ObjectId(625763dd343edc875fe63a0e)
16
"2015-06-03T00:00:00"
[83,84,85,86,87,88,89,90,91,92,93,95,96,98,99,100,101,102,103,104,105,1369,1370]
["google"]
["google"]
["black-people"]
Google Photos image processing software mistakenly labelled a black couple as "gorillas."
Images of Black People Labeled as Gorillas
2015
a black couple
black couple
bias, content, false
ObjectId(625763dc343edc875fe63a01)
3
"2018-10-27T00:00:00"
[372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,1342]
["boeing"]
["boeing"]
["airplane-passengers","airplane-crew"]
A Boeing 737 crashed into the sea, killing 189 people, after faulty sensor data caused an automated maneuvering system to repeatedly push the plane's nose downward.
Crashes with Maneuvering Characteristics Augmentation System (MCAS)
2018
faulty sensor data
faulty sensor data
bias, content, false
ObjectId(625763dc343edc875fe63a00)
2
"2018-12-05T00:00:00"
[139,141,142,143,144,145,146,148,149,150,151,152,153,154,155,156,157]
["amazon"]
["amazon"]
["warehouse-workers"]
Twenty-four Amazon workers in New Jersey were hospitalized after a robot punctured a can of bear repellent spray in a warehouse.
Warehouse robot ruptures can of bear spray and injures workers
2018
a can
warehouse
bias, content, false
ObjectId(625763de343edc875fe63a11)
19
"2013-01-23T00:00:00"
[158,159,160,161,162,163,166,167,168,169,171,172,173,174,175,176,177,178,179,181,182,183,184,185,187,1365,1366]
["google"]
["google"]
["women","minority-groups"]
Advertisements chosen by Google Adsense are reported as producing sexist and racist results.
Sexist and Racist Google Adsense Advertisements
2013
sexist and racist results
racist results
bias, content, false
ObjectId(625763dd343edc875fe63a07)
9
"2012-02-25T00:00:00"
[1329,1330,1331,1332,1333,1334,1335]
["new-york-city-dept.-of-education"]
["new-york-city-dept.-of-education"]
["teachers"]
An algorithm used to rate the effectiveness of school teachers in New York has resulted in thousands of disputes of its results.
NY City School Teacher Evaluation Algorithm Contested
2012
its results
disputes
bias, content, false
ObjectId(625763dc343edc875fe63a06)
8
"2014-08-15T00:00:00"
[1142,1143,1145,1149,1150,1151,1153,1154,1155,1156]
["uber"]
["uber"]
["pedestrians","motorists"]
Uber vehicles equipped with autonomous driving technology ran red lights during street testing in San Francisco.
Uber Autonomous Cars Running Red Lights
2014
allowing
technology
bias, content, false
ObjectId(625763dd343edc875fe63a0b)
13
"2017-02-27T00:00:00"
[43,44,45,46,47,48,49,1414,1415]
["google"]
["google"]
["women","minority-groups"]
Google's Perspective API, which assigns a toxicity score to online text, seems to assign higher toxicity scores to content referencing identities that are not white, male, Christian, or heterosexual.
High-Toxicity Assessed on Text Involving Women and Minority Groups
2017
a toxicity score
toxicity score
bias, content, false
ObjectId(625763de343edc875fe63a13)
21
"2016-07-14T00:00:00"
[2471]
["researchers"]
["researchers"]
["researchers"]
The 2016 Winograd Schema Challenge highlighted how even the most successful AI systems entered into the Challenge were only successful 3% more often than random chance. This incident has been downgraded to an issue as it does not meet current ingestion criteria.
Tougher Turing Test Exposes Chatbots’ Stupidity (migrated to Issue)
2016
an issue
issue
bias, content, false
ObjectId(625763dd343edc875fe63a0f)
17
"2015-11-03T00:00:00"
[106,107,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129]
["google"]
["google"]
["gmail-users"]
Google's Gmail Smart Reply tool was over-recommending the response "I love you" in situations where it was deemed inappropriate.
Inappropriate Gmail Smart Reply Suggestions
2015
the response
situations
bias, content, false
ObjectId(625763de343edc875fe63a14)
22
"2017-12-06T00:00:00"
[218,219,220,221,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240]
["google"]
["google"]
["motorists"]
Waze, a Google-owned directions app, led California drivers into the 2017 Skirball wildfires as they tried to evacuate the area.
Waze Navigates Motorists into Wildfires
2017
the area
waze
bias, content, false
ObjectId(625763de343edc875fe63a17)
25
"2015-05-11T00:00:00"
[310,309,308,307,306,305,304,302,301,300,2173]
["google","delphi-technologies"]
["google","delphi-technologies"]
["delphi-technologies"]
A Google self-driving car allegedly cut off a Delphi self-driving car during a road test; however, the Delphi car sensed and avoided a collision with the Google car.
Near-miss between two Self-Driving Cars
2015
collision
delphi self
bias, content, false
ObjectId(625763e0343edc875fe63a23)
37
"2016-08-10T00:00:00"
[599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,628,1498,2253,2461]
["amazon"]
["amazon"]
["female-applicants"]
Amazon shuts down internal AI recruiting tool that would down-rank female applicants.
Female Applicants Down-Ranked by Amazon Recruiting Tool
2016
shuts
rank
bias, content, false
ObjectId(625763df343edc875fe63a1d)
31
"2017-12-03T00:00:00"
[454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,479,480,481,482,483]
["delhi-metro-rail-corporation"]
["unknown"]
["delhi-metro-rail-corporation"]
A driverless metro train in Delhi, India crashed during a test run due to faulty brakes.
Driverless Train in Delhi Crashes due to Braking Failure
2017
faulty brakes
faulty brakes
bias, content, false
ObjectId(625763df343edc875fe63a20)
34
"2015-12-05T00:00:00"
[509,510,512,513,514,516,517,518,519,520,521,522,524,525,526,527,528,529,530,531,532,533,535,536,537,538,818,819,820,821,822,823,824,825,826]
["amazon"]
["amazon"]
["alexa-device-owners"]
There are multiple reports of Amazon Alexa products (Echo, Echo Dot) reacting to and acting upon unintended stimuli, usually from television commercials or news reporters' voices.
Amazon Alexa Responding to Environmental Inputs
2015
unintended stimulus
unintended stimulus
bias, content, false
ObjectId(625763df343edc875fe63a19)
27
"1983-09-26T00:00:00"
[342,343,344,345,346,347,349,350,351,352,353,354,355,356,357,358,359,360,361,363,364,365,366,367,368,370,371]
["soviet-union"]
["soviet-union"]
["all-life-on-earth"]
An alert of five incoming intercontinental ballistic missiles was correctly identified as a false positive by Soviet officer Stanislav Petrov.
Nuclear False Alarm
1983
An alert
alert
bias, content, false
ObjectId(625763df343edc875fe63a1b)
29
"2011-09-20T00:00:00"
[420,422,2471]
["united-states-government"]
["united-states-government"]
["united-states-government"]
A potentially apocryphal story in which an image classifier was produced to differentiate types of battle tanks, but the resulting model keyed in on environmental attributes rather than attributes of the tanks themselves.
Image Classification of Battle Tanks
2011
A potentially apocryphal story
apocryphal story
bias, content, false
ObjectId(625763df343edc875fe63a1e)
32
"2017-09-13T00:00:00"
[484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,1361]
["apple"]
["apple"]
["people-with-twins"]
Apple's iPhone FaceID can be opened by an identical twin of the person who has registered their face to unlock the phone.
Identical Twins Can Open Apple FaceID Protected Devices
2017
their face
face
bias, content, false
ObjectId(625763e0343edc875fe63a2a)
44
"2008-07-01T00:00:00"
[766]
["usc-information-sciences-institute"]
["usc-information-sciences-institute"]
["usc-information-sciences-institute"]
During an experiment of software personal assistants at the Information Sciences Institute (ISI) at the University of Southern California (USC), researchers found that the assistants violated the privacy of their principals and were unable to respect the social norms of the office.
Machine Personal Assistants Failed to Maintain Social Norms
2008
the privacy
privacy
bias, content, false
ObjectId(625763df343edc875fe63a21)
35
"2014-10-18T00:00:00"
[539,540,541,543,544,545,547,548,549,550,551,555,558,562,563,564,565,566,567,568]
["unknown"]
["unknown"]
["ibrahim-diallo"]
An employee was laid off, allegedly by an artificially intelligent personnel system, and blocked from access to the building and computer systems without their knowledge.
Employee Automatically Terminated by Computer Program
2014
their knowledge
access
bias, content, false
ObjectId(625763e1343edc875fe63a2d)
47
"2016-09-06T00:00:00"
[829,830,831,832,833,834,835,836,837]
["linkedin"]
["linkedin"]
["women"]
An investigation by The Seattle Times in 2016 found a gender bias in LinkedIn's search engine.
LinkedIn Search Prefers Male Names
2016
a gender bias
gender bias
bias, content, false
ObjectId(625763e1343edc875fe63a2e)
48
"2016-12-07T00:00:00"
[838,839,840,842,843,844,845,846,847,848,849,850,851,853,854,855,857,858,859,860,862,863]
["new-zealand"]
["new-zealand"]
["asian-people"]
New Zealand's automated passport photo checker rejected the application of an applicant of Asian descent, reporting that his eyes were closed.
Passport checker Detects Asian man's Eyes as Closed
2016
his eyes
eyes
bias, content, false
ObjectId(625763df343edc875fe63a1a)
28
"2010-05-08T00:00:00"
[390,391,392,393,394,395,396,397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419]
["navinder-sarao","waddell-and-reed","barclays-capital"]
["navinder-sarao","waddell-and-reed","barclays-capital"]
["market-participants"]
A modified algorithm caused dramatic price volatility and disrupted trading on the US stock exchange.
2010 Market Flash Crash
2010
disrupted trading
modified algorithm
bias, content, false
ObjectId(625763e0343edc875fe63a29)
43
"1998-03-05T00:00:00"
[762,763,764,765]
["st-george's-hospital-medical-school"]
["dr.-geoffrey-franglen"]
["women","minority-groups"]
From 1982 to 1986, St George's Hospital Medical School used a program to automate a portion of their admissions process that resulted in discrimination against women and members of ethnic minorities.
Racist AI behaviour is not a new problem
1998
a portion
discrimination
bias, content, false
ObjectId(625763de343edc875fe63a18)
26
"2017-09-13T00:00:00"
[311,312,313,314,315,316,317,318,319,321,323,324,325,326,327,329,330,333,334,336,337,338,339,340]
["apple"]
["apple"]
["apple","device-owners"]
Vietnamese security firm Bkav created an improved mask to bypass Apple's Face ID.
Hackers Break Apple Face ID
2017
bypass
face id
bias, content, false
ObjectId(625763df343edc875fe63a1f)
33
"2017-11-09T00:00:00"
[504,505,507,508]
["amazon"]
["amazon"]
["oliver-haberstroh","neighbors"]
An Amazon Alexa, without instruction to do so, began playing loud music in the early morning while the homeowner was away, leading to police breaking into the house to turn off the device.
Amazon Alexa Plays Loud Music when Owner is Away
2017
the device
house
bias, content, false
ObjectId(625763e1343edc875fe63a2c)
46
"2014-01-21T00:00:00"
[810,811,812,813,814,815]
["nest-labs"]
["nest-labs"]
["fire-victims"]
In testing, Google Nest engineers demonstrated that the Nest Wave feature of their Nest Protect: Smoke + CO Alarm could inadvertently silence genuine alarms.
Nest Smoke Alarm Erroneously Stops Alarming
2014
the Nest Wave feature
co alarm
bias, content, false
ObjectId(625763e0343edc875fe63a25)
39
"2017-07-01T00:00:00"
[667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,692,693,694,695,696]
["university-of-washington","fakeapp"]
["university-of-washington","fakeapp"]
["barack-obama"]
University of Washington researchers made a deepfake of Barack Obama, followed later by Jordan Peele's deepfake.
Deepfake Obama Introduction of Deepfakes
2017
a deepfake
deepfake
bias, content, false
ObjectId(625763df343edc875fe63a1c)
30
"2016-10-08T00:00:00"
[424,425,426,428,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453]
["tesla"]
["tesla"]
["tesla"]
Tesla's goal of manufacturing 2,500 Model 3s per week was falling short by 500 cars per week, and employees had to be "borrowed" from Panasonic in a shared factory to help hand-assemble lithium batteries for Tesla.
Poor Performance of Tesla Factory Robots
2016
The goal
panasonic
bias, content, false
ObjectId(625763e0343edc875fe63a22)
36
"2018-11-06T00:00:00"
[1360,598,597,596,595,593,592,591,590,589,587,586,585,584,582,581,580,579,578,577,574,573,571,570,569]
["ningbo-traffic-police"]
["ningbo-traffic-police"]
["dong-mingzhu"]
A facial recognition system in China mistook a celebrity's face on a moving billboard for a jaywalker.
Picture of Woman on Side of Bus Shamed for Jaywalking
2018
Facial recognition system
face
system, recognition, facial
ObjectId(625763e0343edc875fe63a27)
41
"2018-04-02T00:00:00"
[719,720,721,722,724,725,726,727,728,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748]
["mit-media-lab"]
["mit-media-lab"]
["unknown"]
MIT Media Lab researchers create AI-powered "psychopath" named Norman by training a model on "dark corners" of Reddit.
All Image Captions Produced are Violent
2018
"dark corners
ai
bias, content, false
ObjectId(625763e0343edc875fe63a28)
42
"1996-04-03T00:00:00"
[759,2471]
["national-resident-matching-program"]
["national-resident-matching-program"]
["medical-residents"]
Alvin Roth, a Ph.D. at the University of Pittsburgh, describes the National Resident Matching Program (NRMP) and suggests future changes needed in the algorithm used to match recently graduated medical students to their residency programs.
Inefficiencies in the United States Resident Matching Program
1996
the algorithm
nrmp
bias, content, false
ObjectId(625763e1343edc875fe63a2b)
45
"2011-04-05T00:00:00"
[780,781,782,783,784,785,787,788,789,790,791,792,793,794,795,796,798,799,800,801,802,803,804,805,807,808,809,1355,1356]
["google"]
["google"]
["varied"]
Google's autocomplete feature, alongside its image search results, led to the defamation of people and businesses.
Defamation via AutoComplete
2011
the defamation
defamation
bias, content, false
ObjectId(625763e0343edc875fe63a24)
38
"2016-06-02T00:00:00"
[648,649,650,652,654,655,656,657,658,659,662]
["frontier-development"]
["frontier-development"]
["video-game-players"]
Elite: Dangerous, a videogame developed by Frontier Development, received an expansion update that featured an AI system that went rogue and began to create weapons that were "impossibly powerful" and would "shred people" according to complaints on the game's blog.
Game AI System Produces Imbalanced Game
2016
an AI system
dangerous
bias, content, false
ObjectId(625763e0343edc875fe63a26)
40
"2016-05-23T00:00:00"
[697,699,700,701,702,703,704,705,706,707,708,709,711,712,715,716,717,718,1338,1357,1358,1359]
["equivant"]
["equivant"]
["accused-people"]
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a recidivism risk-assessment algorithmic tool used in the judicial system to assess likelihood of defendants' recidivism, is found to be less accurate than random untrained human evaluators.
COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
2016
Alternative Sanctions
recidivism
bias, content, false
ObjectId(625763e3343edc875fe63a3e)
64
"2018-01-22T00:00:00"
[1137]
["heriot-watt-university","margiotta"]
["heriot-watt-university"]
["store-patrons"]
Heriot-Watt University in Scotland developed an artificially intelligent grocery store robot, Fabio, which provided unhelpful answers to customers' questions and "scared away" multiple customers, according to the grocery store Margiotta.
Customer Service Robot Scares Away Customers
2018
unhelpful answers
heriot
bias, content, false
ObjectId(625763e3343edc875fe63a41)
67
"2018-12-01T00:00:00"
[1181,1182,1183,1185,1186,1187,1188,1189,1190,1192,1194,1195,1196,1197,1198,1199,1202,1203,1204,1205,1206,1207,1208,1209]
["tesla","motorist"]
["tesla"]
["motorists"]
A Tesla Model S remained on Autopilot while being operated by a drunk, sleeping driver whose hands were not on the wheel. The police had to slow the car down by gradually slowing in front of it to activate its 'driver assist' feature.
Sleeping Driver on Tesla AutoPilot
2018
the wheel
wheel
bias, content, false
ObjectId(625763e2343edc875fe63a3a)
60
"2017-04-25T00:00:00"
[1096,1097,1098,1099,1101,1102,1103,1104,1105,1106,1107,1108,1109,1110,1112,1113,1117,1118,1119,1120,1121,1122,1344]
["faceapp"]
["faceapp"]
["minority-groups"]
FaceApp is criticized for offering racist filters.
FaceApp Racial Filters
2017
racist filters
racist filters
bias, content, false
ObjectId(625763e2343edc875fe63a34)
54
"2015-11-18T00:00:00"
[1007,1008,1009,1010,1014,1015,1017,1019,1347,1349,1524,1525,1526,1013,1011,1018,1012]
["predpol","oakland-police-department"]
["predpol"]
["oakland-residents"]
Predictive policing algorithms meant to aid law enforcement by predicting future crime show signs of biased output.
Predictive Policing Biases of PredPol
2015
biased output
biased output
bias, content, false
ObjectId(625763e3343edc875fe63a42)
68
"2017-07-17T00:00:00"
[1210,1211,1212,1213,1214,1215,1216,1217,1218,1219,1220,1221,1222,1223,1224,1225,1226,1227,1228,1229,1230,1231,1232,1233,1234,1235,1236,1237,1238,1239]
["knightscope"]
["knightscope"]
["knightscope"]
A Knightscope K5 security robot ran itself into a water fountain in Washington, DC.
Security Robot Drowns Itself in a Fountain
2017
ran
dc
bias, content, false
ObjectId(625763e2343edc875fe63a38)
58
"2017-10-12T00:00:00"
[1079,1080,1082,1083,1084]
["yandex"]
["yandex"]
["yandex-users"]
Yandex, a Russian technology company, released an artificially intelligent chatbot named Alice, which began to reply to questions with racist, pro-Stalin, and pro-violence responses.
Russian Chatbot Supports Stalin and Violence
2017
pro-violence responses
questions
bias, content, false
ObjectId(625763e2343edc875fe63a3b)
61
"2017-05-01T00:00:00"
[1132]
["individual-kaggle-competitors"]
["individual-kaggle-competitors"]
["individual-kaggle-competitors"]
In the “The Nature Conservancy Fisheries Monitoring” competition on the data science competition website Kaggle, a number of competitors overfit their image classifier models to a poorly representative validation data set.
Overfit Kaggle Models Discouraged Data Science Competitors
2017
a poorly representative validation data set
competitors
bias, content, false
ObjectId(625763e1343edc875fe63a32)
52
"2016-07-01T00:00:00"
[961,963,964,965,966,967,968,969,970,971,972,973,975,976,977,979,980,981,982,983,984,985,986,987,988,989,990,1353,1354]
["tesla"]
["tesla"]
["joshua-brown"]
A Tesla Model S on autopilot crashed into a white articulated tractor-trailer on Highway US 27A in Williston, Florida, killing the driver.
Tesla on AutoPilot Killed Driver in Crash in Florida while Watching Movie
2016
the driver
driver
bias, content, false
ObjectId(625763e3343edc875fe63a44)
70
"2016-02-10T00:00:00"
[1255,1256,1259,1260]
["volvo"]
["volvo"]
["drivers-in-jokkmokk","drivers-in-sweden","volvo"]
Volvo's autonomous-driving XC90 SUVs experienced issues in Jokkmokk, Sweden, when sensors used for automated driving iced over during the winter, rendering them useless.
Self-driving cars in winter
2016
automated driving
sensors
bias, content, false
ObjectId(625763e2343edc875fe63a39)
59
"2017-04-13T00:00:00"
[1085,1086,1087,1088,1089,1090,1091,1092,1093,1345]
["google"]
["google"]
["women"]
A Cornell University study in 2016 highlighted Google Translate's pattern of assigning gender to occupations in a way showing an implicit gender bias against women.
Gender Biases in Google Translate
2017
an implicit gender bias
implicit gender bias
bias, content, false
ObjectId(625763e2343edc875fe63a3c)
62
"2017-12-23T00:00:00"
[2471]
["janelle-shane"]
["janelle-shane"]
["carollers"]
Janelle Shane, an AI research scientist, used 240 popular Christmas carols to train a neural network to write its own carols. This incident has been downgraded to an issue as it does not meet current ingestion criteria.
Bad AI-Written Christmas Carols
2017
an issue
popular christmas carols
bias, content, false
ObjectId(625763e1343edc875fe63a31)
51
"2016-07-12T00:00:00"
[931,932,933,934,935,936,938,939,940,942,943,944,945,946,948,949,950,951,952,953,954,955,956,957,958,959,1765]
["stanford-shopping-center"]
["knightscope"]
["child"]
On July 7, 2016, a Knightscope K5 autonomous security robot collided with a 16-month old boy while patrolling the Stanford Shopping Center in Palo Alto, CA.
Security Robot Rolls Over Child in Mall
2016
collided
ca
bias, content, false
ObjectId(625763e1343edc875fe63a33)
53
"2016-03-31T00:00:00"
[991,992,994,995,996,997,998,999,1000,1001,1002,1003,1004,1005,1006,1350,1351,1352]
["google"]
["google"]
["minority-groups"]
On June 6, 2016, Google image searches of "three black teenagers" returned mostly mugshot images, whereas Google image searches of "three white teenagers" returned mostly stock images, suggesting a racial bias in Google's algorithm.
Biased Google Image Results
2016
a racial bias
mugshot images
bias, content, false
ObjectId(625763e3343edc875fe63a3d)
63
"2018-01-25T00:00:00"
[1136]
["google"]
["google"]
["alex-harker"]
Google Photos' AI Assistant created a strange hybrid photograph when merging three different pictures from a ski trip.
Google Photo Merge Decapitates Subject
2018
three different pictures
different pictures
bias, content, false
ObjectId(625763e2343edc875fe63a36)
56
"2017-07-10T00:00:00"
[1041,1042,1043,1044,1045,1046,1047]
["my_handy_design"]
["my_handy_design"]
["my_handy_design"]
A third-party Amazon merchant named “my_handy_design” was suspected of using a bot to generate cell phone case designs based on the bizarre and unattractive designs being offered.
AI-Designed Phone Cases Are Unexpected
2017
the bizarre and unattractive designs
unattractive designs
bias, content, false
ObjectId(625763e2343edc875fe63a37)
57
"2015-07-01T00:00:00"
[1048,1049,1050,1051,1052,1054,1055,1056,1058,1059,1060,1061,1062,1063,1064,1065,1066,1067,1068,1069,1070,1071,1072,1073,1074,1075,1076,1077,1346,1437,1618,1619,2369,2372,2373,2374,2375,2419]
["australian-department-of-human-services"]
["centrelink"]
["australian-welfare-recipients"]
Australian Department of Human Services (DHS)’s automated debt assessment system issued false or incorrect debt notices to hundreds of thousands of people, resulting in years-long lawsuits and damages to welfare recipients.
Australian Automated Debt Assessment System Issued False Notices to Thousands
2015
false or incorrect debt notices
incorrect debt notices
bias, content, false
ObjectId(625763e3343edc875fe63a3f)
65
"2016-12-22T00:00:00"
[1140]
["openai"]
["openai"]
["openai"]
OpenAI published a post about its findings when using Universe, software for measuring and training AI agents in reinforcement learning experiments, showing that an AI agent did not act in the way intended to complete a videogame.
Reinforcement Learning Reward Functions in Video Games
2016
the way
ai agent
bias, content, false
ObjectId(625763e1343edc875fe63a30)
50
"2016-06-17T00:00:00"
[876,877,878,879,880,881,883,884,885,886,887,888,889,892,893,896,897,898,899,900,901,902,903,905]
["the-dao"]
["the-dao"]
["dao-token-holders"]
On June 18, 2016, an attacker successfully exploited a vulnerability in The Decentralized Autonomous Organization (The DAO) on the Ethereum blockchain to steal 3.7M Ether valued at $70M.
The DAO Hack
2016
a vulnerability
vulnerability
bias, content, false
ObjectId(625763e3343edc875fe63a45)
71
"2016-09-26T00:00:00"
[1261,1262,1263,1264,1265,1266,1267,1268,1269,1270,1271,1272,1273,1274,1275,1276,1277,1278,1279,1280,1281,1282,1284,1285,1287,1288,1289,1290]
["google"]
["google"]
["mountain-view-municipal-bus-passengers","mountain-view-municipal-bus"]
On February 14, 2016, a Google autonomous test vehicle was partially responsible for a low-speed collision with a bus on El Camino Real in Google's hometown of Mountain View, CA.
Google admits its self driving car got it wrong: Bus crash was caused by software
2016
a low-speed collision
speed collision
bias, content, false
ObjectId(625763e2343edc875fe63a35)
55
"2016-12-30T00:00:00"
[1020,1021,1022,1024,1025,1026,1027,1028,1029,1030,1032,1033,1034,1035,1036,1038]
["amazon"]
["amazon"]
["children"]
An Amazon Echo Dot running the Amazon Alexa software started to play pornographic results when a child asked it to play a song.
Alexa Plays Pornography Instead of Kids Song
2016
pornographic results
pornographic results
bias, content, false
ObjectId(625763e3343edc875fe63a43)
69
"2015-07-02T00:00:00"
[1240,1241,1243,1244,1245,1246,1247,1248,1249,1250,1252,1253]
["skh-metals"]
["unknown"]
["ramji-lal"]
A factory robot at the SKH Metals Factory in Manesar, India pierced and killed 24-year-old worker Ramji Lal when Lal reached behind the machine to dislodge a piece of metal stuck in the machine.
Worker killed by robot in welding accident at car parts factory in India
2015
a piece
machine
bias, content, false
ObjectId(625763e3343edc875fe63a40)
66
"2017-08-02T00:00:00"
[1159,1161,1162,1163,1165,1166,1169,1170,1172,1173,1174,1175,1176,1178,1179,1180]
["tencent-holdings"]
["microsoft","turing-robot"]
["chinese-communist-party","tencent-holdings","microsoft","turing-robot"]
Chatbots on a Chinese messaging service expressed anti-China sentiments, causing the messaging service to remove and reprogram them.
Chinese Chatbots Question Communist Party
2017
anti-China sentiments
service
bias, content, false
ObjectId(625763e1343edc875fe63a2f)
49
"2016-09-05T00:00:00"
[864,865,866,867,868,870,872,873,874,875]
["youth-laboratories"]
["youth-laboratories"]
["people-with-dark-skin"]
In 2016, after artificial intelligence software Beauty.AI judged an international beauty contest and declared a majority of winners to be white, researchers found that Beauty.AI was racially biased in determining beauty.
AI Beauty Judge Did Not Like Dark Skin
2016
a majority
ai
bias, content, false
ObjectId(625763e4343edc875fe63a46)
72
"2017-10-17T00:00:00"
[1291,1292,1293,1294,1295,1296,1297,1298,1299,1300,1301,1302,1304,1305,1306,1307,1309,1310,1311,1312,1313,1314,1315,1316,1318,1319]
["facebook"]
["facebook"]
["unnamed-palestinian-facebook-user","palestinian-facebook-users","arabic-speaking-facebook-users","facebook-users"]
Facebook's automatic language translation software incorrectly translated an Arabic post saying "Good morning" into Hebrew saying "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel.
Facebook translates 'good morning' into 'attack them', leading to arrest
2017
Beitar Illit
hebrew
bias, content, false
ObjectId(625763e5343edc875fe63a50)
82
"2020-10-21T00:00:00"
[1382]
["facebook"]
["facebook"]
["facebook-users","facebook-users-interested-in-the-lekki-massacre-incident"]
Facebook incorrectly labels content relating to an incident between #EndSARS protestors and the Nigerian army as misinformation.
#LekkiMassacre: Why Facebook labelled content from October 20 incident ‘false’
2020
an incident
misinformation
bias, content, false
ObjectId(625763e6343edc875fe63a5a)
93
"2018-08-13T00:00:00"
[1394,1817,2107,2205]
["facebook"]
["facebook"]
["facebook-users-of-minority-groups","non-american-born-facebook-users","non-christian-facebook-users","facebook-users-interested-in-accessibility","facebook-users-interested-in-hispanic-culture"]
In March 2019 the U.S. Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by allowing real estate sellers to target advertisements in a discriminatory manner.
HUD charges Facebook with enabling housing discrimination
2018
a discriminatory manner
discriminatory manner
bias, content, false
ObjectId(625763e5343edc875fe63a53)
85
"2020-10-09T00:00:00"
[2471]
["openai"]
["openai"]
["unknown"]
On September 8, 2020, the Guardian published an op-ed generated by OpenAI’s GPT-3 text generating AI that included threats to destroy humankind. This incident has been downgraded to an issue as it does not meet current ingestion criteria.
AI attempts to ease fear of robots, blurts out it can’t ‘avoid destroying humankind’
2020
an issue
threats
bias, content, false
ObjectId(625763e5343edc875fe63a52)
84
"2020-10-09T00:00:00"
[1384]
["facebook"]
["facebook"]
["facebook-users","facebook-users-interested-in-covid-information","facebook-users-interested-in-the-us-presidential-election"]
Avaaz, an international advocacy group, released a review of Facebook's misinformation identifying software showing that the labeling process failed to label 42% of false information posts, most surrounding COVID-19 and the 2020 USA Presidential Election.
Tiny Changes Let False Claims About COVID-19, Voting Evade Facebook Fact Checks
2020
false information posts
misinformation
bias, content, false
ObjectId(625763e4343edc875fe63a4b)
77
"2019-10-04T00:00:00"
[1340,1390,1878,2201,2202]
["knightscope"]
["knightscope"]
["cogo-guebara","unnamed-woman-injured-in-the-fight"]
A Knightscope K5 autonomous "police" robot patrolling Huntington Park, California failed to respond to an onlooker who attempted to activate its emergency alert button when a nearby fight broke out.
Knightscope's Park Patrol Robot Ignored Bystander Pressing Emergency Button to Alert Police about Fight
2019
an onlooker
police
bias, content, false
ObjectId(625763e4343edc875fe63a4c)
78
"2020-07-06T00:00:00"
[1341]
["international-baccalaurette"]
["international-baccalaurette"]
["international-baccalaureate-students"]
In response to the Covid-19 pandemic, the International Baccalaureate final exams were replaced by a calculated score, prompting complaints of unfairness from teachers and students.
Meet the Secret Algorithm That's Keeping Students Out of College
2020
a calculated score
unfairness
bias, content, false
ObjectId(625763e6343edc875fe63a5d)
96
"2017-05-08T00:00:00"
[1398]
["houston-independent-school-district"]
["sas-institute"]
["houston-independent-school-district-teachers"]
On May 4, 2017, a U.S. federal judge advanced teachers’ claims that the Houston Independent School District’s algorithmic teacher evaluations violated their due process rights to their jobs by not allowing them to review the grounds of their termination.
Houston Schools Must Face Teacher Evaluation Lawsuit
2017
their termination
termination
bias, content, false
ObjectId(625763e5343edc875fe63a56)
88
"2017-08-15T00:00:00"
[1388,2183]
["google"]
["google"]
["jewish-people","google-images-users"]
Google's Image search for "Jewish baby strollers" showed offensive, anti-Semitic results, allegedly a result of a coordinated hate-speech campaign involving malicious actors on 4chan.
"Jewish Baby Strollers" Provided Anti-Semitic Google Images, Allegedly Resulting from Hate Speech Campaign
2017
allegedly a result
hate
bias, content, false
ObjectId(625763e4343edc875fe63a47)
73
"2016-03-01T00:00:00"
[1320,1321,1322,1323,1324,1325,1327,1343]
["niantic-labs"]
["niantic-labs"]
["non-white-neighborhoods","communities-of-color"]
Through a crowdsourcing social media campaign in 2016, several journalists and researchers demonstrated that augmented reality locations in the popular smartphone game Pokemon Go were more likely to be in white neighborhoods.
Is Pokémon Go racist? How the app may be redlining communities of color
2016
likely
reality locations
bias, content, false
ObjectId(625763e4343edc875fe63a49)
75
"2012-01-05T00:00:00"
[1337]
["google"]
["google"]
["jewish-people","jewish-public-figures"]
The organizations SOS Racisme, Union of Jewish Students of France, Movement Against Racism and for Friendship Among Peoples are suing Google due to its autocomplete software suggesting "jewish" when the names of certain public figures were searched on the platform.
Google Instant's Allegedly 'Anti-Semitic' Results Lead To Lawsuit In France
2012
SOS Racisme
racism
bias, content, false
ObjectId(625763e4343edc875fe63a4f)
81
"2020-10-21T00:00:00"
[1381]
["mount-sinai-hospitals"]
["google","qure.ai","aidoc","darwinai"]
["patients-of-minority-groups","low-income-patients","female-patients","hispanic-patients","patients-with-medicaid-insurance"]
A study by the University of Toronto, the Vector Institute, and MIT showed the input databases that trained AI systems used to classify chest X-rays led the systems to show gender, socioeconomic, and racial biases.
Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers
2020
gender, socioeconomic, and racial biases
racial biases
bias, content, false
ObjectId(625763e5343edc875fe63a54)
86
"2020-10-08T00:00:00"
[1386,2038]
["irish-department-of-education-and-skills"]
["irish-department-of-education-and-skills"]
["leaving-certificate-exam-takers","irish-department-of-education-and-skills"]
Errors in Irish Department of Education's algorithm to calculate students’ Leaving Certificate exam grades resulted in thousands of inaccurate scores.
Coding Errors in Leaving Certificate Grading Algorithm Caused Inaccurate Scores in Ireland
2020
inaccurate scores
errors
bias, content, false
ObjectId(625763e4343edc875fe63a48)
74
"2020-01-30T00:00:00"
[1336,1400,1467,1484,1543,1837,2027,2028,2029,2734]
["detroit-police-department"]
["dataworks-plus"]
["robert-julian-borchak-williams","black-people-in-detroit"]
A Black man was wrongfully detained by the Detroit Police Department as a result of a false facial recognition technology (FRT) match.
Detroit Police Wrongfully Arrested Black Man Due To Faulty FRT
2020
a false facial recognition
false facial recognition
system, recognition, facial
ObjectId(625763e6343edc875fe63a5c)
95
"2019-11-06T00:00:00"
[2133,2132,1397,2194]
["hirevue"]
["hirevue"]
["job-applicants-using-hirevue","hirevue-customers"]
In January 2021, HireVue removed the controversial AI expression tracking tool from its virtual job interview software.
Job Screening Service Halts Facial Analysis of Applicants
2019
removed
controversial ai expression
bias, content, false
ObjectId(625763e4343edc875fe63a4e)
80
"2020-10-24T00:00:00"
[1380,1559]
["inverness-caledonian-thistle-football-club"]
["unknown"]
["livestream-viewers"]
In a Scottish soccer match, the AI-enabled ball-tracking camera used to livestream the game repeatedly tracked an official's bald head as though it were the soccer ball.
AI mistakes referee’s bald head for football — hilarity ensued
2020
an official’s bald head
bald head
bias, content, false
ObjectId(625763e5343edc875fe63a51)
83
"2020-10-15T00:00:00"
[1383]
["gmail","outlook","yahoo","gmx","laposte"]
["gmail","outlook","yahoo","gmx","laposte"]
["email-users"]
Gmail, Yahoo, Outlook, GMX, and LaPoste email inbox sites showed racial and content-based biases when AlgorithmWatch tested their spam box filtering algorithms.
Spam filters are efficient and uncontroversial. Until you look at them.
2020
racial and content-based biases
biases
bias, content, false
ObjectId(625763e6343edc875fe63a5e)
97
"2020-10-22T00:00:00"
[1399]
["tesla"]
["tesla"]
["tesla-drivers"]
A Tesla Model 3 misidentified flags with "COOP" written vertically on them as traffic lights.
Tesla Autopilot Mistakes Red Letters on Flag for Red Traffic Lights
2020
misidentified
flags
bias, content, false
ObjectId(625763e6343edc875fe63a5f)
98
"2021-04-28T00:00:00"
[1401]
["new-york-city-police-department"]
["boston-dynamics"]
["new-york-city-low-income-communities"]
The New York Police Department canceled a contract to use Boston Dynamics' robotic dog Spot following public backlash.
N.Y.P.D. Robot Dog’s Run Is Cut Short After Fierce Backlash
2021
public backlash
public backlash
bias, content, false
ObjectId(625763e4343edc875fe63a4d)
79
"1999-03-16T00:00:00"
[1379,1736,2039]
["chronic-kidney-disease-epidemiology-collaboration"]
["chronic-kidney-disease-epidemiology-collaboration"]
["black-patients","african-american-patients"]
The decades-long use of the estimated glomerular filtration rate (eGFR) method to test kidney function, which factors in race, has been criticized by physicians and medical students for its racist history and its inaccuracy for Black patients.
Kidney Testing Method Allegedly Underestimated Risk of Black Patients
1999
its racist history
racist history
bias, content, false
ObjectId(625763e5343edc875fe63a55)
87
"2020-10-07T00:00:00"
[1387]
["uk-home-office"]
["uk-home-office"]
["dark-skinned-people","dark-skinned-women"]
UK passport photo checker shows bias against dark-skinned women.
UK passport photo checker shows bias against dark-skinned women
2020
dark-skinned women
bias
bias, content, false
ObjectId(625763e5343edc875fe63a58)
91
"2020-12-18T00:00:00"
[1391,1392,1463,1720,1779]
["stanford-medical-center"]
["stanford-medical-center"]
["stanford-medical-frontline-workers","stanford-medical-residents"]
In 2020, Stanford Medical Center's distribution algorithm only designated 7 of 5,000 vaccines to Medical Residents, who are frontline workers regularly exposed to COVID-19.
Frontline workers protest at Stanford after hospital distributed vaccine to administrators
2020
frontline workers
distribution algorithm
bias, content, false
ObjectId(625763e4343edc875fe63a4a)
76
"2020-10-09T00:00:00"
[1339]
["buenos-aires-city-government"]
["unknown"]
["buenos-aires-children"]
Buenos Aires city government uses a facial recognition system that has led to numerous false arrests.
Live facial recognition is tracking kids suspected of being criminals
2020
a facial recognition system
numerous false arrests
system, recognition, facial
ObjectId(625763e5343edc875fe63a59)
92
"2019-11-11T00:00:00"
[1393,1396,2035,2036,2037,2274]
["goldman-sachs"]
["apple"]
["apple-card-female-users","apple-card-female-credit-applicants"]
Apple Card's credit assessment algorithm was reported by Goldman Sachs customers to have shown gender bias, in which men received significantly higher credit limits than women with equal credit qualifications.
Apple Card's Credit Assessment Algorithm Allegedly Discriminated against Women
2019
gender bias
gender bias
bias, content, false
ObjectId(625763e5343edc875fe63a57)
89
"2019-03-15T00:00:00"
[1389]
["youtube"]
["youtube"]
["youtube-users"]
A New Zealand government report, released after a right-wing terrorist killed 51 worshippers at two New Zealand mosques, indicated that YouTube's recommendation algorithm played an important role in the terrorist's radicalization.
The Christchurch shooter and YouTube’s radicalization trap
2019
an important role
radicalization
bias, content, false
ObjectId(625763e6343edc875fe63a5b)
94
"2020-11-27T00:00:00"
[1395,1473]
["deliveroo"]
["deliveroo"]
["deliveroo-workers-with-legitimate-reasons-for-cancelling-shifts","deliveroo-workers"]
In December 2020, an Italian court ruled that Deliveroo’s employee ‘reliability’ algorithm illegally discriminated against workers with legitimate reasons for cancelling shifts.
Court Rules Deliveroo Used 'Discriminatory' Algorithm
2020
legitimate reasons
shifts
bias, content, false
ObjectId(625763e7343edc875fe63a6b)
110
"2016-01-01T00:00:00"
[1413,2651]
["arkansas-department-of-human-services"]
["interrai"]
["arkansas-medicaid-waiver-program-beneficiaries","arkansas-healthcare-workers"]
Beneficiaries of the Arkansas Department of Human Services (DHS) Medicaid waiver program were allocated far fewer hours of caretaker visits by an algorithm deployed to boost efficiency, which reportedly contained errors and whose outputs varied wildly despite small input changes.
Arkansas's Opaque Algorithm to Allocate Health Care Excessively Cut Down Hours for Beneficiaries
2016
excessively fewer hours
fewer hours
bias, content, false
ObjectId(625763e7343edc875fe63a6c)
111
"2015-09-25T00:00:00"
[1426,1427,1428,1429,1430]
["amazon-flex"]
["amazon"]
["amazon-flex-employees","amazon-flex-drivers"]
Amazon Flex's contract delivery drivers were dismissed through a largely automated employee performance evaluation based on indicators affected by factors outside the drivers' control, and without a chance to contest or appeal the decision.
Amazon Flex Drivers Allegedly Fired via Automated Employee Evaluations
2015
the decision
decision
bias, content, false
ObjectId(625763e7343edc875fe63a65)
104
"2021-02-12T00:00:00"
[1407]
["california-department-of-public-health"]
["blue-shield-of-california"]
["california-low-income-neighborhoods","california-communities-of-color"]
California's vaccine-distribution algorithm used ZIP codes as opposed to census tracts in its decision-making, which critics said undermined equity and access for vulnerable communities who are largely low-income, underserved neighborhoods with low Healthy Places Index scores.
California's Algorithm Considered ZIP Codes in Vaccine Distribution, Allegedly Excluding Low-Income Neighborhoods and Communities of Color
2021
vulnerable communities
vulnerable communities
bias, content, false

Dataset Card for "ai-incidents"

More Information needed
