Rabbit’s Reviews > A Citizen's Guide to Artificial Intelligence > Status Update
Rabbit
is on page 49 of 232
“Predictive policing employs artificial intelligence to identify likely targets for police intervention, to prevent crime, and to solve past crimes. Its
most famous incarnation is PredPol, which began in 2006 as a collaboration between the Los Angeles Police Department and … the University of California … is now used by more than sixty police departments around the US…”
— Mar 26, 2022 09:35PM
Rabbit’s Previous Updates
Rabbit
is on page 122 of 232
If you know anything about tech, this book is a total slog
— Mar 28, 2022 10:42AM
Rabbit
is on page 61 of 232
I’m determined to finish this book but it’s very entry level on tech ethics, and the technosolutionism that goes unexamined in chapter 3 (“Bias”) rubs me the wrong way. I may need to put this thing down for now and come back later.
— Mar 26, 2022 10:43PM
Rabbit
is on page 51 of 232
I know we can’t expect every book to be systematically critical, but I’m thrown by the complete lack of criticism of predictive policing as a body of work, not just of current predictive policing techniques.
— Mar 26, 2022 09:49PM
Rabbit
is on page 46 of 232
“… • it is pseudo-objective;
• it evades existing protections against discriminatory reasoning based on race, gender, and other protected categories;
• it obscures the often complex decisions made by developers about how to interpret and categorize facts about people's lives; and
• it fundamentally distorts the nature of commerce, politics, and everyday life.”
— Mar 26, 2022 08:30PM
Rabbit
is on page 46 of 232
“It’s surprising, then, that the most persistent objections to the use of AI in government, commerce, and daily life include allegations of unfairness and bias. Here are the main ones:
• it perpetuates or exacerbates existing inequalities;
• it discriminates against minorities;
• it hyper-scrutinizes the poor and disadvantaged;
• its risk assessments in domains like justice and policing are fundamentally unfair; …”
— Mar 26, 2022 08:29PM
Rabbit
is on page 33 of 232
Noted tools for problematic AI-based policing: PredPol, COMPAS
— Mar 26, 2022 05:30PM
Rabbit
is on page 28 of 232
“Anti-discrimination provisions, for example, prevent the use of a person’s ethnicity, sexuality, or religious beliefs from influencing the decision to hire them. A company that uses AI probably purchased the software from another company. How can we be sure the AI doesn’t take these prohibited (“protected”) characteristics into account when making a recommendation to hire a person or refuse a loan? …”
— Mar 26, 2022 05:29PM
Rabbit
is on page 24 of 232
Evergreen question: Is a programmer culpable for the black box of the AI model, and if so, to what extent?
— Mar 26, 2022 05:27PM
Rabbit
is on page 21 of 232
Very nice demonstrative example of the black box AI problem here: Clever Hans, the German horse that appeared to do arithmetic but was actually reading its handler's cues. Cited via Marvin Minsky, ed., Semantic Information Processing (Cambridge, MA: MIT Press, 1968).
— Mar 26, 2022 05:26PM

