Steph | bookedinsaigon’s Reviews > The Alignment Problem: Machine Learning and Human Values > Status Update

Steph | bookedinsaigon is 37% done
Section 2 was quite interesting, but I don’t think I’d be able to get through it were it not for switching to audio.
9 hours, 48 min ago
The Alignment Problem: Machine Learning and Human Values


Steph | bookedinsaigon’s Previous Updates

Steph | bookedinsaigon is 21% done
Struggling with this.
Mar 30, 2026 11:17AM


Steph | bookedinsaigon is 10% done
“But, from the perspective of the New York Times editorial board, there was a problem: the state wasn’t using them *enough*. The [recidivism prediction] tools, even where their use was mandated, still were not always given appropriate consideration. The Times urged wider acceptance of risk-assessment tools in parole…”

It’s giving tech bro “if you only used AI, it’d be better for everyone!”
Mar 26, 2026 09:59PM


Steph | bookedinsaigon is 7% done
I will never understand people’s love for the steaming pile of human feces that is LLM-trained “AI.”
Mar 25, 2026 08:57AM

