This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Why it’s so hard to make welfare AI fair
There are plenty of stories about AI causing harm when deployed in sensitive situations, and in many of those cases, the systems were developed without much concern for what it means to be fair or how to implement fairness.
The city of Amsterdam, by contrast, spent a lot of time and money trying to create ethical AI. In fact, it followed every recommendation in the responsible AI playbook. Yet when the system was deployed in the real world, it still couldn't remove biases. So why did Amsterdam fail? And more importantly: Can this ever be done right?
Join our editor Amanda Silverman, investigative reporter Eileen Guo and Gabriel Geiger, an investigative reporter from Lighthouse Reports, for a subscriber-only Roundtables conversation at 1pm ET on Wednesday July 30 to explore if algorithms can ever be fair. Register here!
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 America’s grand data center ambitions aren’t being realized
A major partnership between SoftBank and OpenAI hasn't gotten off to a flying start. (WSJ $)
+ The setback hasn’t stopped OpenAI opening its first DC office. (Semafor)