Machine-learning facial recognition is biased, say researchers. OK, let’s arrest a guy based on it, say Detroit police.
Terrific collection of articles on algorithmic bias in AI on getpocket’s blog: https://blog.getpocket.com/2020/06/the-bias-embedded-in-algorithms/ – curated by Safiya Umoja Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism.
This is my second post on the emerging tech of emotion-recognition AI. In my last post, I considered some of the consequences of algorithmic blind spots on likely applications of emotion-recognition tech. In this post, I’ll get into algorithmic bias.
Continue reading “Emotion-monitoring AI, part II”
Back in 2019, when in-person conventions were still a thing, I participated in a Can-Con panel about the future of emotion-monitoring technology and AI. The panel was terrific, with able moderation by Kim-Mei Kirtland and fascinating contributions from my fellow panelists. I’ve written up some of my thoughts from that panel to share here.
Because of my tech background, I always find it interesting to think about the potential effects of bugs in fictional and emerging technologies. This is the first of a series of posts on the emerging tech of emotion-recognition AI, focusing on the strange and dark places that bugs in this tech could take us.
Continue reading “Emergent tech: emotion-monitoring AI”