Emergent tech: emotion-monitoring AI

Back in 2019, when in-person conventions were still a thing, I participated in a Can-Con panel about the future of emotion-monitoring technology and AI. The panel was terrific, with able moderation by Kim-Mei Kirtland and fascinating contributions from my fellow panelists. I’ve written up some of my thoughts from that panel to share here.

Because of my tech background, I always find it interesting to think about the potential effects of bugs in fictional and emerging technology.

This is the first in a series of posts about the emerging tech of emotion-recognition AI, focusing on the strange and dark places that bugs in this tech could take us.

I want to start with the “AI” piece. AI, as it’s being used here, doesn’t really refer to “intelligence” in the human sense. We’d call that “artificial general intelligence.” This use of AI refers to machine learning that’s been trained on some sort of dataset.
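To make that concrete, here’s a minimal sketch (purely illustrative, not any vendor’s actual pipeline) of what “AI” usually means in this context: a classifier fit to labelled examples, shown here with scikit-learn and made-up facial features.

```python
# A toy "emotion recognition" model: just supervised learning on a labelled dataset.
# The features and labels are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Pretend each row is a pair of facial measurements (brow position, mouth curve).
X_train = [
    [0.9, 0.8],  # raised brows, upturned mouth
    [0.8, 0.9],
    [0.1, 0.2],  # lowered brows, downturned mouth
    [0.2, 0.1],
]
y_train = ["happy", "happy", "angry", "angry"]

model = LogisticRegression().fit(X_train, y_train)

# The model only "knows" what its training data covered.
print(model.predict([[0.85, 0.9]]))  # close to the training data: fine
print(model.predict([[0.5, 0.5]]))   # an ambiguous face: it still has to pick a label
```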

One thing about algorithms derived from machine learning, which we see in facial recognition, in Facebook’s algorithms, and so on, is that they can fail in weird ways on cases that weren’t covered in their training. And they can have terrible difficulty with subtleties. So, for example, Facebook’s algorithms for determining what is and isn’t OK to post have a hard time figuring out the edge cases, things right on the edge of violating their guidelines, to the extent that they’ve hired an army of 1500 content reviewers (through Accenture) to go through the edgy stuff, all day, every day. It’s such a terrible job that they have to provide their content reviewers with free, on-site trauma counselling.

Since the pandemic started, Facebook has been warning that their content moderation will make more mistakes. These edge cases are the reason. Facebook closed their content moderation centres (although it’s a mystery to me why the moderators couldn’t work remotely) and is just now starting to reopen some of them.

So even with an exhaustively trained algorithm, there are big gaps that need people to do the work of patching them. We can extend that to affective computing, to an algorithm that’s trained to recognize emotions rather than bad content. There are going to be big gaps on the subtle stuff.
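One common patch for those gaps, sketched below with made-up thresholds and names, is to only trust the model when it’s confident and route everything else to a human reviewer.

```python
# Sketch: fall back to a human when the model isn't sure.
# The 0.8 threshold and the review queue are hypothetical.
def classify_or_escalate(model, features, threshold=0.8):
    probs = model.predict_proba([features])[0]
    best = probs.argmax()
    if probs[best] >= threshold:
        return model.classes_[best]           # confident: act on the prediction
    return queue_for_human_review(features)   # edge case: a person decides

def queue_for_human_review(features):
    # Stand-in for the human step, which (as Facebook's moderators show) is real work.
    return "needs_human_review"
```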

This has the potential to be a huge deal for emotion-recognition AI. People can’t always read each other’s faces 100% accurately, and we’re literally hardwired for it, with lifetimes of experience on top. And as any human knows, misreading someone’s emotions can have hugely negative interpersonal consequences.

The impact of those consequences depends on how the emotion-recognition software is deployed.

The panel was loosely focused on one application that’s currently being marketed, called RealEyes, which claims to use webcams to detect how a virtual focus group feels about ads. Missing subtle facial cues there would get in the way of the software’s effectiveness, especially since most people aren’t particularly demonstrative in response to ads they see on the internet. (The biggest reaction most ads get out of me is, “Oh jeez, not this one again?”) But gaps around the edge cases aren’t going to have a huge impact on the world here.

It’s got potential to have a much bigger impact in a current growth area for machine learning: predictive policing.

For example, imagine software that delivers a “threat level assessment” of anyone a police officer interacts with, based partly on affective analysis. It’s easy to see how any bias in the software will be reinforced: if a police officer interprets neutral behaviour as hostility thanks to a flawed algorithm, they’ll get hostility back, like a self-fulfilling prophecy.
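To make that feedback loop concrete, here’s an entirely hypothetical scoring sketch (no real product’s logic) showing how a single misread of a neutral face compounds:

```python
# Hypothetical "threat level" scoring, for illustration only.
WEIGHTS = {"neutral": 0.0, "nervous": 0.2, "hostile": 0.7}

def threat_score(prior, perceived_emotion):
    return min(1.0, prior + WEIGHTS[perceived_emotion])

score = 0.1                              # baseline for a routine stop
score = threat_score(score, "hostile")   # algorithm misreads a neutral face
print(score)                             # 0.8: the officer now treats the person as a threat
score = threat_score(score, "hostile")   # the person reacts to being treated as a threat
print(score)                             # 1.0: the "prediction" has fulfilled itself
```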

In my next post, I’ll get into algorithmic bias.
