The Unreadable

Paul Ekman was convinced. Six basic emotions expressed the same way by all humans, everywhere. Happiness, sadness, fear, disgust, anger, surprise. A universal language written in the muscles of the face.

In the 1960s, it sounded like a democratic idea. We are all the same beneath the skin. A smile means joy in Stockholm and in Papua New Guinea.

Fifty years later, his theory has become a billion-dollar industry. Affective computing. Emotion AI. Systems that scan your face during a job interview, during an exam, in a conversation with customer service. Systems that determine whether you are credible, engaged, threatening.

The science that says no

In 2019, five researchers from institutions including Northeastern University and Caltech, Lisa Feldman Barrett among them, conducted a comprehensive systematic review of the research on facial expressions and emotions, published in Psychological Science in the Public Interest. They came from different sides of the debate. They disagreed about many things.

They agreed on one thing: there is no scientific support for reliably inferring a person's emotional state from their facial movements alone.

The same face can mean different things. The same emotion can take different expressions. In a study from Papua New Guinea published in PNAS in 2016, researchers showed that Trobriand Islanders interpreted the Western "fear face" as an expression of threat and aggression. You can smile from hatred. You can furrow your brow in delight.

This should be the end of the story. Science says no, the industry packs up, we move on.

But the industry is growing.

And this is where it gets interesting. Because the question is no longer whether the technology works. The question is what it does. Regardless of whether it works.

The desire for readability

There is something seductive about the idea of readability. That the world can be decoded if we only have the right tools.

Ekman wanted it. The police wanted it. The TSA in the United States spent about 900 million dollars from 2007 onward on a program called SPOT (Screening of Passengers by Observation Techniques), designed to identify terrorists at airports by reading stress and fear in their faces. According to a review by the U.S. Government Accountability Office (GAO-14-159, 2013), there was no evidence that it ever worked. An earlier GAO report from 2010 documented that at least 16 individuals later linked to terrorist activity had passed through SPOT airports on at least 23 occasions without being detected.

But the desire for readability doesn't disappear just because reality is messier.

The face as surface

I think about my daughter. She has autism and ADHD. She sometimes laughs when she's sad. Smiles when something is wrong. The connection between feeling and expression takes different paths in her.

An algorithm trained on Ekman's six categories would read her wrong. Every time.

And she's not unique. She just makes it visible. What she shows openly exists in all of us, more or less hidden: that the face is a surface. Sometimes transparent, sometimes impenetrable, often something in between.

Why we keep building

So why do we keep building systems as if it were otherwise?

Perhaps because the alternative is harder to sell. "We can give you an indication that requires interpretation and context" is not a pitch that attracts venture capital.

Perhaps because we ourselves want to believe in readability. It's comforting to think that intentions are visible, that whoever hides something reveals themselves.

But here things begin to turn. Because even if the systems can't read us correctly, they do something else.

They change us.

When surveillance changes the brain

In December 2024, researchers at the University of Technology Sydney published a study in the journal Neuroscience of Consciousness that worried me in a new way.

They divided 54 participants into two groups. One group performed a task knowing they were being monitored via camera. The other performed it unwatched.

Then they measured something that cannot be consciously controlled: how quickly the brain detects faces in the visual field.

Those who knew they were being monitored became almost a second faster at registering faces. The effect was specific to faces. To social information.

The brain had turned up its radar.

Beyond the Hawthorne effect

This is something other than behavioral change. The Hawthorne effect, that we change our behavior when we know we're being observed, has been known for a hundred years. We tidy up more, wash our hands more often, behave more prosocially.

But what the researchers in Sydney showed was that surveillance goes deeper. It doesn't just change what we do. It changes how we process the world. Unconsciously. Involuntarily.

The panopticon has moved into perception.

From Bentham to neurology

Jeremy Bentham designed his panopticon in the 18th century. A prison where inmates never knew if they were being watched or not, but always had to assume they could be. The point was that the prisoners would internalize the gaze.

Foucault took it further. Power doesn't need to be present to act. It's enough that it could be. We begin to surveil ourselves.

But Foucault wrote about behavior. About disciplining the body, about how we learn to sit still, walk in line, obey the clock.

What we see now is something more. Discipline has reached perception. The brain itself adapts.

The boundary to the pathological

The researchers in Sydney pointed to something else too. Hyperawareness of others' gazes is a hallmark of social anxiety. Of psychosis. Of paranoia.

What surveillance measurably does to us resembles what happens in states we call illness.

The boundary between an adaptive response and a pathological response may not be as sharp as we'd like to think. What we call "normal adaptation" to the conditions of surveillance moves through the same neurological landscape.

A generation trained in readability

And we haven't even begun to talk about what happens when surveillance doesn't just observe, but interprets. When the system isn't content with seeing you, but also wants to know what you feel.

There's a concept in psychology: reactivity. It describes how we change our behavior when we know we're being studied.

But reactivity presupposes that there is a "before." A way we were before the gaze. An authentic state to deviate from.

What happens if the gaze is always there?

Children without a camera-free world

Children growing up today have never known a world without cameras. The selfie camera has existed since they could hold a phone. Video calls are normal communication. Being recorded during a school exam doesn't seem remarkable.

A generation is being trained to be readable.

Through something subtle. Through the constant presence of a potential audience. Through every expression potentially becoming data.

Helping the algorithm

I think about how quickly we learn to help the algorithm.

If you've ever adjusted your facial expression to pass through facial recognition at the airport. If you've ever toned your expression up or down in a video call to "come across" better. If you've ever wondered how you look to the system.

That's adaptation. We do it because it works. Because communication is about being understood, and if the receiver is an algorithm, we adapt to the algorithm's horizon of understanding.

The problem is that the algorithm's horizon is narrow.

Emotional compliance

Six emotions. Or seven, if we count contempt. Or eight, if we add neutral.

That's what the systems see. Those are the boxes available.

What happens to everything that doesn't fit? Ambivalence. Bittersweet joy. Grief mixed with relief. Anger that is actually fear. Love that feels like anxiety.

It doesn't disappear. But it becomes unreadable. And what is unreadable to the system becomes, gradually, harder to make room for.
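
To make that concrete: a minimal sketch, with invented labels and scores (no particular vendor's actual API), of what a fixed-category classifier does with an ambivalent face.

```python
# The boxes available: Ekman's six, plus contempt and neutral.
LABELS = ["happiness", "sadness", "fear", "disgust",
          "anger", "surprise", "contempt", "neutral"]

# Hypothetical scores for an ambivalent expression --
# say, grief mixed with relief. Nothing fits well, so the
# signal spreads thinly across boxes built for other things.
scores = [0.18, 0.22, 0.08, 0.05, 0.07, 0.10, 0.06, 0.24]

# The system still reports exactly one label.
best_score, best_label = max(zip(scores, LABELS))
print(f"{best_label}: {best_score:.0%}")  # -> neutral: 24%
```

Grief mixed with relief comes out as "neutral". The mixture itself has no representation; it is not so much misread as absent from the output format.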

Simplification in the direction of the system

This is what I mean by emotional compliance. That we begin to simplify ourselves in the direction of the system.

Because it's frictionless. Because it works. Because we internalize the system's categories and begin to experience ourselves through them.

Am I happy? Sad? Angry?

The question sounds simple. But it's already shaped by the assumption that the answer is one of these. That the feeling has a name. That the name corresponds to a category that can be recognized from the outside.

When the mirror becomes an algorithm

There's an older philosophical question here. The sociologist Charles Horton Cooley called it "the looking-glass self" in 1902. We are shaped by how we think others see us. The mirror self.

But Cooley's mirror was other people. People who were themselves complex, interpreting, fallible. People who could meet ambiguity with ambiguity.

When the mirror becomes an algorithm, something else happens.

The algorithm doesn't hesitate. It returns a value. Happy: 73%. Sad: 12%. Neutral: 15%.
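
Why doesn't it hesitate? One reason is structural. A typical classifier ends in a softmax layer, which turns whatever raw scores the model produces into a clean probability distribution. A minimal sketch, with logits invented to reproduce the numbers above:

```python
import math

def softmax(logits):
    """Normalize raw model outputs into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs (logits) for three boxes.
labels = ["happy", "sad", "neutral"]
logits = [2.0, 0.2, 0.42]

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.0%}")
# happy: 73%
# sad: 12%
# neutral: 15%
```

By construction the output always sums to 100% and always looks decisive. Uncertainty about whether the categories apply at all has nowhere to go; the format cannot say "none of these".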

And we, who want to be seen, begin to move toward what is registered.

The right to be unreadable

This is where I land on something that resembles a hypothesis, or perhaps a worry:

That authenticity requires room to be unreadable.

That an inner life requires an area beyond the registerable. Something that doesn't fit in categories.

If the systems become present enough, long enough, early enough. If we never get to practice being unread.

What remains then?

The infrastructure being built

It's easy to think we're at the beginning of something. That the technology is young, that it will improve, that the problems are growing pains.

But perhaps it's the opposite. Perhaps it's now, while the systems are still primitive, that we have the best chance to understand what is happening.

Because if the systems get better at reading patterns, we will start to believe them. And if we start to believe them, we will conform to them. And if we conform to them long enough, what they measure will become what exists.

The worry that remains

I worry less about AI reading us wrong. I worry about us beginning to read ourselves through AI's categories.

That the child who grows up with emotion-detecting software in the classroom learns that feeling is something visible. That whoever doesn't show the right things at the right time is deviant. That what isn't registered doesn't quite exist.

The problem with the solution

There's a kind of technological optimism that says: give us time, give us data, give us resources, and we'll solve the problems. Better training data. More nuanced categories. Culturally adapted models.

But the problem may not be that the models are too crude. The problem may be the project itself: making the inner readable from the outside.

Not because the inner is sacred or mystical. But because readability itself changes what is being read.

There's no answer key here. Just an observation: that we are building an infrastructure for emotional reading without having asked whether we want to live in that world.

And once the infrastructure exists, the question becomes harder to ask.


Sources

On facial expressions and emotion recognition:

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(1), 1–68. https://doi.org/10.1177/1529100619832930

On cultural differences in facial interpretation:

Crivelli, C., Jarillo, S., Russell, J. A., & Fernández-Dols, J. M. (2016). The fear gasping face as a threat display in a Melanesian society. Proceedings of the National Academy of Sciences, 113(44), 12403–12407. https://doi.org/10.1073/pnas.1611622113

On the TSA SPOT program:

U.S. Government Accountability Office (2013). Aviation Security: TSA Should Limit Future Funding for Behavior Detection Activities. GAO-14-159. https://www.gao.gov/products/gao-14-159

U.S. Government Accountability Office (2010). Aviation Security: Efforts to Validate TSA's Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges. GAO-10-763. https://www.gao.gov/products/gao-10-763

On the effects of surveillance on perception:

Seymour, K., McNicoll, J., & Koenig-Robert, R. (2024). Big brother: the effects of surveillance on fundamental aspects of social vision. Neuroscience of Consciousness, 2024(1), niae039. https://doi.org/10.1093/nc/niae039

On the looking-glass self:

Cooley, C. H. (1902). Human Nature and the Social Order. New York: Charles Scribner's Sons.