I'm Not Outraged, You Are
Social media distorts our perception of how irritated people really are. Here's what to look out for.
I’m not immune to the attraction of internet drama. The opposite, in fact. Just like everyone else, I want to figure out who did what, who is right… how angry I should be, and on whose behalf. But because of my work in crisis management strategy, my interest runs a little deeper than most. (At least, I’m choosing to see it that way instead of, perhaps, acknowledging it as an unhealthy obsession with other people’s drama).
I want the data.
And those data often echo a specific and somewhat uncomfortable story about emotion and the internet.
Outrage on the internet: Observed… or invented?
It can be both, of course, depending on the circumstances, but this article is about cases in the grey area: the ones where the outrage doesn’t seem to match the initiating event. The topic that triggered it doesn’t seem significant enough to be causing that much discussion, or the person at the center of an issue seems less bothered by it than their followers.
Is this outrage observed or invented? In these cases, that question is perhaps a little too black and white, because it’s less of an invention than a systematic overestimation. We look at a post someone made while mildly annoyed and read it as furious, or we watch a disagreement unfold and interpret it as a moral crusade.
You’ll see this in the comments sections of TikToks where creators describe a perhaps unpleasant but not downright awful interaction they’ve had with someone, and their followers go feral in response; occasionally, in extreme cases, tracking down the subject of the creator’s video and ‘going after’ them, doxxing them, making videos about them, and amplifying the outrage response. All when the creator just… wasn’t that bothered by the original issue.
I’m not talking just from observation here. This is a measurable psychological effect, one that warps our understanding of reality. And to keep subconscious reactions from driving our own responses, we need to (a) know that they exist and (b) recognize them when they appear.
The overperception of moral outrage
A 2023 study in Nature Human Behaviour (Brady et al.) shows that social media users overperceive the level of moral outrage felt by others. This makes us anxious and inflates our beliefs about how hostile and polarized our communities really are. The internet does more than give us access to more of other people’s anger than we ever needed to have. It also tricks us into seeing anger that isn’t really there.
This is (partially) why minor celebrity disagreements feel like a referendum on human morality and you log off Threads feeling like the world is ending.
Brady et al.’s study is fascinating because their methodology captured emotion in real time. They developed a machine-learning tool (yes, I do approve of this type of ‘AI’ use) that identified political tweets as they were being posted; then, within fifteen minutes of a tweet going up, they slid into the author’s DMs and asked them to rate, on a 1-to-7 scale, how outraged (or happy) they were when they wrote it.
Then they showed those tweets to an independent group of observers and asked them to rate how outraged they thought the author was. The results were remarkably consistent across multiple studies:
Observers systematically perceived more outrage in the messages than the authors themselves reported feeling.
Interestingly, this distortion was specific to negative and moralized emotions. When the researchers ran the exact same test for happiness, the overperception disappeared. The observers were perfectly capable of accurately gauging joy. But outrage… That broke their internal calibration.
Why?
The researchers explained this effect as a combination of the environmental constraints of the internet and social conditioning. When we communicate in person, we get extra cues we can use to assess emotion more accurately. And we usually know the people we’re talking with better, with more context around who they are and what they care about. On social media, those cues are either constrained to how a person chooses to present themselves online (in videos) or absent entirely (in text posts on X/Twitter and Threads, for example).
With this ambiguity, our brains default to a sort of ‘better safe than sorry’ approach, because failing to recognize a threat feels more dangerous than overestimating one. Subconsciously, I mean: we perceive threats because of evolutionary survival mechanisms; it’s not a conscious choice. So we err on the side of perceiving hostility, because it feels safer. I know that sounds counterintuitive when we want to feel unbiased, or we think we try to see the best in everyone and look for the most positive possible interpretation. But what we want to do often isn’t what we actually (again, often subconsciously) do. And reacting from survival instincts doesn’t make you a bad person.
Who is most likely to overperceive outrage?
Brady et al. also found that the people most likely to overperceive outrage were those who spent the most time using social media to learn about politics. Heavy users have been somewhat ‘conditioned’ by algorithms that prioritize and promote evocative, high-engagement content. They’ve learned to expect outrage, so they see it everywhere, even when the author was intending to express a mild opinion.
This creates a feedback loop where authors learn that expressing outrage (or the appearance of it) yields engagement and reputational boosts within their ingroup, so they begin to perform outrage and adopt the linguistic markers of anger even when they’re not feeling it. And observers read these exaggerated signals while expecting hostility.
So we get interactions where individuals or groups perform feelings they don’t feel for an audience that thinks the performance is real.
Alix Earle and Alex Cooper
We don’t need to look at high-stakes political discourse to see how this plays out in real-time, and after almost falling off a treadmill watching the Kash Patel press conference this week, I’m all out of energy for using political examples. So let’s look at Alix Earle and Alex Cooper instead. (Sorry.)
Don’t worry if you don’t know who these people are; you don’t need to in order to follow the rest of this article. Alex is the host of the Call Her Daddy podcast and founder of the Unwell Network. Alix is an influencer who used to have a podcast on Alex’s network. Alex recently posted a video directly calling out Alix.
“Alix Earle… Hey girl. The passive-aggressive reposts and the likes and the commenting on things…. I gotta call you out here,” Alex said in the video.
“You’re gonna need to get specific and just say what you gotta say about me. There’s no NDA. No-one is stopping you. Stop hiding behind other people and just say it yourself. What’s the beef?”
(It went on, but you get the idea).
Alix responded by reposting the video with a simple comment: “Okay on it!!”
What followed was that overperception of moral outrage.
I know about this story because it was everywhere. Unavoidable. Perhaps indicative of the kind of shit my search history puts on my algorithms, but regardless, there was a lot being said, and the conflict was almost immediately categorized as a huge moral failing on one side or the other. Commentators called Alex the “Grim Reaper of podcasting” and said she exploited young women; others described Alix as an ungrateful and passive-aggressive opportunist. Good vs. evil, authenticity vs. exploitation.
The actual reality of events is far more mundane.
In August 2023, Alex’s Unwell Network signed Alix to host the Hot Mess podcast. By early 2025, their relationship had broken down, and Alix missed an Unwell Super Bowl (unfortunate naming there) party. Alix’s podcast was later dropped by the network. She told The Wall Street Journal that the situation was “a little bit of a hot mess” behind the scenes. Over the next year, Alex and Alix put out cryptic jabs at each other on the socials, like ‘mocking’ song choices.
Dave Portnoy, founder of Barstool Sports and friend of Alix and Alex, said the feud likely stemmed from ‘conflicting business interests and contract disputes’.
Although social listening data (Muck Rack, Sprout Social) showed a spike in negative sentiment, mainly toward Alex, most of the audience was neutral. Yet that outraged minority dictated much of the perception. When we open our apps to 10 videos dissecting the ‘toxic’ behavior of a public figure, we may process that as “the entire internet is furious about this moral transgression” rather than “10 specific people are upset about this”. We take the most extreme expressions of emotion and use them to calculate the average ‘temperature’ of the feelings.
This ‘crowd emotion amplification’ effect happens when we see numerous emotional expressions in close succession in our feed: we overperceive the extremity of the group’s emotion as a whole, and so judge the collective moral outrage of the social network to be far greater than it actually is.
This leads to audiences taking a corporate disagreement, for example, and amplifying the mild annoyance of the participants into something that’s interpreted as a furious betrayal, convincing themselves that everyone else is equally enraged.
The drama may be real… but the perception of its intensity is amplified.
Information vacuums make everything worse
The Alix-Alex feud is a good example of what happens in an information vacuum (like the cryptic messaging). Another, which I’ve written about before, is Scott Mills: when the BBC fired him with a vague statement, they created a space that the public immediately filled with the worst possible assumptions. The same dynamic occurred here. Neither Alix nor Alex fully explained the nature of their split when it happened in early 2025. Alix said she had to “put a pause on podcasting right now for the foreseeable future” and couldn’t “get into the details of it all”.
This, along with the passive-aggressive jabs over the next year, created a massive information vacuum. Combine that vacuum with an environment primed for overperceiving outrage, like social media, and you get overinflated drama.
When people don’t know the specifics of an issue, they tend to come up with theories (of personal betrayal, for example) and read cryptic posts as hostile because they’re expecting hostility. So by the time Alex addressed the issue directly, the narrative had already been set by those seeing hostility in it; the anger wasn’t really hers. The facts of the original dispute became secondary to the emotional intensity of the public reaction.
Brady et al. found that participants who viewed a ‘high-overperception’ newsfeed judged the collective outrage of their social network to be significantly greater than those who viewed a “low-overperception” newsfeed, even though the authors’ self-reported outrage was the same in both conditions. The participants weighted the most intense outrage messages more heavily than the less intense ones when making collective judgments, meaning they took the loudest and angriest voices and assumed they represented the average.
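To see how weighting the loudest voices skews a collective judgment, here’s a toy sketch (my own illustration, not the study’s actual model): a feed where most authors feel mild outrage, a loud minority is furious, and an ‘observer’ who weights each message by its intensity when judging the group.

```python
# Toy model of "collective outrage" perception (an illustrative sketch,
# not Brady et al.'s methodology). Outrage is on the study's 1-7 scale.

# A feed of 100 posts: 90 authors are mildly annoyed (2/7),
# 10 are genuinely furious (6/7).
feed = [2] * 90 + [6] * 10

# The true collective outrage: a plain average of what authors feel.
true_average = sum(feed) / len(feed)

# An observer who weights each message by its own intensity,
# so the angriest posts count for more in the collective judgment.
intensity_weighted = sum(x * x for x in feed) / sum(feed)

print(f"Actual average outrage:      {true_average:.2f}")        # 2.40
print(f"Intensity-weighted estimate: {intensity_weighted:.2f}")  # 3.00
```

Even with 90% of the feed only mildly annoyed, the intensity-weighted estimate lands well above the true average: the loudest posts drag the perceived ‘temperature’ up.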
Why does this matter?
The harm is that this overperception changes our behavior.
Brady et al. reported three major downstream consequences of overperceiving collective outrage:
It normalizes outrage expression. When we think everyone else is outraged, we start to believe that expressing outrage is the socially appropriate way to communicate, so we start using the language of outrage because we think that’s what our network expects from us.
It inflates affective polarization. Overperceiving outrage makes us believe our social network dislikes the ‘outgroup’ a lot more than they do in reality. The effect of the high-overperception newsfeed on outgroup dislike was nearly twice as large as the effect on ingroup liking. So it’s more than just feeling more outrage. We end up thinking other people hate other groups way more than they do… and the last thing we need is more of that.
It exaggerates ideological extremity. We start to believe that the people in our network hold more extreme views than they actually do. (AND we might start aligning with those…)
This is one of the considerable dangers of digital outrage: the way it changes how we understand what’s normal and skews our perceptions of what our peers actually believe.
When an audience overperceives the anger of the other participants and that of the audience itself, we get a situation where two groups both believe there’s a strong division when there really isn’t.
The internet is not as angry as it looks
That’s what we need to keep in mind.
Yes, outrage is real, but its intensity is an illusion that’s amplified by algorithms and our own psychological biases. And just saying ‘the algorithm is doing this to us’ isn’t the answer—we don’t have to let it. We can take accountability for our own perceptions and change them.
Our perception of reality dictates our actions, so if we believe we’re surrounded by hostility, we become defensive, stop listening, and become less able to understand nuance, retreating into our ‘side’ with the assumption that the other is acting in bad faith.
Do you want to get into a defensive posture based on misread signals and algorithmic distortion?
I didn’t think so…
So, what do we do with this information?
Recognize the mechanism. When you react to a post or situation, ask yourself: “Am I reacting to the actual emotion expressed by the creator, or am I reacting to the algorithmic amplification of that emotion?”
Look for the majority. The loudest voices are rarely the most representative. Those who are genuinely furious are almost always a minority. Most of the people engaging just don’t care that much, so maybe you don’t need to, either.
Don’t create information vacuums. If you’re at the center of a potential issue, define the narrative yourself so the loudest voices don’t do it for you.
We have to accept that the platforms we use are not neutral reflections of reality. They’re curated to build a perception (perception vs. reality is a cliché for a reason), and they do that by evoking emotions.
Once you know this, you’ll see it everywhere… and you’ll recognize it in yourself and stop it.
Source Material:
Brady, W.J., McLoughlin, K.L., Torres, M.P. et al. Overperception of moral outrage in online social networks inflates beliefs about intergroup hostility. Nat Hum Behav 7, 917–927 (2023). https://doi.org/10.1038/s41562-023-01582-0

