In the Habit of Hate
How Social Media Profits from Radicalization & Mass Harassment.
Harvard Law Student
April 24, 2025
You wake up to your screeching alarm, curtains drawn and room dark. Rolling around blindly in bed, you begin fumbling for your phone on sound alone. The light hits your face and you squint a little, and as your eyes adjust to the first bit of light you’ve seen that day, you begin to look through your notifications from Facebook, Twitter, and Truth Social.
Without really thinking about it, you tap on Twitter, checking on the activity of your favorite account, Libs of TikTok. You scroll through recent posts and see that while the recent efforts against Planet Fitness seem to have had some effect on the company’s bottom line, Raichik reports biological men are still allowed in locker rooms:
“I just received this image from Planet Fitness in Alaska. The man who was caught shaving in the womens’ bathroom is still entering womens’ private spaces . . . . Planet Fitness is not safe for women!!”
You can see his face in the photo. It’s time for action.
Harassment and death threats sent to an election volunteer’s home.
Violent messages and phone calls against Boston Children’s Hospital employees.
Threats of planted bombs at more than two dozen public schools and libraries, evacuating students and shutting down classes.
More than 54 bomb threats to Planet Fitness.
Meanwhile, accounts like Libs of TikTok earn Twitter more than 6.4 million dollars yearly.
This is merely a taste of the trail of violence—and profit—that follows in the wake of just one social media account engaged in hate politics. This sort of violence, called networked mass harassment, has surged in recent years.
While it might be tempting to treat these accounts and their followers as outliers, there is a broader, structural story to be told here: in a world where the average adult will now spend 17 years of their life online, how easy might it be for any of us to find ourselves subtly pushed into spaces like these by social media? In a country with a crisis of loneliness, how easy might it be to self-medicate with online hate politics?
And most importantly, why hasn’t anyone done anything to stop this?

A far-right group rallies outside of the state house in Boston. By Samantha Slack.
The answers to these questions are troubling. Evidence overwhelmingly demonstrates that social media privileges controversial and often negative content for profit, creating ripe conditions for radicalization.
Simultaneously, social media platforms and political figures are disincentivized from acting on these harms, whether by fear, profit, or political gain. Instead, platforms advance contradictory narratives to discourage legal liability for harms, while insisting on their competence to self-regulate.
All the while, the gap in legal recourse for victims of social media harms has grown. The courts have been sidelined, in large part due to Section 230 of the Communications Decency Act, which provides sweeping liability protections to platforms for the content they host, as well as for their moderation of it. Nonetheless, the courts are beginning to stir, and scholars and activists are continuing to look for new and creative ways to bring the law back into social media.
Engagement At All Costs
Social media platforms profit, despite ostensibly offering free services, from two main avenues: advertisements and user data. Both measures rely upon engagement with the platform; more engagement provides more opportunity to advertise to the user and more opportunity to gather information on the user’s preferences for sale to advertisers. As such, it is distinctly in a platform’s best interest to keep users returning to the platform.
So, the question then becomes, what keeps users engaged? One answer has to do with which users are most responsive to social media's positive stimuli.
Alexa Cilia, a post-baccalaureate researcher with UNC Chapel Hill’s Winston Family Initiative in Technology and Adolescent Brain Development (WiFi), revealed that “hyper responsivity to positive social feedback in four different brain regions commonly associated with social information processing, may represent a risk factor for addiction-like social media use in later adolescence.” In other words, “if someone, let’s just say, is more sensitive to social positive feedback [or rewards,] that might be associated with [habitual] social media use.”
However, there is a more disturbing answer from the inside: Facebook’s own research teams reported in 2018 that their “algorithms exploit the human brain’s attraction to divisiveness.” Social media platforms breed engagement with content that evokes and networks negative emotions in the user.
Part of this comes from dark aspects of human psychology. An NYU study measured the reach of half a million tweets and found that each moral or emotional word increased a tweet's virality by roughly 20 percent.
Algorithms exploit the human brain’s attraction to divisiveness.
Further research in 2021 found that the largest predictor of virality was actually the out-group effect: "each individual term referring to the political out-group increased the odds of a social media post being shared by 67%." It was through othering language—especially political othering language—that posts were most likely to evoke anger, and therefore go viral.
The propping up of this content is not just an unfortunate side effect of how people use the platforms: it is by design. As explained by Cathy O'Neil in her books Weapons of Math Destruction and The Shame Machine, social media privileges content that evokes a shock response in users, like mass misinformation and conspiracy theory. O'Neil characterizes social media platforms as "networked shame engines" which seek to profit from hateful engagement, propping up content—and often people—for users to "bombard with digital tomatoes."
Radicalization by Algorithm
What is unfortunate, then, is the disturbing alignment between social media's business model and the path to radicalization. Social media's dark psychological underbelly—the prioritization of othering and controversial content, the creation of echo chambers, the targeted content—provides a toolkit for the process of radicalization.
Radicalization tends to follow a straightforward, well-documented path that online researchers have termed “The Alt-Right Playbook.” It begins with identifying the vulnerable audience member—especially a lonely one—and offering them community.
Another researcher from UNC Chapel Hill, Dr. Nathan Jorgensen, offered insights into how this happens on the ground. Jorgensen, who has studied white racial identity development, notes that in a culture of such staunch divide around race, white men often feel that they are told "you're the bad guy," which can leave them with difficult feelings to process. Jorgensen notes that, "when the only counter-narrative is extreme, that's when people get radicalized. Conservative media is able to validate that feeling . . . and then give them a narrative."

An attendee at a right-wing political rally live-streams to his followers. By Samantha Slack.
This initial draw is quickly followed by establishing shared spaces where “the ideology is the price of community.” Such shared spaces include message boards, restricted online spaces, and the ubiquitous Facebook group. Especially in social media platforms like Facebook, X, and Reddit that encourage continuous engagement, the message board becomes a habitual part of one’s everyday life, and soon the target is immersed.
The final step lies in isolation of the subject. Some of this happens naturally in social media’s echo chambers. Dr. Ghayda Hassan, a leading researcher on radicalization from the Université du Québec à Montréal, notes, “There is an echo-chamber effect, especially with social media, the more we search for something, the more we receive or are sent this information similar to it . . . . This normalizes it, and desensitizes it. Social media hate discourse increases social polarization, gives a feeling of legitimacy and normalcy to hateful rhetoric, and people can do it anonymously.”
The Other Side of the Screen: Networked Mass Harassment’s Real-World Effects
To understand a little bit more of what it can be like on the other side of the screen, I spoke with Alejandra Caraballo. Ms. Caraballo is a civil rights attorney working at Harvard Law School’s Berkman Klein Center, a trans advocate, and an avid social media user. In recent years, Ms. Caraballo’s work has focused on networked mass harassment against the LGBTQ community and private individuals, especially by Libs of TikTok.
Libs of TikTok is ironically neither a liberal nor a TikTok account. Active on X and Facebook, the account is run by Chaya Raichik, a conservative activist who reposts videos of liberal TikTok accounts with commentary. “Starting in late 2021 into 2022, [Raichik] realized that she was getting massive engagement on anti-LGBTQ content. [She] was instrumental in pushing the groomer libel and her account exploded, went from less than a few hundred thousand followers to now, in the last 2 years, 3 million followers.” Raichik began to see mainstream success: the account “was promoted heavily by people in the conservative movement, by Fox News, and by others” like Tucker Carlson.