Protest Surveillance Technology

Suppression by Surveillance

Protest Movements and the Corporate Surveillers Profiting Off Fear

Jessenia Class

August 28, 2024

Organizing is at a fever pitch today. The modern swell has been building for several years: with roots in the Occupy Wall Street and Black Lives Matter movements of the early and mid-2010s, alongside an uptick in responses to the Trump presidency, contemporary forms of democratic social and political dissent have been incredibly diverse, multifaceted, and robust.

And it’s only growing. In light of the escalating threat of climate change and an upsurge in worker activity and unionizing efforts, protests have taken center stage in cities, on campuses, and beyond.

This year in particular has seen immense civil unrest. Mass violence in Israel and Palestine has set off a wildfire of protests across the country. This past April and May, groups at over 80 campuses set up encampments to protest the deaths of tens of thousands in Palestine, calling for their universities to divest from Israeli-owned companies. In turn, people supporting Israel have counter-protested these efforts, disputing pro-Palestinian students’ characterization of the violence as genocide.

Caught between these protests are colleges and cities, looking for ways to quell the unrest and return to a perceived state of order. Therein lies an opportunity for corporations to profit.

As participation in activism surges, so too have attempts at suppression. Protesters have been branded as “terrorists” in the media; officials have gone as far as to bring state domestic terrorism charges against organizers. Paramilitary officers have been deployed to cities to rein in political dissidents. Recently, protesters at college and university encampments were subjected to severe police brutality at the hands of city and state law enforcement, leaving students bloodied and bruised and resulting in over 2,000 arrests nationwide. To suppress further uprisings, universities have moved classes online, barred entry to campus, and canceled graduation events.

In line with these attempts to rein in perceived disruption, universities and cities have sought out methods to surveil protesters. And on the heels of recent, large political movements, corporations selling surveillance technology have fanned the flames by ginning up safety concerns — all the while promising that their technology could address those concerns and make communities feel safer.

While violent, dangerous, hateful, and fear-inducing protests certainly exist, the vast majority of demonstrations are peaceful. According to a recent report by the Armed Conflict Location & Event Data Project (ACLED), the overwhelming majority of recent protests related to Israel and Palestine — 99 percent — have been peaceful demonstrations. The same was true of Black Lives Matter protests in the summer of 2020: 93 percent were reported as peaceful.

Yet a combination of targeted storytelling and advertising projects led by corporations has changed the public’s perception of protest, placing protesters and student organizers in more precarious positions, undermining constitutionally protected speech and manifestations of democracy, and creating an opportunity for tech surveillance companies to increase profits.

In response to this somewhat manufactured fear, cities, and more recently universities, have adopted a range of surveillance methods, and the technologies have become increasingly sophisticated. Cameras with facial recognition software have been installed on street corners; cities are equipping their police forces with novel geofencing technology using cell-site simulators. Universities already outfitted with data-tracking systems have allegedly used that technology to track students’ whereabouts and personal messages.

These concerns ring acutely for student organizers: some have felt the need to temper their participation in democratic mechanisms of dissent for fear of being surveilled. Many students in today’s protest climate wear facemasks, sunglasses, and scarves to cover their faces, hoping to thwart surveillance technology and avoid doxxing or identification by authorities.

“The ability of police to identify protestors using biometric data, either in real time or after the fact, makes demonstrators vulnerable to retaliation for their political speech and has the potential to chill people’s willingness to engage in constitutionally protected activities,” said Matthew Guariglia, Senior Policy Analyst at Electronic Frontier Foundation (EFF).

Yet for some, the threat of surveillance doesn’t dampen their dedication to political expression. “While admin is here, trying to scare us, take pictures of us, ask for our IDs — over policies that do not exist,” one student organizer told The Harvard Crimson, “we continue to be as steadfast in our commitment to justice.”

As protests continue to grow, surveillance technology looms as a potential threat. Enshrining the right to safe expression of social and political dissent is critical. Exposing the advertising strategies of surveillance technology corporations and their customers — while deconstructing those false narratives — is key to ensuring legitimate and safe protest going forward.

Rise in Surveillance Technology

A panoply of electronic surveillance tools has cropped up in recent years. Two categories of technology have been most prominently used to identify protesters and student organizers engaging in political demonstrations: data-tracking devices and identification technology.

One form of data-tracking surveillance technology is known as a “stingray,” or cell-site simulator. (Other versions of this technology, sold by a different company, are at times referred to as “dirtboxes.”) This electronic surveillance tool allows law enforcement to intercept cell phone signals. Phones regularly broadcast their presence to the nearest cell towers, allowing the carrier to provide service wherever the user goes. If authorities want to track a specific device, however, law enforcement can deploy a stingray, which impersonates a legitimate cell tower and lures nearby phones into connecting to it instead. Once phones connect, law enforcement can track their locations, review unencrypted conversations and text messages, and more.
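
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of why a cell-site simulator sweeps up every phone in range rather than a single target. The class names, signal values, and device identifiers are hypothetical, and real cellular protocols involve far more than a signal-strength comparison.

```python
# Illustrative sketch only: a toy model of why a cell-site simulator "captures"
# nearby phones. Real protocols (tower handshakes, IMSI/TMSI exchange,
# encryption negotiation) are far more complex; all names here are hypothetical.
from dataclasses import dataclass


@dataclass
class BaseStation:
    name: str
    signal_strength_dbm: float  # closer to 0 = stronger at the phone's location
    is_simulator: bool = False


@dataclass
class Phone:
    subscriber_id: str  # stands in for an IMSI-like identifier

    def attach(self, stations: list["BaseStation"]) -> "BaseStation":
        # Phones generally prefer the strongest available signal, which a
        # simulator exploits by overpowering legitimate towers in the area.
        return max(stations, key=lambda s: s.signal_strength_dbm)


stations = [
    BaseStation("legitimate_tower_A", signal_strength_dbm=-95.0),
    BaseStation("legitimate_tower_B", signal_strength_dbm=-100.0),
    BaseStation("cell_site_simulator", signal_strength_dbm=-70.0, is_simulator=True),
]

captured = []
for phone in [Phone("device-001"), Phone("device-002"), Phone("device-003")]:
    if phone.attach(stations).is_simulator:
        # The simulator's operator now knows this identifier was present.
        captured.append(phone.subscriber_id)

print(captured)  # every phone in range, not just a single target
```

The point of the toy example is the final line: everyone in range gets logged, which is exactly the overbreadth critics describe below.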

A cell phone tower (modeling a cell-site simulator) towering over a group of protesters against a green backdrop.

Source: Created by author using Dall-E

Stingrays are widely used by local, state, and federal authorities, but they’re highly controversial. The technology reaches far beyond its intended target: stingrays collect data from any phone in the vicinity of the targeted person’s cell phone. During a demonstration, for example, protesters, their associates, and passersby near a person being tracked by law enforcement can all be identified. Perhaps even more concerning, stingray technology can subject people to phishing campaigns in which malware is downloaded onto their phones after they click a malicious link.

Once law enforcement has identified a protester’s cell phone, it can subpoena the cell phone company for the name and address associated with that account. What’s more, police could pinpoint the phone’s current location or obtain a log of all the cell towers the phone has connected to in the past.

Data-tracking technologies like stingrays aren’t confined to city police departments; they’re also integrated into college and university campuses. Consider SpotterEDU. Used at Syracuse University, this system operates in a fashion akin to stingrays: with short-range phone sensors and campuswide WiFi networks, universities can track students’ locations as they move from their dorms to the classroom.
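
As a rough illustration of how such a system might assemble a trail, consider the Python sketch below, which stitches timestamped WiFi and beacon “sightings” into a per-student path. The log format and identifiers are invented for this example and are not SpotterEDU’s actual schema.

```python
# Illustrative sketch only: turning access-point/beacon sightings into a
# movement trail. The log format and identifiers below are hypothetical.
from collections import defaultdict

# (student_id, access_point_or_beacon, timestamp) -- e.g., from network logs
sightings = [
    ("student_42", "dorm_west_ap3", "2024-04-29T08:05"),
    ("student_42", "library_ap1", "2024-04-29T09:40"),
    ("student_42", "science_hall_beacon2", "2024-04-29T10:02"),
    ("student_17", "student_union_ap5", "2024-04-29T09:15"),
]

trails = defaultdict(list)
for student, location, timestamp in sightings:
    trails[student].append((timestamp, location))

for student, trail in trails.items():
    # Sorting by timestamp yields a rough path across campus for each student.
    trail.sort()
    print(student, "->", [location for _, location in trail])
```

Multiply a handful of sightings like these by every sensor a phone passes in a day, and the volume quickly reaches the thousands of daily data points described below.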

Companies using this technology have boasted of gathering “6,000 location data points per student every day.” In some cases, this is used to track student attendance in classes; professors can receive automatic notifications letting them know a student is not in their class. At least 40 colleges and universities have adopted SpotterEDU; with increased fear surrounding activism, more are likely on the horizon. 

“[A]dministrators have made a justification for surveilling a student population because it serves their interests,” said Kyle M. L. Jones, an Indiana University assistant professor who researches student privacy, in an interview with the Washington Post. “What’s to say that the institution doesn’t change their eye of surveillance and start focusing on minority populations, or anyone else?”

As political and social demonstrations continue to rise on university campuses, the “eye of surveillance” already seems to have turned. Since last fall, Columbia University has allegedly surveilled student activists’ text messages sent over university WiFi networks.

Similar stories abound. Social Sentinel — since acquired by Navigate360 — is a surveillance service that scans student social media posts. The service was originally purchased by the University of North Carolina at Chapel Hill to monitor students allegedly at risk of harming themselves or others. Yet during protests over the removal of Confederate statues in the mid-2010s, the university used Social Sentinel’s monitoring and keyword features to surveil and scrape student social media posts about the protests.

Five years prior, at Kennesaw State University in Georgia, local authorities also used Social Sentinel to track political dissidents at a town hall with a U.S. senator. The service has also been used at the University of California, Davis; Auburn University; the University of Central Florida; and Indiana University.
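
The keyword features referenced above are, at their core, simple text matching. The Python sketch below shows the basic idea; the watchlist terms and posts are invented, and commercial services layer sentiment models, dashboards, and alerting on top of this core.

```python
# Illustrative sketch only: keyword-based monitoring of public posts.
# The watchlist and post data are hypothetical.
import re

watchlist = ["protest", "walkout", "die-in", "encampment"]  # invented terms
pattern = re.compile(r"\b(" + "|".join(map(re.escape, watchlist)) + r")\b", re.IGNORECASE)

posts = [
    {"user": "@student_a", "text": "Join the walkout on the quad at noon"},
    {"user": "@student_b", "text": "Great game last night!"},
]

flagged = [post for post in posts if pattern.search(post["text"])]
for post in flagged:
    print("flagged:", post["user"], "-", post["text"])
```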

Corinne Shanahan, a student organizer at Harvard Law School, notes that administrators’ surveillance of social media posts seems to be used “to track and police student organizing and activism with the ultimate goal of stalling or intimidat[ing] the movement or individual people within it.”

Not only are data-tracking tools on the rise, but identification-based surveillance technologies like facial recognition and biometric analysis have also been embraced by cities and universities. These tools are well known today. Facial recognition, for example — a ubiquitous feature on most modern cell phones — takes people’s faces, captured in video footage or photographs, and compares them against a database of known individuals to find a likely match and identify an unknown person.
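
Under the hood, most modern systems reduce each face to a numeric “embedding” and compare embeddings rather than pixels. The Python sketch below (using numpy) shows that matching step in miniature; the vectors, names, and threshold are fabricated stand-ins, not any vendor’s actual model.

```python
# Illustrative sketch only: the matching step at the heart of facial recognition.
# Real systems use learned face-embedding models; these toy vectors, names,
# and the threshold value are fabricated for illustration.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# "Database" of known individuals: name -> face embedding (toy 4-D vectors).
database = {
    "person_A": np.array([0.9, 0.1, 0.3, 0.2]),
    "person_B": np.array([0.2, 0.8, 0.5, 0.1]),
}


def identify(unknown: np.ndarray, threshold: float = 0.9):
    # Compare the unknown face against every enrolled face and keep the best.
    best_name, best_score = max(
        ((name, cosine_similarity(unknown, emb)) for name, emb in database.items()),
        key=lambda item: item[1],
    )
    # The threshold decides what counts as a "match": set it too loosely and
    # false matches rise, which is where biased or low-quality data does harm.
    return best_name if best_score >= threshold else None


probe = np.array([0.88, 0.12, 0.28, 0.25])  # embedding of a face from footage
print(identify(probe))  # -> "person_A" in this toy example
```

How trustworthy that best match is depends entirely on the database and the model behind it, a limitation discussed later in this piece.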

This technology has already been integrated into many cities. Per EFF’s Atlas of Surveillance, a data-mapping tool documenting surveillance technology, at least 884 police departments across the country use facial recognition technology, with a particularly pronounced presence in Florida and Michigan. Drones, or unmanned aerial vehicles (UAVs), are becoming increasingly common as well; the Atlas of Surveillance notes that approximately 1,488 police departments use UAVs.

Universities, too, are beginning to add identification technology to their surveillance systems. At least 10 campus police departments have drone programs. At Lehigh University, student journalists were told that no privacy laws restrict drone usage and that drones can be used on and off campus “for any purpose.”

According to a scorecard produced by the Boston-based nonprofit Fight for the Future, at least 13 colleges and universities across the country currently use facial recognition technology. One example is the University of Southern California, where residence halls are outfitted with biometric entry systems that let students access buildings via fingerprint and facial recognition. USC claims not to store the data or provide analytics, but the university did not clarify whether it has analyzed this data in the past.

Multiple Florida universities also use facial recognition technology. The University of Florida, the University of Miami, and the University of Central Florida police departments draw from Florida’s facial recognition network, Face Analysis Comparison & Examination System (FACES). According to FACES training material acquired by EFF, at least 230 local, state, and federal agencies also use this database. Created in 2001, FACES has few written policies regulating the use of its 38.5 million images. 

It has already been used to identify and suppress protesters. Police officials at the University of Miami, for example, have admitted to using facial recognition systems to catch “a few bad guys” on campus; students allege that the system has also been used to discipline student protesters who staged a die-in in support of campus cafeteria workers.

Records also show that during the summer of 2020’s calls for police reform, Florida police departments used the database to request matching images and identifying information for a “possible protest organizer” and their various “associates”; the uploaded photos of organizers referenced their participation in protests but listed no history of crime-related activity.

“It’s horrifying. To find searches run specifically for protests, which is a clearly protected First Amendment right,” said Clare Garvie, a senior associate at Georgetown University’s Center on Privacy & Technology, in a conversation with reporters Joanne Cavanaugh Simpson and Marc Freeman. “Particularly in protests against police activity, there’s the fear that police are going to target and retaliate against those individuals.”

Technology’s Limits

Cities and universities are increasingly weaving these new technologies into their systems. However, these technologies are deeply flawed, and they can be aimed at particular communities, as student organizer Shanahan notes: “Systems of oppression don’t surveil their friends.”

A surveillance camera with an eye on its lens towering over a group of students against a blue backdrop.

Source: Created by author using Dall-E

Take stingrays as an example. Setting the glaring privacy concerns aside, stingrays’ mimicking of cell phone towers creates other logistical problems. By impersonating cell sites, stingrays can prevent phones from reaching legitimate sources of cell service. This can allegedly obstruct 911 calls or keep people from reaching other hotline services in emergencies.

Problems are perhaps most evident with identification technology. Surveillance methods that attempt to match a person’s appearance with an identity — such as facial recognition systems — rely on the accuracy of the dataset they draw from. The effectiveness of facial recognition depends on a host of factors: high-quality images of the individuals in its database; an algorithm trained on a wide variety of human faces; and a clear, objective definition of what the software should consider a match between the unknown face and the database. Faulty datasets can lead — and have led — to the misidentification of people of color and immigrants, placing them at risk.

Consider Amazon’s facial recognition product, Rekognition. Police departments across the country use the service to identify potential suspects, but the tool is wildly imprecise when it comes to identifying brown and Black people. According to a report by AI researcher Joy Buolamwini, Rekognition misidentifies darker-skinned people more often than lighter-skinned people. The ACLU has found similar flaws: in a 2018 study, it found that Rekognition falsely matched members of Congress with mugshots of people who had been arrested, and the false matches occurred more frequently for Black officials than white ones.

In an interview with WIRED, Evan Selinger, privacy scholar and professor at the Rochester Institute of Technology, remarked on widespread concern with the failures of this technology. “Not only have civil rights groups criticized Amazon for promoting a facial recognition tool to law enforcement that poses dire threats to minorities, but so have concerned shareholders.”

Groups have been sounding the alarm on the dangers of facial recognition technology for over a decade. In 2016, the Georgetown Center on Privacy & Technology published a 150-page report entitled “The Perpetual Line-Up” detailing the likelihood of false positives for individuals of color in facial recognition databases. For example, “while one in two American adults have face images stored in at least one database, African-Americans are more likely than others to have their images captured and searched by face recognition systems.” This overrepresentation puts Black people at a potentially greater risk of being subject to a false match.

That same year, over 50 civil rights organizations submitted a letter to the Department of Justice Civil Rights Division calling for an investigation of the disparate impact of facial recognition technology on communities of color. These organizations urged the DOJ to take action, noting that “[s]afeguards to ensure this technology is being used fairly and responsibly appear to be virtually nonexistent.”

Faulty datasets have ripple effects. Corporations like Clearview AI have widened the scope of possible facial recognition use cases. The company’s software pulls from over three billion images scraped from Facebook, Venmo, YouTube, and millions of other sites, and can work with imperfect photos, including those in which individuals are wearing glasses, hats, or other objects that partially obscure their faces.

The code underlying Clearview AI’s app shows that the software can link with augmented-reality glasses. By pairing the two tools, users could identify every person they saw. At a protest, then, any activist could be identified, revealing their name, where they live, what they do, and whom they know. And should that first identification be incorrect, potentially due to discriminatory data, the person’s alleged associates can be misidentified as well.

Even if the data were not biased — which may well become the case as the technology continues to advance and grow more robust — the layers of privacy infringement and neglected due process mount with each new form of surveillance technology that’s developed.

Corporations and Fear Campaigns

Corporations are not held back by the flaws in their technology. Quite the opposite: these actors are emboldened by the purported promise of their products. They’ve sold to cities, police departments, universities, and the public by dressing up the efficacy of their products and fanning the flames of a fear campaign.

Through these advertising tactics, and by capitalizing on recent waves of political dissent, surveillance technology corporations have stoked the safety concerns surrounding activist demonstrations. In doing so, they have amassed a mountain of private information and power, and they have strained constitutionally protected protest and speech.

Claims of the technology’s effectiveness are dressed up in marketing language. “By and large, [surveillance technology companies] are advertising to the purchaser, and they’re making claims about how effective their technology is,” says David Siffert, Legal Director at the Surveillance Technology Oversight Project (STOP). “So, for example, a facial recognition vendor will say, here’s a test that shows that this is 99% accurate, and will neglect that that’s for middle-aged white men in ideal lighting.”

Once purchasers receive the product, Siffert argues, they use similar rhetorical strategies to justify its use. “Police engage in a lot of” narrative-building akin to advertising, Siffert says, “and that advertising is mostly fear-based. So police do everything they can to place as many scary stories in as many newspapers as possible. And they use that to justify bigger budgets, which let them buy more surveillance technology.”