Flock Safety: 5,000 US towns' data accessed by ICE for deportations

A major American artificial intelligence firm, which markets its technology as a tool for fighting crime, is at the centre of a growing scandal over its role in US immigration enforcement. Flock Safety, based in Atlanta, has seen numerous cities cancel or suspend contracts after it was revealed that its data was repeatedly accessed by federal immigration authorities.

National Surveillance Network and Federal Access

Flock Safety provides AI-powered automatic number plate recognition (ANPR) cameras to more than 5,000 communities across 49 US states. The company promotes its system as a way for local police forces to solve crimes more efficiently. However, investigations have uncovered that federal agencies, including Immigration and Customs Enforcement (ICE) and the Department of Homeland Security, gained access to Flock's national vehicle tracking network.

This access was used to support Donald Trump's mass deportation agenda, despite the company initially denying that it held any federal contracts. The data, which logs the movements of vehicles across the country, was utilised for immigration-related searches, turning a locally focused crime tool into a resource for federal immigration raids.


Mounting Backlash and Cancelled Contracts

The revelations have triggered a significant backlash. Privacy advocates and political critics argue that Flock's system effectively establishes a massive nationwide surveillance network, monitoring the movements of ordinary citizens without their consent. Prominent figures, including Senator Ron Wyden, have warned that such a system is open to abuse, for instance being used to investigate individuals seeking abortions or participating in lawful protests.

In response to the outcry, Flock Safety has introduced new rules. These include pausing federal pilot programmes and allowing its municipal customers to block certain types of searches. However, for many city officials and civil liberties groups, these measures are too little, too late. A fundamental lack of trust and transparency has led several cities to terminate their relationships with the company altogether.

A Crisis of Trust in AI Policing

The controversy strikes at the heart of the debate over technology, privacy, and policing. While Flock Safety presented its product as a benign tool for community safety, its integration into federal immigration enforcement has exposed a darker potential. The case demonstrates how data collected for one purpose can be easily repurposed for another, more controversial one, without public knowledge or consent.

The fallout continues, raising urgent questions about the safeguards needed when private companies build vast surveillance infrastructures. As cities from coast to coast reassess their contracts, the future of Flock Safety and similar AI-driven policing technologies remains deeply uncertain, caught between the promise of security and the perils of pervasive surveillance.
