The criminal justice system evolves unevenly, often stalled by bureaucracy and politics. Yet one area is changing swiftly: the adoption of AI-powered policing. According to The Marshall Project, law enforcement agencies are deploying artificial intelligence, from facial recognition to predictive analytics, faster than oversight can keep pace. In New Orleans, a private network of AI-equipped cameras has been tracking individuals in real time, contributing to dozens of arrests since 2023, a Washington Post investigation revealed.
Operated by the nonprofit Project NOLA, this system uses 200 cameras installed on private property by residents and businesses. Equipped with facial recognition, the cameras alert police via an app when someone on a wanted list is detected. The technology aided responses to two high-profile incidents in 2025: the New Year’s Eve terror attack that killed 14 and injured nearly 60, and the escape of 10 inmates from the city jail last month. Supporters credit AI-powered policing with reducing crime, but critics argue it bypasses oversight. “When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there,” Danny Engelberg, New Orleans’ chief public defender, told the Post. The police department suspended the technology’s use before the report’s publication.
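The alert flow described above — a camera detects a face, compares it against a wanted list, and notifies police through an app on a match — can be sketched in simplified form. This is a minimal illustration, not Project NOLA's actual software; every name, the embedding representation, and the confidence threshold are hypothetical assumptions.

```python
# Hypothetical sketch of a watchlist-alert pipeline: a detected face is
# represented as an embedding vector, compared against reference embeddings
# on a wanted list, and any match above a threshold produces an alert.
from dataclasses import dataclass

@dataclass
class WatchlistEntry:
    name: str                # label for the wanted-list entry (illustrative)
    embedding: list[float]   # reference face embedding (illustrative)

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def check_frame(face_embedding, watchlist, threshold=0.9):
    """Return (name, score) alerts for watchlist entries the face resembles."""
    alerts = []
    for entry in watchlist:
        score = cosine_similarity(face_embedding, entry.embedding)
        if score >= threshold:
            alerts.append((entry.name, round(score, 3)))
    return alerts

watchlist = [WatchlistEntry("subject_A", [1.0, 0.0, 0.0])]
print(check_frame([0.98, 0.1, 0.0], watchlist))  # high similarity -> alert
```

The civil-liberties concern raised by critics maps directly onto this structure: whoever controls the watchlist and the threshold controls who gets flagged, and in a private system those choices sit outside the legal guardrails that bind police.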
New Orleans’ 2022 ordinance limited police use of facial recognition to violent crimes and required oversight by state examiners. However, Project NOLA’s private network operates outside these rules, exposing gaps in regulation. Similar workarounds exist elsewhere. In San Francisco and Austin, Texas, police have skirted local bans by having partner agencies run facial recognition searches, the Post reported in 2024. In Milwaukee, officials are considering trading 2.5 million jail booking photos for free access to facial recognition software, saving $24,000 in fees, per the Milwaukee Journal Sentinel.
AI-powered policing extends beyond faces. Veritone’s “Track” tool identifies people by body size, clothing, or accessories, sidestepping restrictions on biometric data, MIT Technology Review noted. In New York City, police are exploring AI to flag erratic behavior; the Metropolitan Transportation Authority’s Chief Security Officer, Michael Kemper, has suggested such alerts could prompt security responses, per The Verge.
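The reason attribute-based tools sidestep biometric rules is structural: matching on clothing or build is just filtering detections by descriptive fields, with no face data involved. A toy sketch, with entirely made-up field names and data, shows the idea.

```python
# Illustrative sketch of attribute-based tracking (in the spirit of tools
# like Veritone's "Track"): candidates are matched on non-biometric
# descriptors rather than faces. All fields and values are hypothetical.
def matches_description(detection: dict, query: dict) -> bool:
    """True if every queried attribute matches the detection."""
    return all(detection.get(k) == v for k, v in query.items())

detections = [
    {"shirt": "red", "build": "tall", "bag": "backpack"},
    {"shirt": "blue", "build": "short", "bag": None},
]
query = {"shirt": "red", "bag": "backpack"}
hits = [d for d in detections if matches_description(d, query)]
print(len(hits))  # 1
```

Because nothing here is a face template or fingerprint, such matching typically falls outside statutes written around “biometric identifiers,” which is precisely the loophole critics flag.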
Online, AI-powered policing enhances undercover operations. The Massive Blue platform lets police engage suspects on social media or chat apps, supporting intelligence gathering or sting operations, Wired and 404 Media reported. Such AI personas amplify older tactics, like Memphis police’s use of a fake activist account, while making them far more scalable.
Deepfakes pose new challenges. AI-generated video could fabricate false alibis or incriminating footage, and the mere possibility of fakery invites a “deepfake defense,” in which genuine evidence is dismissed as synthetic, a risk heightened after Google Gemini’s hyper-realistic video engine debuted in 2025. In Arizona, a court viewed an AI-generated victim impact statement, prompting an appeal over its influence on sentencing, local news reported.
As AI-powered policing grows, its benefits—crime reduction and faster responses—clash with risks to privacy and fairness. Without stronger oversight, these technologies may reshape justice faster than society can adapt.
