
Last week, the Washington Post broke the news that the New Orleans Police Department (NOPD) has been using a real-time, facial-recognition-based surveillance system to monitor certain parts of the city. This was in direct defiance of the New Orleans City Council, which established specific rules for how NOPD is allowed to use facial recognition technology in its investigations.1
Reading into the details, the story is complicated; technically, NOPD was not operating an unsanctioned network of street-level surveillance cameras (the word “operating” is doing a lot of work here). Details aside, this specific case is also indicative of a much larger trend in urban spaces: the omnipresent surveillance of everything and everyone as we go about our daily lives.
The Details (of which there are many)
There are three players in this drama: NOPD, a 501(c)(3) non-profit called Project NOLA, and private property owners who put surveillance cameras on the exteriors of their properties.
The way this all works is that Project NOLA receives live feeds from the cameras of each participating property owner and then employs facial recognition software to scan for anyone on their watchlist. If there’s a match, on-duty NOPD officers get a notification sent to their mobiles and the individual gets picked up.
Where does Project NOLA get its watchlist, you ask? The non-profit claims they created it based on publicly available mugshots and police-provided warrants, but the list has never been published, nor has the process for updating it been publicly documented.
So all that sounds bad…but did it even work?
Also unclear.
NOPD never recorded whether a Project NOLA notification was the reason they made an arrest. Consequently, we also have no idea how many stops or full-on arrests were false positives.
At this point, I struggle to decide how nefarious this actually feels. Project NOLA has no formal relationship with the NOPD. This is a non-profit being given video feeds from private citizens and providing alerts to any NOPD officers who’ve decided to download their app. In theory, Project NOLA is working directly with individual NOPD officers. Best case, NOPD leadership failed to manage their officers and ensure they weren’t breaking the law by using illegal tools provided by a third party. Worst case is top-to-bottom active complicity.
Live-feed, algorithmic state surveillance isn’t a new phenomenon. The gold star here goes to the Chinese government’s mass surveillance apparatus, which employs cameras as just one tool in its larger efforts to monitor its population. In a relatively more constrained capacity, the UK has also employed CCTV cameras in policing public areas for years. Bringing it back to the U.S., Amnesty International documented around 15,000 NYPD cameras used to monitor intersections across the city.
So, state surveillance of public spaces, often as part of a more comprehensive system of monitoring, is only becoming more common. Whether this is a good thing is…complicated.
Inconclusive Thoughts
On the topic of policing and public safety, I’m a bundle of conflicting impulses.
On the one hand, I understand the need to ensure public safety and acknowledge that this sometimes requires folks with weapons deputized to do violence on our collective behalf.2 Sticking to the topic of surveillance, a study on the impact of cameras in subway stations in Stockholm found a 25% reduction in crime for central stations.3
On the other hand, I mostly don’t trust the people with guns and don’t especially love the idea of exponentially increasing society’s legibility in the eyes of The State™. The same surveillance apparatus that can help apprehend criminals for carjackings can also be used to arrest dissidents for thought crimes. One man’s protester is another man’s rioter.
And somehow on a third hand, I imagine government-operated surveillance systems in public places are just going to happen irrespective of our opinions, so maybe the most we can do is impotently stand athwart history yelling “Stop!”.
If it’s possible and there’s demand for it, it’s gonna happen. Even as a non-engineer, I could, with a trivial amount of effort, set up a system that runs Ring camera footage through a facial recognition library of my choice (trained on pictures of my friends) and tells me who just showed up for my party. Is this a compelling use-case? No.4 But the point is it’s easy and cheap to implement these systems now; we’ve created and commercialized (or open-sourced) all the constituent parts.
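To make that concrete, here’s a minimal sketch of that toy party-greeter, assuming the open-source face_recognition and OpenCV libraries; the friends/ folder of labeled photos and the camera source are placeholders I’ve made up, and wiring an actual Ring feed into this would be its own project.

```python
# Toy sketch of the party-greeter example above.
# Assumes `pip install face_recognition opencv-python`; paths and camera source are placeholders.
import glob
import os

import cv2
import face_recognition

# Build a tiny "watchlist": one face encoding per labeled photo, e.g. friends/alice.jpg
known_encodings, known_names = [], []
for path in glob.glob("friends/*.jpg"):
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_encodings.append(encodings[0])
        known_names.append(os.path.splitext(os.path.basename(path))[0])

video = cv2.VideoCapture(0)  # swap in the doorbell camera's stream URL here
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Compare every face in the frame against the known encodings
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
        if True in matches:
            print(f"{known_names[matches.index(True)]} just showed up")
video.release()
```

Everything genuinely hard in there (the face detection and the embedding model) lives inside a free library; the “system” itself is thirty-odd lines of glue.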
So, where to from here?
In New Orleans, the local chapter of the ACLU is on the warpath and the program has been suspended while everyone figures out which heads will eventually roll. The aftermath is already turning to farce, with an NOPD spokesperson telling Ars Technica that the department:
…does not own, rely on, manage, or condone the use by members of the department of any artificial intelligence systems associated with the vast network of Project Nola crime cameras.
Again, this is technically true — there are no formal agreements between NOPD and Project NOLA. Whether that will provide plausible deniability in the face of the coming political fallout remains to be seen.
Elsewhere, every police force, intelligence service, and security apparatus has an incentive to put real-time surveillance in place. They’re all institutionally voyeuristic. That these systems will get put in place is probably a foregone conclusion. How they’re implemented and the standards to which their operators are held? We might still have a collective shot at getting that right. And if not right, then something short of completely, totally, utterly wrong.

A tool is only as good or bad as the things it’s used for. The fundamental questions we have to ask ourselves as these systems continue propagating in cities across the world are (a) whether we agree that the ends to which they are being put are good, and (b) whether we trust the people in control of said systems.
And that’s all before evaluating the efficacy of a system in the performance of tasks we deem acceptable. Even put to a task we can agree on, we have to ask questions like:
How do we source and evaluate the quality of criminal mugshots?
Who’s responsible for ensuring the appropriate labels are applied and updated?
Who’s responsible for defining those processes in the first place?
How are we measuring false positives? Or false negatives for that matter?
How are we thinking about real world outcomes and ensuring we don’t create a “scientific, technology-enabled” system rife with class and racial bias?
The Director of Data Operations for a police department’s facial recognition program should be one of the most highly scrutinized positions in any public bureaucracy anywhere.
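As a thought experiment only (not a description of Project NOLA’s data or any real agency’s), even a bare-minimum answer to the questions above implies per-entry bookkeeping that looks something like this hypothetical sketch:

```python
# Hypothetical watchlist-entry schema: an illustration of the bookkeeping the
# questions above would require, not any agency's actual data model.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class WatchlistEntry:
    person_id: str
    source: str                      # e.g. "public mugshot" or "active warrant", with a case reference
    added_on: date
    added_by: str                    # who approved the entry, for accountability
    expires_on: date                 # entries should age out, not live forever
    review_log: list = field(default_factory=list)  # every re-validation of the label, with dates
    alerts_confirmed: int = 0        # alerts that led to a verified identification
    alerts_false_positive: int = 0   # alerts that flagged the wrong person

    def false_positive_rate(self) -> float:
        total = self.alerts_confirmed + self.alerts_false_positive
        return self.alerts_false_positive / total if total else 0.0
```

If an operator can’t produce fields like the last two, it can’t answer the false-positive question at all, which, per the reporting above, is roughly where NOPD and Project NOLA find themselves.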
On a values level, we have to begin with transparency. Bureaucratic obfuscation will only serve to hide bad things. We cannot allow claims of technical complexity to be used to avoid serious conversations; trust me, I’m a product person: the technical fluency required to build any of these systems is not necessary to understand how they work or interrogate their impact on society.
The future is coming and in many cases it’s already here. While I don’t believe we ever have the ability to put the genie back in the bottle, we still have the power to get some things right. And if we don’t have all the right answers today — I surely do not — maybe just having some of the right questions is an ok place to start.
That’s it for today. We’ll dive into policing, public safety, and the innervation of public space by technology more in the future. In the meantime, let me know what y’all think in the comments; this is a topic I’m still trying to wrap my head around, and I’d love to hear where people’s heads are at.
The rules included a requirement that facial recognition tech be used only to look for specific suspects in relation to a particular investigation (i.e., no recording everyone and seeing if anyone interesting happens to pop up). Further, the facial recognition tech was limited to sending a still image of a face to a state-run center in Baton Rouge (again, no real-time, dragnet-style alerts).
Back in 2021, I volunteered in Oakland Chinatown chaperoning elders as they went about their errands. There had been a rash of muggings, and older immigrant folks made for good targets as they (a) carried cash and (b) were less likely to invoke an immediate police response relative to wealthier white folk in other parts of town. Eventually, the community succeeded in getting Oakland PD to have cops walking the neighborhood on a regular basis.
The entire paper is worth a read. There are some interesting notes about how some of the crime was probably just displaced to non-monitored stations, as well as some thoughts about which types of crime the system actually prevented versus those it did not. h/t to
for the research assist.
But now that I’ve imagined it as an example, I might vibe code this the next time I have access to a front door with a video feed.