Let me help you out with this comprehension issue. The point of my comment is that I disagree with the apparent premise of the comment I replied to, which is that "AI" is some generic investigative tool that we can neatly snip out of the picture to blame this incident on human factors at the individual level ("the professional human-in-the-loop who shirked all responsibility"). Said comment also implies that people are fixating on the AI aspect of this issue while ignoring the human factors, which IMO is a strawman. To me, the existence of AI in its current incarnations and the ways in which law enforcement will inevitably abuse it are, together, inseparably, the problem. AI (in the most general sense) opens up entire new dimensions for potential abuse.
As a concrete example:
> And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. So saying that it has "nothing to do with AI" is totally ridiculous.
> Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all.
How do you arrive at that conclusion? Because it happened, and it wasn't an AI overseeing (the lack of) due process. The police identifying suspects is part of their job. So are arrest warrants and all the rest of it. I honestly don't see what AI had to do with anything here. All I see is a gaping systemic issue that could have happened regardless of AI if the wrong person got the wrong idea or had a personal vendetta.
Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list. We blame the systemic practices and legal apparatus that permitted it all to happen in the first place.
You might as well blame the SUV manufacturer because without vehicles the police wouldn't have been able to drive over to make the arrest, right?
Because it's beyond obvious? How would this woman have ended up in jail if she hadn't been misidentified by the facial recognition software in use by the Fargo police? She lives three states over; it would be a hell of a coincidence if some other avenue of investigation had led them to her.
> I honestly don't see what AI had to do with anything here.
You honestly don't see what facial recognition software had to do with a woman being misidentified by facial recognition software?
> Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list.
I actually am completely willing to blame any entity that supplies ICE with the names of people it can reasonably assume will be targeted for "enforcement action" due to said entity representing said names as being legitimate targets for said enforcement action, without taking reasonable care to ensure said representation is correct in each and every case.
What you don't seem to understand is that these abuses of law enforcement authority are predicated on at least an appearance of legitimacy, which can be provided by (e.g.) an app with (presumably) a very official looking logo that agents can point at somebody to get a 'CITIZEN' or 'NOT CITIZEN' classification. It is upon this kind of basis that they perform illegal arrests. All parties—the app vendor and ICE, as well as the people who are meant to be overseeing ICE and providing accountability—are complicit enablers in these crimes. To absolve the vendors who provide the software knowing full well what it will be used for, what its limitations are, and how unlikely it is that ICE personnel will understand those limitations and work around them to keep everything legal, is totally absurd.
It isn't obvious, no. If I drop a hammer on my foot and break my toe I can't then blame the hardware store or the manufacturer. If the store didn't carry hammers I wouldn't have been able to purchase it, I think to myself. Then I couldn't possibly have dropped it on my foot, thus my toe wouldn't be broken right now. It's a specious line of reasoning.
It doesn't matter in the slightest by what means she was selected to "win" this particular lottery. The tool rolling the dice isn't to blame. Tools (and people!) will occasionally return spurious results. Any system needs to be set up to deal with that.
So no, I honestly don't see what facial recognition software has to do with gross negligence and process failure on the part of multiple government agencies.
> without taking reasonable care to ensure said representation is correct in each and every case.
Only if that was part of the contract. Was the product delivered according to specification or not?
What if ICE used FOSS tools to put together the list themselves? Are the tools still to blame? That would obviously be absurd.
The only way the provider (never the tool) could be at fault would be something such as willful negligence or knowingly and intentionally attempting to manipulate the user's actions to some end.
What you don't seem to understand is that human negligence can't be foisted off on tools. Of course an abuser will try to play his actions off as legitimate. That isn't the fault of the tool, it's the fault of the abuser. It isn't up to an app to determine the legitimacy of LEO agent actions. Neither is it the responsibility of an arbitrary, fungible government contractor to oversee ICE.
I think you're confusing the morality of participating in a broader ecosystem with moral culpability for the process failure associated with a specific event. You can advance a reasonable argument that AI companies that choose to do business with ICE are making an at least moderately immoral decision. However, that doesn't place them at fault for the specific process failures of any particular event that happens.
If you don't agree that facial recognition software is involved in a case of a woman being misidentified by facial recognition software then there is no point in me spending any more time/effort in conversation with you. Goodbye.
You seem to be intentionally ignoring the point I made. I never disputed that facial recognition software was used (i.e. involved).
The facial recognition tool didn't arrest her. It holds no authority, has no will of its own, and does not possess a corporeal form with which to enact change in the world. The only parties that could possibly be at fault here are various government agents who clearly acted with negligence, failing to uphold their duty to the law and the people.
If you're unable to rebut my point then perhaps you should consider that you might be in the wrong? If you're unwilling to entertain such a possibility then I wonder why you're posting here to begin with. What is your goal?
> I never disputed that facial recognition software was used
You, yesterday:
> I honestly don't see what AI had to do with anything here.
???
> You seem to be intentionally ignoring the point I made.
I completely understand your point. You are saying that if a mentally ill high schooler manages to acquire a gun and kills 20 people at their school, we should a) punish the shooter, and b) understand the gun as a neutral object that simply popped into existence and was misused, rather than a machine whose design purpose is to kill humans, and whose manufacturer(s) (and other organizations who profit from the easy availability of guns) are actively engaged in a broad effort to preserve the status quo which allowed a mentally ill high schooler to acquire a gun and massacre 20 of their classmates/teachers.
I think it's a terrible opinion, and I vehemently disagree with it. But if you are willing to engage in the sort of rhetorical contortions highlighted at the top of this comment, there is no point in expressing my disagreements to you, because you will evidently say literally anything in response. I may as well have a debate about toilet tank design with `cat /dev/urandom`.
> If you're unable to rebut my point then perhaps you should consider that you might be in the wrong?
> This particular "AI bogeyman" isn't just AI; it's cops with AI
You can’t separate the thing from how it will be used. It’s like arguing that cars on their own aren’t particularly dangerous, when the whole point of buying a car is to drive it, which puts the general public at risk.
But you can in fact argue exactly that. If (arbitrary example) pedestrians are being killed due to poor road engineering practices it isn't reasonable to point at cars and say "see those are the root problem" when in fact it's due to a willful lack of sidewalks or marked crossings or whatever. Being adjacent to something bad doesn't equate to being the root cause.
History shows the timeline of dependence here. Before the introduction of cars, “poor road engineering practices” wouldn’t result in those deaths. So clearly it’s cars that are necessitating sidewalks, etc.
Same deal here: if something “becomes a problem” because of the introduction of AI, it’s AI that is the root cause of the resulting issues. Many people are tempted to argue that flawed humans can’t implement the perfect system that is Anarchy, Communism, Recycling programs, or whatever, but treating systems as needing to operate in the real world is productive where complaining about humans isn’t.
Well, I (thought it was obvious that I) was referring to roads constructed relatively recently. If cars necessitate sidewalks and the city chooses to cut costs by not putting those in, that isn't the fault of automobile designers or manufacturers or dealers or private owners or whoever.
To your example, technology changes and that necessitates infrastructure changing. That doesn't mean that fault for mishaps in the meantime can be attributed to the new technology. A user operating the new technology in an obviously unsafe manner is solely at fault for his own negligence.
The safest street designs still result in automobile fatalities. You can at best mitigate the issue with better street designs but not address the underlying issue.
Failing to acknowledge cars as the root cause may be comforting, but it blinds you to viable solutions.
Indoor shopping malls, for example, solve many of the issues with cars by forcing people to move around on foot in a little island surrounded by a sea of very low density parking. They aren’t perfect solutions, but they still saved a lot of lives and time.
Saying people are misusing a new technology is just another way of saying that technology is flawed. This doesn’t mean you can’t utilize it, but pretending flaws don’t exist has no value.
> Before the introduction of cars, “poor road engineering practices” wouldn’t result in those deaths.
Death by adverse horse encounter was very common before the 1920s. Not sure how many of those deaths can be blamed on poor quality road engineering. But putting a bunch of humans, carts, and excitable half-ton animals in the same crowded streets seems like poor engineering practice.
“Very common” here is a gross exaggeration compared to cars.
After vast improvements in safety, ~1.3% of American deaths still come from automobile accidents. Horses were never close to that; meanwhile, back in 1970, cars were around twice as likely to kill you.
This article states higher per-capita horse deaths in 1900 New York City than automobile deaths in 2023. That stat does not account for the significant disease caused by all that manure mixing with water supplies. It's unclear whether automobile pollution is overall worse from a public health standpoint than mountains of horse poo.