firefoxd 1 days ago [-]
Without even looking at the AI part, I have a single question: Did anybody investigate? That's it.
Whether it's AI that flagged her, a witness who saw her, or her IP address appearing in the logs: did anybody bother to ask her, "where were you the morning of July 10th between 3 and 4pm?" But that's not what happened. They saw the data and said "we got her".
But this is the worst part of the story:
> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”
That's the lesson? Never go back to North Dakota. No, challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man dragged while his family was crying [1]. Unless we fight back, we are all guilty until cleared.
The thing about the legal system is there's no incentive to investigate to find the truth.
The incentive is to prosecute and prove the charges.
Speaking from the experience of being falsely accused after calling 911 to stop a drunk woman from driving.
The narrative they "investigated" was so obviously false that bodycam evidence directly contradicted multiple key facts. Officials are interested only in proving the case. Thankfully the jury came to the right verdict.
retrochameleon 23 hours ago [-]
There need to be consequences for shitty, procedure-ignoring police work. Period.
Minimum 1 year of jail time for grossly wrongful arrests that could be avoided with standard procedure or investigation tactics that were not applied.
helterskelter 20 hours ago [-]
I agree with this sentiment but when you start punishing this sort of thing you create more incentive to cover it up. It's a tricky problem and I'm not sure there's a perfect solution.
What we really need is a change in police culture.
retrochameleon 19 hours ago [-]
Then the system should be redesigned such that transparency is a priority and cover-ups are not feasible. And when cover-ups eventually get found out, the punishments should be even more severe.
true_religion 17 hours ago [-]
We already have administrative punishments for the police when they incorrectly assign blame and cause a public relations mess.
Is the termination of your career and/or potential retraining and social embarrassment not already an incentive to cover up?
vablings 5 hours ago [-]
Medicine has a culture that adapts to this quite well. If you make an honest mistake and communicate it, you are often persecuted by your peers but not hung out to dry legally by your hospital and generally your actions are always defensible.
Similar practices are used in law enforcement, but the legal implications are seemingly more severe
jcgrillo 15 hours ago [-]
> change in police culture
until then, there's a simple rule which works well: never talk to a cop. Or at least say the minimum number of words possible, give them nothing to use against you. Present ID if they ask for it, but never admit anything. If they persist, "lawyer". That has worked for me.
pstuart 18 hours ago [-]
These dialogs always prompt me to chime in with my solution: make the police be self-insured, backed by their pension fund.
The police today have zero incentive to serve the public, they have zero skin in the game and can literally get away with murder.
Any time you hear the call for "law and order", that is the audience that supports the current system, because they like it like this.
xtajv 7 hours ago [-]
> These dialogs always prompt me to chime in with my solution: make the police be self-insured, backed by their pension fund.
I'm curious, what exactly do you mean by "self-insured"?
(Is the idea to combine literal insurance underwriting for retirement planning with a monetary incentive system for ongoing work performance?)
balderdash 16 hours ago [-]
Great idea, except that this will never happen because public sector unions are important voting blocs. Public sector unions should be abolished (I don't have a problem with unions in general) but the conflict of interest is just too great.
retrochameleon 11 hours ago [-]
Great point. Obviously can't expect them to vote against their own interests, because higher standards, higher accountability, and higher transparency will always be against those interests.
forshaper 5 hours ago [-]
Police in some states are actually self-insured, though not backed by a pension fund.
fc417fc802 23 hours ago [-]
> The thing about the legal system is there's no incentive to investigate to find the truth.
The truth is much more complicated and involves politics. For example Seattle (and possibly other cities?) enacted a law that involves paying damages for being wrong in the event of bringing certain types of charges. But that has resulted in some widely publicized examples where the prosecutor erred by being overly cautious.
FireBeyond 20 hours ago [-]
And then you have Florida who will bill you about $100 a day for finding yourself in a Florida jail, regardless of whether charges were dismissed, you were found not guilty or any such thing.
And to nobody’s surprise, failure to pay this bill is in itself a Class B felony…
fc417fc802 20 hours ago [-]
That sounds like a recipe for domestic terrorism - the systemic disenfranchisement of people who have done nothing wrong for no apparent reason other than sheer greed. How long has this been in effect there?
cucumber3732842 19 hours ago [-]
The system doesn't push the issue on people who can't afford it. Blood from a stone and all that.
fc417fc802 18 hours ago [-]
I'm confused. Are you suggesting such a ridiculous system is letting class B felonies slide here? That would certainly be the pragmatic approach to being evil but in that case simply treating it as regular debt and going through civil channels would be more than sufficient.
cucumber3732842 10 hours ago [-]
Are you letting stuff in your backlog that you'll never get to before the product is gone or irrelevant "slide"?
Sure they could round those people up pretty easily just by following up on any contact with the system that they have, but why, and for what? To cost the state more money that will likely never be repaid? Especially when sticking a body on DUI detail is hugely in the black. They'll just let that debt, its accruing interest, and the threat of further incarceration linger on the books indefinitely. If the person ever gets their life together they'll have to pay it or face incarceration.
I'm sure someone somewhere has written a DB query to select from outstanding balance where <exists in some other DB that is a proxy for people who have money to pay> and prioritize those cases.
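A minimal sketch of that kind of prioritization query, using Python's sqlite3. Every table and column name here (`balances`, `asset_proxy`, `person_id`, etc.) is entirely made up for illustration; no real system's schema is implied:

```python
import sqlite3

# Hypothetical schema: an outstanding-balance table plus a second table
# acting as a proxy for "people who appear able to pay".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE balances (person_id INTEGER, amount_owed REAL);
CREATE TABLE asset_proxy (person_id INTEGER);  -- e.g. property records

INSERT INTO balances VALUES (1, 4200.0), (2, 900.0), (3, 15000.0);
INSERT INTO asset_proxy VALUES (1), (3);       -- only these look solvent
""")

# Pursue only debtors who show up in the asset-proxy table,
# largest balance first; person 2 is never selected.
rows = conn.execute("""
    SELECT b.person_id, b.amount_owed
    FROM balances b
    WHERE b.person_id IN (SELECT person_id FROM asset_proxy)
    ORDER BY b.amount_owed DESC
""").fetchall()

print(rows)  # [(3, 15000.0), (1, 4200.0)]
```

The point of the sketch is just the shape of the incentive: the filter runs on ability to pay, not on the severity of the debt.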
FireBeyond 16 hours ago [-]
Are you suggesting that Florida is going to go 'soft on "crime"'? That would fly in the face of almost all available evidence.
I have extended family in Florida. The system absolutely can and does and will push the issue. There’s a reason that it’s a crime not to pay for your incarceration even if you have a finding of factual innocence against you.
cucumber3732842 10 hours ago [-]
Your family isn't sleeping under a bridge or whatever. Of course the system wants your money, or the money of people on comparable economic footing you associate with. If you can work as a debt slave to the system, it wants you to do that, even if it means a never-ending cycle of robbing Peter to pay Paul, sleeping on other people's couches, etc. The man sleeping under a bridge cannot, so the cops and the DA and everyone else just go fry bigger fish. Maybe they push the issue 1/100th of the time and incarcerate someone every now and then, but they absolutely do not prioritize it the way they do someone who could pay, even if only by moving heaven and earth. The system doesn't want to manufacture yet another felony out of thin air and then incarcerate someone for it; that just costs the system more money.
Source: my tiny keyhole view into the system.
The parties involved always have discretion to downgrade stuff to something else, or not pursue it at all, and are incentivized to.
FireBeyond 3 hours ago [-]
> The system doesn't want to manufacture yet another felony and then incarcerate someone for it out of thin air, that just costs the system more money.
You say this like Florida doesn't have both the most private prisons in the country, and the most inmates held in such facilities.
"The system" doesn't care. Florida has, repeatedly, shown a willingness to cut back on education and healthcare.
And private prisons have repeatedly been shown to be a hotbed of corruption.
autotune 20 hours ago [-]
That sounds absolutely terrifying.
bko 19 hours ago [-]
> The narrative they "investigated" was so obviously false, bodycam evidence directly contradicted multiple key facts. Officials are interested only seeking to prove the case. Thankfully the jury came to the right verdict.
I don't get it, if they only care about prosecuting and proving the case, wouldn't they go by the bodycam evidence? They didn't prove the case. Maybe if their incentive was to prosecute and prove the charges, they'd go by the obvious evidence. Or am I missing something here?
FireBeyond 20 hours ago [-]
There's a judge down in Texas, Dallas area I believe, who is on social media a lot because he will excoriate prosecutors who bring BS into his courtroom. He's not soft on crime but hard on rights and process. If a defendant did the wrong thing, he will have the appropriate amount of sympathy, down to zero. At times he will tell them, "we all know you got lucky here, do better." But he won't let prosecutors skate by on garbage charges or statements or investigations by police. Which leads to my primary point, at least for this discussion in particular:
To me the scariest part of this as a process is how many times (I’d casually estimate at least 75%) it is blindingly obvious that the prosecutor has not read the statement of charges or officer statements until everyone is in front of the judge. I get on one hand this judge seems to often be handling probable cause hearings but so many of these should never have resulted in any paperwork being turned in to the prosecution, let alone anyone having to show up in court.
It's fascinating to me that judges are elected in Texas, and what's more, run as members of a political party.
pylua 19 hours ago [-]
There is an incentive. It's called fraud by negligence. I'm hoping she sues everyone here.
That seems to be in the realm of possibility here if I am understanding things correctly (imo)
hnuser123456 23 hours ago [-]
I would absolutely never call the police on a woman. Simply walk far away and let her be someone else's problem.
loloquwowndueo 20 hours ago [-]
Unless it’s a Karen chasing you and yelling and threatening to call the police on you for some asinine reason?
komali2 12 hours ago [-]
Imo they're right, if you're faced with the option of running away from some crazy person or interacting with the police in the USA, the safer option is to run.
A police interaction can escalate to ruinous heights within seconds due to no fault of your own. Remember that cop that got scared by an acorn falling and started shooting at random? I don't care how many "good cops" there are, I'm not rolling the dice on encountering an acorn cop.
belorn 20 hours ago [-]
Society went through the necessary lessons with DNA and fingerprints. Putting people in jail because the computer produces a match is a terrible idea, especially when it's done by a proprietary black box where no one really understands why it claims there is a match. It can be used as an investigative tool to give investigators a hint toward finding real, more substantial clues, but using it like in fiction, where the computer can act as the single source of truth, is terrible for society and justice.
A month ago or so, people on HN discussed facial recognition for finding victims and perpetrators in child exploitation material, and people were complaining that Meta did not allow this fast enough. Neither the article nor the people in that discussion drew any connection to the issues in this article. People seemingly want to think that the lesson is "never go back to North Dakota", as that is a much easier lesson than considering false positives in detection algorithms and their impact on a legal system that is constrained in budget, time, training and incentives.
latexr 1 days ago [-]
Yes, of course someone should have investigated, but the larger point here is that people don’t because they are being sold a false narrative that AI is infallible and can do anything.
We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.
HDBaseT 21 hours ago [-]
I don't think people on HN think "AI is infallible"; I think people on HN believe AI is sufficient for "most tasks". In the context of HN, "most tasks" refers to programming tasks, not arresting-and-jailing-people tasks.
You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool which could be used to destroy someone's life.
ultrarunner 17 hours ago [-]
The problem is that the people who will put this in place rate capability on a linear scale: in their view the ability to write software is sufficiently magic, so such an ability is obviously good enough to recognize criminals. From their perspective, there are hurdles to be crossed (like probable cause) and an AI flagging a suspect feels like a magical intelligence crossing those hurdles and allowing them to continue in the process.
They don't validate the results of their fellow officers, or the validity of warrants, or anything else that predicates an arrest. Why would they start with this?
samrus 13 hours ago [-]
What about cops and legislators? They think AI is infallible, and that's very convenient for them, since they can thus avoid mandating that cops double-check what the AI suggests.
dpkirchner 1 days ago [-]
We can barely convince the powers that be that eyewitness testimony is unreliable, after all.
harshreality 23 hours ago [-]
Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.
To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking as many humans would, it's the quintessential enabler[1]) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.
The models all have disclaimers that state the inverse. People just gradually lose sight of that.
[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.
latexr 22 hours ago [-]
> To the extent people trust AI to be infallible, it's just laziness and rapport (…) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.
Why it happens is secondary to the fact that it does.
> The models all have disclaimers that state the inverse. People just gradually lose sight of that.
Those disclaimers are barely effective (if at all), and everyone knows that. Including the ones putting them there.
> Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.
I see all kinds of people being told that AI-based AI detection software used for detecting AI in writing is infallible!
You want to make sure people aren't using fallible AI? Use our AI to detect AI? What could possibly go wrong.
the_af 22 hours ago [-]
Where did you see this claim about AI-based AI detection?
bl4ckneon 1 days ago [-]
I think you missed many important points.
"The trauma, loss of liberty, and reputational damage cannot be easily fixed,” Lipps' lawyers told CNN in an email.
That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."
This lady probably just wants to go back to normal life and get some money for the hell they put her through. She had never been on an airplane before; I doubt she is going to take on the entire system like you suggest. Easier said than done to "challenge the entire system"; what does that even mean exactly?
3eb7988a1663 1 days ago [-]
It was worse than that. From the reporting of an earlier story [0]:
...Unable to pay her bills from jail, she lost her home, her car and even her dog.
There is not a jury in the country that will side against the woman. I am not even sure who will make the best pop culture mashup - John Wick or a country song writer?
(Also, what happened to journalism - no Oxford comma?)
Yes, finding out how badly wrong you were is never fun. Of course, the lack of ubiquitous Oxford comma use is itself, and separately, displeasing.
krferriter 1 days ago [-]
AP Style is simply wrong on this, then.
kbelder 4 hours ago [-]
Well, omitting the Oxford comma is the traditionally correct thing to do. I use the Oxford comma, it makes sense, but it is new. A hundred years ago it would have been considered an error by nearly every editor.
segmondy 1 days ago [-]
You have more faith in the country than I do.
3eb7988a1663 21 hours ago [-]
Normally, I would be a bit more grim, but people love their animals. I pray even the staunchest authoritarian would see the injustice of losing a dog.
deaux 18 hours ago [-]
You're not aware of Noem killing her dog by shooting it in the face, lining up three horses and shooting them, while being proud of it all?
konfusinomicon 4 hours ago [-]
IIRC the dog was a dangerous animal and had attacked people and animals unprovoked quite a few times. I didn't hear about the horses, so I'm not sure about those, but typically people don't just execute horses unless they're injured or at risk of living out a traumatic existence. Who knows, media spins and all that jazz, but I wouldn't hesitate to end the liability of a dangerous dog, or a horse in suffering that had no chance of recovery, however reluctant I'd feel in the moment.
frankharv 1 days ago [-]
Indeed let out on Christmas Eve with no money 1000 miles from your homeland.
Where your home was lost to foreclosure because one JUDGE did not look at the paperwork.
There should be a way to personally sue somebody when they don't do their job.
Protecting the innocent. The JUDGE failed badly here.
Flimsy evidence would mean no warrant. Do your basic investigation please...
Rubberstamping JUDGE caused this.
Why are they not named? Like they are a spectator. In fact they are the cause.
fc417fc802 22 hours ago [-]
TBF isn't it rather unreasonable that our system permits your home to be foreclosed while you're detained prior to a hearing?
Also rather unreasonable to arrest someone who is clearly neither violent nor a flight risk. You could literally hold the trial via video conference at that point and there would be no downside.
FireBeyond 20 hours ago [-]
At the risk of sounding like more of an anarchist (irony, autocorrect went with absurdist which isn’t entirely wrong either) than I might usually feel, that all depends on who you believe the system is for and works for? If you believe it’s “capitalism” as been so often proven, then it could be said that it’s entirely “reasonable”.
rootusrootus 18 hours ago [-]
> depends on who you believe the system is for and works for
We are still enough of a democracy to blame ourselves for this. We could choose that the system is of the people, by the people, for the people. I think too many of us simply don't agree with that, except in the narrow situation where we are talking about ourself.
deaux 18 hours ago [-]
We could just overcome the tens of billions shoved into our faces aimed at undermining it and brainwashing us, and choose that the system is of the people?
The deck is so unbelievably stacked against it.
Another thing: many people have been permitted to vote in, let's say, 40 elections (at different levels), out of which maybe 1 had a candidate that indeed supported a "system that is of the people", and 39 didn't. Gets tough then, doesn't it.
redeeman 1 days ago [-]
anyone in the chain of responsibility should be punished so severely that they will be still crying about it in 2030
cogman10 23 hours ago [-]
The real problem here is she'll get money, who knows how much, but that ultimately does nothing to actually address the problems in the system.
Effectively it just raises taxes to cover the cost of these failed prosecutions.
Every time one of these cases happens, a cop and a prosecutor should be out of a job permanently. Possibly even jailed. The false arrest should lose the cop their job and get them blacklisted; the prosecution should cost the prosecutor their right to practice law.
And if the police union doesn't like that and decides to strike, every one of those cops should simply be fired. Much like we did to the ATC. We'd be better off hiring untrained civilians as cops than to keep propping up this system of warrior cops abusing the citizens.
FireBeyond 20 hours ago [-]
> The false arrest should lose the cop their job and get them blacklisted
There is actually a federal register for LEOs that have been terminated for cause or resigned to avoid termination.
The police unions that operate in the jurisdictions that employ 70% of US police have negotiated into their CBAs that the register “cannot be used for hiring or promotional decisions”. Read into that what you will.
cogman10 20 hours ago [-]
I'm generally pretty for unions, but the police union is one that's a complete cancer on society. It pretty much solely exists to make sure cops are free to harm the public without any sort of accountability.
rootusrootus 18 hours ago [-]
Agreed. And I think we really, really need to put more effort into a "police the police" organization. Someone who has power only over the police, who the police do not have power over, to act as a check.
1718627440 9 hours ago [-]
We might call this the administration of the executive. Maybe we can vote for that or something.
rootusrootus 18 hours ago [-]
> police unions
... test my support for the idea of unionization. I have even said in the past that I think public sector unions are especially important because their boss (the people) are the most capricious and malicious of all.
Maybe we could find a way to put guardrails on what they could and could not negotiate into a contract. Wages, benefits, basic job environmental conditions, stuff like that -- okay. But administrative policies which exist to prevent bad behavior should be non-negotiable.
mrguyorama 14 minutes ago [-]
It's not the police union's fault that there is literally zero pushback against them.
Somehow Teacher unions have near zero power but cops can collectively bargain for the right to murder people to get a paid vacation.
It isn't because they have a union. Most of them don't have more than a high school diploma and minimal training. You can replace them with ease. A strike shouldn't even be considered a threat. They often can't strike, and their normal threat is work to rule, ie follow the law.
It isn't the police union that keeps judges from throwing the book at cops. It isn't the police union that keeps 40% of the country rabidly insistent that gently reforming police would turn this country to ash. It isn't the union that forces them to die in car crashes far more often than they ever face lethal violence.
A union isn't magically powerful and never can be. The employer can always just replace the members. Funny how that keeps unions in check for such skilled jobs as teachers and bureaucrats and nurses and ATC employees, but for people who usually have just a high school diploma and a few weeks of training, suddenly it's impossible to call the strike's bluff? I hear TSA bodies are desperate for work.
It's a narrative. Police unions are allowed to exist to encourage you to hate unions. Police unions have correctly identified that nobody even attempts to push back against them and are simply doing their job: Advocating for their members. You aren't required to accept a Union's terms. America is chock full of better trained private security that would be happy to scalp a police force.
Hell, police departments are often run by political candidates. Why don't the pro-union ones just get voted out by supposedly anti-union people?
NL807 15 hours ago [-]
>No, challenge the entire system.
Agree in principle. But people like her do not have the financial and emotional resources to go through the legal system again. Unless there are charitable lawyers who are willing to do it on her behalf for free.
themafia 24 hours ago [-]
> Whether it's AI that flagged her
It absolutely was. There's no question of this. Now we need to ask how was the system marketed, what did the police pay for it, how were they trained to use it?
> anybody bothered to ask her "where were you the morning of july 10th between 3 and 4pm.
Legally that amounts to "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered in first.
> we are all guilty until cleared.
This is not a phenomenon that started with AI. If you scratch the surface, even slightly, you'll find that this is a common strategy used against defendants who are perceived as not being financially or logistically capable of defending themselves.
We have a private prison industry. The line between these two outcomes is very short.
LocalH 23 hours ago [-]
>Legally that amounts "hearsay" and cannot have any value.
How is that hearsay if she's directly testifying to her own whereabouts?
Hearsay would be if someone else was testifying "she was in X location on july 10th between 3 and 4pm", without the accused being available for cross
Borealid 23 hours ago [-]
No!
"I was at the library" is firsthand testimony.
"I saw her at the library" is firsthand testimony.
"I saw her library card in her pocket" is firsthand testimony.
"She was at the library - Bob told me so" is hearsay. Just look at the word - "hear say". Hearsay is testifying about events where your knowledge does not come from your own firsthand observations of the event itself.
Larrikin 18 hours ago [-]
You don't know what hearsay means
jmye 21 hours ago [-]
> Legally that amounts "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered in first.
I just want to understand your argument: you believe that any alibi provided is hearsay, and has no legal value, and that they can't even take the statement in order to validate it? That's your position?
themafia 18 hours ago [-]
The condition here being she was already arrested. You don't arrest someone first and then try to establish their alibi second. That would be an investigation which would be prior to getting a warrant which would allow you to arrest someone. You will never talk yourself out of an arrest, you might talk yourself out of an investigation.
You can offer your story to the police but the fact that you did or what you said to them will not come into evidence in court. You cannot call the officer to the stand and then ask them to repeat in court what you said. That would be "hearsay." So, for a lot of reasons, if you're already arrested, you probably don't even want to tell them any of that. It can only be used against you and never for you. Get your lawyer and have them ready the case to prove that alibi for you.
1718627440 8 hours ago [-]
> but the fact that you did or what you said to them will not come into evidence in court.
What?? Isn't it that everything you say can be used in court? Aren't interrogations and arrests recorded?
themafia 49 minutes ago [-]
It can be used in court _against you_.
You're never going to get your statements made in an interrogation into the record as exculpatory evidence.
The purpose of the interrogation is to find _other crimes_ you are also guilty of and charge you with those.
The police are not going to build a case against you, arrest you, and then immediately try to destroy their own case.
There's some real Hollywood confusion here.
There are two legal issues here. First is fighting the false arrest. Your statements will not help you here. Second is a civil rights violation case. The police negligence, if it can be established, is the basis of your case.
In either scenario your stated alibi is not meaningful.
tmpz22 1 days ago [-]
IANAL but AFAIK custodial interrogation triggers Miranda, lawyers, and those awful awful civil liberties we’re trying to get rid of.
Better just to apply Musk or Altman software to the problem and avoid it entirely.
garethsprice 1 days ago [-]
The vendor they used, Clearview AI, does not allow you to request data deletion unless you live in one of the half-dozen states that legally mandate it.
I have suddenly become very interested in New York's S1422 Biometric Privacy Act.
clcaev 22 hours ago [-]
For IL residents the policy requires collection and retention of your biomarkers. Presumably there is a law enforcement exclusion implicitly or explicitly, eg search via administrative warrant.
To get your data deleted in the states that require it you have to submit a photo of yourself which I really don't want to do for a sketchy company with ties to evil billionaire Peter Thiel.
csomar 17 hours ago [-]
Would that data be admissible in a lawsuit if you'd already submitted a deletion request?
balderdash 16 hours ago [-]
lol you have to give them your picture to delete your picture
dawnerd 1 days ago [-]
Sadly this is really the only tool we have right now. Just have to keep spamming them with delete requests because once they delete it’ll end up back in their database eventually.
advisedwang 22 hours ago [-]
For me the worst thing in this case is that a JUDGE signed off on an arrest warrant with only a clearview match linking Ms Lipps to the crime.
A judge and the warrant process are supposed to be the safeguard against police doing shady stuff (like relying on an AI hit to decide who committed a crime). But if the judges can't be bothered...
tlogan 1 days ago [-]
This is a weak or misleading story about AI.
First, the detective used the FaceSketchID system, which has been around since around 2014. It is not new or uniquely tied to modern AI.
Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.
The real question is why she was held in jail for four months. That is the part that I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days).
Regarding the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. So maybe they were holding her because of that?
Can someone clarify how that process works?
suzzer99 1 days ago [-]
In the US there are no consequences for people in power failing to follow procedures, laws or regulations - except for being told to stop doing whatever illegal thing they're doing, and possibly getting sued way down the line, which gets paid by taxpayers.
tlogan 1 days ago [-]
From reading more into the case, it seems the issue may be related to how her lawyer handled the case.
They probably did “identity challenge” arguing that she is not the right person. But from Tennessee’s perspective, she was considered the correct person to be arrested, so there was no “mistaken identity” in their system. In other words, North Dakota Wanted person x and here is person x.
Once a judge in North Dakota reviewed the full evidence (and found that the person they issued the arrest warrant for is not the one they want), the case was dismissed.
suzzer99 21 hours ago [-]
Anyone involved in this who didn't immediately raise a giant stink to get this woman out of jail is partially at fault imo.
frankharv 1 days ago [-]
Yes but a judge issued the warrant in the first place.
Cops did not do a proper investigation and the judge green-lighted it.
It is all on the JUDGE or possibly a magistrate who approved a faulty warrant.
The judge failed the poor woman. FIRE him.
Then sue Clearview for big bucks.
tlogan 1 days ago [-]
The judge likely issued the warrant based on the detective’s sworn testimony. In most cases, a judge does not have the ability or detailed knowledge to independently verify whether the detective completed all necessary checks.
This situation likely resulted from either sloppy investigative work or an honest mistake: the detective believed her booking photo matched the individual captured on camera.
What's with the weird obsession, all over the thread, that the JUDGE is the only person at fault here?
1718627440 8 hours ago [-]
The police were wrong, but the judge is the check that is supposed to prevent police errors from actually being committed.
Sure, when the junior deletes the production database you are angry at the junior, but you also ask why the junior had permission to do that.
ciupicri 23 hours ago [-]
Because the police or the prosecutor or whatever can ask for whatever they want, but it's up to the judge to refuse their stupid claims. Though the others should get some blame too.
wl 23 hours ago [-]
Especially considering the judge is the only person involved in this who is completely immune from being sued.
elektronika 19 hours ago [-]
Under qualified immunity cops are all but completely immune to being sued.
wl 8 minutes ago [-]
They really aren’t. Qualified immunity is probably too strong, but litigants get past it all the time.
On the other hand, judges have absolute immunity for actions taken in the course of their judicial duties.
1 days ago [-]
lovich 24 hours ago [-]
It’s the same poster. I assumed they were AI at first, but the account is from 2017.
Some people are just weird
AnthonyMouse 1 days ago [-]
[deleted]
everforward 1 days ago [-]
This isn’t how it works, you can invoke your right to a speedy trial at any point you want. You can spend 2 months waiting and then invoke it if you want.
The timer starts from when you invoke it, though.
The two issues, either of which she may be caught in, are that it’s “speedy” from the perspective of a court, and that it really means “free from undue delays”.
There is no general definition of a speedy trial, but I think the shortest period any state defines is a month (with some states considering several months to still be “speedy”).
A trial can still be speedy even past that window if the prosecution can make a case that they genuinely need more time (like waiting for lab tests to come back).
It’s basically only ever not speedy if the prosecution is just not doing anything.
gamblor956 1 days ago [-]
> You get charged with something and if you want to have the trial right now, before you have any idea what's going on, then you can insist, which basically nobody does because it's pretty crazy to go in blind
Actually most criminal defense attorneys recommend not waiving your speedy trial rights. Yes, the defense goes in blind. But so does the prosecution, and they're the ones that have to make a case.
The usual result for defendants that don't waive their speedy trial rights is an acquittal if the case goes to trial (between 50-60%), which doesn't sound like a lot but prosecutors are expected to win >90% of their trials. Additionally, in many counties they don't have sufficient courtrooms to handle all the criminal trials within the speedy trial timeframe, so if the trial date comes and a courtroom is not available the case is dismissed with prejudice. Nonviolent misdemeanors are the lowest priority for a courtroom (and by that I mean even family law cases have priority over nonviolent misdemeanors in most counties), so those cases are frequently dismissed a day or two before the trial date. Consequently, most prosecutors will offer better and better plea bargains as the trial date approaches.
This is even more true for murders, which is why murder suspects don't usually get charged for a year or two after the crime.
AnthonyMouse 1 days ago [-]
Apparently I set up a unit test for Cunningham's Law today.
SpicyLemonZest 1 days ago [-]
> The real question is why she was held in jail for four months. That is the part that I do not understand. My understanding is that there is 30-day limit (the requesting state must pick up the defendant within 30 day). Regarding the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. So maybe they were holding her because of that?
As the article gestures towards, challenging the extradition can greatly extend the timeline, from 30 days after the arrest to 90 days after a formal identity hearing. Which isn't fair and isn't intuitive, but is unfortunately a long-standing part of the system. (Even worse, this kind of mistaken identity can't be challenged in an extradition hearing; the question isn't whether she's the person who committed the crime but whether she's the person identified in the warrant.)
tlogan 24 hours ago [-]
That is my assumption. I assume whoever was representing her made a mistake and challenged the warrant, and that caused the delay in the extradition.
strictnein 1 days ago [-]
I wish I could find the link, but I believe she was in jail on parole violation, unrelated to anything that the "AI" flagged her on.
Supermancho 1 days ago [-]
Her picture was used as part of a fake id card, in the commission of a crime. The fuzzy camera footage looked like her (from stills I've seen) and her picture was on the fake ID. Those 2 circumstantial items were, apparently, enough to have a warrant issued.
They picked her up in TN and held her for 4 months, even after:
The ND police knew the ID was fake and the person using it was not her.
The ND police knew she had been in TN before, during, and after the crime.
She is still technically a suspect, even after all of this has come out.
tlogan 24 hours ago [-]
Ok. The mistake was made by North Dakota police (and they blame AI - the AI just gave them a possible match. Whatever.).
What I still do not understand is why she spent nearly six months in a Tennessee jail. That part remains unclear and needs further explanation.
p_l 23 hours ago [-]
From the first time the story surfaced: for spurious reasons[1] she was booked as a fugitive, and that made it so there was "no need" for the normal hearing timeframe.
[1] The reason being that she was found in Tennessee while being sought for a crime in another state, which allowed them to treat her as an interstate fugitive from a crime scene.
1 days ago [-]
zoklet-enjoyer 1 days ago [-]
She was not
Source: I live in Fargo and have been following this story closely. Everyone here is pissed
frankharv 1 days ago [-]
Thanks for clarifying.
I wonder who is slandering her more... WOW
Maybe the city's insurance carrier hired a FIRM...
They will be taking a hit.
frankharv 1 days ago [-]
That is the first I have heard of that.
A small unexplained blurb in this article.
Already in jail on parole violation..
Maybe she objected to the extradition order without good counsel.
"I aint never been to N.Dakota". She found out the hard way how the law works..
What about the banks being hit? Surely they have good cameras. This was bad mojo. I would think a Wells Fargo/BoA has a unit for this stuff.
Financial crimes handled like this. The banks will be sued too, I suspect..
Deep pockets settle out.
georgemcbay 1 days ago [-]
> It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.
This is how it should work, but I still think it is important to discuss these failures in the context of AI risks.
One of the largest real-world dangers of AI (as we define that now) is that it is often confidently wrong and this is a terrible situation when it comes to human factors.
A lot of people are wired in such a way that perceived confidence hacks right through their amygdala and they immediately default to trust, no matter how unwarranted.
This isn't the first time this month I've read about someone suffering consequences of mistaken identity after their facial recognition said they look like someone who committed a crime. I'm sure this is starting to happen at an alarming rate.
The fundamental problem is that among the 350 million people living in the United States, there are a lot of pairs of people who look pretty darn similar. It used to be impractical to ask a question like "who in the US looks like the person in this security footage", and so as a matter of practicality, once you found someone who looks like the suspect, you probably also have other evidence, even if it's pretty weak, linking them to the crime.
But with AI, you can ask "who in the US looks like this person", and so we need to re-calibrate what it means if all you know is that someone looks like a suspect. I am of the opinion that "looks like someone," in the absence of any other evidence, is reasonable suspicion, but not probable cause, that you are the person you look like. Reasonable suspicion is enough for the police to stop you on the street and ask for your ID, but not enough to arrest you. There are other data points that alone might not even be reasonable suspicion, but could be combined with "looks like someone" to make probable cause, such as "was near the place at the time the crime happened".
AI isn't really the problem, even whether or not the AI's determination that two people look alike is valid or reviewed by a human isn't the problem. The problem is assuming that because two people look alike they must be the same person, even if you have no other evidence of them being the same person.
oopsiremembered 1 days ago [-]
Money quote from someone quoted in the article:
"[I]t’s not just a technology problem, it’s a technology and people problem."
I can't. I just can't.
bryanrasmussen 1 days ago [-]
I've been hearing "it's not just... it's a" touted as a sign of AI writing recently. Personally I think it's an AI sign because it's a human thinking-shortcut sign, and AI copies it. But it would be funny if AI wrote the article and then hallucinated this specific money quote.
oopsiremembered 1 days ago [-]
I doubt this happened here, but FWIW, AI does have a habit of "cleaning up" (read: hallucinating) interview transcript quotes if you ask it to go through a transcript and pull quotes. You have to prompt AI very specifically to get it to not "clean up" the quotes when you ask it to do that task.
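For what it's worth, this particular failure mode is cheap to catch mechanically: every extracted quote should appear verbatim in the transcript. A minimal sketch (the helper name and sample strings here are mine, purely illustrative):

```python
def verify_quotes(transcript: str, quotes: list[str]) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the transcript."""
    # Normalize whitespace so line wrapping doesn't trigger false alarms.
    norm = " ".join(transcript.split())
    return [q for q in quotes if " ".join(q.split()) not in norm]

transcript = "I'm just glad it's over. I'll never go back to North Dakota."
quotes = [
    "I'll never go back to North Dakota.",  # verbatim: passes
    "I am just glad it is over.",           # "cleaned up" by the model: flagged
]
print(verify_quotes(transcript, quotes))    # flags only the altered quote
```

Anything the check flags either gets re-extracted or goes back to the human transcript.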
b112 1 days ago [-]
And why not?
If you look at examples of people quoting on the internet, lots are out of context, paraphrased, or made up.
AI is just mimicking what it has seen.
bryanrasmussen 9 hours ago [-]
I often wake up at night from dreams of a crying AI yelling at me "I learned it from watching you, alright?!"
lucasfin000 23 hours ago [-]
The actual scariest part isn't that the AI got it wrong... it's that nobody felt the need to verify the AI. A tip from an anonymous caller gets investigated to find out whether it's true or not; a match from a facial recognition system apparently does not. People haven't built better investigative tools, they've just built better ways to skip around the investigation.
23 hours ago [-]
dnaranjo 22 hours ago [-]
[dead]
mememememememo 1 days ago [-]
Wow, I thought the bar for probable cause for an arrest warrant would be much higher. Especially to drag someone from another state.
cortesoft 21 hours ago [-]
It’s a classic example of the base rate fallacy. The judge sees that a system with a seemingly high accuracy rate (like 99.999% accurate) has flagged a person, and they assume that means the person is highly likely to be guilty.
However, the system uses a dragnet approach, checking against millions of people. If you are checking 300 million people, that 99.999% accuracy check is going to flag about 3,000 people, and AT LEAST 99.96% of those people are going to be innocent.
This is why we can’t have wide, automated surveillance.
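The arithmetic above can be sanity-checked directly. A quick sketch using the comment's illustrative numbers (the 99.999% figure and the one-genuine-suspect assumption are hypothetical, not a measured error rate for any real system):

```python
# Base rate fallacy: even a highly "accurate" matcher, run as a dragnet
# over an entire population, returns mostly false positives.
population = 300_000_000       # illustrative US population (from the comment)
false_positive_rate = 0.00001  # i.e. a "99.999% accurate" system
true_matches = 1               # assume exactly one genuine suspect

false_positives = population * false_positive_rate   # ~3,000 innocent hits
share_innocent = false_positives / (false_positives + true_matches)

print(f"innocent people flagged: {false_positives:.0f}")
print(f"share of flagged people who are innocent: {share_innocent:.2%}")
```

Roughly 3,000 innocent people get flagged for every real suspect, so a "match" alone tells you almost nothing.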
internetguy 23 hours ago [-]
Insane. Not even an apology. And they ask why we should respect the police.
I’m starting to believe that the internet will become a Dark Forest soon.
shevy-java 21 hours ago [-]
So cops used AI to attempt to investigate a crime. But there was no crime - the arrest was wrong. Why can cops excuse themselves here by delegating their responsibilities (protecting society, allegedly) onto software? AI may also be written by some corporation to "tweak" this or that - see, this foreign-looking guy is more likely to be AI-investigated. This is like the movie Minority Report - but stupid. IMO the courts should conclude that cops are not allowed to use AI without a prior, independently verified, objective reason for an investigation. The mass sniffing that is currently going on is very clearly illegal. The current orange guy does not care about the law; see Flock cameras, aka spy cameras, deployed by the government against all car drivers at all times.
cortesoft 21 hours ago [-]
What? There was absolutely a crime?
mike741 19 hours ago [-]
You're absolutely right! There was a crime. I appreciate the course correction—it’s a significant oversight on my part. I've updated our previous plan to better reflect that a crime occurred. You're under arrest.
cindyllm 20 hours ago [-]
[dead]
giardini 1 days ago [-]
This has been posted at least twice before on HN.
1 days ago [-]
jqpabc123 1 days ago [-]
AI is a liability issue waiting to happen. And this is just another example.
gtowey 1 days ago [-]
It's the opposite, it's absolution from liability. "The AI did it" is the ultimate excuse to avoid accepting responsibility and consequences.
jqpabc123 1 days ago [-]
Courts are already refusing to accept this excuse.
Good to know it's not a fait accompli yet, but I wont be surprised to see corporations pushing for this hard.
Hizonner 1 days ago [-]
... which is why the institutions that assign responsibility and consequences need to make it really clear that excuse won't fly. With illustrative examples.
xtajv 6 hours ago [-]
It occurs to me that software engineering is just about the only engineering field which is neither licensed nor bonded nor insured.
I wonder if AI / shadow IT will change that.
jqpabc123 6 hours ago [-]
> I wonder if AI / shadow IT will change that.
I doubt it.
Computing has traditionally been all about math and logic. This is really all that a binary logic computer is capable of. When applied to this purpose, it can offer highly accurate results at very low cost.
Current AI is an attempt to branch out from simply calculating into decision making. But it does so in the worst possible way --- using probability and statistics (aka guesswork) instead of logic and reasoning. In other words, AI offers questionable results at high cost.
As this article shows, relying on guesswork is a legal liability issue waiting to happen in many (if not most) operating environments.
xtajv 4 hours ago [-]
Heh, I wasn't suggesting that AI would actually replace decision-making. Rather, I wonder whether attempts to use AI in this way will result in such publicly embarrassing and catastrophic outcomes that software engineers decide to organize professional guardrails around it.
I fully agree, this seems like a legal liability issue waiting to happen.
garyfirestorm 1 days ago [-]
It’s a tool. Used incorrectly, it will lead to errors. Just like a hammer, which used incorrectly could hit the user’s finger.
happytoexplain 1 days ago [-]
There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI spans a wide range on all of those variables because its use cases vary so much more widely than a hammer's.
The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.
mikkupikku 1 days ago [-]
When somebody uses a tool to hurt somebody, they need to be held accountable. If I smack you with a hammer, that needs to be prosecuted. Using AI is no different.
The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.
tovej 1 days ago [-]
Systems are also a tool. Whoever institutes and helps build the system that systematically results in harm is also responsible.
That would be the vendors, the system planners, and the institutions that greenlit this. It would also include the larger financial tech circle that is trying to drive large scale AI adoption. Like Peter Thiel, who sees technology as an "alternative to politics". I.e. a way to circumvent democracy [1]
Nonsense. The manufacturer, distributor, and vendor of a hammer are not liable for its misuse. We already litigated and then legislated this regarding guns in the US.
As much as I detest Clearview and Thiel the fault for this incident falls squarely on the justice system.
tgv 1 days ago [-]
This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.
fc417fc802 22 hours ago [-]
The tool is unethical, not broken. And unfortunately remains legal for the time being. To that end it's a social or political problem that can be fixed.
cyanydeez 1 days ago [-]
The tool, like Google search, is likely biased towards returning results regardless of confidence.
jqpabc123 1 days ago [-]
> Used incorrectly will lead to errors.
Only one small little problem --- there is no way to tell if you are using it "correctly".
The only way to be sure is to not use it.
Using it basically boils down to, "Do you feel lucky?".
The Fargo police didn't get lucky in this case. And now the liability kicks in.
nkrisc 1 days ago [-]
Some basic investigatory police work (the kind they did before AI) would have revealed the mistake before an innocent woman’s life was destroyed.
jqpabc123 1 days ago [-]
Yes. But doing the investigation negates much of the incentive for using AI.
Look for similar to play out elsewhere --- using unreliable tools for decision making is not a good, responsible business plan. And lawyers are just waiting to press the point.
nkrisc 1 days ago [-]
In this case it sounds as though AI could have been used to generate preliminary leads. When someone calls a tip line with information, police don’t just take their word for it, they investigate it. They know that tips they receive may be incorrect. They should have done the exact same here, but they didn’t.
I’m very opposed to AI in general, but this one is clearly human failure.
The noteworthy AI angle is the undeserved credence police gave to AI information. But that is ultimately their failure; they should be investigating all information they receive.
jqpabc123 1 days ago [-]
> ...but this one is clearly human failure.
Absolutely.
The failure starts with tool vendors who market these statistical/probabilistic pattern searchers as "intelligent". The Fargo police failed to fully evaluate these marketing claims before applying them to their work.
So in the same way that the failure rolled down hill, liability needs to roll back up.
bornfreddy 1 days ago [-]
AI can provide leads. Someone still needs to verify them and decide.
jqpabc123 1 days ago [-]
Generating and verifying bad leads costs money. Not verifying bad leads can cost much more.
At some point, you have to decide if wasting good money on bad intel makes sense.
SpicyLemonZest 1 days ago [-]
The article says that the Fargo police claimed to have done "additional investigative steps independent of AI". (Perhaps they're lying, or did a poor job because they thought the extra steps were a formality.)
nkrisc 9 hours ago [-]
Given the actual outcome it’s hard to imagine what they actually did. It would be less embarrassing for them if they had said they did no additional investigating.
SpicyLemonZest 6 hours ago [-]
It's not even the right question, really. If they found some crazy coincidence that genuinely seemed to corroborate the identification, it's still not OK that this woman was dragged across the country. They rightly identify that the initial AI scan was wrong to do even if everything that followed was by the book. Our law enforcement processes were developed in a context where this kind of error was much harder because there was no routine way to scan every person in the United States for people who look like your suspect.
jfengel 1 days ago [-]
Now the "qualified" immunity kicks in.
jqpabc123 1 days ago [-]
We will find out. But relying on AI is likely to cost the city of Fargo in one way or another. They say they have already stopped using AI and returned to good old fashioned human investigation.
Look, I'm generally considered AI's most vociferous detractor.
But...
> there is no way to tell if you are using it "correctly".
This simply isn't true, at least in cases like this.
I know common sense isn't really all that common, but why would you give more credence to an untested tool than an untested crack-addled human informant?
The entire point of the informant, or the AI in this instance, is to generate leads. Which subsequently need to be checked.
jqpabc123 24 hours ago [-]
There is no "correct" way to use AI in order to avoid bad results. The only prudent approach is to assume all results are bad until proven otherwise.
But this approach negates much of the incentive to pay for questionable results.
zephen 21 hours ago [-]
> The only prudent approach is to assume all results are bad until proven otherwise.
As is true with results from people.
> But this approach negates much of the incentive to pay for questionable results.
I'm not sure that follows. Even the crack-addled human informant has always been paid for questionable results.
jqpabc123 20 hours ago [-]
> As is true with results from people.
People as untrustworthy as AI often fail to maintain their jobs.
MattDaEskimo 1 days ago [-]
What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stakes campaigns.
Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.
Otherwise, I'd say it's an extremely lazy argument
skeeter2020 1 days ago [-]
AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.
suzzer99 1 days ago [-]
Dynamite is a tool. But we don't hand it out to anyone who wants to play with it.
mikkupikku 1 days ago [-]
We used to until quite recently. Anybody could buy dynamite at the hardware store. We had to end this because of criminals using it to hurt people.
suzzer99 1 days ago [-]
I admit I was surprised to see you could buy dynamite in a hardware store until 1970.
1 days ago [-]
kbelder 3 hours ago [-]
I remember my Dad yelling at me to "put the dynamite back in the truck" when I was a kid.
jqpabc123 1 days ago [-]
Look for AI to follow a similar trajectory over time.
GaryBluto 1 days ago [-]
Impossible at this point. You cannot download dynamite.
mikkupikku 1 days ago [-]
Yes, regulation is inevitable.
jfengel 1 days ago [-]
Regulation is impossible. The AI barons literally control the federal government, so not even state regulations get tried.
jfengel 1 days ago [-]
Except this time the criminals are police.
AngryData 1 days ago [-]
They are far more often than anyone wants to admit. That's how we got 25% of the world's prison population.
rootusrootus 1 days ago [-]
AFAIK the actual cause of our high incarceration rate is that we have longer sentences. The conviction rate, for example, is similar to the UK's.
hrimfaxi 1 days ago [-]
Unlike hammers people preface things with "claude says", etc. I never see that kind of distancing with tools that aren't AI.
jeremie_strand 1 days ago [-]
[dead]
mistM 21 hours ago [-]
[dead]
ValveFan6969 1 days ago [-]
[dead]
casey2 1 days ago [-]
[flagged]
suzzer99 1 days ago [-]
I would say much more likely that it was because she was poor and couldn't afford a good lawyer.
AngryData 1 days ago [-]
This. She likely had a shitty public defender who did the bare minimum because they were catering to paying clients. The state was playing hardball because they wanted to make a profit off a poor person with a shitty defense, and the public defender was sitting on the bench at a teeball tournament because they weren't getting paid enough and didn't want to try.
IncreasePosts 1 days ago [-]
What? Women are much more sympathetic figures when it comes to crime and punishment. And there are 10x more men in prison in america than women. If you were trying to "introduce" some nefarious law enforcement system to the US you would use it on undesirable men first (drug addicts and gang members)
jstanley 1 days ago [-]
You think they deliberately chose to do this to a woman? Why?
1 days ago [-]
cyanydeez 1 days ago [-]
Probably just reading the room, with states like Texas making abortions illegal and allowing random citizens to enforce that.
Famously, abortions are a woman thing.
Anyway, looking through the facts, it's just some random woman. There's better evidence that these facial recognition systems are much worse for minorities than they are for either gender.
Although you can probably interpret the facts differently, we've seen how any search function gets enshittified: Once people get used to searching for things, they tend to select something that returns results vs something that fails to return results.
Rather than blaming themselves, users blame the search engine. As such, any search system over time will bias towards returning results (e.g., Outlook) rather than accuracy.
So if these systems easily miss certain classes of people, women, minorities, they'll more likely be surfaced as inaccurate matches rather than men who'll have a higher confidence of being screened out.
That's how I interpret this 2-second comment.
renewiltord 1 days ago [-]
[flagged]
1 days ago [-]
rootusrootus 1 days ago [-]
Has it not been fairly common to require police officers to have a bachelor’s degree? Or an associate’s? I think recently that has been relaxed but I’ve lived in places where it was absolutely a requirement.
I don’t think they’re as stupid as you suggest.
voakbasda 1 days ago [-]
Police departments are known to avoid hiring people that get high marks in school, under the principle that such individuals will become bored with the job and quit. They literally look for average people with average intelligence: C students.
Now factor in the slow decline of our educational institutions, where grade inflation has systematically diminished the credibility of a degree. I would wager that many C students today would have failed out completely 30 years ago.
In that light, it is not surprising that people are seeing ICE agents behave like brown shirts. No one in power wants those people asking any kind of hard questions about what they are being ordered to do.
1 days ago [-]
llbbdd 1 days ago [-]
Having a degree is a very low bar for intelligence.
lovich 24 hours ago [-]
Police departments won the right to discriminate _against_ intelligence in 1997, in Jordan vs. The City of New London[1].
I've been called an intellectual snob before, because I tended to look down on people unfairly. I've even been tested (with a real test, not some online crap) as having a fairly high IQ. So I find it interesting that I'm now being accused of thinking too highly of the ~50% of the population with IQ between 85 and 115.
In any case I stand by my assessment. Someone with a 100 IQ is perfectly capable of being a competent, well behaved police officer. And while your article suggests that lethality increases with lower IQs, I do think some of the biggest assholes I've ever had the displeasure of interacting with were legitimately brilliant otherwise. I wouldn't want them given power over others.
zephen 17 hours ago [-]
> Someone with a 100 IQ is perfectly capable of being a competent, well behaved police officer.
Sure. But, depending on which source you use, 104 or 98.4 is average, and the standard deviation is between 11 and 14. If we use the most generous of those numbers, that would still mean that 21% of the police have an IQ below 93.
And while a policeman with an IQ below 93 might even manage to do most things OK, I submit that the amount of training necessary to get them to understand the limitations of AI is almost certainly much higher than what they have received to date.
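The 21% figure above follows from the normal CDF. A quick check using the comment's "most generous" parameters (mean 104, standard deviation 14; these are the commenter's numbers and assume officers mirror the general population, not measured police data):

```python
from statistics import NormalDist

# IQ modeled as normally distributed, using the comment's "most generous"
# parameters: mean 104, standard deviation 14.
iq = NormalDist(mu=104, sigma=14)

# Fraction of that distribution falling below an IQ of 93:
below_93 = iq.cdf(93)
print(f"share below IQ 93: {below_93:.1%}")  # about 21.6%
```

So the comment's "21% below 93" checks out under its own assumptions.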
[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
[1]: https://news.ycombinator.com/item?id=23628394
Is the termination of your career and/or potential retraining and social embarrassment not already an incentive to cover up?
Similar practices are used in law enforcement, but the legal implications are seemingly more severe
Until then, there's a simple rule which works well: never talk to a cop. Or at least say the minimum number of words possible; give them nothing to use against you. Present ID if they ask for it, but never admit anything. If they persist: "lawyer". That has worked for me.
The police today have zero incentive to serve the public, they have zero skin in the game and can literally get away with murder.
Any time you hear the call for "law and order", that is the audience that supports the current system, because they like it like this.
I'm curious, what exactly do you mean by "self-insured"?
(Is the idea to combine literal insurance underwriting for retirement planning with a monetary incentive system for ongoing work performance)?
The truth is much more complicated and involves politics. For example Seattle (and possibly other cities?) enacted a law that involves paying damages for being wrong in the event of bringing certain types of charges. But that has resulted in some widely publicized examples where the prosecutor erred by being overly cautious.
And to nobody’s surprise, failure to pay this bill is in itself a Class B felony…
Sure, they could round those people up pretty easily just by following up on any contact they have with the system, but why, and for what? To cost the state more money that will likely never be repaid? Especially when sticking a body on DUI detail is hugely in the black. They'll just let that debt, its accruing interest, and the threat of further incarceration linger on the books indefinitely. If the person ever gets their life together, they'll have to pay it or face incarceration.
I'm sure someone somewhere has written a DB query to select from outstanding balance where <exists in some other DB that is a proxy for people who have money to pay> and prioritize those cases.
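For illustration, a minimal sketch of what such a prioritization query might look like, using SQLite from Python. Every table name, column, and threshold here is invented for the example; a real system would obviously look different:

```python
import sqlite3

# In-memory demo with invented schemas: a table of outstanding court
# debts and a separate proxy table for ability to pay.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE outstanding_balances (person_id INTEGER, balance REAL);
CREATE TABLE asset_records (person_id INTEGER, est_assets REAL);
INSERT INTO outstanding_balances VALUES (1, 12000), (2, 800), (3, 45000);
INSERT INTO asset_records VALUES (1, 500), (3, 90000);
""")

# Select debtors who also appear in the asset-proxy table with enough
# estimated assets to be worth pursuing, richest first.
rows = conn.execute("""
    SELECT b.person_id, b.balance, a.est_assets
    FROM outstanding_balances b
    JOIN asset_records a ON a.person_id = b.person_id
    WHERE a.est_assets > b.balance * 0.5
    ORDER BY a.est_assets DESC
""").fetchall()
print(rows)  # → [(3, 45000.0, 90000.0)]
```

The point being: once the data exists in queryable form, prioritizing collections by ability to pay is a one-join problem.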
I have extended family in Florida. The system absolutely can and does and will push the issue. There’s a reason that it’s a crime not to pay for your incarceration even if you have a finding of factual innocence against you.
Source: my tiny keyhole view into the system. The parties involved always have discretion to downgrade stuff to something else, or not pursue it at all, and are incentivized to do so.
You say this like Florida doesn't have both the most private prisons in the country, and the most inmates held in such facilities.
"The system" doesn't care. Florida has, repeatedly, shown a willingness to cut back on education and healthcare.
And private prisons have repeatedly been shown to be a hotbed of corruption.
I don't get it, if they only care about prosecuting and proving the case, wouldn't they go by the bodycam evidence? They didn't prove the case. Maybe if their incentive was to prosecute and prove the charges, they'd go by the obvious evidence. Or am I missing something here?
To me the scariest part of this as a process is how many times (I’d casually estimate at least 75%) it is blindingly obvious that the prosecutor has not read the statement of charges or officer statements until everyone is in front of the judge. I get on one hand this judge seems to often be handling probable cause hearings but so many of these should never have resulted in any paperwork being turned in to the prosecution, let alone anyone having to show up in court.
That seems to be in the realm of possibility here, if I am understanding things correctly (imo).
A police interaction can escalate to ruinous heights within seconds due to no fault of your own. Remember that cop that got scared by an acorn falling and started shooting at random? I don't care how many "good cops" there are, I'm not rolling the dice on encountering an acorn cop.
A month or so ago, people on HN discussed facial recognition for identifying victims and perpetrators in child exploitation material, and people were complaining that Meta did not allow this fast enough. Neither the article nor the people in that discussion drew any connection to the possibility that the issues in this article could happen. People seemingly want to think that the lesson is "Never go back to North Dakota", as that is a much easier lesson than considering false positives in detection algorithms and their impact on a legal system that is constrained in budget, time, training, and incentives.
We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.
You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool which could be used to destroy someone's life.
They don't validate the results of their fellow officers, or the validity of warrants, or anything else that predicates an arrest. Why would they start with this?
To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking as many humans would, it's the quintessential enabler[1]) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.
The models all have disclaimers that state the inverse. People just gradually lose sight of that.
[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.
Why it happens is secondary to the fact that it does.
> The models all have disclaimers that state the inverse. People just gradually lose sight of that.
Those disclaimers are barely effective (if at all), and everyone knows that. Including the ones putting them there.
https://www.youtube.com/watch?v=Xj4aRhHJOWU
I see all kinds of people being told that AI-based AI detection software used for detecting AI in writing is infallible!
You want to make sure people aren't using fallible AI? Use our AI to detect AI? What could possibly go wrong.
“The trauma, loss of liberty, and reputational damage cannot be easily fixed,” Lipps' lawyers told CNN in an email.
That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."
This lady probably just wants to go back to normal life and get some money for the hell they put her through. She had never been on an airplane before; I doubt she is going to take on the entire system like you suggest. "Challenge the entire system" is easier said than done. What does that even mean, exactly?
(Also, what happened to journalism - no Oxford comma?)
[0] https://news.ycombinator.com/item?id=47356968
Where your home was lost to foreclosure because one JUDGE did not look at the paperwork.
There should be a way to personally sue somebody when they don't do their job. Protecting the innocent. The JUDGE failed badly here.
Flimsy evidence would mean no warrant. Do your basic investigation please... Rubberstamping JUDGE caused this.
Why are they not named? Like they are a spectator. In fact, they are the cause.
Also rather unreasonable to arrest someone who is clearly neither violent nor a flight risk. You could literally hold the trial via video conference at that point and there would be no downside.
We are still enough of a democracy to blame ourselves for this. We could choose that the system is of the people, by the people, for the people. I think too many of us simply don't agree with that, except in the narrow situation where we are talking about ourself.
The deck is so unbelievably stacked against it.
Another thing: many people have been permitted to vote in, let's say, 40 elections (at different levels), out of which maybe 1 had a candidate that indeed supported a "system that is of the people", and 39 didn't. Gets tough then, doesn't it?
Effectively it just raises taxes to cover the cost of these failed prosecutions.
Every time one of these cases happens, a cop and a prosecutor should be out of a job permanently. Possibly even jailed. The false arrest should cost the cop their job and get them blacklisted, and the wrongful prosecution should cost the prosecutor their right to practice law.
And if the police union doesn't like that and decides to strike, every one of those cops should simply be fired. Much like we did to the ATC. We'd be better off hiring untrained civilians as cops than to keep propping up this system of warrior cops abusing the citizens.
There is actually a federal register for LEOs that have been terminated for cause or resigned to avoid termination.
The police unions that operate in the jurisdictions that employ 70% of US police have negotiated into their CBAs that the register “cannot be used for hiring or promotional decisions”. Read into that what you will.
... test my support for the idea of unionization. I have even said in the past that I think public sector unions are especially important because their boss (the people) are the most capricious and malicious of all.
Maybe we could find a way to put guardrails on what they could and could not negotiate into a contract. Wages, benefits, basic job environmental conditions, stuff like that -- okay. But administrative policies which exist to prevent bad behavior should be non-negotiable.
Somehow Teacher unions have near zero power but cops can collectively bargain for the right to murder people to get a paid vacation.
It isn't because they have a union. Most of them don't have more than a high school diploma and minimal training. You can replace them with ease. A strike shouldn't even be considered a threat. They often can't strike, and their normal threat is work to rule, ie follow the law.
It isn't the police union that keeps judges from throwing the book at cops. It isn't the police union that keeps 40% of the country rabidly insistent that gently reforming police would turn this country to ash. It isn't the union that forces them to die in car crashes far more often than they ever face lethal violence.
A union isn't magically powerful and never can be. The employer can always just replace the members. Funny how that keeps unions in check for such skilled jobs as teachers and bureaucrats and nurses and ATC employees, but for people who usually have just a high school diploma and a few weeks of training, suddenly it's impossible to call the strike's bluff? I hear TSA bodies are desperate for work.
It's a narrative. Police unions are allowed to exist to encourage you to hate unions. Police unions have correctly identified that nobody even attempts to push back against them and are simply doing their job: Advocating for their members. You aren't required to accept a Union's terms. America is chock full of better trained private security that would be happy to scalp a police force.
Hell, police departments are often run by political candidates. Why don't the pro-union ones just get voted out by supposedly anti-union people?
Agree in principle. But people like her do not have the resources, financially and emotionally, to go through the legal system again. Unless there are charitable lawyers who are willing to do it on her behalf for free.
It absolutely was. There's no question of this. Now we need to ask how was the system marketed, what did the police pay for it, how were they trained to use it?
> anybody bothered to ask her "where were you the morning of july 10th between 3 and 4pm.
Legally, that amounts to "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered in first.
> we are all guilty until cleared.
This is not a phenomenon that started with AI. If you scratch the surface, even slightly, you'll find that this is a common strategy used against defendants who are perceived as not being financially or logistically capable of defending themselves.
We have a private prison industry. The line between these two outcomes is very short.
How is that hearsay if she's directly testifying to her own whereabouts?
Hearsay would be if someone else was testifying "she was in X location on july 10th between 3 and 4pm", without the accused being available for cross
"I was at the library" is firsthand testimony.
"I saw her at the library" is firsthand testimony.
"I saw her library card in her pocket" is firsthand testimony.
"She was at the library - Bob told me so" is hearsay. Just look at the word - "hear say". Hearsay is testifying about events where your knowledge does not come from your own firsthand observations of the event itself.
I just want to understand your argument: you believe that any alibi provided is hearsay, and has no legal value, and that they can't even take the statement in order to validate it? That's your position?
You can offer your story to the police but the fact that you did or what you said to them will not come into evidence in court. You cannot call the officer to the stand and then ask them to repeat in court what you said. That would be "hearsay." So, for a lot of reasons, if you're already arrested, you probably don't even want to tell them any of that. It can only be used against you and never for you. Get your lawyer and have them ready the case to prove that alibi for you.
What?? Isn't it that everything you say can be used in court? Aren't interrogations and arrests recorded?
You're never going to get your statements made in an interrogation into the record as exculpatory evidence.
The purpose of the interrogation is to find _other crimes_ you are also guilty of and charge you with those.
The police are not going to build a case against you, arrest you, and then immediately try to destroy their own case.
There's some real Hollywood confusion here.
There are two legal issues here. First is fighting the false arrest. Your statements will not help you here. Second is a civil rights violation case. The police negligence, if it can be established, is the basis of your case.
In either scenario your stated alibi is not meaningful.
Better just to apply Musk or Altman software to the problem and avoid it entirely.
https://www.clearview.ai/privacy-and-requests
I have suddenly become very interested in New York's S1422 Biometric Privacy Act.
A judge and the warrant process are supposed to be the safeguard against police doing shady stuff (like relying on an AI hit to decide who commit a crime). But if the judges can't be bothered...
First, the detective used the FaceSketchID system, which has been around since around 2014. It is not new or uniquely tied to modern AI.
Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.
The real question is why she was held in jail for four months. That is the part that I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days). Regarding the individual involved, Angela Lipps: she has reportedly been arrested before, so it is possible she was on parole. So maybe they were holding her because of that?
Can someone clarify how that process works?
They probably did an “identity challenge”, arguing that she is not the right person. But from Tennessee’s perspective, she was considered the correct person to be arrested, so there was no “mistaken identity” in their system. In other words, North Dakota wanted person X, and here is person X.
Once a judge in North Dakota reviewed the full evidence (and found that person they issued warrant for arrest is not one they want), the case was dismissed.
Cops did not do a proper investigation and the judge green-lighted it.
It is all on the JUDGE or possibly a magistrate who approved a faulty warrant.
The judge failed the poor woman. FIRE him.
Then sue Clearview for big bucks.
This situation likely resulted from either sloppy investigative work or an honest mistake: the detective believed her booking photo matched the individual captured on camera.
Her booking photo from a prior arrest can be found here: https://mugshots.com/US-States/Tennessee/Carter-County-TN/An...
Do we have recording of the suspect they used for the match?
https://www.grandforksherald.com/news/north-dakota/ai-error-...
Sure, when the junior deletes the production database you are also angry at the junior, but you also ask why the junior got permission to do that.
On the other hand, judges have absolute immunity for actions taken in the course of their judicial duties.
Some people are just weird
The timer starts from when you invoke it, though.
The 2 issues, which she may be caught in, are that it’s “speedy” from the perspective of a court, and that it really means “free from undue delays”.
There is no general definition of a speedy trial, but I think the shortest period any state defines is a month (with some states considering several months to still be “speedy”).
A trial can still be speedy even past that window if the prosecution can make a case that they genuinely need more time (like waiting for lab tests to come back).
It’s basically only ever not speedy if the prosecution is just not doing anything.
Actually most criminal defense attorneys recommend not waiving your speedy trial rights. Yes, the defense goes in blind. But so does the prosecution, and they're the ones that have to make a case.
The usual result for defendants that don't waive their speedy trial rights is an acquittal if the case goes to trial (between 50-60%), which doesn't sound like a lot but prosecutors are expected to win >90% of their trials. Additionally, in many counties they don't have sufficient courtrooms to handle all the criminal trials within the speedy trial timeframe, so if the trial date comes and a courtroom is not available the case is dismissed with prejudice. Nonviolent misdemeanors are the lowest priority for a courtroom (and by that I mean even family law cases have priority over nonviolent misdos in most counties), so those cases are frequently dismissed a day or two before the trial date. Consequently, most prosecutors will offer better and better plea bargains as the trial date approaches.
This is even more true for murders, which is why murder suspects don't usually get charged for a year or two after the crime.
As the article gestures towards, challenging the extradition can greatly extend the timeline, from 30 days after the arrest to 90 days after a formal identity hearing. Which isn't fair and isn't intuitive, but is unfortunately a long-standing part of the system. (Even worse, this kind of mistaken identity can't be challenged in an extradition hearing; the question isn't whether she's the person who committed the crime but whether she's the person identified in the warrant.)
They picked her up in TN and held her for 4 months, even after:
The ND police knew the ID was fake and the person using it was not her. The ND police knew she had been in TN before, during, and after the crime.
She is still technically a suspect, even after all of this has come out.
What I still do not understand is why she spent nearly six months in a Tennessee jail. That part remains unclear and needs further explanation.
[1] The reason being that she was found in Tennessee while being searched for a crime in another state, thus allowing them to treat it as interstate fugitive from a crime scene
Source: I live in Fargo and have been following this story closely. Everyone here is pissed.
I wonder who is slandering her more... WOW
Maybe the citys insurance carrier hired a FIRM...
They will be taking a hit.
Maybe she objected to the extradition order without good counsel.
"I ain't never been to N. Dakota." She found out the hard way how the law works.
What about the banks being hit. Surely they have good cameras. This was bad mojo. I would think a Wells Fargo/BoA has a unit for this stuff.
Financial crimes are handled like this. The banks will be sued too, I suspect. Deep pockets settle out.
This is how it should work, but I still think it is important to discuss these failures in the context of AI risks.
One of the largest real-world dangers of AI (as we define that now) is that it is often confidently wrong and this is a terrible situation when it comes to human factors.
A lot of people are wired in such a way that perceived confidence hacks right through their amygdala and they immediately default to trust, no matter how unwarranted.
https://news.ycombinator.com/item?id=47356968
The fundamental problem is that among the 350 million people living in the United States, there are a lot of pairs of people who look pretty darn similar. It used to be impractical to ask a question like "who in the US looks like the person in this security footage", and so as a matter of practicality, once you found someone who looks like the suspect, you probably also have other evidence, even if it's pretty weak, linking them to the crime.
But with AI, you can ask "who in the US looks like this person", and so we need to re-calibrate what it means if all you know is that someone looks like a suspect. I am of the opinion that "looks like someone," in the absence of any other evidence, is reasonable suspicion, but not probable cause, that you are the person you look like. Reasonable suspicion is enough for the police to stop you on the street and ask for your ID, but not enough to arrest you. There are other data points that alone might not even be reasonable suspicion, but could be combined with "looks like someone" to make probable cause, such as "was near the place at the time the crime happened".
AI isn't really the problem, even whether or not the AI's determination that two people look alike is valid or reviewed by a human isn't the problem. The problem is assuming that because two people look alike they must be the same person, even if you have no other evidence of them being the same person.
"[I]t’s not just a technology problem, it’s a technology and people problem."
I can't. I just can't.
If you look at examples of people quoting on the internet, lots are out of context, paraphrased, or made up.
AI is just mimicking what it has seen.
However, the system uses a dragnet approach, and is checking against millions of people. If you are checking 300 million people, that 99.999% accuracy check is going to find 3,000 people, and AT LEAST 99.96% of those people are going to be innocent.
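The base-rate arithmetic above can be checked directly. Illustrative numbers only, and it assumes a single true match in the database:

```python
# Dragnet face search: even a very accurate matcher produces a flood
# of false positives when run against an entire population.
population = 300_000_000
false_positive_rate = 1 - 0.99999   # a "99.999% accurate" matcher
true_matches = 1                    # one actual perpetrator in the data

false_positives = population * false_positive_rate
total_flagged = false_positives + true_matches
innocent_share = false_positives / total_flagged

print(f"{false_positives:,.0f} innocent people flagged")       # 3,000
print(f"{innocent_share:.2%} of flagged people are innocent")  # 99.97%
```

In other words, a "match" from such a system is, on its own, overwhelmingly likely to be wrong; it is a lead generator, not evidence.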
This is why we can’t have wide, automated surveillance.
https://news.ycombinator.com/item?id=47356968
https://youtu.be/lPUBXN2Fd_E
https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...
I wonder if AI / shadow IT will change that.
I doubt it.
Computing has traditionally been all about math and logic. This is really all that a binary logic computer is capable of. When applied to this purpose, it can offer highly accurate results at very low cost.
Current AI is an attempt to branch out from simply calculating into decision making. But it does so in the worst possible way --- using probability and statistics (aka guesswork) instead of logic and reasoning. In other words, AI offers questionable results at high cost.
As this article shows, relying on guesswork is a legal liability issue waiting to happen in many (if not most) operating environments.
I fully agree, this seems like a legal liability issue waiting to happen.
The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.
The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.
That would be the vendors, the system planners, and the institutions that greenlit this. It would also include the larger financial tech circle that is trying to drive large scale AI adoption. Like Peter Thiel, who sees technology as an "alternative to politics". I.e. a way to circumvent democracy [1]
[1] https://stavroulapabst.substack.com/p/techxgeopolitics-18-te...
As much as I detest Clearview and Thiel the fault for this incident falls squarely on the justice system.
Only one small little problem --- there is no way to tell if you are using it "correctly".
The only way to be sure is to not use it.
Using it basically boils down to, "Do you feel lucky?".
The Fargo police didn't get lucky in this case. And now the liability kicks in.
Look for similar to play out elsewhere --- using unreliable tools for decision making is not a good, responsible business plan. And lawyers are just waiting to press the point.
I’m very opposed to AI in general, but this one is clearly human failure.
The noteworthy AI angle is the undeserved credence police gave to AI information. But that is ultimately their failure; they should be investigating all information they receive.
Absolutely.
The failure starts with tool vendors who market these statistical/probabilistic pattern searchers as "intelligent". The Fargo police failed to fully evaluate these marketing claims before applying them to their work.
So in the same way that the failure rolled down hill, liability needs to roll back up.
At some point, you have to decide if wasting good money on bad intel makes sense.
https://www.lawlegalhub.com/how-much-is-a-wrongful-arrest-la...
But...
> there is no way to tell if you are using it "correctly".
This simply isn't true, at least in cases like this.
I know common sense isn't really all that common, but why would you give more credence to an untested tool than an untested crack-addled human informant?
The entire point of the informant, or the AI in this instance, is to generate leads. Which subsequently need to be checked.
But this approach negates much of the incentive to pay for questionable results.
As is true with results from people.
> But this approach negates much of the incentive to pay for questionable results.
I'm not sure that follows. Even the crack-addled human informant has always been paid for questionable results.
People as untrustworthy as AI often fail to maintain their jobs.
Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.
Otherwise, I'd say it's an extremely lazy argument
Famously, abortions are a woman thing.
Anyway, looking through the facts, it's just some random woman. There's better evidence that these facial recognition systems are much worse at minorities rather than genders.
Interesting biases are own-gender: https://pmc.ncbi.nlm.nih.gov/articles/PMC11841357/
Racial bias:
https://mitsloan.mit.edu/ideas-made-to-matter/unmasking-bias...
Miss rates:
https://par.nsf.gov/servlets/purl/10358566
Although you can probably interpret the facts differently, we've seen how any search function gets enshittified: Once people get used to searching for things, they tend to select something that returns results vs something that fails to return results.
Rather than the user blaming themselves, they blame the search tool. As such, any search system over time will bias toward returning results (e.g., Outlook) rather than accuracy.
So if these systems easily miss certain classes of people, women, minorities, they'll more likely be surfaced as inaccurate matches rather than men who'll have a higher confidence of being screened out.
That's how I interpret this 2-second comment.
I don’t think they’re as stupid as you suggest.
Now factor in the slow decline of our educational institutions, where grade inflation has systematically diminished the credibility of a degree. I would wager that many C students today would have failed out completely 30 years ago.
In that light, it is not surprising that people are seeing ICE agents behave like brown shirts. No one in power wants those people asking any kind of hard questions about what they are being ordered to do.
They literally aim to be dumber than average.
[1] https://en.wikipedia.org/wiki/Wonderlic_test
I'll just leave this here:
https://abcnews.com/US/court-oks-barring-high-iqs-cops/story...
Really? Maybe your perception of the "average" person is colored by where you live and who you interact with.
In any case, the dumber they are, the more lethal they are.
https://www.sciencedirect.com/science/article/abs/pii/S01602...
In any case I stand by my assessment. Someone with a 100 IQ is perfectly capable of being a competent, well behaved police officer. And while your article suggests that lethality increases with lower IQs, I do think some of the biggest assholes I've ever had the displeasure of interacting with were legitimately brilliant otherwise. I wouldn't want them given power over others.
Sure. But, depending on which source you use, 104 or 98.4 is average, and the standard deviation is between 11 and 14. If we use the most generous of those numbers, that would still mean that 21% of the police have an IQ below 93.
And while a policeman with an IQ below 93 might even manage to do most things OK, I submit that the amount of training necessary to get them to understand the limitations of AI is almost certainly much higher than what they have received to date.
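Taking the most generous figures cited above (mean 104, SD 14) and assuming a normal distribution, the 21% claim checks out:

```python
from math import erf, sqrt

def share_below(threshold: float, mean: float, sd: float) -> float:
    """Fraction of a normal distribution falling below a threshold
    (the normal CDF, expressed via the error function)."""
    z = (threshold - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

# Mean and SD are the most generous of the figures cited above.
p = share_below(93, mean=104, sd=14)
print(f"{p:.1%} below IQ 93")  # about 21.6%
```

With the less generous figures (mean 98.4, SD 11) the share below 93 would of course be larger still.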