Facial Recognition Leads To False Arrest Of Black Man In Detroit

  • Here is a part that I personally have to wrestle with:

    > "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

    When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If the police had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

    Not too long ago, I wrote a comment here about this [1]:

    > The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

    > When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.

    Most of us here can be excited about facial recognition technology and still know that it's not something to be deployed in the field. It's by no means ready. We might even pause over the ethics of building it as a toy.

    But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

    [1]: https://news.ycombinator.com/item?id=21339530

  • This story is really alarming because as described, the police ran a face recognition tool based on a frame of grainy security footage and got a positive hit. Does this tool give any indication of a confidence value? Does it return a list (sorted by confidence) of possible suspects, or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?

    The issue of face recognition algorithms performing worse on dark faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?

  • This is a classic example of the false positive rate fallacy.

    Let's say that there are a million people, and the police have photos of 100,000 of them. A crime is committed, they pull the surveillance footage, and they match it against their database. They have a funky image-matching system with a false positive rate of 1 in 100,000 people, which is way more accurate than I think facial recognition systems are right now, but let's just roll with it. On average, this system will produce about one false positive per search, whether or not the real culprit is in the database. So, the police roll up to that person's home and arrest them.

    Then, in court, they get to argue that their system has a 1 in 100,000 false positive rate, so there is a chance of 1 in 100,000 that this person is innocent.

    Wrong!

    There are, on average, about ten people in the population of 1 million for whom the software would produce a positive hit. They can't all be the culprit. The chance isn't 1 in 100,000 that the person is innocent - it is in fact at least 9 out of 10 that they are innocent. This person just happens to be the one of those ten who had the bad luck to be stored in the police database. Nothing more.
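
    To make the arithmetic concrete, here's a minimal sketch using the made-up numbers above. The final figure is a crude estimate (it assumes the system never misses a true match and treats the expected false-positive count as if it were a single probability), but it shows the gap between "1 in 100,000" and the actual chance of guilt:

    ```python
    # Rough arithmetic behind the base-rate fallacy, using the made-up
    # numbers from this comment (not real figures for any deployed system).

    population = 1_000_000              # people who could plausibly be the culprit
    database = 100_000                  # people whose photos the police hold
    false_positive_rate = 1 / 100_000   # per-person chance of a spurious match

    # Expected innocent people in the whole population who would match
    matchers_in_population = population * false_positive_rate      # ~10

    # Expected false positives inside the police database per search
    false_hits_in_db = database * false_positive_rate              # ~1

    # Chance the real culprit is even in the database at all
    p_culprit_in_db = database / population                         # 0.1

    # Given a single hit comes back: the culprit contributes a hit only if
    # they are in the database (~0.1), while false positives contribute
    # ~1 hit regardless, so the hit is probably an innocent person.
    p_hit_is_culprit = p_culprit_in_db / (p_culprit_in_db + false_hits_in_db)

    print(f"innocent matchers in the population: ~{matchers_in_population:.0f}")
    print(f"chance a lone hit is actually the culprit: ~{p_hit_is_culprit:.0%}")  # ~9%
    ```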

  • He wasn't arrested until the shop owner had also "identified" him. The cops used a single frame of grainy video to pull his driver's license photo, and then put that photo in a lineup and showed the store clerk.

    The store clerk (who hadn't witnessed the crime and was going off the same frame of video fed into the facial recognition software) said the driver's license photo was a match.

    There are several problems with the conduct of the police in this story but IMHO the use of facial recognition is not the most egregious.

  • > "I picked it up and held it to my face and told him, 'I hope you don't think all Black people look alike,' " Williams said.

    I'm white. I grew up around a sea of white faces. Often when watching a movie filled with a cast of non-white faces, I will have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.

    So unfortunately, yes, I probably do have more trouble distinguishing one black face from another vs one white face from another.

    This is known as the cross-race effect and it's only something I became aware of in the last 5-10 years.

    Add to that the fallibility of human memory, and I can't believe we still even use lineups. Are there any studies on how often lineups identify the wrong person?

    https://en.wikipedia.org/wiki/Cross-race_effect

  • There is just so much wrong with this story. For starters:

    The shoplifting incident occurred in October 2018, but it wasn’t until March 2019 that the police uploaded the security camera images to the state image-recognition system, and they then waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they might have been doing on a particular day 15 months previously. Though, as it turns out, the NPR report states that the police did not even try to ascertain whether or not he had an alibi.

    Also, after 15 months, there is virtually no chance that any eyewitness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.

    This WUSF article [1] includes a photo of the actual “Investigative Lead Report” and the original image is far too dark for anyone (human or algorithm) to recognise the person. It’s possible that the original is better quality and that more detail could be discerned by applying image-processing filters – but it still looks like a very noisy source.

    That same “Investigative Lead Report” also clearly states that “This document is not a positive identification … and is not probable cause to arrest. Further investigation is needed to develop probable cause of arrest”.

    The New York Times article [2] states that this facial recognition technology, which Michigan taxpayers have paid millions of dollars for, is known to be biased and that the vendors do “not formally measure the systems’ accuracy or bias”.

    Finally, the original NPR article states that

    > "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack

    [1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-re...

    [2] https://www.nytimes.com/2020/06/24/technology/facial-recogni...

  • It isn't just facial recognition, license plate readers can have the same indefensibly Kafka-esque outcomes where no one is held accountable for verifying computer-generated "evidence". Systems like in the article make it so cheap for the government to make a mistake, since there are few consequences, that they simply accept mistakes as a cost of doing business.

    Someone I know received vehicular fines from San Francisco on an almost weekly basis solely from license plate reader hits. The documentary evidence sent with the fines clearly showed her car had been misidentified but no one ever bothered to check. She was forced to fight each and every fine because they come with a presumption of guilt, but as soon as she cleared one they would send her a new one. The experience became extremely upsetting for her, the entire bureaucracy simply didn't care.

    It took threats of legal action against the city for them to set a flag that apparently causes violations attributed to her car to be manually reviewed. The city itself claimed the system was only 80-90% accurate, but they didn't believe that to be a problem.

  • Since the NPR piece is a 3-minute listen without a transcript, here's the ACLU's text/image article: https://www.aclu.org/news/privacy-technology/wrongfully-arre...

    And here's a 1st-person account from the arrested man: https://www.washingtonpost.com/opinions/2020/06/24/i-was-wro...

  • From the ACLU article:

    > Third, Robert’s arrest demonstrates why claims that face recognition isn’t dangerous are far-removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation.

    I fear this is going to be the norm among police investigations.

  • > Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people.

    The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team, and board members represented the population. Even just enough representation to provide a countering voice would help.

    The reaction would simply have been: "this edge case is not an edge case at all, axe it."

    Accurately addressing a market is the point of the corporation more than an illusion of meritocracy amongst the employees.

  • The discussion about this tech revolves around accuracy and racism, but the real threat is global unlimited surveillance. China is installing 200 million facial recognition cameras right now to keep the population under control. It might be the death of human freedom as this technology spreads.

    Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...

  • Reminds me of this-

    Facial recognition technology flagged 26 California lawmakers as criminals. (August 2019)

    https://www.mercurynews.com/2019/08/14/facial-recognition-te...

  • Another reason that it's absolutely insane that the state demands to know where you sleep at night in a free society. These clowns were able to just show up at his house and kidnap him.

    The practice of disclosing one's residence address to the state (for sale to data brokers[1] and accessible by stalkers and the like) when these kinds of abuses are happening is something that needs to stop. There's absolutely no reason that an ID should be gated on the state knowing your residence. It's none of their business. (It's not on a passport. Why is it on a driver's license?)

    [1]: https://www.newsweek.com/dmv-drivers-license-data-database-i...

  • Perhaps we, as technologists, are going about this the wrong way. Maybe, instead of trying to reduce the false alarm rate to an arbitrarily low number, we instead develop CFAR (constant false alarm rate) systems, so that users of the system know they will get some false alarms and develop procedures for responding appropriately. In that way, we could get the benefit of the technology, whilst also ensuring that the system as a whole (man and machine together) is designed to be robust and has appropriate checks and balances.
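
    For what it's worth, here is a minimal sketch of what that calibration step could look like for a face matcher, with entirely made-up score distributions and a hypothetical target rate. The point is that the expected number of false alarms becomes an explicit, published design parameter that investigative procedures can be built around, rather than an afterthought:

    ```python
    # Sketch of a constant-false-alarm-rate (CFAR) style calibration for a
    # face matcher. The score distributions and target rate are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Held-out calibration scores: pairs known to be different people
    # (impostors) and pairs known to be the same person (genuine).
    impostor_scores = rng.normal(loc=0.30, scale=0.10, size=100_000)
    genuine_scores = rng.normal(loc=0.70, scale=0.10, size=10_000)

    target_far = 1e-3  # design around 1 false alarm per 1,000 comparisons

    # CFAR-style threshold: the (1 - target_far) quantile of impostor scores,
    # so the false alarm rate stays pinned at the advertised value.
    threshold = np.quantile(impostor_scores, 1.0 - target_far)

    # What that operating point costs in missed true matches.
    miss_rate = np.mean(genuine_scores < threshold)

    print(f"threshold: {threshold:.3f}")
    print(f"designed false alarm rate: {target_far:.1%} per comparison")
    print(f"miss rate on genuine pairs: {miss_rate:.1%}")
    ```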

  • I don't think using facial recognition to help identify probable suspects is necessarily wrong, but arresting someone based on a facial match algorithm is definitely going too far.

    Of course, I really blame the AI/ML hucksters for part of this mess: they have sold us the idea of machines replacing rather than augmenting human decision making.

  • A few things I just don't have the stomach for as an engineer: writing software that impacts someone's health, someone's finances, or someone's freedoms.

    Call me weak, but I think about the "what ifs" a bit too much in those cases. What if my bug keeps them from selling their stock and they lose their savings? What if the wrong person is arrested, etc?

  • I think that your prints, DNA, and so forth must be, in the interests of fairness, utterly erased from all systems in the case of false arrest. With some kind of enormous, ruinous financial penalty in place for the organizations for non-compliance, as well as automatic jail times for involved personnel. These things need teeth to happen.

  • Any defence lawyer with more than 3 brain cells would have an absolute field day deconstructing a case brought solely on the basis of a facial recognition match. What happened to the idea that police need to gather a variety of evidence confirming their suspicions before making an arrest? Even a state prosecutor wouldn't authorize a warrant based on such flimsy methods.

  • The company that developed this software is DataWorks Plus, according to the article. Name and shame.

  • And then in some states employers are allowed to ask whether you have ever been arrested (never mind convicted of any crime) on an employment application. Sure, keep putting people down. One day it might catch up with China's social scoring policies.

  • Is that different from somebody getting arrested based on a mistaken eyewitness identification?

  • What is a unique use case for facial recognition that cannot be abused and has no other alternative solution?

    Even the "good" use cases like unlocking your phone have security problems because malicious people can use photos or videos of your face and you can't change your face like you would a breached username and password.

  • I've got to be honest: I'm getting the picture that the police here aren't very competent. I know, I know, POSIWID, and maybe they're very competently aiming at the current outcome. But don't they just look like a bunch of idiots?

  • In this particular case, computerized facial recognition is not the problem.

    Facial recognition produces potential matches. It's still up to humans to look at footage themselves and use their judgment as to whether it's actually the same person or not, as well as to judge whether other elements fit the suspect or not.

    The problem here is 100% on the cop(s) who made that call for themselves, or intentionally ignored obvious differences. (Of course, without us seeing the actual images in question, it's hard to judge.)

    There are plenty of dangers with facial recognition (like using it at scale, or to track people without accountability), but this case doesn't seem to be one of them.

  • > Even if this technology does become accurate (at the expense of people like me), I don’t want my daughters’ faces to be part of some government database.

    Stop using Amazon Ring and similar doorbell products.

  • The pandemic has accelerated the use of no-touch surfaces, especially at places like airports, which are now more inclined to use face recognition security kiosks. What's not clear is the vetting process for these (albeit controversial) technologies. What if Google thinks person A is an offender but Amazon thinks otherwise? Can they be used as counter-evidence? What is the gold standard for surveillance?

  • NPR article about the same, if you prefer to read instead of listen: https://www.npr.org/2020/06/24/882683463/the-computer-got-it...

    I'll be watching this case with great interest.

  • Sadly, there's plenty more where that came from.

  • And now the poor guy has an arrest record. Which wouldn't be a problem in reasonable jurisdictions, where it's nobody's business whether you've been arrested or not, as long as you've not been convicted.

    But in the US, I've heard that it can make it harder to get a job.

    I believe I'm starting to get a feel for how the school to prison pipeline may work.

  • Wait until you hear about how garbage and unscientific fingerprint identification is.

  • In a lot of police departments around the world, the photo database used is the driver's license database.

    There is clothing available that can confuse facial recognition systems. What would happen if, next time you went for your driver's license photo, you wore a T-shirt designed to confuse facial recognition, like this one, for example? https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothi...

  • I would love to see police trying to take a crack at this from the other side of things. Instead of matching against a database, set up a StyleGAN, mask the original photo or video to isolate just the face, and have the discriminator try to match the face. Then at the end you can see the generated face with a decent pose and, more importantly, look through the range of generated faces that result in a reasonable match, which would give a somewhat decent idea of how confident you should be about any identification.
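
    One cheap way to prototype the "range of matches" half of that idea is a latent-space search rather than a trained discriminator. The sketch below uses toy, hypothetical stand-ins for the generator and for the degradation of the surveillance frame (a real version would plug in a pretrained StyleGAN and a learned camera/blur model); the payoff is the spread among the best-fitting faces, which tells you how badly the frame under-determines identity:

    ```python
    # Toy sketch: search a generator's latent space for faces whose degraded
    # rendering matches the grainy frame, then measure how different the
    # best matches are from each other. `generator` and `downgrade` are
    # stand-ins, not a real StyleGAN or camera model.
    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM, IMG_PIXELS = 16, 64
    W = 0.2 * rng.normal(size=(IMG_PIXELS, LATENT_DIM))  # toy "generator" weights

    def generator(z):
        # Stand-in for a pretrained face generator.
        return np.tanh(W @ z)

    def downgrade(img):
        # Stand-in for the blur/noise/subsampling of the surveillance frame.
        return img[::4] + rng.normal(scale=0.05, size=IMG_PIXELS // 4)

    observed = downgrade(generator(rng.normal(size=LATENT_DIM)))  # the grainy frame

    # Sample candidate latents and rank them by how well they explain the frame.
    zs = rng.normal(size=(20_000, LATENT_DIM))
    errs = np.array([np.mean((downgrade(generator(z)) - observed) ** 2) for z in zs])
    top = zs[np.argsort(errs)[:50]]

    # If the best-fitting faces are far apart in latent space (visibly different
    # people), no single identification from this frame deserves much confidence.
    spread = np.mean(np.linalg.norm(top - top.mean(axis=0), axis=1))
    print(f"best-fit error: {errs.min():.3f}")
    print(f"latent spread among top 50 matches: {spread:.2f}")
    ```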

  • While this case is bad enough, mistakes like this are not the biggest concern. Mistakenly arrested people are (hopefully) eventually released, even though they have to go through quite a bit of trouble.

    The consequence that is much worse would be mass incarceration of certain groups, because the AI is too good at catching people who actually did something.

    This second wave of mass incarceration will lead to even more single parent families and poor households, and will reinforce the current situation.

  • How does computerized facial recognition compare in terms of racial bias and accuracy to human-brain facial recognition? Police are not exactly perfect in either regard.

  • It's supposed to be a cornerstone of "innocent until proven guilty" legal systems that it is better to have 10 guilty people go free than to deprive a single innocent person of their freedom. It seems like the needle has been moving in the wrong direction on that. I'm not sure if that's just my impression of things, or if it's because there's more awareness of these issues with the internet/social networking...

  • No mention of whether a judge signed a warrant for the arrest. In what world can cops just show up and arrest you on your front lawn based on their hunch?

  • If it's statistically proven to not work with black people then I think the only options are

    1) Make it avoid black people, i.e. they aren't stored in the database and aren't processed when scanned.

    2) Put a 5 year hiatus on commercial / public use.

    Either of these things is more acceptable than too many false positives. #1 is really interesting to me as a thought experiment because it makes everyone think twice.

  • This technology will never be ready to use like this.

    Similarly we shouldn’t collect vast databases of fingerprints or DNA and search them for every crime.

    Why? Because error rates are unavoidable. There is always some uncertainty, and in large enough numbers you will find false matches even with near-perfect DNA matching.
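
    To put a number on why: even a tiny per-comparison coincidental-match probability becomes a near-certainty when an entire database is trawled for every crime. The figures below are purely illustrative, not real forensic error rates:

    ```python
    # Probability of at least one coincidental hit when trawling a database.
    # Both numbers are illustrative, not real forensic error rates.
    p_random_match = 1e-6        # chance an unrelated person matches by chance
    database_size = 10_000_000   # profiles searched per crime

    p_any_false_match = 1 - (1 - p_random_match) ** database_size
    print(f"P(at least one coincidental match) = {p_any_false_match:.3%}")  # ~99.995%
    ```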

    We must keep our senses and use these technologies to help us rather than find the hidden bad guy.

  • Well, I'm going to get something off my chest. Every time I shared a project here using machine learning, people gave me crap: saying my models were simplistic, or that I did something wrong, or that the solution didn't work 100% of the time. Well, I studied ML back in college - the basics, the algorithms that started it all: linear regression, perceptron, adaline, k-NN, k-means... and guess what? ML doesn't work 100% of the time. I always wanted to see how people would react when a car driven by ML hits something, or when they base an important decision on the classification of a neural network. ML should be used alongside human intelligence, not by itself. You don't blindly trust a black box.

  • Due process should not be abandoned in favour of automation. This was police negligence as much as it was a software mismatch.

    One more thing: the article was being too dramatic about the whole incident.

  • The worst part is they use facial recognition, which finds someone who looks like the suspect, and then they put the guy in a lineup and have him identified by the victim. Wtf?

  • The prosecutor and the police chief should personally apologize to his daughters, assuming that would be age appropriate.

  • I've been thinking this sort of event has become inevitable. Tech development and business models support extending the environments in which we collect images and analyze them. Confidence values lead to statistical guilt. I wrote about it here if interested: https://unintendedconsequenc.es/inevitable-surveillance/

  • > In Williams' case, police had asked the store security guard, who had not witnessed the robbery, to pick the suspect out of a photo lineup based on the footage, and the security guard selected Williams.

    Great job police

  • Boston just banned facial recognition, as have San Francisco, Oakland and a bunch of other cities.

    You can join this movement by urging your local government officials to follow suit.

  • A human still confirmed the match right? That makes this not a facial recognition issue but something else.

  • TOTAL FAIL.

  • All that rigmarole for $3800 worth of crap? They should just switch it up and start entrapping people like the FBI does. Then at least they would have perhaps one leg to stand on.

  • Understandable, all black men look the same.

  • sounds like this guy is about to get a big payday.