Pre-crime: White House reportedly looking to use consumer tech to detect violent shooters before they act

Reaction to this on social media is torn between “No way would the freedom-loving people of the US of A ever agree to this!” and “Uh, of course we’ll agree to it. We place microphones in our homes that transmit the things we say directly to Big Tech companies’ servers and we pay them for the privilege.”

I mean, we’re idiots who have traded privacy for connectivity. Sure we’ll go along with it, especially if the consent required is nothing more onerous than agreeing to some app’s terms of service.

The story is vague on details but it sounds like they’re going to gather data from volunteers and then do … something with it. Develop behavioral algorithms that detect the stirrings of violent intent, I assume. What’s unclear is how and where those algorithms would be applied. There’s no point to building pre-crime tech if it can only be used on people who consent to its usage. All the baddies would simply opt out. Application will necessarily need to be non-consensual, or sort of sub-consensual — packaged into everyday tech, in other words, with the consent provisions tucked away in paragraph 371 of the TOS.

Imagine if you couldn’t use an Apple or Google product anymore without consenting to let the federal government monitor your behavior for signs of homicidal rage.

The attempt to use volunteer data to identify “neurobehavioral signs” of “someone headed toward a violent explosive act” would be a four-year project costing an estimated $40 million to $60 million, according to Geoffrey Ling, the lead scientific adviser on HARPA [Health Advanced Research Projects Agency] and a founding director of DARPA’s Biological Technologies Office…

The idea is for the agency to develop a “sensor suite” using advanced artificial intelligence to try to identify changes in mental status that could make an individual more prone to violent behavior. The research would ultimately be opened to the public.

HARPA would develop “breakthrough technologies with high specificity and sensitivity for early diagnosis of neuropsychiatric violence,” says a copy of the proposal. “A multi-modality solution, along with real-time data analytics, is needed to achieve such an accurate diagnosis.”

The document goes on to list a number of widely used technologies it suggests could be employed to help collect data, including Apple Watches, Fitbits, Amazon Echo and Google Home. The document also mentions “powerful tools” collected by health-care providers like fMRIs, tractography and image analysis.

The name of the program is “SAFE HOME,” i.e. “Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes.” The ghost of George Orwell is reading that over my shoulder, muttering, “Geez, take it down a notch.”

When the SWAT team kicks down your door and ransacks your home for your secret arsenal of machine guns because you watched the wrong video on YouTube, and ends up finding nothing, do you at least get to sue the government for damages? Or do you waive that right too when you agree to the terms of service?

If there were tech that could detect the first inklings of bad behavior, the Chinese would already have it, I suspect, but maybe they’re working on this too. Trump reportedly has reacted “very positively” to the idea, per WaPo, which is on-brand: If the feds can do something to impose law and order on criminals, there are no privacy or civil-liberties concerns that are going to impede that mission in his mind. Besides, what’s being proposed here is really just a logical extension of “red flag” laws. Those laws depend on family members and police to make a subjective judgment about whether someone is a threat to others and to suspend their Second Amendment rights temporarily if they can convince a judge of it. SAFE HOME would purport to render a scientific judgment about whether someone’s a threat before their right to a gun is taken away. That might actually result in fewer people being accused than would be the case under “red flag” laws.

But it would also depersonalize the process and give the feds a patina of “science” in infringing on people’s rights. Who are we to question the algorithm?

My only comfort as we lurch towards this brave new world is that Congress can’t come together to pass a farking thing anymore apart from must-pass appropriations bills when a shutdown is bearing down on them. Surely the hardcore anti-gun left and Trumpish law-and-order right wouldn’t find common ground on pre-crime legislation, would they?