Ryan DeLarme
May 11th, 2022
The level of invasive surveillance taking place each second around the globe is staggering, and that’s only referring to what we actually know about. Chances are that you or somebody you know has been assessed and assigned a threat score based on your associations, activities, and viewpoints.
It was reported back in 2016 that officers in Fresno, California were using controversial new software called “Beware,” which surveils citizens and calculates a “threat score” for each of them.
It may sound like something out of a dystopian nightmare world, but this type of surveillance has been around for a long time. With the rapid growth of technology, those doing the surveilling are finding new ways to keep tabs not only on your present, but also on your probable future.
Intrado, the company that created the threat-scoring software, says that Beware:
“sorts and scores billions of publicly-available commercial records in a matter of seconds – alerting responders to potentially dangerous situations while en route to, or at the location of, a 9-1-1 request for assistance.”
The Washington Post was granted access to the Fresno Police Department’s $600,000 Real-Time Crime Center where Beware is being used. The Post reported:
On 57 monitors that cover the walls of the center, operators zoomed and panned an array of roughly 200 police cameras perched across the city. They could dial up 800 more feeds from the city’s schools and traffic cameras, and they soon hope to add 400 more streams from cameras worn on officers’ bodies and from thousands from local businesses that have surveillance systems.
The cameras were only one tool at the ready. Officers could trawl a private database that has recorded more than 2 billion scans of vehicle licenses plates and locations nationwide. If gunshots were fired, a system called ShotSpotter could triangulate the location using microphones strung around the city. Another program, called Media Sonar, crawled social media looking for illicit activity. Police used it to monitor individuals, threats to schools and hashtags related to gangs.
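The ShotSpotter system mentioned above reportedly triangulates a gunshot's location from microphones placed around a city. ShotSpotter's actual method is proprietary; as a purely illustrative sketch, the general idea of acoustic multilateration can be shown with a toy grid search: a sound reaching each microphone at a slightly different time implies an emission time at each candidate point, and the point where those implied emission times agree best is the estimated source. All coordinates, times, and the grid resolution here are invented for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at sea level


def locate_source(mics, arrival_times, size=200, step=1):
    """Estimate a sound source's (x, y) position on a toy grid.

    mics: list of (x, y) microphone positions in meters.
    arrival_times: when each microphone heard the sound, in seconds.
    Returns the grid point whose implied emission times agree best.
    """
    best_point, best_spread = None, float("inf")
    for x in range(0, size + 1, step):
        for y in range(0, size + 1, step):
            # If the sound came from (x, y), each mic implies an emission time:
            # arrival time minus travel time from (x, y) to that mic.
            implied = [
                t - math.dist((x, y), m) / SPEED_OF_SOUND
                for m, t in zip(mics, arrival_times)
            ]
            spread = max(implied) - min(implied)  # perfect agreement -> 0
            if spread < best_spread:
                best_point, best_spread = (x, y), spread
    return best_point


# Simulated example: four mics at the corners of a 200 m square,
# a gunshot at (60, 140), fired at t = 5.0 s.
mics = [(0, 0), (200, 0), (0, 200), (200, 200)]
source = (60, 140)
times = [5.0 + math.dist(source, m) / SPEED_OF_SOUND for m in mics]
print(locate_source(mics, times))  # recovers (60, 140) on this toy grid
```

Real deployments face noise, echoes, and clock skew, so production systems use far more sophisticated signal processing; this only illustrates the geometric principle.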
In the same fashion that the Harris Corporation keeps the details of its Stingray cell-site simulators and trackers under wraps, Intrado considers the finer details of how Beware calculates threat scores to be a “trade secret.”
Intrado may not be inclined to disclose exactly how the threat score is calculated, but we know that it includes information from arrest reports, databases, web searches, and yes, even social media posts. We all learned from the Cambridge Analytica scandal that today data is more sought after than gold, and we are quickly learning that data probably isn’t harvested only for marketing purposes.
This type of technology is not limited to just law enforcement. Social Services seems to be getting in on the action as well, with computer algorithms attempting to predict which households are more likely to be found guilty of child abuse and neglect. Presumably, all it would take is an AI bot targeting a household for potential neglect for a family to be investigated, potentially landing the child in foster care.
According to the Associated Press, once an incident of potential neglect is reported to a child protection hotline, the report is run through a screening process that pulls together “personal data collected from birth, Medicaid, substance abuse, mental health, jail and probation records, among other government data sets.” The algorithm then assigns a score of 1 to 20, predicting the risk that a child will be placed in foster care in the two years after the investigation. “The higher the number, the greater the risk.” Social workers then use their discretion to decide whether to investigate.
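The actual algorithm described by the AP is proprietary, and its inputs and weights are not public. Purely as a hypothetical illustration of how a screening tool might reduce government records to a 1–20 score, consider this sketch: each record category contributes an invented weight, and the weighted sum is scaled into the 1–20 range. Every feature name and weight below is an assumption made up for the example, not the real system.

```python
# Hypothetical weights for record categories; these are invented for
# illustration and bear no relation to any real screening tool.
WEIGHTS = {
    "prior_hotline_reports": 3.0,
    "substance_abuse_record": 4.0,
    "mental_health_record": 2.0,
    "jail_or_probation_record": 4.0,
    "medicaid_enrollment": 1.0,
}


def risk_score(record):
    """Map a dict of boolean flags to a 1-20 risk score.

    record: e.g. {"prior_hotline_reports": True, ...}; missing keys count
    as False. The raw weighted sum is linearly rescaled so that no flags
    give a score of 1 and all flags give a score of 20.
    """
    raw = sum(w for key, w in WEIGHTS.items() if record.get(key))
    max_raw = sum(WEIGHTS.values())
    return max(1, min(20, round(1 + 19 * raw / max_raw)))


print(risk_score({}))                                 # no flags -> 1
print(risk_score({k: True for k in WEIGHTS}))         # all flags -> 20
```

Even this toy version makes the core civil-liberties concern visible: the score depends entirely on which records a household happens to appear in, so contact with public systems like Medicaid raises the number before any caseworker looks at the facts.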
What they consider potential neglect can include everything from inadequate housing to poor hygiene, which is a far cry from verified physical or sexual abuse. Who knows what factors could play a role going forward; developments like the January 6th Committee suggest it’s not too far-fetched that something like political affiliation could one day become a factor in determining whether you retain guardianship of your own children.
The potential ramifications of how these programs could be used are unnerving. What’s worse is that, as we said before, this is only the tech we know about. Who knows what kind of classified software is being used by governments and militaries around the world?
Ryan DeLarme is an American journalist navigating a labyrinth of political corruption, overreaching corporate influence, a burgeoning censorship-industrial complex, compromised media, and the planned destruction of our constitutional republic. He writes for Badlands Media and is also a host and founder at Vigilant News. Additionally, his writing has been featured in American Thinker, the Post-Liberal, Winter Watch, Underground Newswire, and Stillness in the Storm. He also writes for the alt-media streaming platforms Dauntless Dialogue and Rise.tv. Ryan enjoys gardening, kung fu, creative writing, and fighting to SAVE AMERICA.