How China’s Pre-Crime Algorithms Work – and Their Fatal Flaw

In a previous article, we looked at how George Orwell's dystopian 1984 looks less and less like fiction as the Chinese Communist Party harnesses the capabilities of AI and Big Data to surveil its entire population. But beyond monitoring citizens’ movements in real time, the CCP also hopes to predict crimes and protests before they happen.

In a follow-up story in the New York Times, Paul Mozur, Muyi Xiao, and John Liu report on how the CCP is also bringing the dystopian world of Philip K. Dick’s Minority Report (2002) to real life, with one difference: rather than human “precogs” who can see the future, the CCP relies on algorithms that comb large swathes of data for patterns of behavior.

Tiananmen Square Watch

The Times remarks that the algorithms – which would prove controversial in other countries – are often presented as triumphs by the Chinese government:

The new approach to surveillance is partly based on data-driven policing software from the United States and Europe, technology that rights groups say has encoded racism into decisions such as which neighborhoods are most heavily policed and which prisoners get parole. China takes it to the extreme, tapping nationwide reservoirs of data that allow the police to operate with opacity and impunity.

Paul Mozur, Muyi Xiao and John Liu, “An Invisible Cage: How China Is Policing the Future” at The New York Times (June 25, 2022)

Chinese provincial authorities have released tender documents for algorithms that can flag patterns of behavior that indicate potential future crime. However, in China under Communist Party rule, crimes can include practicing a banned religion or organizing a protest, if Beijing views the religion or protest as a threat to stability. Monitoring tools automatically flag certain groups whether or not they have a criminal history, including Uyghurs, other ethnic minorities, migrant workers and people with mental illness.

Wi-Fi sniffers, for example, will intercept phone communications, and phone apps like a Uyghur–Chinese dictionary will be flagged automatically. Additionally, monitoring tools carry a list of people to ignore, known as the “Red List,” which The Times said consists mostly of government officials.
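The reporting describes these triggers only as rules in procurement documents, not as published code. Below is a minimal sketch of how such flag-and-exempt logic might look; every identifier, field, and value is purely hypothetical.

```python
# Hypothetical sketch of rule-based flagging with a "Red List" exemption.
# All names, fields, and values are illustrative, not from any real system.

FLAGGED_APPS = {"uyghur_chinese_dictionary"}                      # apps that trigger a flag
FLAGGED_GROUPS = {"ethnic_minority", "migrant_worker", "mental_illness"}
RED_LIST = {"official_123"}                                       # IDs exempt from monitoring

def should_flag(person: dict) -> bool:
    """Return True if a monitored person matches any trigger rule."""
    if person["id"] in RED_LIST:                                  # Red List entries are skipped
        return False
    if FLAGGED_APPS & set(person.get("installed_apps", [])):      # flagged app detected
        return True
    if FLAGGED_GROUPS & set(person.get("group_labels", [])):      # flagged group membership
        return True
    return False

# A phone picked up by a Wi-Fi sniffer with a flagged app installed is flagged;
# the same signal from a Red List ID is ignored.
print(should_flag({"id": "p001", "installed_apps": ["uyghur_chinese_dictionary"]}))          # True
print(should_flag({"id": "official_123", "installed_apps": ["uyghur_chinese_dictionary"]}))  # False
```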

Surveillance in areas surrounding Beijing is particularly extensive.

The algorithm targets those who draw attention to social issues

The system collects data on legions of Chinese petitioners, a general term in China that describes people who attempt to file complaints against local officials with higher authorities.

It then rates petitioners based on how likely they are to travel to Beijing. Going forward, the data will be used to train machine learning models, according to a procurement document.

Paul Mozur, Muyi Xiao and John Liu, “An Invisible Cage: How China Is Policing the Future” at The New York Times (June 25, 2022)

Beijing becomes particularly concerned when discontent appears organized and widespread. The government doesn’t want people gathering in the capital or drawing attention to an issue. For this reason, the Cyberspace Administration of China censors online dissatisfaction with government failures. A study of the procurement documents by The Times and ChinaFile showed:

Many people who petition do so over government mishandling of a tragic accident or neglect of their case – all of which goes into the algorithm. “Increase a person’s early warning risk level if they have lower social status or have been through a major tragedy,” the procurement document reads.

Paul Mozur, Muyi Xiao and John Liu, “An Invisible Cage: How China Is Policing the Future” at The New York Times (June 25, 2022)
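The quoted rule is additive: certain attributes raise a person’s early-warning level until it crosses some threshold. Here is a minimal sketch of that kind of scoring. Only the two quoted attributes come from the reporting; the weights, the trips-to-Beijing rule, and the threshold are assumptions made for illustration.

```python
# Hypothetical additive "early warning" scoring. Only the two quoted rules
# (low social status, major tragedy) come from the reporting; the weights,
# the trips-to-Beijing rule, and the threshold are invented for illustration.

def early_warning_score(petitioner: dict) -> int:
    score = 0
    if petitioner.get("low_social_status"):
        score += 2          # quoted rule: lower social status raises the risk level
    if petitioner.get("experienced_major_tragedy"):
        score += 2          # quoted rule: a major tragedy raises the risk level
    score += 3 * petitioner.get("prior_trips_to_beijing", 0)   # assumed rule
    return score

ALERT_THRESHOLD = 4         # illustrative cutoff

petitioner = {"low_social_status": True, "experienced_major_tragedy": True}
print(early_warning_score(petitioner) >= ALERT_THRESHOLD)   # True -> flagged for attention
```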

Recently, several protests have been suppressed – sometimes violently – and then censored online. One “erased” demonstration called for a boycott of mortgage payments until developers finish their stalled housing projects. Another protested a rural bank’s freezing of clients’ savings accounts.

How accurate is algorithmic prediction anyway?

Science fiction is premonitory here. In Minority Report, three people with superhuman abilities can predict when a murder (in the movie) or any crime (in the earlier short story) will happen. The police can then make an arrest on a “pre-crime” charge and thus prevent the murder. However, things get weird when one of the officers is framed for the future murder of someone he doesn’t know. Dick called the story behind the movie “The Minority Report” because the computer used only two of the three precogs’ reports to predict the crime. The precog who didn’t predict the crime provided the minority report.

As the short story unfolds, the officer realizes that the precogs’ reports aren’t as similar as people assumed. The computer system collects the points of agreement among the reports and treats that consensus as the correct assessment of the future. There are more twists in both the film and the short story, but both deal with the question of safety versus freedom in a near-future techno-dystopia.

Our world has no “precogs” who can see possible futures. Instead, we have algorithmic systems that rely on historical trends, which is one of the origins of algorithmic bias. Algorithms are only predictive to the extent that human behavior matches the historical data used to train the algorithm. They leave no room for people who break out of these patterns, either by choice or because of their unique experiences. Algorithms, for example, do not take into account changes in infrastructure or the work of government departments and charities that can give people the opportunity to break out of established patterns.
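To make the point concrete, here is a toy illustration of a predictor built purely from historical base rates; all labels and numbers are invented. A person whose circumstances have changed in a way the historical data never recorded is still scored by their group’s past pattern.

```python
# Toy illustration of prediction from historical base rates alone.
# All labels and numbers are invented; the point is that changed circumstances
# the historical data never recorded are invisible to the model.

historical_repeat_rate = {
    "frequent_petitioner": 0.8,     # fraction of this (invented) group that petitioned again
    "first_time_petitioner": 0.1,
}

def predicted_risk(person: dict) -> float:
    # The model only knows the historical rate for the person's label;
    # a resolved grievance, new support services, or a personal decision
    # to stop petitioning never enters the calculation.
    return historical_repeat_rate[person["label"]]

reformed = {"label": "frequent_petitioner", "grievance_resolved": True}
print(predicted_risk(reformed))     # 0.8: the model assumes the past repeats itself
```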

In this sense, perhaps, a better science-fiction analogy than Minority Report (2002) is Isaac Asimov’s Foundation trilogy, which inspired the recently launched Apple TV+ series, Foundation. In the book series, Hari Seldon, a psychohistorian, uses statistical analysis of the past to predict the future. His calculations predict society’s collapse, which he hatches a plan to mitigate. The plan spans millennia, during which time society discovers the holographic messages he left behind. Everything goes according to his predictions until the Mule appears.

The Mule is an anomaly, a mutant who can read and even manipulate the minds of others. The Mule represents the problem with real-world attempts to use algorithms and big data to predict the future: algorithms cannot predict anomalies in human behavior.

The Fatal Flaw of Algorithmic Prediction

Although an algorithm can, in most cases, predict human behavior, an even more fundamental flaw inhibits algorithmic attempts at prediction: algorithms only work when their surveillance subjects don’t know how they work. In Asimov’s story, the Achilles’ heel of Hari Seldon’s plan to save society is that society must not know about the two groups of scientists and engineers, known as the Foundations, established at opposite ends of the galaxy. The Mule, who does learn of them, therefore seeks to thwart Seldon’s plan and rule from one of the Foundations’ planets.

Likewise, in real life, predictive algorithms don’t work when a person is “gaming the system.” A subject who knows what the algorithm is looking for, or how it works, can manipulate it. This is why the architects of such algorithms try to keep their workings (and sometimes their existence) secret.
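Continuing the hypothetical scoring sketch from above, gaming the system can be as simple as removing whichever observable signals a subject controls once the rule is known. The rule, weights, and threshold here are again invented for illustration.

```python
# Why secrecy matters to such systems: once the scoring rule is known, a subject
# can adjust the inputs under their control to stay below the alert threshold.
# The rule, weights, and threshold are invented for illustration.

def risk_score(signals: dict) -> int:
    return 3 * signals.get("trips_to_beijing", 0) + 2 * signals.get("flagged_app", 0)

ALERT_THRESHOLD = 4

naive = {"trips_to_beijing": 1, "flagged_app": 1}    # unaware of the rule
gaming = {"trips_to_beijing": 1, "flagged_app": 0}   # knows the app is a trigger and deletes it

print(risk_score(naive) >= ALERT_THRESHOLD)    # True  -> flagged
print(risk_score(gaming) >= ALERT_THRESHOLD)   # False -> slips under the threshold
```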

Chinese citizens are willing to compromise their privacy and accept some surveillance in the name of stability and security. But over the past five years, the CCP has had to suppress more and more dissatisfaction among the people. As Maya Wang, senior China researcher at Human Rights Watch, summarized the current situation: “It’s an invisible cage of technology imposed on society.”


You can also read:

Big Brother is watching you (and trying to read your mind) (Gary Smith)

China takes total surveillance of every citizen very seriously (Heather Zeiger)

and

Biggest data heist ever: Stolen data from Shanghai’s state mass surveillance (Heather Zeiger)

Sharon D. Cole