Algorithmic Force & Fascism

April 26, 2015


A new apparatus of governance is assembling around big data and its algorithmic processing. The data produced through our daily encounters and interactions is becoming the focus of new ways to develop policy and enforce behaviour change. The raw material for these aspirations is the ‘volume, velocity and variety’ of big data, the granular stream of data points generated by everyday activities and accumulated by technology corporations. In the past, this data was processed for purely commercial ends, from the early use of data mining to find correlations in supermarket purchases to Facebook’s exploitation of the social graph for marketing analytics. Today, advocates are promoting the same methods as a way to get traction on tricky social issues, an approach sometimes known as algorithmic regulation. If massive data processing can create effective online services and eliminate bugs, why not apply these methods to government? After all, the numbers ‘speak for themselves’, and there is a ready-made policy approach that uses behavioural insights to modify government’s interactions based on metrics. Understanding the deeper dangers behind this risk-reduction philosophy means digging into the way data is processed to produce correlations and predictions.

Big data is, strictly speaking, big by virtue of being too big for the machines; at least, too big for single computers or servers to process. The corollary in human terms is that it is also too big to get your head around; there is no way to interpret it directly. The primary methods of sensemaking with big data are data mining and machine learning: data mining looks for patterns in the data, such as associations between variables and clusters, while machine learning enables computers to get better at recognising those patterns in future data. Hence big data can be processed to produce predictions, whether of the likelihood that certain drivers will have a car crash or of the likelihood that certain individuals will be the source of a terrorist attack. The practice of algorithmic preemption is becoming visible in policing. In Chicago, an algorithmic analysis produced a ‘heat list’ of 420 individuals judged likely to be involved in a shooting, using risk factors like previous arrests, drug offences, known associates and their arrest records; those named received personal warning visits from a police commander. The key shift here is from causation to correlation: from evidence of a crime to a probability based on the matching of data variables. This, I believe, will lead to the production of spaces of enforcement outside the law, or what are known as ‘states of exception’.
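To make the mechanism concrete, here is a minimal sketch in Python of how such correlative risk scoring might work: a classifier is fitted to historical records and then used to rank a current population by predicted probability, with the top of the ranking becoming the ‘heat list’. The feature names, the synthetic data and the choice of scikit-learn are all my own assumptions for illustration; the real systems are proprietary, but the logic of ranking by correlation rather than evidence is the point.

```python
# A sketch of correlative risk scoring in the style of a 'heat list'.
# All features and data are invented; nothing here reflects any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: columns stand in for prior_arrests,
# drug_offences, arrested_associates. Labels mark whether the person
# was later involved in a shooting.
X_history = rng.integers(0, 10, size=(1000, 3))
y_history = (X_history.sum(axis=1) + rng.normal(0, 2, 1000) > 14).astype(int)

model = LogisticRegression().fit(X_history, y_history)

# Score the current population and rank it; the top 420 get a visit.
# No individual act is evidenced: a pattern match suffices.
X_now = rng.integers(0, 10, size=(5000, 3))
risk = model.predict_proba(X_now)[:, 1]
heat_list = np.argsort(risk)[::-1][:420]
print(f"highest risk score: {risk[heat_list[0]]:.2f}")
```

Note that the model never outputs a cause, only a probability; whatever biases shaped the historical records are laundered into an apparently neutral score.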

States of exception are states of affairs in which law, rights and the political meaning of life are suspended. The term was developed by Giorgio Agamben to question the legal basis of events such as a declaration of martial law or the introduction of emergency powers, or the creation of spaces like Guantanamo Bay. His analysis starts from the emergency measures of the First World War and reaches an apotheosis in the Third Reich. On the latter, he highlights that Nazi Germany was never formally a dictatorship: the Weimar constitution remained in force, but the Nazis implemented their programme by extending the scope of states of exception outside the law. How is this related to big data and algorithms? One signature of a state of exception is that it acts with ‘force-of’: it has the force of the law even when it is not of the law. Scaling back down to the everyday, consider how our daily lives are becoming modulated by algorithmic processing. It turns out that your chances of getting a payday loan from Wonga are already determined (invisibly, rapidly) by the analysis of varied data, including social media. In Massachusetts, you may find that your driver’s license has been revoked because a facial recognition algorithm has falsely matched you with another driver. The multiplication of machinic decisions based on opaque assumptions is worrying enough, and I have examined elsewhere the emergence of algorithmic states of exception, along with some general suggestions about resistance. Here I want to raise the alarm specifically about the overlap of algorithmic force and the politics of the far right.

We can observe that, in many parts of Europe at least, the far right is on the rise, both on the streets and in terms of political representation. What if we are creating a new apparatus of governance that is particularly suited to the implementation of these agendas? One beachhead could be housing policy, with UKIP, for example, proposing to link housing rights to the residency of grandparents. Concerns have already been expressed in the USA that big data processing will lead to the return of “redlining”, the racial segregation of housing outlawed by the Fair Housing Act of 1968. Now everything from Facebook friendships to Foursquare check-ins could be mobilised to infer ethnic origin, but not in a way that is easy to point a finger at. Big data algorithms cannot deconstruct their own reasoning into human terms; they simply produce correlations. Moreover, the underlying data structures are themselves slippery: as more adept database technologies, such as NoSQL, replace rigidly structured relational databases, it becomes easier to reinterpret data that was collected for a completely different purpose. Today your listening preferences are processed in ‘the cloud’ to suggest what else you might like; tomorrow they become part of a distant reading of your ethnicity or politics. Implementing policies through algorithmic states of exception blindsides structural oversight and possibly even popular opposition. But the resonance of the new apparatuses with the right wing is more than bad luck, for at least two reasons: the centrality of big business, and the affinity of the ideology to governance based on correlations.
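To illustrate that slipperiness, here is a minimal sketch, with invented field names and an invented proxy rule, of how the same schemaless records can answer both the innocuous question they were collected for and a sensitive one they were not. Nothing in the store enforces the original purpose; only the query changes.

```python
# Schemaless documents collected to power music recommendations.
# Fields, genres and the proxy rule below are invented for illustration.
from collections import Counter

listening_log = [
    {"user": "u1", "artist": "Artist A", "genre": "genre_x", "plays": 50},
    {"user": "u1", "artist": "Artist B", "genre": "genre_y", "plays": 3},
    {"user": "u2", "artist": "Artist C", "genre": "genre_x", "plays": 70},
]

# Original purpose: recommend more of what the user plays most.
def top_genre(user):
    plays = Counter()
    for doc in listening_log:
        if doc["user"] == user:
            plays[doc["genre"]] += doc["plays"]
    return plays.most_common(1)[0][0]

# Repurposed query: flag users whose dominant genre is held (by some
# third party's correlation table) to indicate an ethnic or political
# category. The data and its schema are untouched; only the question
# asked of them has changed.
SUSPECT_GENRES = {"genre_x"}  # hypothetical correlation table
users = {doc["user"] for doc in listening_log}
flagged = {u for u in users if top_genre(u) in SUSPECT_GENRES}
print(flagged)  # e.g. {'u1', 'u2'}
```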

The historical connection of fascism to big business is a well-researched phenomenon, starting with Daniel Guérin’s Fascism and Big Business in 1936. Who amongst us would truly trust Google or Facebook to firewall off regressive uses of their data if government made it part of an accommodation? And do we need to read all the NSA and GCHQ slides leaked by Edward Snowden to know the answer? But deeper than that, I suggest, is the potential affinity between mechanisms based on correlations and a far right ideology. As the historian Roger Griffin has observed, a common core to all forms of fascism is the rebirth of the nation from its present decadence, and a mobilisation against those elements of culture and population held to be the sources of the contamination. A programme for the automated elimination of undesirability is exactly the pattern offered by algorithmic regulation. The danger, in the situation of far right governance, is not only the usual tendency of big data and algorithmic processing to produce false positives with real-world impacts through processes that lack accountability. It is also that the fluidity of the vision that can be read into the correlations is a welcome mat for a politics that has already read the world through paranoid correlations, has already judged the categories that should be blamed, and is ready to implement that judgement through the levers at hand. The prospect is a pinball machine of social policy, with the algorithmic and progressive excision of citizenship. These ghosts are already among us, in the form of asylum seekers regarded as having ‘no recourse to public funds’. If we are to anticipate this, we should be asking ourselves before it’s too late: how do we develop an anti-fascist approach to algorithms?

By Dan McQuillan | @danmcquillan


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.