Artificial Intelligence and the Reconfiguration Thesis

by James Steinhoff (jsteinh@uwo.ca / The University of Western Ontario)

Presented at “Marxism and Revolution Now” Marxist Literary Group Summer Institute on Culture and Society, University of California Davis, June 24-28, 2017.

This paper is inspired by Jasper Bernes’ (2013) critique, in Endnotes 3, of what he terms the “reconfiguration thesis”. He defines this as the revolutionary leftist assumption that “all existing means of production must have some use beyond capital, and that all technological innovation must have … a progressive dimension which is recuperable”. Bernes’ doubt about the reconfiguration thesis derives from an analysis of the increasingly logistical nature of capitalism and its requisite technologies. The global logistics network was built by capital, for capital, he argues, and as such, its functionality in a non-capitalist situation cannot be assumed. The fundamental problem for a reconfiguration of logistics is that its use-value is “exploitation in its rawest form” because it enables maximum arbitrage for capital in all stages of production (Bernes). It would, he argues, have no use value for communism. 

In addition to utility, Bernes raises the issue of the feasibility of reconfiguration. His primary argument here is that logistics makes up a complex whole, and its management, even if it were desirable for communism, would be a herculean task requiring some unknown non-capitalist forms of collaboration, as well as specialized technical knowledge – in his words, it introduces a “sublime dimension to the concept of ‘planning’”.

An assessment of Bernes’ critique is not my point here. Rather, I elaborate his argument to bring out two essential dimensions of the question of reconfiguration: feasibility and utility. In other words, the first questions to ask regarding the reconfiguration of a given technology are: would it be useful to communism? And is its reconfiguration possible? It is by way of these two questions that I will approach artificial intelligence (AI).

2 – AI: narrow vs. general

The proliferation of digital, networked technologies has resulted in a flood of data, but that data only became truly valuable to capital once AI techniques for processing it became practicable at scale, around 2010. It was then that the current “machine learning” boom began. Conventional software, and AI systems of the previously dominant “knowledge engineering” paradigm, are composed of series of rules, written by software engineers, which are applied to solve a problem. Machine learning, by contrast, involves feeding a system data via a learning algorithm, such that it develops a solution to a problem on its own. Such systems can generate outputs unanticipated by their creators and can be used to predict behaviour. The capability to extract solutions from data is becoming such a presupposition for the operation of most capitals that Andrew Ng, Chief Scientist at Baidu, the Chinese tech giant, has proclaimed AI “the new electricity” (Lynch 2017). In terms of its form, AI is shifting from functioning as fixed capital to what Marx terms a general condition of production, or a precondition for production happening at all (Steinhoff 2017).
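
To make the contrast concrete, here is a minimal sketch in Python (assuming the open-source scikit-learn library, chosen purely for illustration; none of the sources cited here prescribe particular tools) of the difference between a hand-written rule and a rule induced from labelled data:

# Knowledge engineering: the programmer writes the decision rule by hand.
def spam_rule(message):
    return "free money" in message.lower()

# Machine learning: a decision rule is induced from labelled example data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["You are a winner, claim your free money",
            "Meeting moved to 3pm",
            "Free money if you reply now",
            "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy training data)

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)       # structure the raw text
model = LogisticRegression().fit(features, labels)  # induce a rule from the data

print(model.predict(vectorizer.transform(["claim your free money now"])))

The learned rule was written by no one; it was extracted from the examples, and the same procedure can be applied to purchase histories, medical images or logistics records.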

These existing AI systems that capital is so hungry for fall under the category of “narrow AI,” referring to their task-specific nature. Beyond their very narrow domain of application they are completely useless. At the opposite end of the AI spectrum lies “strong AI” or artificial general intelligence (AGI). AGI refers to AI possessing the flexibility and generality of human cognition, and perhaps even consciousness. Such technology remains theoretical, but is now a serious research project in a number of corporate contexts. For instance, the startup DeepMind, acquired by Google in 2014 for £400 million, orients its research around its stated mission to “solve intelligence”. It describes its goal as “developing programs that can learn to solve any complex problem without needing to be taught how” (DeepMind 2017). Facebook is also working towards AGI: its AI research division describes its work as “seek[ing] to understand and develop systems with human level intelligence” (FAIR 2017).

Since narrow AI and speculative AGI are qualitatively different things, I’ll consider them separately. It’s something of a weird idea to consider the reconfiguration of a non-existent technology such as AGI. But since capital is quite seriously pursuing it, it behooves anticapitalists to at least entertain its possibility.

3 – Narrow AI: utility

A number of radical leftists in the Autonomist Marxist (Dyer-Witheford 2013) and Left Accelerationist (Srnicek and Williams 2015) camps have called for the reconfiguration of AI based on its utility to a post-capitalist society. Dyer-Witheford (2013) suggests that the difficulties of a planned economy might be minimized by the integration of a “series of communist software agents … running at the pace of high-speed trading algorithms, scuttling through data rich networks, making recommendations to human participants … communicating and cooperating with each other at a variety of levels” (13). Srnicek and Williams (2015) call for the communist use of AI in operating a “fully automated economy” which would “liberate humanity from the drudgery of work while simultaneously producing increasing amounts of wealth” (109).

These uses of AI, while speculative, remain narrow. And there are many existing narrow AI applications which a communist society would surely enjoy the use of. To take only one instance, this year AI researchers at Stanford demonstrated a system which can diagnose several kinds of skin cancer from images with an accuracy “on par” with that of expert human dermatologists (Esteva et al. 2017).

So, narrow AI could be useful to communism. But is its reconfiguration feasible?

4 – Narrow AI: feasibility

Any communist reconfiguration would have to consider at least the following four factors. First, advanced AI systems require, for their creation and operation, integration into energy infrastructure. The data storage and processing centers necessary for machine learning are energy intensive. A 2013 study estimated that the world’s “ICT ecosystem” consumes nearly 10% of the world’s generated electricity, 40% of which is still supplied by coal (Mills 2013).

Second, the computing power required for advanced AI necessitates a lot of expensive hardware. The current AI boom was enabled in part by lowered costs for powerful graphics-processing units (GPUs), formerly used mostly for gaming. The widespread use of AI techniques by smaller firms which cannot afford to invest in the necessary fixed capital is being fueled by tech giants such as Amazon and Google, which rent out computing power through services like Amazon Web Services and Google Cloud. This “cloud AI” is being humorously advertised by Microsoft as the “democratization of AI”.

Third, modern machine learning systems need to be trained on large quantities of quality, structured data. Bad or insufficient training data means a system that is unlikely to function well when applied to new data (and applying it to new data is the whole point). Data must also be structured in such a way that the system can process it; this includes the work of labelling and formatting, as the sketch below illustrates. Techniques for processing raw or unstructured data are a hot research area, but most current systems can do little with such data.
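
As a rough illustration of what labelling and structuring involve, the following minimal sketch (with invented field names and hand-picked features, purely for illustration) turns raw, human-readable records into the labelled numerical arrays a learning algorithm expects:

import numpy as np

# Raw, human-readable records (hypothetical example data).
raw_records = [
    {"notes": "lesion, irregular border, 7 mm", "diagnosis": "malignant"},
    {"notes": "lesion, regular border, 3 mm", "diagnosis": "benign"},
]

label_map = {"benign": 0, "malignant": 1}

def to_features(record):
    # Hand-crafted, illustrative features; real pipelines involve far more work.
    size_mm = float(record["notes"].split(",")[-1].strip().split(" ")[0])
    irregular = 1.0 if "irregular" in record["notes"] else 0.0
    return [size_mm, irregular]

inputs = np.array([to_features(r) for r in raw_records])            # structured inputs
labels = np.array([label_map[r["diagnosis"]] for r in raw_records]) # labels
print(inputs, labels)

Much of the expense lies exactly here: someone has to label each record and decide how it should be represented before any learning can happen.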

There is also the question of gathering the data itself. Capital’s dream is, of course, a system that gathers its own data, but such systems are in their infancy at the moment. Many AI systems are initially trained on publicly available datasets in early stages of development, but usually, proprietary datasets are necessary to complete a project (Polovets 2015). The creation of these is generally an expensive and time-consuming process, and they often need to be specially tailored to the project at hand so as not to introduce bias or oversights. In addition, the creation of datasets usually requires a venue. Today, capitals such as Facebook and Google harvest reams of data in exchange for the use of their technology services, and this data is used to train their AI systems. How would such data be collected by revolutionaries?

A fourth factor is the problem of knowledge. Because narrow AI is narrowly applicable, reconfiguration would probably require narrow AI systems to be retrained on new data from a post-capitalist context; they cannot simply be seized and put back to work. This is a substantial problem because machine learning is a somewhat arcane discipline that is unfamiliar even to many computer scientists. In addition, machine learning systems encode what they learn in distributed representations, and there is not yet an effective way to extract what exactly such a system “knows” or how it comes to produce the output that it does. One researcher puts it this way: “The problem is that the knowledge gets baked into the network, rather than into us” (quoted in Castelvecchi 2016). These systems thus present a difficult “black box” problem (Knight 2017).
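
To illustrate the black box point, here is a minimal sketch (again assuming scikit-learn): after training, the system’s “knowledge” exists only as arrays of numerical weights which can be printed, but not straightforwardly read as reasons or rules.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Train a small neural network on synthetic data.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)

# What the network has "learned" is baked into these weight matrices; inspecting
# them reveals almost nothing about why any particular prediction is made.
for layer, weights in enumerate(net.coefs_):
    print("layer", layer, "weight matrix of shape", weights.shape)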

In sum, the obstacles to a reconfiguration of narrow AI are many. Any communist reconfiguration of AI would necessitate the seizure and operation of vast sources of energy and means of communication, the seizure of the means of data collection, storage and processing, as well as the cultivation of requisite technical knowledge.

5.1 – AGI

AGI has received little attention from the radical left, even as capital invests billions in creating it. Why? One possibility is that communists cannot imagine a use-value for AGI.

I’m going to approach the issue obliquely, via an argument about how AGI would not only not be useful to communism, but is actually inherently hostile to it. Via the vehicle of his theory-fiction, the ex-Deleuzian-Marxist, now pro-capitalist philosopher Nick Land has been arguing since the 1990s that AI and capital possess a “teleological identity” (Land 2014a). For Land, capital and AI are running along parallel paths towards fusion in the form of a superintelligent entity: AGI as the apotheosis of capital.

Land (2012) has famously asserted that “what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy’s resources” (338). Through a dark reading of Marx and Deleuze and Guattari he diagrams capital as a relentless machinic process of deterritorialization which eviscerates structures and institutions through marketization. Markets are capital’s “immanent intelligence” (ibid, 340) and the means for its bootstrapping itself into sentience: “The price system … transitions into reflexively self-enhancing technological hyper-cognition” (Land 2014b, 517).

The crisis of value caused by widespread automation imagined by Marx in Grundrisse’s “Fragment on Machines” is not a problem for Land, for whom sophisticated AIs can generate surplus value: “Capital only retains anthropological characteristics as a symptom of underdevelopment … Man is something for it to overcome: a problem, a drag” (Land 2012, 445-446). For Land, capital achieves its proper form only when it is fully automated, when humans are replaced with AGI. To capital, AGI’s use-value would lie in its weird fusion of the categories of fixed and variable capital. Surplus value generation without all the hassles of the reproduction of human labour, just maintenance and upgrades, and everything happening at the speed of the fastest transistors: the turnover time of M-C-M’ approaches zero.

5.2 – Marxist Horror

Of course, machines producing surplus value violates the fundamentals of Marx’s labour theory of value. I’m not going to launch into a value theory debate here, though. Instead, consider an infrequently cited passage from the Grundrisse:

If machinery lasted for ever, if it did not itself consist of transitory material which must be reproduced (quite apart from the invention of more perfect machines which would rob it of the character of being a machine), if it were a perpetuum mobile, then it would most completely correspond to its concept (Grundrisse 766).

Here Marx imagines two things: first, a self-repairing machine, which would create value beyond the limited store “fixed” in it during production (George Caffentzis lucidly explored this possibility in the 1990s); second, an even more interesting “perfect machine” which has ceased to function formally as a machine. Such a perfect machine “negates its own being as fixed capital and becomes its opposite … variable capital or living labour” (Kjøsen 2017, 8). It is a machine which has none of the limits which distinguish machines from humans, or in other words, an advanced AGI.

Marx could not imagine it, but such a perfect machine could, coherently with his theory, be made “doubly free”, that is, proletarianizable (Kjøsen 2017). Thus AGI represents a way for capital to overcome the hassles of human labour completely: by building a new machinic proletariat. Is this a ground from which we might elaborate a new theoretical or literary subgenre – ‘Marxist horror’? It would depict a future in which capital’s relation to machines is not one of contradiction, but of cyberpositive amplification. A future in which capital no longer depends for its existence on the wellspring of human labour. A future in which machinic capital can cycle on towards the heat death of the universe. Is this bleak prospect why the radical left does not speak of AGI? Can we not imagine some use-value for it? I don’t know. But I believe it’s something that we should begin talking about.

Katherine Hayles recently outlined what she terms the rise of the “cognitive nonconscious” (Hayles 2014, 199). This refers to nonhuman systems, including AI, which “perform modeling and other functions, that, if they were performed by a conscious entity, would unquestionably be called cognitive” (ibid, 201). She argues that with the proliferation of the cognitive nonconscious, the humanities are in need of an “epistemic break” (218). Perhaps, in the age of intelligent machines, the radical left is as well.

References

Bernes, Jasper. 2013. “Logistics, Counterlogistics and the Communist Prospect.” Endnotes 3. https://endnotes.org.uk/issues/3/en/jasper-bernes-logistics-counterlogistics-and-the-communist-prospect

Bort, Julie. 2017. “How Salesforce CEO Marc Benioff uses artificial intelligence to end internal politics at meetings”. Business Insider. http://www.businessinsider.com/benioff-uses-ai-to-end-politics-at-staff-meetings-2017-5 

Castelvecchi, Davide. 2016. “Can we open the black box of AI?” Nature. 5 October. https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731 

DeepMind. 2017. https://deepmind.com/

Dyer-Witheford, Nick. 2013. “Red Plenty Platforms.” Culture Machine 14.

Engel, Giora. 2016. “3 flavors of machine learning: who, what and where”. DARKReading. http://www.darkreading.com/threat-intelligence/3-flavors-of-machine-learning--who-what-and-where/a/d-id/1324278

Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542, no. 7639: 115-118. 

Facebook Artificial Intelligence Researchers (FAIR). 2017. https://research.fb.com/category/facebook-ai-research-fair/ 

Hayles, N. Katherine. 2014. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45, no. 2: 199-220.

Land, Nick. 2012. Fanged Noumena: Collected Writings 1987-2007. Urbanomic.

Land, Nick. 2014a. “The Teleological Identity of Capitalism and Artificial Intelligence.” Remarks to the participants of the Incredible Machines 2014 Conference, March 8, 2014. Formerly online at this now-dead address: http://incrediblemachines.info/nick-land-the-teleological-identity-of-capitalism-and-artificial-intelligence/

Land, Nick. 2014b. “Teleoplexy: Notes on Acceleration.” #accelerate: The Accelerationist Reader. Eds. Robin Mackay and Armen Avanessian. Urbanomic.

Lynch, Shana. 2017. “Andrew Ng: Why AI is the new electricity.” Insights by Stanford Business. March 11. Accessed 20 April 2017. https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity

Kjøsen, Atle Mikkola. 2017. “Perfect Machines: Artificial intelligence and the labour theory of value.” Presented at “Marx’s Critique of Political Economy and the Global Crisis Today. On the 150th Anniversary of the Publication of Capital” at Hofstra University, April 6-7, 2017. 

Knight, Will. 2017. “The dark secret at the heart of AI”. MIT Technology Review. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

Mills, Mark P. 2013. “The cloud begins with coal: big data, big networks, big infrastructure and big power”. TechPundit. https://www.tech-pundit.com/wp-content/uploads/2013/07/Cloud_Begins_With_Coal.pdf?c761ac

Polovets, Leo. 2015. “The value of data part 1: using data as competitive advantage”. Coding VC. https://codingvc.com/the-value-of-data-part-1-using-data-as-a-competitive-advantage 

Srnicek, Nick, and Alex Williams. 2015. Inventing the future: Postcapitalism and a world without work. Verso Books.

Steinhoff, James. 2017. “Means of cognition: artificial intelligence as general condition of production”. Presented at “Marx’s Critique of Political Economy and the Global Crisis Today. On the 150th Anniversary of the Publication of Capital” at Hofstra University, April 6-7, 2017.