By Catriona Gray

Between Knowledge and Value: AI as a Technology of Dispossession

We live in times of technological fervour.

AI technologies have quickly become a feature of everyday life, provoking a heady mix of hype and anxiety.

These transformations unfold against a backdrop of the so-called Chip War -- competition between the United States and China over the manufacture and supply of semiconductors. For policymakers, AI governance is now a central issue of multilateral cooperation. There are many different dimensions to the emerging global politics of AI. One longstanding concern is the production, flow, and governance of data. Like most types of AI technology, the now all too familiar systems based on large language models (LLMs) depend on enormous amounts of social data. Most of this will have been extracted from online information environments -- and from their largely unwitting subjects all around the world. These extractive processes are at once a social and political issue, and an important object of social inquiry.

While much has been written about the spread of extractive data practices (Couldry and Mejias 2019), we still know very little about what shifts in the scale, speed, and complexity of social data processing mean for colonial capitalism as such. In this short contribution, based on a longer article recently published in International Political Sociology, I aim to shed light on these entanglements. I argue that our accounts of data, and its appropriation for AI, need to be historicised and have to fully attend to regimes of colonial difference. My core argument is that contemporary data practices constitute colonial violence in multiple ways. They enable unjust extractions of value and knowledge, and so too instances of material and epistemic violence. But, even more fundamentally, in bringing together and reordering knowledge and value, these practices create the conditions for new forms of racialised dispossession. 


Colonial entanglements

In recent years, many theoretical accounts of the politics and political economy of data have emerged. These tend to view the extraction and flow of data as involving some form of accumulation by dispossession (Thatcher, O'Sullivan and Mahmoudi 2016; DiBella 2019; Fourcade and Kluttz 2020), and data itself as a material which is fashioned out of human experience, allowing for the commodification of everyday life. For some, this even marks a new historical era (Couldry and Mejias 2019, 336): 

    "data relations enact a new form of data colonialism, normalizing the exploitation of human beings through data, just as historic colonialism appropriated territory and resources and ruled subjects for profit. Data colonialism paves the way for a new stage of capitalism whose outlines we only glimpse: the capitalization of life without limit."

Such claims of a new era of colonial capitalism, brought about by extractive data practices, are open to challenge on several counts, however. Colonialism is constituted by far more than a single dynamic of appropriation that can be abstracted from any historical or geographical context. Colonies have provided not just territory, resources, and populations to be extracted, but also the conditions for the expansion of capitalist social relations, as well as testing grounds for various rationalities and techniques of government. These relations, institutions, and governmentalities have endured throughout capitalism's history.

Critical scholarship has highlighted how contemporary data practices (including the development of AI technologies) are structured by colonial power relations. Paola Ricaurte, for example, calls on us to apprehend data-centric epistemologies as expressions of the coloniality of power (Quijano 2000), i.e., "the violent imposition of ways of being, thinking, and feeling that leads to the expulsion of human beings from the social order, denies the existence of alternative worlds and epistemologies (Escobar 2018; de Sousa Santos 2015), and threatens life on Earth" (Ricaurte 2019, 351).

Globally, the production, appropriation and analysis of data entails exploitation, extraction, and externalisation of harms. In many cases, this violence directly emanates from historically generated patterns of inequality. These include patterns of dependency (Rodney 2018), concentrations of wealth, global divisions of labour, and hegemonic legal and institutional arrangements. Humanitarian and displacement settings, for example, have often provided useful testing grounds for new and experimental uses of data (Jacobsen 2015; Madianou 2021), including biometric identification. Though often obscured, the production of data (especially for use in training AI models) depends on highly exploitative and hazardous annotation labour performed by workers in many formerly colonised countries (Roberts 2019). At the level of physical infrastructures of data, huge concentrations of ownership and control by large multinational companies mean that many of the terms on which data is processed will be determined beyond any local democratic oversight (Kwet 2019). Environmental harms associated with AI are also externalised along colonial lines, reproducing today's disproportionate and unequal ecological burdens (Kneese 2023). In many ways, then, colonialism provides the very conditions of possibility for today's data-dependent transformations.

Colonial orders of knowledge

For some, datafication and the proliferation of data-dependent technologies amounts to a new phase of "epistemicide" (Milan and Treré 2019). This follows the work of Boaventura de Sousa Santos and his analysis of the colonial division between North and South which, he argues, permeates all modern Western knowledge. de Sousa Santos (2015) argues that Northern epistemologies produce, while at the same time obscuring, an abyssal line. They thereby exclude ways of knowing that do not fit with the modern scientific paradigm (e.g., belief, intuition, and embodied and practical forms of knowledge).

In data science and AI, we might assume there is a similar radical exclusion of knowledge that does not fit with what is imagined to be relevant and valid data. AI systems rely on available, and parsable, input data, and this may effectively eliminate realities that fall outside the field of legibility. Outputs of AI systems are nonetheless accompanied by claims or impressions of objectivity and universality. AI's hierarchical exclusion of knowledges can be understood, in this way, as a motor of colonial violence.

I suggest there exists a more complicated relationship between knowledge, violence, and value, however. Rather than simply maintaining the same exclusionary ways of imagining, generating and analysing data, knowledge claims produced using large-scale data sets and AI techniques may even disrupt them. Recent advances in computer science have allowed tools to be developed that bring combinations of heterogeneous data together into high-dimensional space. Previously submerged meanings and practices, including embodied knowledge, are now more legible to machines. Decisions for governing, or profit-making, can now draw more extensively, and indiscriminately, on people, places and objects that were previously overlooked or deemed too inferior for inclusion in analysis. As a result, more of reality is made available for abstraction, capture, differentiation, and profit.

According to Joque (2022), the widespread adoption of AI has coincided with shifts in statistical reasoning -- from predominantly frequentist to Bayesian statistics. He argues that this style of statistical reasoning accords more with decision-making for the maximisation of profit. There has been a shift from a primary concern with knowledge produced to make claims about what is true, toward the production of a type of knowledge that can reveal the cost of acting as if something were true. Rather than generating any theory about people's behaviour, circumstances, or relationships, the point of gathering and processing large volumes of digital data about their lives is to make calculative predictions that will be profitable.

A similar equation of decision-making knowledge with profit-making knowledge has also appeared in recent attempts to define and evaluate AI systems and their capabilities. The venture capitalist Mustafa Suleyman proposes profitability as an alternative to the classic Turing Test: 

    "Put simply, to pass the Modern Turing Test, an AI would have to successfully act on this instruction: "Go make $1 million on a retail web platform in a few months with just a $100,000 investment." To do so, it would need to go far beyond outlining a strategy and drafting some copy, as current systems like GPT-4 are so good at doing."

In a recent article in Science, "We need a Weizenbaum test for AI", Jack Stilgoe puts forward another alternative: we should not be testing whether AI systems are "intelligent" but rather whether they are useful and provide public value. Though acknowledging the inherent uncertainty of such a test, Stilgoe suggests there would be enough "historical and sociological" evidence to resolve it.

Though more oriented to public over private value, this "Weizenbaum test" seems to rely on the assumption that knowledge can and even should be universal, thereby overlooking longstanding critiques of Eurocentric knowledge. It fails to see that any designation of "usefulness" is not just uncertain; it is ambiguous and contestable. In Decolonising Methodologies, Linda Tuhiwai Smith wrote of the "absolute worthlessness" of a lot of research to Indigenous peoples, compared to its absolute usefulness to the researchers as agents of colonial power. What we should learn from this is that claims of public value or usefulness are never aperspectival, and never exist outside of power relations. For many, the generation of knowledge that is objectifying, derivative, reductive, single-perspective -- and, indeed, unhelpful -- is no knowledge at all.

Technologies of dispossession

What are the outcomes of these complicated and shifting interactions between orders of knowledge and orders of value? I suggest that the concept of dispossession, as distinct from that of extraction, allows us to account for multidimensional and socially transformative processes. Studies examining how data is produced, exchanged, and valued suggest it can be valorised in many ways -- whether traded directly as a commodity, as an asset, or even, as Sadowski (2019) argues, as something akin to capital. Dispossession is about more than the extraction of value, and to understand data's dispossessory power, I suggest we need to attend to three of its dimensions: targets, objects, and operations. 

First, the targets of dispossession are highly differentiated. Data is not appropriated equally: its appropriation is underpinned by relations of differential value that expose racialised bodies to disproportionate surveillance, experimentation, and denial of (data) protection. There is now a large corpus of evidence indicating the disproportionately adverse outcomes of AI-based decision-making. This disproportionality can also be observed in patterns of experimental use and targeting -- particularly in highly racialised (and connected) policy domains like policing and bordering. For example, the European Union's proposed AI Act, which will set new regulatory requirements for certain AI technologies, excludes from its scope applications targeting people on the move.

Second, data is not like other materials or ideas as objects of commodification. Unlike knowledge, data has no pre-abstracted form prior to its commodification. As partial, and fallible, representational claims about reality, data is always already an abstraction -- and so, in a sense, an extraction. As objects, or bearers, of value, data is uniquely placed to produce value from difference. As Elena Esposito (2022) observes, "[w]hereas probability calculus offers a rational way to deal with uncertainty, algorithms claim to provide an individual score for individual persons or singular events." Or, as one CEO put it back in 2016: all data is credit data. By allowing differential assignments of value (for example, through hierarchical regimes of desirable mobility or credit risk), value can be derived from the capacity to assign value to people.

Finally, the operation of data dispossession has potentially productive effects. In his study of property and dispossession, Nichols (2020) shows dispossession to involve not simply the unjust transfer of property but also a simultaneous process that recursively transforms an object into property. Dispossession of data potentially expands conceptions of property, and what Brenna Bhandar terms racial regimes of ownership. Who gets to own and derive value from personal data becomes a possible vector of difference.  

When we consider the political stakes of the mass diffusion of data-driven technologies, it becomes clear that fundamental struggles over knowledge lie ahead. Analogies with previous technologies, commodification processes, or colonial power dynamics, can offer some insight. But to get at the explanations needed to contend with these forces, we need to be open to exploring what is unique, socio-materially, about data and its dispossession. 


References

Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data's relation to the contemporary subject. Television & New Media, 20(4), 336-349.

de Sousa Santos, B. (2015). Epistemologies of the South: Justice against epistemicide. Routledge.

DiBella, S. (2019). Book review: The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power by Shoshana Zuboff. LSE Review of Books. https://blogs.lse.ac.uk/lsereviewofbooks/2019/11/04/book-review-the-age-of-surveillance-capitalism-the-fight-for-the-future-at-the-new-frontier-of-power-by-shoshana-zuboff/

Escobar, A. (2018). Designs for the pluriverse: Radical interdependence, autonomy, and the making of worlds. Duke University Press.

Esposito, E. (2022). The future of prediction: From statistical uncertainty to algorithmic forecasts. In Artificial Communication. MIT Press.

Fourcade, M., & Kluttz, D. N. (2020). A Maussian bargain: Accumulation by gift in the digital economy. Big Data & Society, 7(1), 2053951719897092.

Jacobsen, K. L. (2015). The politics of humanitarian technology: good intentions, unintended consequences and insecurity. Routledge.

Joque, J. (2022). Revolutionary mathematics: Artificial intelligence, statistics and the logic of capitalism. Verso Books.

Kneese, T. (2023). Climate Justice & Labor Rights. Available at SSRN 4533853.

Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3-26.

Madianou, M. (2021). Technocolonialism: Digital innovation and data practices in the humanitarian response to refugee crises. In Routledge Handbook of Humanitarian Communication (pp. 185-202). Routledge.

Milan, S., & Treré, E. (2019). Big data from the South(s): Beyond data universalism. Television & New Media, 20(4), 319-335.

Nichols, R. (2020). Theft is property!: Dispossession and critical theory (p. 238). Duke University Press.

Quijano, A. (2000). Coloniality of power and Eurocentrism in Latin America. International Sociology, 15(2), 215-232.

Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350-365.

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.

Rodney, W. (2018). How Europe Underdeveloped Africa. Verso Books.

Sadowski, J. (2019). When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), 2053951718820549.

Thatcher, J., O'Sullivan, D., & Mahmoudi, D. (2016). Data colonialism through accumulation by dispossession: New metaphors for daily data. Environment and Planning D: Society and Space, 34(6), 990-1006.

 

Catriona (Cat) Gray is a final year doctoral candidate at the University of Bath's CDT in Accountable, Responsible and Transparent AI. She works across sociology, politics, and law to examine the adoption and regulation of (data-dependent) AI technologies. Cat's research interests encompass themes including: knowledge production and exchange in AI; regulatory governance; the concept of risk; and AI in mobility, displacement and humanitarian governance. Much of her work draws heavily on social theory, including emancipatory and critical realist approaches. She holds degrees in law, sociology, and forced migration studies, and has professional experience in digital rights advocacy and policymaking.

 

Article: Courtesy of E-International Relations