Coding Care – Towards a Technology for Nature
Text contribution, published by Hatje Cantz, 2023. Edited by Sabine Himmelsbach and Chus Martinez
Excerpt from the text, from Dream 2: The More-than-Human Uncanny
So is the uncanny ‘natural’, or is it ‘human-made’? The question appears absurd in 2021, especially when viewed in the light of innumerable testimonies from representatives of Indigenous nations and communities around the world. In an entangled, continuous, vitalist paradigm, there is no opposition between these two poles, because, simply put, humans are entirely part of nature. Settler-colonial destructions of environments, lives and livelihoods are all connected with one another, with the effect that the natural disasters that follow are both caused by human action and instantiations of a lively, animate Earth speaking ‘back’.
Here, we are reminded of Dipesh Chakrabarty’s insight, in his 2009 essay “The Climate of History: Four Theses” and the subsequent book-length The Climate of History in a Planetary Age, that the discipline of history, understood as contingent, human historiography, needs to make space for the planetary in the Anthropocene – whether it intends to or not; that it is inevitable that, at this juncture in history, we exist both in human time and in deep time, both in human space and in planetary and interplanetary ecologies, and all at the same time.
What of our technologies, then? Artificial Intelligence strikes us as at once uncanny, disquieting and somewhat ridiculous when it breaks out of the conventions of human predictability and into what looks a lot like nonsense. Such is the case with GPT-3, a new language-generating AI released in 2020. Powerful as it is, this new model is far more capable than rudimentary chatbots of rendering language propositions that are uncannily close to human language.
So what is the “event” at play here, when GPT-3, like other chatbots, also begins to descend into senselessness or, worse, hate speech, racist or sexist statements? Computer scientist Yejin Choi calls language-generating AIs “a mouth without a brain”: that is to say, algorithms can produce statements, but cannot understand the moral or ethical matrix within which these are being produced. Yet this view is itself an exoneration of the anthropogenic nature of the issue: an algorithm trained on enormous amounts of data will respond and react to a set of human inputs. The algorithms that determine our interactions with social media will determine a path of communication that generates extremes of language, of positionalities, because their ultimate aim is not, as they would have it, to connect people, but to maximise engagement time.
Could we imagine an Artificial Intelligence of repair, of forms of vitalisms that would bring us closer to our entanglements with more-than-human beings?