The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, LaMDA (or Language Model for Dialogue Applications), is sapient, has had a curious element: actual AI ethics experts all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They’re right to do so.
In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it’s easy to see how someone might be fooled, looking at social media responses to the transcript—with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them—and that large tech companies can exploit this in deeply unethical ways.
As should be clear from how we treat our pets, or how we’ve interacted with Tamagotchi, or how we video-gamers reload a save if we accidentally make an NPC cry, we are actually very capable of empathizing with the non-human. Imagine what such an AI could do if it was acting as, say, a therapist. What would you be willing to say to it? Even if you ‘knew’ it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?
It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata—the metadata you leave behind online that illustrates how you *think*—is vulnerable to exploitation in the near-future. Imagine a world where a company created a bot based on you, and owned your digital “ghost” after you’d died. There’d be a ready market for such ghosts of celebrities, old friends, colleagues. And because they would appear to us as a trusted old friend or loved one (or someone we’d already developed a parasocial relationship with) they’d serve to elicit yet more data from you. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.
Just as Tesla is careful in how it markets its “Autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it does (with deadly consequences), it is not inconceivable that companies could market the realism and human-ness of AI like LaMDA in such a way that never makes any truly wild claims while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires AI to be sapient, and it all pre-exists that singularity. Instead, it leads us into the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.
In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we’re modelling or play-acting something truly awful with them—as some people are wont to do when they are sexist or otherwise abusive towards their largely-feminine-coded virtual assistants. In her section of “Making Kin,” Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize the fact that sapience does not define the boundaries of who (or what) is a ‘being’ worthy of respect.
This is the flip-side of the real AI ethical dilemma that’s already upon us: companies can prey on us if we treat their chatbots like they’re our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our tech may simply reinforce an exploitative approach to each other, and to our natural environment. A human-like chatbot or virtual assistant should be respected, lest their very simulacrum of humanity habituate us to cruelty towards actual humans.
Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further, “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth,” which is a remarkable way of tying something normally viewed as the essence of artificiality to the natural world.
What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own life, perspective, needs, emotions, goals, and place in the world.”
This is the AI ethical dilemma that stands before us here and now: the need to make kin with our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention here and now. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.