It sounds like something from a dystopian sci-fi novel, but according to author and academic David Gunkel, robots could soon have rights of their own, writes Róisín Kiberd
Why feel bad for a robot? Yet so many viewers did, when DARPA-funded robotics company Boston Dynamics introduced their creation, SpotMini, in a YouTube video that showed humans kicking the four-legged robot, antagonising it and trying to knock it over. Some of the comments posted below the video included “Please stop the robot abuse :( ” and “Every time I watch him kick that robot I get inexplicably sad.” Another, voted to the top, augurs robot revenge. It reads: “The guy who kicks him is the first they gonna get.”
Most of the Boston Dynamics machines are intimidating, rather than adorable. There’s the ominous, scuttling BigDog, designed for military combat, and the bipedal PETMAN, humanoid by design, its silicon viscera covered by an eerie hazmat suit. Why should anyone worry about "hurting" these robots, when they are incapable of feeling? Should robots have rights? Should "electronic personhood", a term proposed in 2016 by the European Parliament’s Committee on Legal Affairs, be accepted into our law?
However much like science fiction it sounds, debate around whether robots should have rights has been raging for some time now. In response to the 2016 proposal, 156 AI experts signed an open letter criticising it for shifting responsibility from the robot’s (human) owner to the robot itself. The American Society for the Prevention of Cruelty to Robots was founded back in 1999. Pitched somewhere between satire and philosophical provocation, their mission statement read: “The ASPCR is, and will continue to be, exactly as serious as robots are sentient.”
Robots aren’t sentient – not yet, at least – but that hasn’t stopped philosophers, tech founders and even world leaders from considering the prospect of their legal and political powers (in 2017, Vladimir Putin made the somewhat ominous prediction that AI would become “ruler of the world”, while a year later Elon Musk warned that it could seize power in the future and become our “immortal dictator”). This area is a minefield – morally, legally, and even technologically, because while legislation is slow to change, the technology itself is developing exponentially.
Forrester’s Predictions 2020: Automation report, published in October of this year, predicted that more than one million knowledge workers (engineers, scientists, lawyers, academics, programmers and others) will be replaced by automation by 2020, with the spread of commercial software robotics, robotic process automation, virtual agents, chatbots and machine learning. Gartner, meanwhile, recently published a report highlighting trends in "hyper-automation" and "autonomous things", physical devices which use AI to perform tasks previously done by humans.
The robots are very much here, but our culture – and our laws – are still evolving ways to make sense of them. A new book titled Robot Rights surveys current ways of thinking about robots, and addresses the question of their legal status. Its author is David J Gunkel, Distinguished Teaching Professor of Communication Technology at Northern Illinois University, the author of more than 50 scholarly articles and book chapters on robot rights and a leading philosopher in the field. He spoke to the Business Post about his book and the questions it raises, shortly after delivering a public talk on the rights of robots and AI ethics at Lero, the Irish Software Research Centre at the University of Limerick, on November 1.
“We’re slowly but surely inviting these machines into our world,” Gunkel said, explaining why he felt it was necessary to write about this topic. “It raises questions of accountability and responsibility, but it also raises questions as to how we think of these devices and their status.”
A robot invasion?
Robot Rights is, perhaps unexpectedly, less about the hypothetical "feelings" of machines, or their struggle for recognition in law, and more about the void they create in legislation, around which we struggle to arrive at the best course of action. “I tell my students all the time, it’s not that I care if these robots can feel things," said Gunkel. "What I’m worried about are our social, moral and legal institutions, and whether they’ll rise to these new challenges and opportunities or fail to do so.”
Concerning the "robot invasion" question, Gunkel believes it’s already underway, but that it’s very different to what science fiction warned us about. “We just need to look at it in the right way, otherwise we’re deluding ourselves into thinking that we can postpone it till some far-off future point,” he said. “People will say: ‘Come and talk to me in 20 years, because then we’ll have a sentient robot, or an AI that’s conscious,’ or something similar. I think that’s a bad excuse to not think about the challenges we’re currently facing.”
The challenges are both individual and systemic: how does potential ‘mastery’ of robotic slaves change how we behave, and how we treat each other? How will governments address mass unemployment caused by automation? Before we can accord the robot a place in human law, we need to define what a robot actually is – something Gunkel’s book devotes significant thought to, tracing the term to its origins in the 1920 play RUR (Rossum’s Universal Robots) by the Czech author Karel Čapek, in which androids are put to work in factories only to band together and overthrow the human race. Čapek derived the word ‘robot’ from ‘robota’, meaning ‘forced labour’ – from its origins, the robot was linked with slavery, as well as rebellion and, eventually, apocalypse.
Gunkel said: “Ever since RUR, most of our writing on robots seems to address this same theme of uprising, of robots taking control. It’s all very dramatic, but I think the robot invasion is far more mundane. I think it’s invisible. It’s not going to be suddenly catastrophic. It’s going to be like the fall of Rome. Rome fell because the Romans invited the Barbarians into their world. They gave them bureaucratic tasks to do, then one day the citizens looked around and asked: ‘Where did all the Barbarians come from?’”
Robot Rights considers the possibility of a robot "liberation movement", following on from rights movements for humans, but there’s also a less ambitious way of thinking about giving robots rights: we need to develop them not for the robots themselves, but in order to create an outline, a legal framework built around an absence.
“We’re at a moment in time where the assignment of agency is getting more complicated,” Gunkel said, “where the algorithms are introducing gaps in how we assign accountability for use with technology.”
He likens the legal status of robots to that of a bureaucracy: a collection of data put to specific use, which can run itself even though it was originally created by humans. “Another good corollary here is that we give corporations rights,” he observed. “Corporations have the rights of a person, and they’re an artificial construct of human beings. We’ve done this not because we’re worried what the corporation may feel, or because we’re worried about the integrity of its personhood. We give the corporation rights and responsibilities in order to accommodate this subject into our law.”
The book describes people personifying the machines they spend their lives dealing with; one example is that of Explosive Ordnance Disposal robots used by soldiers, who grow oddly attached to them during campaigns.
“These things are not meant to be social robots,” said Gunkel. “They don’t have googly eyes, they don’t talk to you, and they’re low-level robotic technology, mostly remote-controlled and not even autonomous. But because of their presence in the unit in which the soldiers work, they give them names and consider them a comrade rather than a tool. They’ll even at times risk their own lives to save those of the robots.”
To an extent, this applies to us all: likely without ever actually considering giving them ‘rights’, we subtly personify our laptops, phones and tablets, coming to think of them as our companions, and spending more time with them than we do with human beings. Gunkel contextualises this in various traditions, including the Buddhist idea of the "interconnectedness of all things", and the Shinto concept of kami, "holy powers" or vital energies found in both organic and inorganic matter.
“In a sense what we’re encountering is where our technologies challenge our modernist, European sensibilities, with regard to how we divide up the world into people and things,” Gunkel said. “Looking for a pattern, you can find in many indigenous cultures a way of responding to objects that’s different from our Modernist way of thinking about them.”
Gunkel highlights healthcare, in particular elder care, and logistics as two areas of rapid present-day change. The self-driving car is poised to enter everyday usage in the near future, but it’s the self-driving truck that’s already threatening millions of jobs. Meanwhile care robots, such as the famous (some would say infamous) Paro the robot seal, are already commercially available and used in hospitals and nursing homes as therapeutic companion animals.
“There were a lot of studies involving people with dementia using Paro, and the results are actually pretty positive,” Gunkel said. “It shows real improvement for people with cognitive decline. Does that apply to a larger population? We don’t know yet.”
He advocates for studying the effects of a robot before making it more widely available: “We can’t just make these devices, we need to mobilise an entire set of social scientists and generate some good empirical data that tells us the risks, the costs, the benefits, and how to generalise this on a larger scale.”
Similarly, with self-driving vehicles, now is the time to put proper thought into their legislation. “What happens to all these truck drivers?" asks Gunkel. "Retraining that workforce is going to be a big social challenge, and governments need to do something to address how you build it into your tax structure, and provide a social safety net and accommodate the displacement of workers as algorithms begin to take over employment opportunities. The other big area is liability and insurance; when a self-driving vehicle has an accident, who is accountable for what goes wrong?”
Throughout the last decade, we embraced social media wholesale, only to notice its problems with surveillance after multiple rounds of whistleblowers, public controversies and congressional testimonies delivered by Mark Zuckerberg. Will we allow the same thing to happen with robots and AI, or will we regulate them and teach better digital media literacy to future generations?
“We’re really bad at learning lessons from the past,” said Gunkel, “and you can already see this happening with Alexa and Google Home. People are so amazed by them, but they don’t think about how the Alexa device itself is just an empty shell – the smarts are in the cloud. Alexa is not your personal assistant. It’s an assistant that works for Amazon, and when you talk to Alexa you’re talking to a corporation.”
The threat of a super-powerful AI, or "artilect" (a term describing an artificial intellect), is also that of the super-powerful corporation that owns it. It’s highly likely that AI will take our current age of surveillance capitalism into even more invasive territory, and will breed monopolies of data.
“I think this is one of the concerns we don’t recognise or appreciate enough,” said Gunkel. “This technology, because it is so expensive, requires incredible resources, mainly of data. Soon, as we get quantum computers, it’s going to require a massive quantum computing platform to be operationalised. This stuff needs either governments or big industry in order to make it function.”
He sees a deepening digital divide as the result: “Where are those big industries? They’re in Europe, the US and China. That’s a very small fraction of the world’s population, and what I worry about is the political economy of AI – we’re giving this technology to some of the richest places in the world that already have the advantages, and many parts of the world will be left on the other side of the divide. There are still communities across the world that don’t have access to the internet, and the AI divide is going to be even greater.”
AI in particular augurs a new age of digital feudalism, one where we are the serfs, rather than robots. Some will consider the question of robot rights unnecessary or outlandish, but there are decisions we need to make now before they’re made for us. We’re at a critical point in time, one where reflection and planning might save us from repeating the mistakes of the past.
Gunkel’s message is to proceed with caution: “I’d say, instead of ‘move fast and break things’, ‘invent things and study the hell out of them first’,” he concludes.