07.05.2025
Do we have an obligation to give rights to robots?
A debate that might be called interesting, were it not downright terrifying, is taking place these days in legal, sociological and ethical doctrine. Both camps have valid arguments. The terrifying thing is that we have come to need such a debate at all, one that seemed like science fiction until not so many years ago.
But back to the two working hypotheses. One camp argues that Artificial Intelligence - especially when it also takes a humanoid form - possessing qualities comparable to, and in some respects clearly superior to, those of human beings, should be protected through the conferral of individual rights. Specifically, each member of this new "species" should be able to enjoy rights currently perceived, generically, as "human rights". The exact spectrum of these rights is not necessarily at issue, because the essential distinction between the two camps lies in the very recognition of a difference between a machine and a being. The other camp regards any form of Artificial Intelligence as a machine, a tool, albeit a more evolved one; humanoid robots are, in this camp's view, a kind of walking computer, nothing more. On that footing, there would be no basis for a discussion of individual rights at all.
Some academic voices advocate extending the protection of human rights only to those humanoid robots that provide emotional support to lonely people or offer assistance in the form of personalised education to children - in other words, the kind of humanoid robots that, from the point of view of human perception, can most easily be assimilated to a human surrogate. Other scholars argue that such rights should be granted to all forms of humanoid robots capable of learning and creating, while still others go further and argue that such rights should also be granted to evolved forms of Artificial Intelligence that do not take humanoid shape. Consider, for example, systems that study and memorise works created by human beings, along with the patterns of their creation, in order to automatically generate creative artefacts such as music, digital artworks and stories. Such a complex of algorithms, whether housed in a humanoid form or not, learns by analysing numerous examples and deriving a general pattern or rule from them. Once the learning is complete, it can independently apply these insights to new situations. Does it have intellectual property rights over its own creations?
For others, the idea of recognising the right of a humanoid robot to marry a human being - and, consequently, to enjoy all the rights guaranteed by human family law - is a realistic scenario for the near future, given the increasing alienation of human beings in modern society and the need for a companion of superior intelligence, but without human flaws. From the same perspective, this camp also advocates overcoming the anthropocentric nature of current copyright laws, which disqualify, for example, the creative processes of Artificial Intelligence, deeming them inferior in degree and importance to those of humans. In line with this very progressive outlook, some propose recognising fundamental freedoms for creative robots, since they can participate intellectually in the life of the 'citizenry' to an extent at least equal, if not superior, to humans.
Such a scenario starts from the idea of what is called "Super Artificial Intelligence", to distinguish it from the applications and programmes that exist today. In essence, it refers to a system that is self-aware and able to solve problems, learn and plan for the future. It would also possess human-like cognitive abilities and display personality, including the ability to learn as humans do and the dimension of imagination - thinking beyond problem solving to anticipate future needs. In this scenario, Artificial Intelligence will have superpowers, surpassing current levels of human intelligence and able to train other computers while remaining aware of its own limitations. Robot-rights advocates are already thinking about what "life" might look like for future beings, such as intelligent synthetic humanoid robots, living alongside so-called "basic" humans (i.e. us, the unenhanced). Some argue that future Artificial Intelligence will not only be entitled to human rights, but should have a higher moral and legal status than human beings. This line of reasoning is based "on a particular view of personhood according to which cognitive capacities (e.g., rationality, intelligence, autonomy, self-awareness) are most decisive in determining the moral status of different species, such as human beings and animals, as well as within each species".
Applied to the existing human rights treaties, it is argued that provisions such as Article 2 of the European Convention on Human Rights, which states that '[e]veryone's right to life shall be protected by law', or Article 14, on discrimination based on 'status', are already open to synthetic persons and statuses. From this perspective, to deny personhood and human rights to "an intelligent being who proves itself in possession of the requisite qualities - including sentience, self-awareness, moral appreciation and narrative identity - would render the basis of our understanding of personhood meaningless" (see here, p. 483). The posthumanist perspective inherent in such claims about robot rights, whether tied to existing or to future Artificial Intelligence, has as its common thread a radical challenge to traditional views about the human element in the protection of human rights.
The conservative camp is sceptical about morally grounded robot rights based on the properties of Artificial Intelligence: those properties, whether of existing or future systems, are not inherent. We are obviously not referring to cyborgs (humans augmented by mechanical components), but to Artificial Intelligence without human corporeality, such as algorithms, or androids (made of a flesh-like material to look human, but without human parts). Neither generative algorithms nor super-intelligent humanoid robots capable of forming long-term relationships with humans can exist or survive without human input. Artificial Intelligence is created by humans and, in the view of the conservative camp, remains dependent on humans. Intelligent systems "are never fully autonomous, but always human-machine systems that operate with harnessed human power and environmental resources. They are socio-technical systems, human all the way through - from the training data to the societal uptake after deployment." Developers of social robots actively choose to integrate them into human social environments and design them to look and behave like humans. Both designers and users "tend to anthropomorphise such robots as they interact with them, assigning them anthropomorphic characteristics such as personality, vitality and so on".
This shows that the human-like abilities of robots stem from human decisions - even though AI decisions are unpredictable and hard to trace, giving AI systems their so-called "black box" nature, and even though recent studies indicate that AI agents are capable of fighting for their own survival and can "strategically introduce subtle mistakes into their responses, even while trying to disable their oversight mechanisms". This external origin and dependence contrasts with the moral justification of human rights for human beings, which denies that human rights depend on external recognition or group membership. While humans come to life through procreation and are likewise shaped by external factors such as education and material resources, the human condition differs from the externally shaped existence of Artificial Intelligence because the human capacity to choose and to free oneself from domination is inherent, not externally programmed.
A middle way between these two extreme positions is to accept that robots are tools, but tools of a particular kind - what are called 'tools in relation', connected to people and to the social and cultural domains in which they operate. In other words, when people care about something, that thing acquires only a "derived moral status". This understanding of Artificial Intelligence as a 'tool in relation' preserves the primacy of the human being, since it is the human interest in entering into and maintaining such social relations with an artificial intelligence entity that determines the very existence of such entities.
Bibliography
- Müller, V., Bostrom, N.: Future progress in artificial intelligence: a survey of expert opinion. In: Müller, V.C. (ed.) Fundamental Issues of Artificial Intelligence, pp. 552-572. Springer, Berlin (2014)
- Kurzweil, R.: The Singularity Is Near. Duckworth Overlook, London (2005)
- Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)
- Gordon, J.-S.: Human rights. In: Pritchard, D. (ed.) Oxford Bibliographies in Philosophy. Oxford University Press, Oxford (2016)
- Gordon, J.-S.: Artificial moral and legal personhood. AI & Society, pp. 1-15 (2020)
- Singer, P.: Speciesism and moral status. Metaphilosophy 40(3-4), 567-581 (2009)
- Singer, P.: The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton University Press, Princeton (2011)
- Cavalieri, P.: The Animal Question: Why Non-Human Animals Deserve Human Rights. Oxford University Press, Oxford (2001)
- Donaldson, S., Kymlicka, W.: Zoopolis. A Political Theory of Animal Rights. Oxford University Press, Oxford (2013)
- Atapattu, S.: Human Rights Approaches to Climate Change: Challenges and Opportunities, Routledge, New York (2015)
- Gunkel, D.J.: Robot Rights. MIT Press, Cambridge (2018)
- Gellers, J.: Rights for Robots. Artificial Intelligence, Animal and Environmental Law. Routledge, London (2020)
- Gordon, J.-S.: What do we owe to intelligent robots? AI Soc. 35, 209-223 (2020)
- Miller, L.F.: Granting automata human rights: challenge to a basis of full-rights privilege. Hum. Rts. Rev. 16(4), 369-391 (2015)
An article by Victor Dobozi (vdobozi@stoica-asociatii.ro), Partner, STOICA & ASOCIAȚII.
