06.02.2025
Will super-robots be able to claim human rights?
In the context of artificial intelligence (AI), developments in computer science and robotics make it quite likely that autonomous robots with capabilities comparable to those of human beings will emerge within the next few decades. If robots become as intelligent as humans (or even more intelligent), this development will raise difficult questions about their moral and legal status, as well as the possibility of recognizing rights in their favor, regardless of their artificial nature.
For example, if we establish at some point that robots can become subjects of law, based on criteria such as rationality, intelligence, autonomy, consciousness (even artificially created), self-awareness and sentience, then human beings could be compelled to recognize moral and legal rights for them, including what we now call 'human rights'.
This step, however, would have enormous implications for our future relations with robots. They would no longer be merely our inanimate instruments; instead, human beings would be morally obliged to recognize their legal status and corresponding rights and treat them accordingly.
At this point, it is appropriate to distinguish between two kinds of "machines": artificial narrow intelligence ("ANI"), a label already applicable to applications or complexes of machines that excel at a particular task, such as playing chess or Go, and artificial general intelligence ("AGI"), which can be considered the foundation of any human-like machine, since it would give such a machine the ability to understand and learn as humans do, allowing it to evolve, develop and improve itself.
Once machines reach such a point in the development of intelligence that they are capable of perfecting themselves to the level of self-awareness, their claim to rights could not be dismissed with the feeble argument that they are not made of flesh and blood (the DNA objection).
Human rights are commonly described as universal moral rights that exist independently of any recognition by the state. The function of legal human rights is of utmost importance, as it reinforces the demand for moral rights in all countries that have signed the Universal Declaration of Human Rights, the legal basis for the claim to universal moral rights. One of the most important questions of moral and legal philosophy is to determine which members of the community of human beings should be recognized as holding these rights. The idea that all human beings should enjoy the full spectrum of human rights has been questioned by many philosophers, who argue that rights and duties should concern persons rather than human beings as such. Some contemporary discussions on the application of human rights even try to include animals (the great apes), or even the environment, among the beneficiaries of human rights. In this context, the legitimate question arises whether intelligent robots might not be worthy of human rights protection.

The argument of the legal scholars who have developed this subject rests on establishing an ontological difference between human beings and automated machines, which in turn is based on three assumptions: (1) man "came into being" and has no constructor (like animals); (2) man constructed an entity (the machine); and (3) the machine was constructed for a specific purpose.
Based on these assumptions, human beings (and animals) are purposeless beings (the first assumption), while all automated machines (here in the very broad sense of applications or complexes of artificially intelligent applications, with or without a physical body), regardless of their capabilities (e.g. sentience, intelligence and consciousness), are necessarily (a) constructed and/or (b) constructed for a given particular purpose (the second and third assumptions). The ontological difference rests on the fact that human beings have no pre-assigned purpose, what some scholars call "existential normative neutrality", whereas robots are never "existentially normatively neutral", since someone else constructs them and gives them a purpose.
Humans, by contrast, simply came into existence, without any constructor or purpose given to them. Their rights are based on their own existence and their freedom, independent of any particular purpose. A machine must have a purpose, namely the purpose set by its constructor, even if that purpose were "merely to construct such an entity."
Human rights are recognized because the human species is such that, having come into existence without a determined purpose, it must discover its own purpose, and rights are granted specifically (or at least in part) to enable this discovery.
However, the claim that human beings, who came into existence without any particular purpose, enjoy rights, while beings that were constructed and given a particular task or purpose do not, regardless of their level of cognitive ability (which is likely to be far higher in machines), may be a misleading distinction, and one that undermines the fact-value distinction.
My bet for the medium term (it will not be long before these questions become pressing) is that humans will not remain the sole arbiters of assigning rights to other intelligent beings that they have, in fact, created. After all, what is to stop us from one day being the inferior ones, denied our own rights?
This article was written by Veronica Dobozi, Partner.