Cosmopolis, rivista di filosofia e teoria politica

Ethics in the Age of Robotics Revolution

LUCA GRION
Article published in the section Robotics and Public Issues.

For a long time, philosophical reflection has focused on issues related to the technical dimension. In this context, the relationship between man and machine has drawn special attention because of its ambivalent nature: on one side, the machine is seen as an instrument through which man can emancipate himself from hard and repetitive work; on the other, it is felt as a potential enemy that undermines man, making him useless and obsolete. In this regard, tradition has it that Ned Ludd emblematically protested against the stocking frames he held responsible for widespread unemployment and low wages. The episode is said to have taken place in 1779, at the time of the Industrial Revolution, when the spreading use of the steam engine was fostering an exponential expansion of manufacturing capacity while, at the same time, causing a drastic disruption of the social fabric.
Moving from this general consideration of the relationship between man and machine to the specific one between man and robot, the situation does not change; if anything, it radicalises the dialectical conflict between the radiant, encouraging side of an increasingly powerful technology (which helps man by relieving him of heavy duties) and its dark side, often explored by science fiction, especially in its dystopian strain. Hence the need for a conscious use of machines, and the urgency of a responsible management of technical equipment. Hans Jonas (1972; 1979) raised very important points in this regard, inviting us to focus on the need to develop an ethics of technology able to cope with the new challenges that man has to face.
From this point of view, ethical reflection focuses mainly on man and on the responsibility of those who produce and use these technical creations. This opens the way to what we could call an "ethics in the age of robots", whose protagonist is man, while the machine (the robot) is considered a mere product. Some speak of "technoethics" precisely to mark the specificity of this area of reflection; they use the term to denote the need to identify a kind of professional deontology addressed to those who design, produce and use robots (Galvan 2003). The object of ethical reflection is thus new (robotics), but the way of facing the issue (the deontological method) is "classical". Things, however, are more complicated than they may seem at first glance (Galvan 2003; Veruggio - Operto 2006; Veruggio 2006; Salvini - Laschi - Dario 2007).
The pervasiveness of technology opens the way to further issues. I will try to list a few: if robotic devices were incorporated into human bodies – if the development of bionics led to the creation of real cyborgs – how would our conception of ethics be modified? And how would our self-representation change? Finally, if machines became as intelligent, sensitive and independent as man, would they then become free, conscious and responsible? If so, they would also be entitled to rights, and this would require the development of an ethics of robots.
In the following pages, I will try to briefly outline the different aspects of this problem.


1. Man and the robot

The etymology of the word "robot" is extremely interesting. The Czech playwright Karel Čapek coined the term, using it for the first time in R.U.R. (Rossum's Universal Robots), a science-fiction drama whose protagonist pursues the utopia of a world free from physical fatigue (Čapek 1920). To achieve this ideal, he builds androids made of organic material: anthropomorphic robots that the author calls "robots", from the Czech word "robota", which means "hard work", "forced labour". The robot, then, is a technical product that can carry out heavy work without the control and guidance of a human being, releasing him from physical fatigue as, in ancient times, slaves released free citizens from the burden of manual work. Modern slaves, designed to receive and execute orders; passive servants in the service of man's desire for happiness. This is the ideal.
But the utopia of a life free of physical fatigue soon turns into dystopia. In Čapek's drama, the introduction of robots causes a series of tragic and unwanted consequences: first, indolence and vice spread and the birth rate declines; then the slaves rebel against their creators and kill them. What is represented is a tragic allegory of the consequences of man's refusal of the limits imposed by his own finite condition.
Certainly, this is not the only case in which literature faces the ambivalence of the relationship between man and technology. Long before Čapek's text, the same topic was covered both by Greek tragedy (a valid example is the myth of Prometheus, symbol of the "challenge to the sky") and by Jewish tradition, in which we find the legend of the Golem (Henry 2013). The Golem is a clay giant which, according to legend, is brought to life by writing the word "emet" ("truth") on its forehead, and can be deactivated by erasing the initial letter, thus changing the inscription from "emet" ("truth") to "met" ("dead"). In this case as well, the artificial being is created to release man from fatigue; it is an obedient servant who carries out heavy duties and defends the Jews from the attacks of their persecutors. However, the creature ends up rebelling against its maker, who presumed to dominate its life. This is what happens to Rabbi Judah Loew ben Bezalel of Prague, who, according to legend, moulded the Golem from the mud of the Vltava river to make it a faithful servant of his orders, but soon lost control over it.
Finally, in this list of what looks like the prehistory of robotics, we should certainly mention Mary Shelley's Frankenstein, a dramatic metaphor of the human desire to defeat death (Shelley 1818). As in Čapek's drama and the legend of the Golem, the creature rebels against its inventor, who was misled by the illusion of having control over it, transforming the utopia of a safe and happy life for man into its exact opposite.
Why mention these three literary topoi in the context of a reflection on roboethics? Because they share a view of the relationship between man and technology that focuses on the role of the creator/user, thus opening the way to a reflection on the responsibility of those who design, produce and use technological products over which they cannot exercise real control. The real problem is the anxiety for dominion that transpires in those who see in technology the most effective answer to the fears of the human heart (the fear of a weak and finite condition, the fear of the mystery of life and the inevitability of death). In this way, roboethics (conceived as an "ethics in the time of robots") leads to a radical reflection on the meaning and value of the human condition (Grion 2012c).


2. The man-robot

Roboethics can also be conceived from a different thematic perspective. Completely new issues arise from the incorporation of technological devices into the body and from the consequent man-machine hybridisation. I am not referring to science fiction, but to a series of matters which characterise everyday life in the present. Some examples may help focus the issue.
In the past few years, one of the most innovative areas of computer science research has focused on the rehabilitation potential of Brain-Machine Interfaces (Lucivero - Tamburrini 2007; Warwick - Battistella 2007). These devices may be divided into two large families: on the one hand, interfaces designed to restore the sensory skills of a subject (input BMIs); on the other, interfaces aimed at restoring the motor skills of a patient (output BMIs).
Examples of the first type are cochlear and retinal implants, whose aim is to overcome the interruption of the nerve signals that carry information from the sensory system to the brain. Examples of the second type are neuro-motor prostheses, designed to guide the movement of prosthetic or mechanical limbs using the subject's ability to consciously activate certain cortical areas to which specific movements of the robotic device are correlated (different cognitive tasks – doing a sum, or imagining oneself moving around a room – activate specific cerebral areas that are correlated with individual commands addressed to the robotic device).
Both these forms of man-machine interaction stimulate the nervous system and can modify its structure to a greater or lesser extent. The ethical question here concerns the long-term consequences of such stimulation and the quality of the changes it may induce in patients receiving this kind of implant. We may wonder whether the interaction between the BMI and the central nervous system can modify mental activity. Alongside these issues, there are others that concern the quality of the information transmitted by the BMI and the problems of acceptance – and self-acceptance – that these implants may generate in the patients who use them.
In the sphere of rehabilitation, one of the most remarkable sectors revolves around the techniques used to treat conditions such as Parkinson's disease, epilepsy or Tourette's syndrome through stimulators implanted deep in the brain (deep brain stimulation). This is a surgical intervention in which a small electrode, connected to an electrical pulse generator positioned under the clavicle, is implanted in the subthalamic nucleus of the brain. The aim of the device is to interrupt the neurophysiological transmissions that cause the tremor typical of Parkinson's disease. The results are fascinating; it seems difficult to fear technological applications that help people regain a better quality of life (videos are available on the web in which patients with Parkinson's disease can even dance thanks to these innovative rehabilitation interventions). Although these operations are generally welcomed, the question of the long-term consequences of such interventions on the human brain remains. Prolonged stimulation of the thalamus, for example, can cause emotional changes and, more generally, an alteration of the patient's personality. This possibility raises a series of doubts and questions about the concept of "authenticity": is the patient undergoing such interventions authentically himself or, as in the case of psychiatric drugs, does his personality change after treatment?
If we go deeper into the possible consequences of such technological grafts on the human body, it is not far-fetched to assume that being the object of such therapies could be invoked as an extenuating circumstance for a crime committed. The culprit could argue that the action of the neuro-stimulator pushed him to commit the act of which he is accused, limiting or eliminating his freedom and, consequently, his personal responsibility. How can these problems be managed?
Other problems of man-machine hybridisation emerge from the possibility of implanting RFID (Radio Frequency Identification) devices or subcutaneous GPS. A pioneer of these experiments is Kevin Warwick, who tested some practical applications of such devices (controlling lights, opening doors, remote control of mechanical devices) and anticipated further possible applications, including the remote monitoring of children, the elderly and prisoners (Warwick 2003; Warwick - Battistella 2007). The aim of these tracking systems is to make it easier to locate missing people (as in the case of a kidnapping, of a patient with Alzheimer's disease who has wandered from home, or of the remote surveillance of a prisoner granted forms of detention other than prison). However, the ethical questions about the lawfulness of such violations of personal privacy are very strong and call for careful reflection.
Finally, there is the possibility that man-machine hybridisation could occur not for medical or rehabilitative purposes, but to enhance humans, providing them with faculties higher than their standard biological capabilities. Here we enter the complex problem of human enhancement, of the possibilities opened by technology for transcending the human condition (Aguti 2011; Bostrom 2005a, 2005b; More 1994, 2013; Grion 2012a, 2012b). Giuseppe O. Longo has written very important pages on these topics; he speaks of the incipient homo technologicus as a "symbiont", the result of the transformation of homo sapiens by technology (Longo 2003; Longo - Bonifati 2012). In conclusion, from an ethical perspective, there is the difficulty of establishing a reasonable boundary between the lawfulness of rehabilitation and the disapproval surrounding human enhancement. There are those, such as Warwick, who explicitly embrace the prospect of closing with the old humanity and opening the way to a new one, enhanced by the technological revolution. "If you could be enhanced – the English scientist asks provocatively – would you have any problem attending humanity's funeral?" (Warwick 2003; Warwick - Battistella 2007).


3. Robot and man

So far we have analysed the relationship between man and machine always from the point of view of the first of these two subjects. However, the need to consider the point of view of the second has also been raised, encouraging a debate on the possibility of considering robots as subjects with rights and not just as objects of use.
There are authors, such as the American Ray Kurzweil (2005, 2012), who are convinced that the rapid development of artificial intelligence will lead to the production of robots capable of self-learning and, therefore, of decision-making autonomy. I would like to point out a few assumptions underlying this view. First of all, there is a reductionist approach, that is, the belief that, to use an expression of Marvin Minsky's (1986), the mind is simply what the brain does. Secondly, there is a functionalist assumption which tends to link some of the classic categories of moral reflection (in particular, those of consciousness and person) to the ability of the subject in question to exercise certain functions. Finally, there is an ethical point of view of utilitarian origin that reduces the great notions of good and evil to the useful and the harmful (the pleasant and the unpleasant).
What results from the interweaving of these three coordinates in a reflection on the future of robotics? First of all, the possibility of considering an artificial product a "person" if it were able to manifest functions comparable to those which allow a human being to be a person. In this regard, Kurzweil expects that, in the near future, machines will indeed seem conscious and will be very persuasive when expressing their feelings (and their first-person view of the world). These machines will appear to have real emotions and to make their own decisions. As a result, according to the American futurologist, we will come to regard them as fully conscious persons (Kurzweil 2012).
Today, the precursors of these intelligent machines can be found in the most advanced machine-learning systems, computer systems able to learn from experience, by trial and error (Fornati et al. 2007). They are machines that expand their knowledge and their ability to respond to external stimuli autonomously, gradually gaining new space for autonomous decision and action. An extremely fascinating example of machine learning is Watson, an artificial intelligence system created by IBM, which Kurzweil discusses extensively in his latest book (Kurzweil 2012): a machine able to mine information on the web, understand questions in natural language and reply proficiently and accurately to a multitude of questions, acquiring skills that enabled it to beat flesh-and-blood humans at a game such as the American TV show Jeopardy!
Much more familiar examples of devices with self-learning skills accompany our everyday life; we need only think of speech recognition systems, web information-filtering systems, home-cleaning robots or automatic guidance systems. Other, more sophisticated systems are not "smart tools" but "artificial companions": this is the case of robots designed to keep children and the elderly company (for example, the AIBO dog produced by Sony) or robots for the assistance of elderly and disabled people. We are facing devices with autonomous decision-making and the ability to simulate human intelligence and emotion. In the near future, these abilities, still at an early stage today, could develop so significantly as to make it hard to distinguish the behaviour of robots from that of humans (passing the Turing Test). If this were the case, we should ask ourselves whether to continue to regard these devices in the same way as cars and appliances, or whether it would be correct to begin to treat them with the dignity and respect that we ordinarily reserve for intelligent beings.
According to Kurzweil – in agreement with a perception that is gradually spreading – since robots will behave (and function) as fully conscious persons, we should consider them real persons; therefore, they should be entitled to the same rights that human beings ordinarily have. Robots, then, will not simply be the object of ethical reflection: they will become ethical subjects, for whom an ad hoc ethics, an "ethics of robots", will necessarily have to be developed. In this regard, Isaac Asimov – a classic of science-fiction literature – represents a milestone in the reflection on roboethics, as his "three laws of robotics" foreshadow a first code of ethical behaviour for robots (Asimov 1950): (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov later added a "Zeroth Law", which precedes and complements the First: a robot may not harm humanity or, by inaction, allow humanity to come to harm.)


4. Conclusions

This survey, however brief, of the varied and fascinating field of roboethics has highlighted some aspects that I consider extremely important. First of all, there is the need to develop an ethical reflection that can keep up with the issues raised by the recent technological revolution and discern between the positive possibilities and the potential dangers we may face. Secondly, there is the need to call attention to the concept of "human nature", in order to understand the specificity of the human and to ensure its proper defence (Grion 2012c). It seems clear, indeed, that the ethical acceptability of a specific technical discovery rests on its compatibility with human dignity; a dignity related to what protects and defends human existence in its essential coordinates. In this sense, an existentially radical one, research on the issues of roboethics highlights the need to raise the question of the essential architecture of the human condition and of the alternative between a "service" use of technics, as an invaluable tool for man's growth, and a "manipulative" use of technics, intended to reshape human nature by encouraging a transcendence of the present condition (Aguti 2011). The latter is a temptation which always accompanies the euphoria of technical power; but, as the dystopian myths mentioned above remind us, it leads more easily to the feared sub-human than to the hoped-for post-human.


Bibliographical References

Aguti A. (2011) (ed.), La vita in questione. Potenziamento o compimento dell'essere umano?, La Scuola, Brescia 2011.
Asimov I. (2003), Io robot (1950), Mondadori, Milano 2003.
Bostrom N. et al., The transhumanist FAQ
Bostrom N. (2005a), A History of Transhumanist Thought, in «Journal of Evolution and Technology», vol. 14, n. 1, 2005, pp. 1-25.
Bostrom N. (2005b), In Defence of Posthuman Dignity, in «Bioethics», vol. 19, n. 3, 2005, pp. 202-214.
Čapek K. (2006), R.U.R. (Rossum’s Universal Robots) (1920), Bevivino, Milano 2006.
Fornati A., Casalini S., Ferro M., Pioggia G., Sica M.L., De Rossi D. (2007), Robotic Action Learning. FACE: la responsabilità di imparare dalle azioni, in «Teoria», XXV, n. 2, 2007, pp. 74-82.
Galvan J.M. (2003), On Technoethics, in «IEEE-RAS Magazine», vol. 10, n. 4, 2003, pp. 58-63.
Grion L. (2012a), Postumanesimo: un neognosticismo?, in «Hermeneutica», 2012, pp. 333-352.
Grion L. (2012b) (ed.), La sfida postumanista. Colloqui sul significato della tecnica, Il Mulino, Bologna 2012.
Grion L. (2012c), Persi nel labirinto. Etica e antropologia alla prova del naturalismo, Mimesis, Milano 2012.
Henry B. (2013), Dal golem ai cyborgs. Trasmigrazioni nell’immaginario, Belforte Salamon, Livorno 2013.
Jonas H. (2002), Il principio responsabilità (1979), Einaudi, Torino 2002.
Jonas H. (2011), Tecnologia e responsabilità. Riflessioni sui nuovi compiti dell’etica (1972), in Id., Frontiere della vita, frontiere della tecnica, Il Mulino, Bologna 2011.
Kurzweil R. (2012), How to Create a Mind: The Secret of Human Thought Revealed, Penguin, New York 2012.
Kurzweil R. (2008), La singolarità è vicina (2005), Apogeo, Milano 2008.
Longo G.O., Bonifati N. (2012), Homo immortalis. Una vita (quasi) infinita, Springer, Milano 2012.
Longo G.O. (2003), Il simbionte, Meltemi, Roma 2003.
Lucivero F., Tamburrini G. (2007), Sul monitoraggio etico delle interfaccia cervello-macchina, in «Teoria», XXV, n. 2, 2007, pp. 27-40.
Minsky M. (1989), La società della mente (1986), Adelphi, Milano 1989.
More M. (2013), A Letter to Mother Nature, in More M., Vita-More N. (eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, John Wiley & Sons, Oxford 2013.
More M. (1994), On Becoming Posthuman, in «Free Inquiry», September 1994.
Salvini P., Laschi C., Dario P. (2007), Roboetica e Biorobotica: discussione di alcuni casi di studio, in «Teoria», XXV, n. 2, 2007, pp. 41-56.
Shelley M. (2013), Frankenstein (1818), Feltrinelli, Milano 2013.
Veruggio G., Operto F. (2006), Roboethics: a Bottom-up Interdisciplinary Discourse in the Field of Applied Ethics in Robotics, in «International Review of Information Ethics», vol. 6, December 2006, pp. 3-8.
Veruggio G. (2006), The EURON Roboethics Roadmap, "Humanoids'06": IEEE-RAS International Conference on Humanoid Robots, December 6, 2006, Genova (IT).
Warwick K., Battistella C. (2007), Quattro matrimoni e un funerale: problemi etici nel futuro dell’interfaccia cervello computer, in «Teoria», XXV, n. 2, 2007, pp. 19-26.
Clark A. (2003), Review of Warwick K., Natural-born cyborgs: Minds, technologies and the future of human intelligence, in «Trends in Cognitive Sciences», vol. 7, n. 12, 2003, pp. 524-525.





