eISSN: 2574-8092

International Robotics & Automation Journal

Mini Review Volume 4 Issue 3

Ethics in robotics and automation: a general view

Spyros G Tzafestas

School of Electrical and Computer Engineering, National Technical University of Athens, Greece

Correspondence: Spyros G Tzafestas, School of Electrical and Computer Engineering, National Technical University of Athens, Greece, Tel 0030-210-6524000

Received: April 02, 2018 | Published: June 27, 2018

Citation: Tzafestas SG. Ethics in robotics and automation: a general view. Int Rob Auto J. 2018;4(3):229-234. DOI: 10.15406/iratj.2018.04.00127


Abstract

Most robotics and automation scientists believe that many new aspects currently emerging in robotics and automation (R&A), as well as aspects expected to emerge in the future, call for the development of new cultural, ethical, and legal regulations that can efficiently address the most delicate issues arising in real practice. Over the last two decades the subject of ethics in R&A has received great attention, and many important theoretical and practical results have been derived in the direction of making robots and automation systems ethical. The purpose of this paper is to discuss the issue of ethics in robotics and automation and to outline major representative achievements in the field.

Albert Einstein: Relativity applies to physics, not to ethics.

Sholem Asch: Now, more than any time previous in human history, we must arm ourselves with an ethical code so that each of us will be aware that he is protecting the moral merchandise absent of which life is not worth living.

Rudolf Steiner: For everyone who accepts ethical norms, their actions will be the outcome of the principles that compose the ethical code. They merely carry out orders. They are a higher kind of robot.

Daniel H Wilson: We humans have a love-hate relationship with our technology. We love each new advance and we hate how fast our world is changing… The robots really embody that love-hate relationship we have with technology.

Introduction

Ethics, or moral philosophy, is the branch of philosophy that systematically studies, defends, and recommends concepts of right and wrong conduct. The branches of philosophy are metaphysics/ontology, epistemology, teleology, ethics, aesthetics, and logic; the branches of ethics are meta-ethics, normative ethics, and applied ethics. Robotics and automation ethics is the branch of applied ethics which investigates the social and ethical issues of robotics and automation in the broad sense, covering all kinds of systems automated by means of computer, information, communication, and control science and technology, and which develops ethical methods for resolving these issues by exploiting traditional and novel ethical theories (deontological theory, utilitarianism, value-based theory, case-based theory, etc.). In particular, robot ethics (roboethics) covers the entire range of ethical issues related to robot design, operation, and use. Today the central aim of robotics research is to create robots that possess full autonomy, i.e., the capability of autonomous decision making, and this is exactly where the major roboethics issues arise. Actually, present-day robots are still not fully autonomous; they are partially autonomous. At the lowest end they possess low-level (operational) autonomy (autonomous execution of programmed operations without any human intervention), and, passing through medium-level autonomy (functional autonomy), they approach the level of full autonomy (at which there is no human intervention in decision making, planning/scheduling, functioning, and action performance). The same holds for the issues of ethics, where we have several levels of morality, namely:1

  1. Operational morality (moral responsibility lies entirely with the robot’s designer and user).
  2. Functional morality (the robot has the ability to make moral judgments without top-down instructions from humans, and the robot designers can no longer predict the robot’s actions and their consequences).
  3. Full morality (the robot is so intelligent that it is fully autonomously choosing its actions, thereby being fully responsible for them).

But could a robot be ethical? As argued by many authors, the minimum requirements for a robot to be ethical are (a toy sketch of the first two requirements is given after the list):

  1. complete ability to predict the consequences of its own actions (or inactions),
  2. a set of ethical rules against which to test each possible action/consequence, so that it can select the most ethical action,
  3. legal authority to carry out autonomous decision making and action, accompanied by the associated liability.1-9
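
To make requirements (1) and (2) concrete, here is a minimal Python sketch of an ethical action selector; every name in it (Action, Consequence, the two rules) is a hypothetical illustration, not an established API. Each candidate action is passed through a consequence predictor and scored against a set of ethical rules, and the highest-scoring action is selected.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    name: str


@dataclass
class Consequence:
    harm_to_humans: float   # 0.0 (none) .. 1.0 (severe)
    benefit_to_user: float  # 0.0 (none) .. 1.0 (high)


# An ethical rule maps a predicted consequence to a score contribution.
EthicalRule = Callable[[Consequence], float]


def no_harm_rule(c: Consequence) -> float:
    # Heavily penalize any predicted harm to humans.
    return -10.0 * c.harm_to_humans


def beneficence_rule(c: Consequence) -> float:
    # Reward predicted benefit to the user.
    return c.benefit_to_user


def select_ethical_action(actions: List[Action],
                          predict: Callable[[Action], Consequence],
                          rules: List[EthicalRule]) -> Action:
    """Requirement (1): predict each action's consequences.
    Requirement (2): test the prediction against the ethical rules.
    Then choose the action with the highest aggregate ethical score."""
    return max(actions, key=lambda a: sum(r(predict(a)) for r in rules))
```

Requirement (3), the legal authority and liability side, has no software analogue; it must be settled by regulation.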

Fundamental Ethics Questions in R&A

The field of R&A ethics was developed over the years by addressing fundamental general and specific philosophical questions:

General questions

  1. Are general ethics principles sufficient for facing the issues raised by R&A? The overall answer is no.
  2. Is there a need for a specific ethics framework applied to R&A? The answer is yes.
  3. Is ethics applied to R&A an issue for the individual scholar or practitioner, the user, or a third party? The answer here is: for all of them.

Specific questions

  1. Can we act ethically through, or with, robots and automated systems? If yes, how?
  2. Can we design robots to act ethically? If yes, how? Or, could robots be truly moral agents?
  3. Can we explain the ethical relationship between humans and robots? If yes, how?
  4. Is it ethical to create artificial moral agents (machines/robots, software agents, automated systems)?
  5. How far can we go in embodying ethics in robots and automation?
  6. What are the capabilities that a robot should have in order to be characterized as a moral/ethical robot?
  7. How should people treat robots, and how should robots treat people?
  8. Should robots have rights?
  9. Should robots be considered as moral patients?
  10. Should moral/ethical robots and intelligent machines have new legal status?
  11. What role will robotics and automation have in our future life?
  12. Which type of ethical code is correct for robots and machines?
  13. Who or what is responsible if a robot or other automated system causes harm?
  14. Who is responsible for actions performed by human-robot hybrid beings?
  15. Is the need to embed autonomy in a robot contradictory to the need to embed ethics in it?
  16. Are there any types of robot that should not be designed? Why?
  17. How do robots decide what the proper description of an action is?
  18. If there are multiple rules, how do robots deal with conflicting rules?
  19. Are there any risks in creating emotional bonds with robots?
  20. Is it ethical to program robots to follow ethical codes?
  21. Is it ethical to create robotic nurses and soldiers?
  22. How can ethics and law be jointly applied in robotics and automation?
  23. How might society and ethics change with R&A?

To formulate a sound framework, all the above questions/issues should be properly addressed.

Short review of R&A ethics

The literature of R&A ethics is vast. Our aim here is to provide a short review of some major contributions. The term roboethics, for robot ethics, was first introduced by G. Veruggio at the First International Symposium on Roboethics held in San Remo, Italy (Jan/Feb 2004),3 and the first ethical system in robotics was proposed by Asimov,10 consisting of the so-called Asimov Laws. These deontological laws are anthropocentric (human-centered) in the sense that the role of robots is to operate in the service of humans, and they imply that robots have the capability to make moral decisions in all cases. Roboethics concerns the ethical questions that arise with robots: whether robots pose a threat to humans in the short or long run, whether some uses of robots are problematic (such as in healthcare, or as killer robots of war), and how robots should be designed so that they act ethically. Very broadly, scientists and engineers look at robotics in the following ways:11

  1. Robots are mere machines (surely, very useful and sophisticated machines).
  2. Robots raise intrinsic ethical concerns along different human and technological dimensions.
  3. Robots can be regarded as moral agents, not necessarily possessing free will, mental states, emotions, or responsibility.
  4. Robots can be conceived as moral patients, i.e., beings that can be acted upon for good or bad.

Veruggio defines robo ethics as follows:

“Roboethics is an applied ethics whose objective is to develop scientific/cultural/technical tools that can be shared by different social groups and beliefs. These tools aim to promote and encourage the development of ‘ROBOTICS’ for the advancement of human society and individuals, and to help preventing its misuse against humankind.”3 Actually, roboethics shares many ‘crucial’ areas with computer ethics, information ethics, communication technology ethics, automation ethics, management ethics, and bioethics. Galvan2 argues that robots possess an intrinsic moral dimension, because technology is not an addition to mankind but provides a way to distinguish man from animals.

Veruggio and Operto5 point out that the principal positions of scientists and engineers about roboethics are:

Not interested in roboethics: These scholars argue that the work of robot designers is purely technical and does not carry ethical or social responsibility.

Interested in short-term ethical issues: These scholars advocate that certain ethical and social values, expressed in terms of good or bad, should be adhered to by robot designers.

Interested in long-term ethical issues: These scholars accept that robot designers have a global and long-term moral responsibility (e.g., for the digital divide between societies).

Asaro4 describes how it is possible to make robots that act ethically, how humans must act ethically and take ethical responsibility on their shoulders, and discusses the question of whether robots can be full moral agents. Wallach et al.1,12 describe the three typical approaches for creating ethical machines and robots, and artificial moral agents (AMAs) in general (a toy sketch of the mixed case is given after the list). These approaches are:

  1. Top-down approach in which the desired rules/laws/principles of ethical behavior are prescribed and embedded in the robot system.
  2. Bottom-up approach in which the robot develops its moral behavior through learning. This is analogous to how children learn morality (what is right or wrong) based on social context and experience from their family and human environment.
  3. Mixed approach in which proper combinations of the top-down and bottom-up approaches are followed.
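
As a purely illustrative sketch (none of the cited works prescribes this code), a mixed AMA could combine a top-down filter of designer-prescribed prohibitions with a bottom-up, learned moral scorer; both `prohibitions` and `moral_score` below are hypothetical placeholders.

```python
from typing import Callable, List

# Top-down component: explicit, designer-prescribed prohibitions.
# A prohibition returns True if the given action is forbidden.
Prohibition = Callable[[str], bool]


def choose_action(candidates: List[str],
                  prohibitions: List[Prohibition],
                  moral_score: Callable[[str], float]) -> str:
    """Mixed (hybrid) approach: the top-down filter removes prohibited
    actions, then the bottom-up (learned) scorer ranks what remains."""
    allowed = [a for a in candidates
               if not any(p(a) for p in prohibitions)]
    if not allowed:
        raise RuntimeError("no ethically permissible action available")
    return max(allowed, key=moral_score)


# Usage with toy placeholders: forbid any action touching a human, and
# let a stand-in for a learned scorer rank the remaining actions.
forbid_contact = lambda a: "push human" in a
best = choose_action(["wait", "push human aside", "reroute"],
                     [forbid_contact], moral_score=lambda a: -len(a))
```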

The ethical concerns of robot use include the following:

Loss of privacy (guidelines should be developed to guard against robot misuse, e.g., when drones and robots collecting data enter our home).

Safety issues (when robots work closely with humans).

Liability issues (with regard to who is responsible for errors or faults/failures during robot operation).

Lin, Abney and Bekey8 present a number of contributions by researchers worldwide that address many of the questions listed above. Three comprehensive books on the ethics of machines, robots, and information are those of Capurro and Nagenborg,13 Decker and Gutmann,14 and Gunkel.15 Two important books on the more general field of technoethics are those of Galvan2 and Tavani.16

Branches of roboethics

The branches of roboethics are:

Medical roboethics or health care robotics ethics

This branch refers to medicine and health care assisted by robots.7,17,18 Medical ethics goes back to the work of Hippocrates, who formulated the well-known Hippocratic Oath, which requires a new physician to swear, upon a number of healing gods, to uphold a set of professional ethical standards. The fundamental ethical principles of medical roboethics involve, first of all, the principles of the Charter of Medical Professionalism, namely:

  1. Autonomy (the patient has the right to accept or refuse treatment).
  2. Beneficence (the doctor should act in the best interest of the patient).
  3. Non-maleficence (the practitioner should “first, do no harm”).
  4. Justice (the distribution of scarce health resources, and the decision of who gets what treatment, should be just).
  5. Truthfulness (the patient should not be lied to and deserves to know the whole truth).
  6. Dignity (the patient has the right to dignity).

Assistive roboethics/Ethics of assistive robots

Assistive robots constitute a class of service robots focused on enhancing the mobility capabilities of impaired people (people with special needs: PwSN), so that they can attain their best physical and/or social functional level and gain the ability to live independently.7 Assistive robots/devices include the following:

  1. Assistive robots for people with impaired upper limbs and hands.
  2. Assistive robots for people with impaired lower limbs (wheelchairs, walkers).
  3. Rehabilitation robots for upper limb or lower limb.
  4. Orthotic devices.
  5. Prosthetic devices.

The issues of assistive roboethics have been a strong concern over the years. The evaluation of assistive robots can be made along three main dimensions, namely cost, risk, and benefit. Since these evaluation dimensions pull in opposite directions, we cannot get full marks on all of them at the same time (a toy weighted trade-off is sketched below). Important guidelines for these analyses have been provided by the World Health Organization (WHO), which has approved the International Classification of Functioning, Disability and Health (ICF).19 A framework for the development of assistive robots using the ICF, which includes the evaluation of assistive technologies in users’ lives, is described.20 A full code of assistive technology ethics was released in 2012 by the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA),21 and another code by the Commission on Rehabilitation Counselor Certification (CRCC) in 2002.22
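
For illustration only, the toy sketch below shows one way such a cost/risk/benefit trade-off could be set up as a weighted score; the normalization and the weights are assumptions of this example, not part of the ICF or the RESNA/CRCC codes.

```python
from dataclasses import dataclass


@dataclass
class AssistiveRobotEvaluation:
    cost: float     # normalized: 0.0 (cheap) .. 1.0 (expensive)
    risk: float     # normalized: 0.0 (safe)  .. 1.0 (risky)
    benefit: float  # normalized: 0.0 (none)  .. 1.0 (high)


def overall_score(e: AssistiveRobotEvaluation,
                  w_cost: float = 0.3,
                  w_risk: float = 0.4,
                  w_benefit: float = 0.3) -> float:
    # Benefit counts positively; cost and risk count negatively,
    # reflecting that the three dimensions pull in opposite directions.
    return w_benefit * e.benefit - w_cost * e.cost - w_risk * e.risk


# Compare two hypothetical robotic walkers.
walker_a = AssistiveRobotEvaluation(cost=0.6, risk=0.2, benefit=0.8)
walker_b = AssistiveRobotEvaluation(cost=0.3, risk=0.5, benefit=0.7)
best = max([walker_a, walker_b], key=overall_score)
```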

Social roboethics or ethics of social robots

Sociorobots (social, socialized, socially assistive, socially interactive robots) are assistive robots designed to enter the mental and socialization space of humans. This can be achieved by designing appropriate high-performance human-robot interfaces (HRI: speech, haptic, visual). The basic features required for a robot to be socially assistive are:7,23,24

  1. Understand and interact with its environment.
  2. Exhibit social behavior (for assisting PwSN, the elderly, and children needing mental/socialization help).
  3. Focus its attention and communication on the user (in order to help the user achieve specific goals).

A socially interactive robot possesses the following additional capabilities (an illustrative software interface is sketched after the list):23,24

  1. Express and/or perceive emotions.
  2. Communicate with high-level dialogue.
  3. Recognize other agents and learn their models.
  4. Establish and/or sustain social connections.
  5. Use natural patterns (gestures, gaze, etc.).
  6. Present distinctive personality and character.
  7. Develop and/or learn social competence.
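
From a software architecture viewpoint, the capability list above can be read as an interface that a socially interactive robot's control software would implement; the abstract sketch below is illustrative only and assumes no particular robot platform or library.

```python
from abc import ABC, abstractmethod


class SociallyInteractiveRobot(ABC):
    """Illustrative interface mirroring the capability list above."""

    @abstractmethod
    def express_emotion(self, emotion: str) -> None:
        """Display an emotion (facial expression, posture, tone)."""

    @abstractmethod
    def perceive_emotion(self) -> str:
        """Estimate the user's current emotional state."""

    @abstractmethod
    def converse(self, utterance: str) -> str:
        """High-level dialogue: respond to a user utterance."""

    @abstractmethod
    def recognize_agent(self, observation: bytes) -> str:
        """Identify another agent and update its learned model."""

    @abstractmethod
    def gesture(self, pattern: str) -> None:
        """Use natural interaction patterns (gestures, gaze, etc.)."""
```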

Well known examples of social robots are:

AIBO: a robotic dog (dogbot) able to interact with humans and play with a ball (SONY).

KISMET: a human-like robotic head able to express emotions (MIT).

KASPAR: a humanoid robot torso that can function as mediator of human interaction with autistic children.24

QRIO: a small entertainment humanoid (SONY).

Autonomous car roboethics

Autonomous (self-driving, driverless) cars are on the way. Proponents of autonomous cars and other vehicles argue that within two or three decades autonomously driving cars will be so reliable that they will outnumber human-driven cars.25,26 The specifics of self-driving vary from manufacturer to manufacturer, but at the basic level the cars use a set of cameras, lasers, and sensors located around the vehicle to detect obstacles, and GPS (global positioning system) to follow a preset route. Currently there are cars on the road that perform several driving tasks autonomously (without the help of the human driver). Examples are: the lane assist system that keeps the car in its lane, the cruise control system that speeds up or slows down according to the speed of the car in front, and automatic emergency braking that stops the car to prevent collisions with pedestrians. SAE (Society of Automotive Engineers) International (www.sae.org/autodrive) has developed and released a standard (J3016) for the “Taxonomy and definitions of terms related to on-road motor vehicle automated driving systems”; the taxonomy it defines is sketched below.
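
The J3016 standard defines six levels of driving automation, from level 0 (no automation) to level 5 (full automation). A minimal encoding of that taxonomy follows; the responsibility helper is an illustrative rule of thumb for this paper's discussion, not part of the standard.

```python
from enum import IntEnum


class SAEDrivingAutomationLevel(IntEnum):
    """The six driving automation levels defined by SAE J3016."""
    NO_AUTOMATION = 0           # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1       # e.g., cruise control OR lane assist alone
    PARTIAL_AUTOMATION = 2      # combined steering and speed control
    CONDITIONAL_AUTOMATION = 3  # system drives; human takes over on request
    HIGH_AUTOMATION = 4         # no takeover needed within the design domain
    FULL_AUTOMATION = 5         # no human driver needed under any conditions


def human_driver_monitors(level: SAEDrivingAutomationLevel) -> bool:
    # Illustrative rule of thumb: up to level 2 the human driver still
    # monitors the environment and remains primarily responsible.
    return level <= SAEDrivingAutomationLevel.PARTIAL_AUTOMATION
```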

War/military roboethics

Military robots, especially lethal autonomous robotic weapons, lie at the center of roboethics. Supporters of the use of war robots argue that these robots have substantial advantages, which include saving the lives of soldiers and conducting war more ethically and effectively than human soldiers, who, under the influence of emotion, anger, fatigue, vengeance, etc., may over-react and overstep the laws of war. The opponents of autonomous killer robots argue that weapon autonomy itself is the problem, and that no mere control of autonomous weapons could ever be satisfactory. Their central belief is that autonomous lethal robots must be entirely prohibited. The ethics of war attempts to resolve what is right or wrong, both for the individual and for states or countries, contributing to debates on public policy and ultimately to the establishment of codes of war.26,27 The three dominating traditions (doctrines) in the ethics of war and peace are:28

  1. Realism (war is an inevitable process taking place in the anarchical world system).
  2. Pacifism or anti-warism (rejects war in favor of peace).
  3. Just war (just war theory specifies the conditions for judging if it is just to go to war, and conditions for how the war should be conducted).

The ethical and legal rules for conducting wars using robotic weapons, in addition to conventional weapons, include at minimum all the rules of just war, but the use of semi-autonomous/autonomous robots adds new rules on firing decisions, discrimination of lawful from unlawful targets, responsibility, and proportionality.27,28

Cyborg ethics

Cyborg technology aims to design and study neuromotor prostheses in order to restore and reinstate lost function with a replacement that differs as little as possible from the real thing (a lost arm or hand, lost vision, etc.).29 The word cyborg stands for cybernetic organism, a term coined by Manfred Clynes and Nathan Kline.30 A cyborg is any living being that has both organic and mechanical/electrical parts that either restore or enhance the organism’s functioning. People with the most common technological implants, such as prosthetic limbs, pacemakers, and cochlear/bionic ear implants, or people who receive implanted organs developed from artificially cultured stem cells, can be considered to belong to this category. The first real cyborg was a ‘lab rat’ at Rockland State Hospital, New York, in 1950. The principal advantages of mixing organs with mechanical parts concern human health. For example:

  1. People with replaced body parts (hips, elbows, knees, wrists, arteries, etc.) can now be classified as cyborgs.
  2. Brain implants, based on neuromorphic models of the brain and nervous system, help reverse the most devastating symptoms of Parkinson’s disease.

Disadvantages of cyborgs include:

  1. Cyborgs do not heal body damage normally; instead, body parts are replaced. Replacing broken limbs and damaged armor plating can be expensive and time consuming.
  2. Cyborgs can think about the surrounding world in multiple dimensions, whereas human beings are more restricted in that sense. A comprehensive discussion of cyborgs is given.31

Automation technology ethics

Automation technology ethics is the part of applied ethics and technology ethics (technoethics) which studies the application of ethics to processes and systems automated to one degree or another.32,36 Today, automation is achieved using digital computer technology, digital feedback control technology, information technology, and modern communication technology. Therefore, the ethical issues of automation naturally overlap considerably with the ethical issues arising in all of these areas, and can be studied in a unified way. As noted,33 many people feel that using a computer to do something illegal or unethical is somehow not as “wrong” as other “real” criminal or unethical acts. A crucial fact regarding the application of ethics and ethical standards in information-based practice is that many professionals in this area do not belong to any professional organization. Three fundamental questions about information and automation ethics that have been addressed are:34-36

  1. What constitute substantive ethical issues and how can we learn or know about ethics related to automation?
  2. Do we need better ethics for automation systems? What is this better ethics?
  3. Does anticipatory ethics that studies ethical issues at the R&D and introduction stage of a technology, via anticipation of possible future equipment, applications, and social implications, help to determine and develop a better automation ethics?

Three principal information and service requirements in automation systems are the following; their achievement depends on the ethical performance of engineers and professionals:

Accuracy: Information must be as accurate as possible, so that the conclusions or decisions based on it are correct. Today, most of the information that is available and being accessed is sufficiently accurate.

Accessibility: Information must be accessible. Accessibility involves the right to access the required information, as well as the proper payment of any charges for accessing it.

Quality of service: In contrast to goods, services are intangible and heterogeneous, and the production and consumption of a service are inseparable. Quality of service (QoS) is defined and evaluated by the client, and is judged not only on outcomes but also on the process of delivery. The key requirements for QoS are the following (a toy aggregation example is given after the list):37

  1. Reliability (promised service should be performed dependably and accurately).
  2. Competence (the company has the skill and knowledge to carry out the service).
  3. Responsiveness (readiness and willingness to provide the service).
  4. Access (service personnel easily approachable by customers).
  5. Courtesy (politeness and friendliness of service personnel).
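
Since QoS is defined by the client across these five dimensions, a provider might aggregate client-side ratings into a single score. The sketch below is a toy illustration; the 1-5 rating scale and the equal weighting are assumptions of this example, not part of the cited QoS framework.

```python
from statistics import mean

# Hypothetical client ratings (1-5 scale) for the five QoS dimensions.
ratings = {
    "reliability": 4.5,
    "competence": 4.0,
    "responsiveness": 3.5,
    "access": 4.0,
    "courtesy": 5.0,
}


def qos_score(client_ratings: dict) -> float:
    """Aggregate client-side QoS ratings; equal weights are assumed."""
    return mean(client_ratings.values())


print(f"Overall QoS: {qos_score(ratings):.2f} / 5")
```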

Other areas of ethical concern in R&A are:

  1. Criminal behavior.
  2. Ownership and copyright.
  3. Privacy and anonymity.
  4. Autonomy.
  5. Identity.
  6. Professional conduct.

Automation can have positive and negative impacts on people, organizations, and society in general.38 Basic questions related to the social impact of R&A are the following:

  1. How might R&A affect the everyday life of human society members?
  2. Could vulnerable people be particularly affected by R&A?
  3. Could events occurring in the virtual world of R&A have negative impact on the real world?
  4. Does R&A seek informed consent where necessary?

From a technical point of view, robotic automation implies a range of technical advantages and disadvantages, namely:

Advantages: Reliability, Sensitivity, Endurance, Motion velocity, Mechanical power, Work accuracy.

Disadvantages: Human isolation feeling, Telepresence and virtual reality.

The interaction of automated systems and robots with people brings about new legal considerations with respect to safety and health regulations, law compliance, and the assignment/apportioning of risk and liability. Those using robotic production lines that rely heavily on multiple technologies should ensure that they have contractual arrangements agreed with each machine or technology supplier. A thorough discussion of the implications of robotics for employment and society is provided.39 Ethics overlaps with law but goes beyond it. Laws provide a minimum set of standards for obtaining a desired human behavior, while ethics often provides standards that exceed the legal minimum; therefore, that which is legal is not always ethical. For good human behavior and development, both law and ethics should be respected. Specifically, ethics and law differ in that ethics tells a person what he or she should do, whereas law specifies what a person must do; the law is universally accepted, while ethics is ideal human conduct agreed upon by most people. The best results are obtained when law and ethics go side by side, guiding actions that are both legal and ethical.40,41

Conclusion

This paper has provided a short conceptual review of the ethical aspects and social implications of R&A. The material presented starts with the fundamental philosophical questions about R&A ethics, which have been addressed in the literature and still motivate further research. Then the core of the paper is presented, which includes:

  1. the review of R&A ethics,
  2. an outline of the major branches of roboethics (medical, assistive, social, autonomous-car, war/military, and cyborg ethics), and
  3. a discussion of automation technology ethics and social implications.

Extensive coverage of the concepts and topics reviewed in the paper is provided in the references cited. A global conclusion is that there is still a strong need to develop more practical, and easier to understand and apply, ethical rules and codes for the designers, professionals, and users of R&A products.

Acknowledgements

My Institute’s (National Technical University of Athens) representative need not be fully aware of this submission.

Conflict of interest

The author declares there is no conflict of interest.

References

  1. Wallach W, Allen C. Moral machines: Teaching robots right from wrong. Oxford, UK: Oxford University Press; 2009.
  2. Galvan JM. On technoethics. IEEE Robotics and Automation Magazine. 2003;10(4):58–63.
  3. Veruggio G. The birth of roboethics. Proceedings of ICRA’2005: IEEE International Conference on Robotics and Automation: Workshop on Roboethics; 2005 Apr 18; Barcelona, Spain. 2005. p. 1–4.
  4. Asaro PM. What should we want from a robot ethic? International Review of Information Ethics. 2006;6(12):10–16.
  5. Veruggio G, Operto F. Roboethics: A bottom-up interdisciplinary discourse in the field of applied ethics in robotics. International Review of Information Ethics. 2006;6(12):3–8.
  6. Ramaswamy S, Joshi H. Automation and ethics. Handbook of Automation. Berlin: Springer; 2009:809–833.
  7. Tzafestas SG. Roboethics: A Navigating Overview. Intelligent Systems, Control and Automation: Science and Engineering, Springer; 2016.
  8. Lin P, Abney K, Bekey GA. Robot Ethics: The ethical and social implications of robotics. MIT Press; Cambridge, MA, USA. 2012.
  9. Lichocki P, Kahn PH, Billard A. The Ethical landscape of robotics. IEEE Robotics and Automation Magazine. 2011;18(1):39–50.
  10. Asimov I. Runaround. Astounding Science Fiction; March 1942. Reprinted in: Robot Visions. New York, USA: Penguin; 1991.
  11. Veruggio G, Solis J, Van der Loos M. Roboethics: ethics applied to robotics. IEEE Robotics and Automation Magazine. 2011;18(1):21–22.
  12. Wallach W, Allen C, Smit I. Machine morality: Bottom-up and top-down approaches for modeling moral faculties. AI Society. 2008;22(4):565–582.
  13. Capurro R, Nagenborg M. Ethics and robotics. Amsterdam, The Netherland: IOS Press; 2009.
  14. Decker M, Gutmann M. Robo- and Informationethics: Some Fundamentals. Muenster, Germany: LIT Verlag; 2012.
  15. Gunkel DJ. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA, USA: MIT Press; 2012.
  16. Tavani HT. Ethics and technology: ethical issues in an age of information and communication technology. New Jersey: John Wiley; 2004.
  17. Mappes TA, De Grazia D. Biomedical ethics. New York: McGraw-Hill. 2006.
  18. Pence GE. Classic cases in medical ethics. New York: McGraw-Hill. 2000.
  19. International classification of functioning, disability, and health. World Health Organization; Geneva, Switzerland. 2001.
  20. Tanaka H, Yoshikawa M, Oyama E, et al. Development of assistive robots using international classification of functioning, disability, and health (ICF). Journal of Robotics. 2013:1–12.
  21. RESNA code of ethics.
  22. www.crccertification.com/pages/crc_ccrc_code_of_ethics/10.php
  23. Tzafestas SG. Sociorobot world: A guided tour for all. Berlin, Germany, Springer; 2016.
  24. Fong T, Nourbakhsh I, Dautenhahn K. A survey of socially interactive robots. Robotics and Autonomous Systems. 2003;42:143–166.
  25. Notes on autonomous cars. LessWrong; 2013.
  26. Walzer M. Just and unjust wars: A moral argument with historical illustrations. New York, USA: Basic Books; 2000.
  27. Coates AJ. The ethics of war. Manchester, UK: University of Manchester Press; 1997:1–320.
  28. Asaro PM. Robots and responsibility from a legal perspective. Proceedings of the 2007 IEEE International Conference on Robotics and Automation: Workshop on Roboethics; Rome, Italy. 2007:1–13.
  29. Lynch W. Implants: Reconstructing the human body. Journal of Clinical Engineering. 1982;7(3):1–263.
  30. Clynes M, Kline N. Cyborgs and space. Astronautics; 1960.
  31. Warwick K. Cyborg morals, cyborg values, cyborg ethics. Ethics and Information Technology. 2003;5(3):131–137.
  32. Luppicini R. Technoethics and the evolving knowledge society: ethical issues in technological design, research, development, and innovation. IGI Global; 2010:1–323.
  33. Phukan S. IT ethics in the Internet Age: New dimensions. Information Science. 2002:1249–1257.
  34. Kendall KE. The significance of information systems research on emerging technologies: Seven information technologies that promise to improve managerial effectiveness. Decision Sciences. 1997;28(4):775–792.
  35. Stahl BC. Ethics of emerging information and communication technology: On the implementation of responsible research and innovation. Science and Public Policy. 2017;44(3):369–381.
  36. Brey PAE. Anticipatory ethics for emerging technologies. Nanoethics. 2012;6(1):1–13.
  37. Pinto MA. Delivery of service quality and satisfying library customers through web-based services. 2011:1–51.
  38. Tzafestas SG. Systems, Cybernetics, Control, and Automation: Ontological, Epistemological, Societal, and Ethical Issues. Gistrup, Denmark: River Publishers; 2017.
  39. West DM. What happens if robots take the jobs? The impact of emerging technologies on employment and public policy, Center for Technology Innovation at Brookings; USA. 2015.
  40. Hazard GC. Law, morals, and ethics. Southern Illinois University Law Journal. 1995;19:447–458.
  41. Self-driving cars: Absolutely everything you need to know.
Creative Commons Attribution License

©2018 Tzafestas. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and build upon your work non-commercially.