Artificial Intelligence: Machine Ethics and Military Robots

As the sophistication of artificial moral agents improves, it will become increasingly important to construct fully general decision procedures that do not rely on assumptions of special types of agents and situations to generate moral behavior. Since such development may require extensive research and it is not currently known when such procedures will be needed to guide the construction of very powerful agents, the field of machine ethics should begin to investigate the topic in greater depth.

Machine ethics is the branch of the ethics of artificial intelligence concerned with the moral behavior of robots and other artificially intelligent beings. It differs from roboethics, which concerns the moral behavior of humans as they design, build, and use such beings. Machine ethics is also referred to as computational ethics or computational morality.

The book Moral Machines: Teaching Robots Right from Wrong, by Wendell Wallach and Colin Allen, focuses on the challenges of building artificial moral agents and probes deeply into the nature of human decision-making and ethics.

Specialists are increasingly inclined to design robots so that they can become self-sufficient, make their own decisions, and acquire autonomy. They are also concerned about the degree to which such abilities might pose a threat or hazard, and they note that some machines have already acquired various forms of semi-autonomy.

Today, machine ethics means giving machines ethical principles, along with a procedure for resolving the ethical dilemmas they may encounter, so that they can function in an ethically responsible manner through their own ethical judgment. The field also focuses on making artificial agents safer and on exploring solutions for agents whose autonomous capacities fall between those of current artificial agents and humans.
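
One family of decision procedures discussed in this field weighs competing prima facie duties against one another. The sketch below is purely illustrative: the duty names, weights, and scores are assumptions for demonstration, not any published system.

```python
# Illustrative duty-weighing decision procedure (all names, weights,
# and scores are hypothetical assumptions, not a published system).
duties = {"nonmaleficence": 3, "beneficence": 2, "autonomy": 1}

def evaluate(option_scores):
    # option_scores maps duty -> -1 (violates), 0 (neutral), +1 (satisfies).
    # The option's value is the weighted sum of duty satisfactions.
    return sum(duties[d] * s for d, s in option_scores.items())

def choose(options):
    # Pick the option whose weighted duty satisfaction is highest.
    return max(options, key=lambda name: evaluate(options[name]))
```

For example, an option that satisfies nonmaleficence but violates autonomy scores 3 - 1 = 2, and would be chosen over an option scoring -3 + 1 = -2.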

MACHINE ETHICS CHALLENGES OF SUPERINTELLIGENCE: Gary Drescher describes two classes of agents: situation-action machines, with rules specifying actions to perform in response to particular stimuli, and choice machines, which possess utility functions over outcomes and can select actions that maximize expected utility. Situation-action machines can produce sophisticated behavior, but because they possess only implicit goals they are rigid and cannot easily handle novel environments or situations. In contrast, a choice machine can select appropriate actions in unexpected circumstances based on explicit values and goals (Drescher 2006).
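
Drescher's distinction can be sketched in a few lines of code. The rules, outcomes, probabilities, and utilities below are hypothetical examples, not drawn from Drescher's text.

```python
# A situation-action machine: a fixed rule table mapping stimuli to
# actions. Goals are only implicit in the rules, so any stimulus
# outside the table leaves the machine without a response.
situation_rules = {"obstacle": "stop", "clear_path": "advance"}

def situation_action(stimulus):
    return situation_rules.get(stimulus, "no_rule")

# A choice machine: an explicit utility function over outcomes, plus a
# model of which outcomes each action makes likely. It can evaluate
# any action, even in circumstances no rule anticipated.
utility = {"mission_done": 10.0, "idle": 0.0, "collision": -100.0}

outcome_model = {
    # action -> {outcome: probability}
    "advance": {"mission_done": 0.8, "collision": 0.2},
    "stop":    {"idle": 1.0},
}

def choice_machine():
    # Select the action maximizing expected utility over outcomes.
    def expected_utility(action):
        return sum(p * utility[o] for o, p in outcome_model[action].items())
    return max(outcome_model, key=expected_utility)
```

Here "advance" has expected utility 0.8(10) + 0.2(-100) = -12, while "stop" has 0, so the choice machine stops; the situation-action machine, by contrast, simply fails on any stimulus missing from its table.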

Machine ethics deserves serious consideration, as it is an emerging field that seeks to implement moral decision-making in computers and robots. Semi-autonomous robots might violate ethical standards; in the case of AI and robotics, fearful scenarios range from a future takeover of humanity by a superior form of AI to the havoc created by endlessly reproducing nanobots.

Ethics and robotics are two academic disciplines: one deals with moral norms and values, while the other aims at the production of artificial agents with some degree of autonomy, based on rules and programs set up by their creators.

Human-robot interaction raises serious ethical questions that are, in practice, more pressing than the possibility of creating moral machines that would be more than machines with an ethical code. Topics include the ethical challenges of healthcare and warfare applications of robotics, as well as fundamental questions concerning the moral dimension of human-robot interaction.

As Rosalind Picard put it, "The greater the freedom of a machine, the more it will need moral standards."

One ethical issue is whether a robot should be permitted to identify and kill suspected enemy soldiers on its own. On the one hand, the idea of robots replacing soldiers may appeal to anyone who does not wish to see their fellow citizens killed in warfare. Still, the idea of robots making life-and-death decisions seems extremely risky, particularly (but not only) when we consider the ethical implications if a robot were to make a mistake and kill a civilian.

If a robot malfunctions and harms somebody, who will be held liable and accountable: the owner of the robot, its manufacturer, or the robot itself? Under what circumstances can robots be put in positions of authority? Is it ethically wrong for robots to hurt the emotional sensitivities of human beings?

ADVANTAGES OF MILITARY ROBOTS: These robots have many advantages over human soldiers. Most importantly, they can perform missions remotely in the field without any actual danger to human lives.

Major Kenneth Rose of the US Army's Training and Doctrine Command outlined some of the advantages of robotic technology in warfare: "Machines don't get tired. They don't close their eyes. They don't hide under trees when it rains, and they don't talk to their buddies ... A human's attention to detail on guard duty drops dramatically in the first 30 minutes ... Machines know no fear."

PROSPECTIVE RISKS: In 2009, academics and technical experts attended a conference to discuss the impact of the hypothetical possibility that robots and computers could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might attain autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence."

EFFECTS AND IMPACT OF MILITARY ROBOTS: Whether to deploy robots on the front line that would make their own decisions has raised several moral questions:
•    Will robots be able to distinguish between enemy troops and innocent civilians?
•    Can programmers envision every situation that robots will encounter on the battlefield? If not, robots could make deadly errors during initial deployment.
•    Might robots be "hacked" by the enemy and turned against friendly troops?
•    If robots carry a remote "kill switch" so they can be shut down in such a case, might the switch itself be hacked by the enemy to disable them?
•    If robots break the rules of war, who is responsible: the manufacturer, the programmer, or the nearest human commander?
•    If robots use video or other sensors to gather information on the conduct of human troops, soldiers might feel they are being "spied on," harming morale.

Advantages of using robots on the battlefield:
•    Replace soldiers in dangerous missions.
•    One human fighter could control a squad of robots working semi-autonomously.
•    Make faster decisions than humans.
•    Be unaffected by annoyance, vengeance, food shortage, fright, exhaustion, or anxiety.
•    Use video or other sensors to monitor human soldiers on both sides.
•    Refuse to carry out unethical or illegal commands.
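
The last point above can be made concrete with a minimal sketch of a command filter that refuses forbidden orders. The Command structure, the rule set, and all names are hypothetical assumptions for illustration, not any real military system.

```python
# Hypothetical command filter: refuse orders that match a hard-coded
# list of forbidden (action, target) pairs. Everything here is an
# illustrative assumption, not a real fielded system.
from dataclasses import dataclass

@dataclass
class Command:
    action: str       # e.g. "engage", "observe"
    target_type: str  # e.g. "combatant", "civilian", "medical_unit"

FORBIDDEN = {
    # (action, target_type) pairs the robot must never execute
    ("engage", "civilian"),
    ("engage", "medical_unit"),
}

def screen_command(cmd: Command) -> str:
    """Return 'refused' for a forbidden order, otherwise 'accepted'."""
    if (cmd.action, cmd.target_type) in FORBIDDEN:
        return "refused"
    return "accepted"
```

A static rule table like this handles only the cases its designers anticipated, which is precisely the limitation of situation-action machines discussed earlier.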

DISADVANTAGES OF ROBOTIC SYSTEMS: 
High initial cost of robotic systems and robots
 
Possible need for extra space, and new technology, to accommodate robotic systems and robots

Importance of using highly skilled and technical engineers, programmers and others to set up robotic systems and robots to prevent unnecessary future problems and mishaps

Learning curve of persons working with new robotic systems and possible injuries during that time

Robotic systems and robots are limited to their programmed functions, and only the programmers really know what those functions are. Unless the artificial intelligence is highly sophisticated, robots may not respond properly in an emergency or when some unexpected variance occurs.

Introducing new systems will inevitably bring out defects.

MORAL ISSUES:
Countries and companies appear to be working aggressively to have consumers idolize robots and robotic systems, serving their own commercial interests.

Robots and robotic systems replace certain workers, causing economic losses with a possible resulting shortening of lifespans.

Certain people, due to lack of funds, may not be able to access important uses of robots, for example certain surgeries performed with robotic systems.


CONCLUSION: The aggressive introduction of robots and robotic systems around the world is making companies and human beings more dependent on them, and is therefore producing a more dependent society.
As artificial intelligence becomes more sophisticated and robots enter more households, there may be significant negative effects on the human family system.
Robots are, and will remain for the foreseeable future, dependent on human ethical scrutiny as well as on the moral and legal responsibility of humans.



REFERENCES
1.    Anderson, Michael, and Susan Leigh Anderson. 2007a. The Status of Machine Ethics: A Report from the AAAI Symposium. Minds and Machines 17 (1): 1–10. doi:10.1007/s11023-007-9053-7.

2.    Anderson, Susan Leigh, and Michael Anderson. 2007b. The Consequences for Human Beings of Creating Ethical Robots. In Human Implications of Human-Robot Interaction: Papers from the 2007 AAAI Workshop, ed. Ted Metzler, 1–4. Technical Report, WS-07-07. AAAI Press, Menlo Park, CA. http://www.aaai.org/Papers/Workshops/2007/WS-07-07/WS07-07-001.pdf.

3.    Bostrom, Nick. 2003. Ethical Issues in Advanced Artificial Intelligence. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, ed. Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute of Advanced Studies in Systems Research /Cybernetics.

4.    Drescher, Gary L. 2006. Good and real: Demystifying paradoxes from physics to ethics. Bradford Books. Cambridge, MA: MIT Press.

5.    Good, Irving John. 1965. Speculations Concerning the First Ultraintelligent Machine. In Advances in Computers, ed. Franz L. Alt and Morris Rubinoff, 31–88. Vol. 6. New York: Academic Press. doi:10.1016/S0065-2458(08)60418-0.

6.    Greene, Joshua D. 2002. The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About It. PhD diss., Princeton University. http://scholar.harvard.edu/joshuagreene/files/dissertation_0.pdf.

7.    Kurzweil, Ray. 2005. The Singularity is Near: When Humans Transcend Biology. New York: Viking.

8.    Moravec, Hans P. 1999. Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.