Stephen Hawking really set the chattering classes to chattering with his latest ominous warning about unfettered AI (Artificial Intelligence). After all, without ever-smarter Computers, where would we be right now? -- Back in the Digital Dark Ages!
Experts Divided On Stephen Hawking's Claim That Artificial Intelligence Could End Humanity
by Richard Ingham, Pascale Mollard, AFP (Paris); businessinsider.com -- Dec 6, 2014
[... physicist Stephen Hawking: ]
"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC.
"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.
But experts interviewed by AFP were divided.
[...]
Does AI really threaten the future of the human race?
by Rory Cellan-Jones Technology correspondent, bbc.com -- Dec 4, 2014
[...]
Ben Medlock told me that Professor Hawking's intervention should be welcomed by anyone working in artificial intelligence: "It's our responsibility to think about all of the consequences good and bad", he told me. "We've had the same debate about atomic power and nanotechnology. With any powerful technology there's always the dialogue about how you use it to deliver the most benefit and how it can be used to deliver the most harm."
[...]
The whole question of the use of artificial intelligence in warfare has been addressed this week in a report by two Oxford academics. In a paper called Robo-Wars: The Regulation of Robotic Weapons, they call for guidelines on the use of such weapons in 21st Century warfare.
"I'm particularly concerned by situations where we remove a human being from the act of killing and war," says Dr Alex Leveringhaus, the lead author of the paper.
He says you can see artificial intelligence beginning to creep into warfare, with missiles that are not fired at a specific target: "A more sophisticated system could fly into an area and look around for targets and could engage without anyone pressing a button."
[...]
After all, without ever-smarter Weapon Systems, where would we be right now? -- Back in the Military Stone Age!
Who needs all that 'blood and guts' warfare -- in our backyards ...
-- MQM-107E Streaker in flight alongside an F-16 Fighting Falcon. -- Wikimedia Commons
Robotic weapons: researchers call for new regulations for 21st century warfare
News -- oxfordmartin.ox.ac.uk -- Dec 1, 2014
[...]
Robotic weapons, whether autonomous or remote controlled, have generated widespread controversy in recent years. A new policy paper from the Oxford Martin School, University of Oxford, urges governments to recognise the increasing prominence of these weapons in contemporary and future forms of warfare and proposes steps towards suitable regulation.
[...]
“It is clear that robotic weapons are here to stay, and that they will play a growing role in future armed conflicts,” says Dr Alex Leveringhaus, the paper’s lead author. “Their use is increasing in militaries around the world, as is research into new systems. A recent example is the announcement of co-operation between the UK and France on a new drone, known at the moment as the Future Combat Air System. But their use raises a multitude of legal and ethical questions. Many people are uncomfortable with the concept of an autonomous robotic weapon, or even with the idea that military personnel can ‘kill by remote control’.
“New military technologies and their deployment are in danger of outpacing the development of an appropriate regulatory framework. There is now an urgent need for states, the military and manufacturers to work together to respond to justified legal and moral concerns.”
[...]
Its authors, Dr Leveringhaus and Dr Gilles Giacca, state that, given the complexity of the issue, neither a blanket endorsement nor condemnation of robotic weapons is feasible. Instead, regulation should be conducted on a case-by-case basis.
Their recommendations include:
[...]
• For states and the military to work together to define the contexts in which robotic weapons can be used, and prevent illegal use;
• Ensuring new weapons comply with existing legal and ethical restrictions.
As long as we're giving them trillions to 'keep us safe', they might as well do it "smartly" -- and 'ethically' too (whenever it's convenient, of course).
That would only be 'the Human thing' to do. You know, to have our Killer-Robots follow our rules ...
Perhaps Stephen Hawking really knew what he was doing -- when he recently decided to stir up an AI Hornet's Nest ... especially if one of his colleagues had passed along this very clinical, AI-tuning planning paper ...
Robo-Wars -- The Regulation of Robotic Weapons
Oxford Martin Policy Paper
Alex Leveringhaus
Gilles Giacca
© Oxford Martin School,
University of Oxford, 2014
(cc) Creative Commons
Executive Summary -- The aims of the paper
Future armed conflicts will be characterised by the deployment of military robots and, in particular, robotic weapons. Remotely Piloted Aircraft Systems (RPAS), often popularly known as ‘drones’, have generated widespread controversy in recent years. Some observers object specifically to the use of RPAS as part of counter-terrorism operations; others are uneasy about the fundamental principle of ‘killing by remote control’.
The current generation of RPAS represents the tip of the iceberg of robotic weapons. Samsung’s SGR-A1 robots, equipped with two machine guns and a gun with rubber bullets, now ‘man’ border posts in the Korean Demilitarized Zone.[1] In principle, once programmed, the SGR-A1 robots can, without assistance from a human operator, accurately identify and target individual humans. Last year, the UK Ministry of Defence and BAE Systems announced the successful test of a stealth plane, Taranis. As an object of study, it does not carry weapons and cannot select its own targets but it can, whilst always under the control of an operator, take off, fly to a given destination and find a pre-determined object of interest with little intervention from its operator unless required.[2]
Unfortunately, the emerging debate on robotic weapons is confused. Robotic weapons encompass a variety of systems, some more problematic than others. Furthermore, there is little agreement on the features of robotic weapons which might be deemed legally and ethically problematic. This confusion is compounded at the policy level. Is new legislation to regulate the development and deployment of robotic weapons required? Is robotic warfare inherently unethical, meaning robotic weapons should be banned?
This policy paper summarises different types of military robots and outlines the relevant technological features of robotic weapons. It provides an overview of the ongoing debates as to whether the use of robotic weapons is legal and ethical. It then assesses current proposals for the regulation of robotic weapons. Finally, the paper makes recommendations to states, manufacturers and the military on how to develop a suitable regulatory framework for robotic weapons.
[...]
2.2 What is a robotic weapon?
There are two key features of weapons that are relevant here. Firstly, weapons are specifically designed to harm, or threaten to harm, another party.[8] Secondly, weapons harm predominantly (but not exclusively) by producing a kinetic effect in order to disable, destroy or kill a target. Carrying a payload, weaponised robots are designed to create such a kinetic effect. Their artificial body, sensors and governing software are engineered to deliver the payload.
In addition, robotic weapons are systems (meaning that there are certain criteria that govern the application of force) and are uninhabited (meaning that there is no operator physically located inside the robot).
2.2.1 Targeting and control
The key question is how the targeting functions of a robotic weapons system are controlled. Targeting processes – or ‘Kill Chains’ – encompass five steps: observe; orient/analyse; decide; enact; and assess.
In remote-controlled robotic weapons, central - if not all - steps of the Kill Chain are directly controlled by the operator via remote control. Tele-operation, used extensively in current RPAS, is a popular method of controlling robotic weapons.[9]
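The five-step Kill Chain, and where a human operator sits in a remote-controlled system, can be sketched abstractly. This is a purely illustrative model of the concept described above, not real weapons software; all function names, and the choice of which steps the operator gates, are assumptions for exposition.

```python
# Illustrative sketch of the five-step 'Kill Chain' described above.
# In a remote-controlled system the operator directly controls the central
# steps (modelled here as 'decide' and 'enact'); all names are hypothetical.

KILL_CHAIN_STEPS = ["observe", "orient/analyse", "decide", "enact", "assess"]

def run_kill_chain(candidate, human_approves):
    """Walk the chain, requiring operator confirmation at the central steps."""
    log = []
    for step in KILL_CHAIN_STEPS:
        if step in ("decide", "enact") and not human_approves(step, candidate):
            log.append(f"{step}: aborted by operator")
            return log  # operator override -- chain halts, no force applied
        log.append(f"{step}: completed")
    return log

# Usage: an operator who declines at the 'decide' step halts the chain.
log = run_kill_chain("candidate-target", lambda step, c: step != "decide")
print(log[-1])  # -> "decide: aborted by operator"
```

The design point the paper makes is visible in the sketch: remove the `human_approves` gate and the same chain runs end-to-end autonomously.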
[...]
3. Are robotic weapons legal?
In terms of international law, there is no single answer as to whether robotic weapons are illegal. Rather, from a legal perspective, this
technology must be capable of satisfying different legal requirements under different legal regimes. The legal regimes in question are international humanitarian law and international human rights law, as well as rules that regulate the use of force by one state against another (known as ‘jus ad bellum’).
[...]
Weapons that are unlawful due to their nature are prohibited because they cause excessive injury or unnecessary suffering that has no military purpose. Unnecessary suffering, in this context, refers primarily to the effect of such weapons on combatants.[15] Weapons in this category include weapons loaded with poison[16], chemical and biological weapons[17] and blinding laser weapons.[18] Since it is not clear that robotic weapons cause excessive injury or unnecessary suffering, it may be reasonable to say that they are not illegal due to their nature, unless they serve as delivery platforms for the aforementioned weapons.
3.4 Target selection and engagement without human intervention
The legal principles governing the use of force remain the same whether the use of force is carried out by a piloted aircraft, under remote real-time control by a human operator or by an autonomous weapon system without any human control or oversight at the stage of force delivery.
For robots that are remote-controlled by a human operator, the operator determines who is a lawful target and how this determination is to be made. Thus RPAS, in principle, do not raise different legal issues than other piloted aircraft. The engagement of RPAS falls under exactly the same strict military rules as ordinary military aircraft.
Autonomous robotic weapons are more problematic since the robot, though preprogrammed by a human operator, does not function under the direct control of a human operator. In order for the deployment of the robot to be lawful, the machine would have to be programmed in such ways that it can comply with the two key principles of international humanitarian law, namely distinction and proportionality.
[...]
3.4.1 Distinction
Under the laws of armed conflict, parties to an armed conflict must distinguish between the civilian population and combatants, and between civilian objects and military objects.
[...]
The principle of distinction poses a number of challenges to autonomous robotic weapons. How would a robot distinguish between a child with a toy gun and a soldier with a machine gun? Would it be possible for a robot to distinguish between a sniper lying on the ground and a wounded combatant, protected under international humanitarian law, who no longer poses a threat? Could a machine also adequately identify a combatant who has expressed the will to surrender and is thus protected under international humanitarian law?
At the moment there are no clear answers to these challenging questions.
[...]
3.4.2 Proportionality
The rule of proportionality prohibits an attack if the ensuing civilian harm is excessive in relation to the concrete and direct military advantage anticipated by the attack.[22] An attack may become illegal if excessive collateral damage affecting civilians or civilian objects is to be expected. The concrete application of the rule of proportionality leads to a number of intricate questions. What is the value of a military objective relative to the likely civilian casualties? How many casualties are acceptable in order to eliminate, say, an enemy tank or supply bridge? What is the likelihood that the destruction of a bridge is going to lead to casualties in the school nearby?
Answering these questions requires a number of value judgements that are highly contextual. It is therefore questionable whether autonomous robotic systems can be pre-programmed to foresee the indefinite number of situations in armed conflict that involve value judgements. Relevant judgements require a lot of experience, and military personnel are trained to learn how to make those decisions and calculations.
Advocates of autonomous robotic weapons could point out that military officers sometimes get these decisions and calculations wrong, with highly negative humanitarian consequences. Even extensive training cannot guarantee that military officers never make the wrong decisions. For the case in favour of autonomous robotic weapons to succeed, it needs to be shown that, if we take the imperfect decision-making by humans as a baseline, autonomous robotic weapons could effectively outperform humans.
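The difficulty the paper identifies can be made concrete with a toy sketch of a pre-programmed proportionality check. The point is not that such a program would be adequate -- it is that any pre-programmed version must reduce the contextual value judgements discussed above to numbers fixed in advance. Every name and threshold below is hypothetical and purely illustrative.

```python
# Toy sketch of a pre-programmed proportionality rule: prohibit an attack
# if expected civilian harm is 'excessive' relative to the anticipated
# military advantage. 'Excessive' is crudely hard-coded as a fixed ratio --
# precisely the contextual judgement a human commander would otherwise weigh.
# The threshold has no legal basis; it exists only to illustrate the problem.

EXCESSIVE_RATIO = 0.5  # assumed, fixed in advance -- the crux of the difficulty

def attack_permitted(expected_civilian_harm, anticipated_military_advantage):
    """Return True only if expected civilian harm is not 'excessive'
    relative to the concrete and direct military advantage anticipated."""
    if anticipated_military_advantage <= 0:
        return False  # no military advantage -> no basis for any civilian harm
    return expected_civilian_harm / anticipated_military_advantage <= EXCESSIVE_RATIO

# The fixed ratio cannot adapt to the indefinite situations the paper notes:
print(attack_permitted(expected_civilian_harm=1, anticipated_military_advantage=10))  # True
print(attack_permitted(expected_civilian_harm=8, anticipated_military_advantage=10))  # False
```

A human officer weighs each case in context; this function applies one ratio to every case, which is why the paper questions whether such value judgements can be pre-programmed at all.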
[...]
6. Recommendations
Given the heated debate on the legality and morality of developing and deploying robotic weapons, their current use in multiple conflict zones and continuous rapid advances in robotic weapons technology, it is vital that policymakers, manufacturers and military leaders reflect on the available regulatory options. Due to the complexity of the issue, a blanket endorsement or condemnation of robotic weapons is impractical. We propose that the regulation of robotic weapons be pursued on a case-by-case basis and, based on the above discussion, we recommend the following actions.
[...]
6.2 Recommendation for manufacturers and the military
• Design for responsibility
- Prioritise human oversight of and control over remote-controlled and autonomous weapons at all stages of their deployment.
- Ensure operators are able to override the robot at any stage of its deployment. Genuine ‘out-of-the-loop’ systems are not desirable.
- Put in place adequate mechanisms so that individuals can be held responsible for the deployment of remote-controlled and autonomous weapons.
- Design for machine autonomy should be used to enhance human decision-making, not replace it.
Dr Alex Leveringhaus is a James Martin Fellow at the Oxford Institute for Ethics, Law and Armed Conflict and a post-doctoral research fellow at the 3TU Centre for Ethics and Technology at Delft University of Technology (joint appointment). He trained in political philosophy at the London School of Economics, and currently works on the ‘Military Human Enhancement: Design for Responsibility and Combat Systems’ project funded by the Netherlands Organisation for Scientific Research. In addition, he carries out research for his own project on ethics and new military technologies, funded by the British Academy via a Small Research Grant. His most recent publication is Ethics and Autonomous Weapons: Technology and Armed Conflict in the 21st Century (Palgrave, forthcoming, 2015).
Dr Gilles Giacca is Coordinator of the Oxford Martin Programme on Human Rights for Future Generations; a research fellow at the Faculty of Law, University of Oxford; and Research Associate at the Oxford Institute for Ethics, Law and Armed Conflict. He has advised states, international organisations and NGOs on matters of international law. His main research interests lie in the field of public international law, collective security, international humanitarian law, human rights law, refugee law and weapons law. His most recent publications include Economic, Social and Cultural Rights in Armed Conflict (OUP, 2014) and Commentary on the UN Arms Trade Treaty (OUP, forthcoming, 2015).
[emphasis added]
Aren't you glad it's informed Doctors like that who are shaping our put-it-on-auto-pilot policies -- and not some daffy Dr Strangelove characters who think 'War is our sole reason for existing'?
... Our 'always will-be' necessary evil. After all, War is just Evolution happening, on RPAS Steroids. Survival of the swiftest and all that.
(Remotely Piloted Aircraft Systems -- NOW "new and improved" with autonomous targeting systems -- upgraded with "aftermath image-blurring"© for your protection.)
Finding that "acceptable baseline" for Autonomous Robotic Weapons systems is really not such 'a high bar' -- if our robots just need to be "better at War" than we humans now are, now is it?