
Weapons of automated destruction

In April 2018, troops from the United States and the United Kingdom conducted the Robotic Complex Breach Concept demonstration at a base in southern Germany. Autonomous weapons were deployed to perform a variety of tasks, including a remote-controlled breach of a mock-up enemy position. “U.S. military replaces soldiers with robots in first-of-its-kind training exercise,” read an April 2018 Newsweek headline.

The episode made clear that artificial intelligence and robotics are already widespread in the arms industry, and that their applications will continue to grow. Dileep George, co-founder of Vicarious, and Marek Rosa, CEO and CTO of Good AI, both run companies that aim to develop human-level AI. In the midst of an escalating technological arms race, the two entrepreneurs share their perspectives on what the future of war will look like, and whether the unavoidable transformation of the soldier's job will be for better or for worse.

Machines making decisions?

Any technology has both positive and negative effects. As a society, “we figure out the best ways to apply it,” George says. However, the Indian researcher has his doubts when it comes to lethal autonomous weapons: “I do not think it is an idea that we should rush into without a lot of thinking,” he says.

He is not alone. In 2017, 116 leaders of tech companies from 26 countries signed an open letter pressing the United Nations to ban the use of “killer robots” in warfare. Marek Rosa was among them. Nonetheless, he knows the trend cannot be stopped because of the so-called security dilemma: states fear that if they do not build such technology fast, somebody else will. Still, the Slovak entrepreneur insists, good uses of AI can “save soldiers’ lives by not putting them in physical danger in the first place.”

Soldiers of the future

George and Rosa agree that the work of soldiers as we know it is about to come to an end. Military technology is already enhancing human capacities, and soon warfighters will be able to make split-second decisions with the help of artificial intelligence and augmented-reality toolkits. “The individual will still make the decisions, but will be provided with researched information that will help them be more accurate,” George explains.

A widespread argument against automated weapons goes like this: by saving lives on one side of the conflict, you are probably killing more people behind enemy lines. George is unconvinced by this reasoning. “War has always been about the advantage of one side over the other,” he responds. In fact, AI could reduce the number of casualties by coldly analysing the battlefield, without worrying about self-preservation. “If the emotional decision factor on the field is removed, maybe things become safer,” argues George.

Rosa agrees that weaponised robots with the capacity to decide when to shoot are not a good idea, though he notes that “even people are not that good at deciding who to kill.” For him, it is important to distinguish between using artificial intelligence to attack, killing as many people as possible, and using it to limit casualties on both the civilian and enemy sides. “AI can also help soldiers do the latter, and then it would actually be good for everyone,” he says. Most of these technologies already exist. However, their developers still cannot guarantee predictability and zero-failure operation, and there is still a way to go before they can be used extensively on the battlefield.

The good guys’ responsibility

Could AI still be used the wrong way in warfare? Undoubtedly. As the technology becomes cheaper and more accessible, it can end up in the wrong hands; it can also be hacked and reprogrammed, and terrorism and other threats might grow as a result. “Once we let it out, it may be hard to control,” says George.

As AI experts and entrepreneurs, George and Rosa feel a certain responsibility to reduce the risks the technology might bring. Good AI even launched a challenge, offering prizes for proposals on how to avoid such a race. And, of course, both work to raise awareness by participating in the St. Gallen Symposium and other events. One burning question is how open researchers in the field should be about their discoveries. According to Rosa, cooperation is always more beneficial than competition, and openness is a way to reduce conflict and risk.

On the other hand, in February 2018 a group of respected researchers from the US and the UK released a report titled “The Malicious Use of Artificial Intelligence.” With the aim of prevention and mitigation, the document suggested, among many other things, not spreading research broadly until the associated risks have been assessed. It’s an approach that gives George pause: “Not putting ideas out there is going to kill innovation rather than control the bad guys.”

Far from now

As weapons become faster and more effective, Rosa imagines a future without human soldiers. War itself will be reimagined, he says: Instead of machines killing people or other machines, confrontations will be more about information, because it is a much more efficient way to fight.

That’s the optimistic view. For a pessimist, the full mechanisation of war is a grim prospect: without soldiers at risk, for example, there might be fewer qualms about resorting to military action. “When a person is included in the loop, the empathy that they will have for other human beings becomes part of the equation of controlling any conflict,” George says. “Removing that empathy from the conflict is a disastrous decision.”
