First global summit on AI in military makes strong case for a human in the loop

THE HAGUE: The perfect storm is coming, and there is no forecast model for it. Artificial Intelligence (AI)-driven and -directed weapon systems are revolutionising warfare and battlegrounds. It is the next arms race, and the search is now on for the "safety lever" before anyone puts a finger on the trigger. There are far too many open questions about AI in the military domain. Thanks to the Netherlands government, the world is now one step closer to getting answers.
The first global summit on Responsible Artificial Intelligence in the Military Domain (REAIM), organised by the Netherlands government, was held at The Hague on February 15-16. It is a platform for all stakeholders to discuss the key opportunities, challenges and risks associated with military applications of AI. It is also the first global attempt to prevent the proliferation of lethal autonomous weapon systems (LAWS) and to insert ethics, responsibility, accountability and the moral factor into a rapidly developing weaponisation technology with the potential for cataclysmic damage.
The aim is for nations to sign up to an agreement akin to the Nuclear Non-Proliferation Treaty. REAIM 2023 concluded in a call to action to the world. Delegations from 80 countries participated in the summit. India hasn't signed the Call to Action — at least not yet — though China and the US have.
“AI has the potential to revolutionise the way wars are fought and won. But it also poses significant risks. To prevent abuses we need to establish international guidelines. It has been established that AI is as ground-breaking as nuclear technology. It is crucial we take action now,” Netherlands minister for foreign affairs Wopke Hoekstra said in his opening address. “Together, we must seek common ground, starting with two basic questions: what is AI and who is responsible for its actions,” he pointed out. “In Ukraine we are unfortunately already seeing the influence of new technology, including drone and cyber attacks. We are also witnessing how Russia is violating international humanitarian law in the most gruesome way,” he said. AI is a double-edged sword, especially in weapon systems. As one expert told TOI on the sidelines of the summit: “It often takes a little bit of noise to confuse the system. It makes stupid decisions.” Can such a system be left to take its own decisions on pulling the trigger?
The Netherlands’ chief of defence, General Onno Eichelsheim, made a strong case for human control. “A human must be in the loop in the use of force, specifically in the offensive part. We must also know when the algorithm can take a decision when we are on the defensive side and the enemy is moving fast and using AI,” he said in the opening panel discussion. Agnes Callamard, secretary-general of Amnesty International, warned of the risk of focusing too much on the “reliability” of a system and made a strong case for conformity with international humanitarian law. “We need to ensure meaningful human control in the use of force. Fully automated weapon systems should be prohibited. There should be a strict regulation of all autonomous weapon systems that have the potential of mass destruction. We need to keep human control over AI,” she said.