Experts call for urgent regulations on killer robots: Here's why

On Monday, April 29, Austria emphasized the urgent need for international collaboration to regulate the use of artificial intelligence (AI) in weaponry, particularly the dangers posed by autonomous “killer robots.”

During a conference in Vienna attended by delegates from 143 nations, along with numerous NGOs and international bodies, Austrian Foreign Minister Alexander Schallenberg declared, “We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control.”

‘Humanity at the crossroads’

The event, named ‘Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation,’ according to AFP, focused on the moral and legal questions surrounding AI-driven weapons that operate independently of human guidance. In his opening statement, Schallenberg emphasized, “At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines.”

Although the United Nations has hosted lengthy debates on this topic, little has been achieved towards effective regulation. The urgency to formulate solutions was a common concern among the attendees of the Vienna summit.

Mirjana Spoljaric, president of the International Committee of the Red Cross, likewise stressed the need for immediate action.

“It is so important to act and to act very fast,” said Spoljaric, adding, “What we see today in the different contexts of violence are moral failures in the face of the international community.”

“We do not want to see such failures accelerate by giving the responsibility for violence, for the control over violence, over to machines and algorithms,” she warned during a panel discussion.

AI on the battlefield

Diplomats discussed examples of AI already employed in combat, including autonomous drones used in Ukraine and AI applications by the Israeli military for target selection in Gaza. In a keynote address, Jaan Tallinn, a software developer and tech investor, expressed concerns over the reliability of these AI systems in both military and civilian sectors. He recounted several AI misidentifications, stating, “We have already seen AI making selection errors in ways both large and small, from misrecognizing a referee’s bald head as a football, to pedestrian deaths caused by self-driving cars unable to recognize jaywalking.”
