Autonomous arms threaten civilization

Tesla CEO Elon Musk, in cooperation with over 100 robotics and artificial intelligence specialists, has called on the United Nations to outlaw autonomous weapons.

Musk and his associates further stated that artificial intelligence in weapons is more dangerous to the world than North Korea is.

Musk and 116 other experts expressed their concerns in a letter signed by representatives of organizations across 26 countries. These experts asked for autonomous weapons, which they called "morally wrong" and hazardous, to be added to the list of weapons prohibited under the United Nations' Convention on Certain Conventional Weapons, which took effect in 1983. Musk has called artificial intelligence humanity's biggest existential threat.

Mustafa Suleyman, an artificial intelligence expert at Alphabet's DeepMind, wrote, "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend."

The letter was released on the opening day of the International Joint Conference on Artificial Intelligence in Melbourne on Monday and underscored the urgent need to scrutinize autonomous weapons.

Two years ago, Musk and Stephen Hawking backed a similar open letter presented at the same conference.

It is alarming that autonomous weapons are on the rise in many military powers, such as China, Russia, South Korea, the United Kingdom and the United States. South Korea uses Samsung's SGR-A1 fixed-place sentry gun, equipped with voice recognition, a mounted machine gun and a grenade launcher, to detect, track and fire at trespassers along its 160-mile border. While it is not fully autonomous, it has the potential to become so.

Russia is working on creating an autonomous model of the Uran-9 unmanned combat ground vehicle.

The United Kingdom's Taranis drone, created by BAE Systems, is expected to have full autonomy. The unmanned combat aerial vehicle completed its first test flight in 2013 and is anticipated to enter service after 2030 as a component of the Royal Air Force's Future Offensive Air System.

The Taranis drone will succeed the human-controlled Tornado GR4 warplanes. The United Kingdom's government stated as early as 2015 that it opposed a ban on lethal autonomous weapons, with the Foreign Office explaining that "international humanitarian law already provides sufficient regulation for this area." Musk is right to bring attention to the perils of self-governing weapons.

Last month, Musk urged U.S. governors to regulate artificial intelligence. In addition, the United Nations recently agreed to begin formal discussions of autonomous weapons, such as drones, automated machine guns and tanks.

The United Nations should be mindful of Musk’s warning as the United States continues to focus on autonomous weapons.

The United States' unmanned warship, the Sea Hunter, was built by Vigor Industrial and launched in 2016. It is designed to perform anti-submarine warfare. Boeing's unmanned submarine, the Echo Voyager, will also be evaluated for naval military operations.

The United States is wrong to replace human-piloted aerial surveillance with robotic aircraft. These aircraft can deploy bombs and missiles without human control. However, they can be flawed and end up attacking civilians instead of the targeted enemy.

The United States should be wary of sending a robot into war: a malfunction could lead the robot to attack the wrong target and perhaps even innocent people.

Autonomous weapons should not be trusted because they clash with humanitarian values. Unlike humans, self-governing machines do not have integrity or a sense of morals.

An absence of virtue means that these weapons should not be involved in life or death situations. Autonomous weapons should not have such serious authority over humans.

The United Nations, alongside countries that are creating autonomous weapons, should listen to Musk’s warning.

These weapons should be strictly scrutinized. Weapons that can identify and destroy targets with no one present to control them have the potential to spiral out of control.

A robot destroying a target without human control is an unsettling thought. Countries should not shift responsibility from humans to robots in demanding situations, such as war.

There is no need to forgo human intervention and develop autonomous aircraft. In difficult situations, such as military interventions where lives may be lost, trust should be placed in humans, not in machines with no ethics.

Opinions | Maya Yegorova