It may seem like a strange thing to reference, but the multiplayer components of Treyarch’s 2015 and 2018 entries in the Call of Duty: Black Ops series, Black Ops 3 and 4, featured a specialist character named Reaper. Reaper carried the identification tag EWR-115, short for “Experimental War Robot – 115”. It (or he, as he prefers to be addressed) was a humanoid robot equipped with Gatling guns, grenade launchers and infrared sight. He was one of the first experimental droids built by the Swiss army to minimise casualties in the war that was brewing. A gambling arena sent him on high-profile assassination missions, and people would bet on his success. But as his AI grew to understand the barbaric acts he had been made to commit, he became sentient and chose the name Reaper for himself, instead of the tag given to him by his creators.
Now, if we ask whether Reaper’s story will someday become a reality, the answer will probably be no; as any mature argument will conclude, it is, after all, just a sci-fi video game. But should Reaper’s story set the standard for AI utilisation? That depends. The armed forces employ millions of people, supporting all of their families. But the question has to be asked: why should living, breathing people risk their lives to protect the citizens of a country when machines can do it for them?
Almost since the advent of AI technology, one of the central questions we have had to deal with has been whether Artificial Intelligence could replace human involvement in truly undesirable pursuits, which, in this case, is war. On one hand, this could spare human lives from grotesque deaths; on the other, we don’t know whether AI involvement would lead to peaceful settlements of potentially war-like disputes, or fuel them further, consuming ever more resources, blowing wars completely out of proportion and turning AI systems into instruments of mass destruction. And the mind-blowing part of it all is that there is no way to satisfactorily answer that question without multiple trials and, unfortunately, multiple errors.
Massive innovations are being made in research on the Internet of Things, a network of appliances interconnected and controlled by one device. Some researchers have given this central controller an identity of its own: through Artificial Intelligence and machine learning, it can manage the network far more efficiently. A now-cancelled crime drama from 2011, Person of Interest, explores the concept of Artificial Intelligence being used to monitor everyone and predict possible terrorist attacks and their culprits. It touches on the privacy concerns of a computer program, and by extension whole governments, spying on the global population. For some relatable context, the concept of Skynet, or, by a difference of opinion, the Matrix, might be the scariest scenario this could turn into. If this process is implemented in the defence sector, legions of weaponised robots will serve to protect or march to war, instead of human beings struggling just to survive another day. These robots don’t feel the cold, they are expert sharpshooters, and they can be equipped with specialised gear such as grappling hooks or infrared sights, much like Pathfinder from Apex Legends. Of course, this would start a race to achieve “advanced warfare” stages of military preparation. Foot soldiers would essentially become obsolete, and their lives would be out of danger. These extra hands could be put to the welfare of society or into the many industries that are as necessary as they are understaffed.
There is another edge to this blade. AI will maximise efficiency in factories and industries by analysing data and crafting the program best suited to each situation. But if that happens (and we manage to avoid the AI gaining sentience), there will, without a doubt, be a worldwide unemployment crisis. With machines taking over jobs, billions of people around the world will be out of work and therefore deprived of life’s necessities, resulting in riots and absolute chaos. So AI, especially in combination with the Internet of Things, has to be carefully monitored and regulated. Otherwise, we might well see a post-apocalyptic, dystopian cyberpunk story play out, with the rich controlling the industries and the poor forced to work in them.
Is it worth letting Artificial Intelligence decide the next moves of the armed forces? The soldiers would all be working in safer environments. But that doesn’t mean AI-driven armed forces would operate completely autonomously; they would always need human overseers. The “singularity” has long been predicted by doomsayers and scientists alike: the event in which machines take over as the new dominant species on the planet. The only way a mechanical object can gain consciousness is if the Artificial Intelligence controlling it suddenly becomes self-aware. Machine learning gives this a real possibility of happening, but Hollywood has blown it out of proportion. And, as David Cage’s work points out, sentient machines need not be lesser than other life, and may simply become a part of this world. Skynet doesn’t seem like an immediate danger.
If Artificial Intelligence were allowed to take command of a legion of modified robots designed for specific missions, warfare would change forever. It would become an even more lethal prospect than the picture war poetry has painted for civilians for generations. But there is also a possibility that AI systems could be used to solve global problems by circumventing war altogether. Following that line of thought, AI and IoT could be combined to settle heated disputes while adopting a humanitarian approach. They could be utilised to analyse data from around the world and issue environmental ultimatums, pushing humanity to strengthen the measures taken to protect the environment.
The thing is, there are a lot of unanswered questions that are also very likely unanswerable. AI has always been an enigma, simply owing to its sheer novelty. Is AI the key to a new world? Or is it a curse that humanity cannot afford to implement if it wants to preserve both lives and jobs? We don’t know. But we will, in due course, and until we do, all we can do is hope.