By Dylan Lee
What do you like most about this topic?
What I like most about the AI topic is that there is a lot of flexibility in what we can talk about during debate. Because it is an ongoing topic, new developments emerge almost daily. For instance, just last year in April, Russia unveiled a humanoid robot called Final Experimental Demonstration Object Research (FEDOR) that will be sent to outer space for various tasks. What is surprising about this robot, though, is that the Russian government is training it to shoot guns. Its creators say they are doing so to improve the android's motor skills and decision-making, but critics have raised concerns that FEDOR might become a real-life Terminator. This is the flexibility I am talking about: AI can have both positive and negative consequences, just as FEDOR could accomplish space-related tasks more efficiently but could also become a dangerous weapon.
What do you think is the most important part of this topic?
I think the most important part of this topic is managing AI's growing influence, both now and in the centuries to come. Right now, weak AI has already made an impact on global industries. For instance, factories have started to implement automation, which increases production rates and boosts the global economy. Because of the complexity of AI in general, scientists are currently only able to produce weak AI, such as robots and software programs. However, within the next couple hundred years, it is predicted that strong AI, artificial intelligence capable of thinking and acting like humans, might be created. If that does happen, a whole new mindset will need to be adopted, because no one knows for sure how those strong AIs will behave. Will they stay benevolent and assist humans with their superhuman abilities, or will they turn against mankind, as we see in movies? That is exactly why our topics address economic implications, the regulation of autonomous weapons systems, and the fostering of global cooperation on AI research and development.
I don’t quite understand “Regulating autonomous weapons systems.” Can you explain it in layman’s terms?
The topic of “regulating autonomous weapons systems” simply means controlling weapons that can independently select targets and decide whether to strike them. Why we need to regulate them, though, has been debated for a very long time. I'm sure you have heard of cases where autonomous weapons malfunction and accidentally target innocent civilians, as well as friendly-fire scenarios. That is why this is one of our three APQ topics: to address this very problem.