As AI (Artificial Intelligence) permeates every aspect of human life, the changes brought about by technological advancement seem boundless. Just as the Industrial Revolution reshaped social structures and economic patterns, AI has maximized everyday efficiency and made our lives significantly more convenient. Recently, however, this technology has become increasingly involved in another domain our society faces: war. This is an extremely sensitive topic, as it deals with matters of life and death and has the potential to shake the ethical foundations of human society. Can AI truly bear ethical responsibility in war, humanity's worst act? This question has set not only AI researchers but also philosophers and legal scholars on a long search for reasonable answers.

The 'autonomous weapon system,' a recent hot topic in the international community, sits at the point where military technology and AI converge, and it is sparking a debate of a new dimension that goes beyond mere strategic advantage. An autonomous weapon system literally refers to a weapon capable of identifying targets and executing attacks on the battlefield without human intervention. Internationally, many countries are investing heavily in AI-based military technology, accelerating the trend toward unmanned warfare. South Korea cannot ignore this reality either. Domestic research institutions, including the Agency for Defense Development (ADD), are reportedly researching AI-based surveillance systems and unmanned aerial vehicles as part of developing technologies for future warfare.

At the same time, the ethical questions raised by introducing AI into the military domain are rapidly coming to the fore. The use of autonomous weapons inherently poses significant moral questions. Can a killing on a battlefield where this technology is deployed be justified when the decision is made without human judgment? A recent Aeon essay, 'How Morality Collides at the Intersection of War and AI,' which compiles discussions among international scholars, explores these questions with philosophical depth. The author poses a fundamental question: can AI's 'reason' truly replace human 'emotion' and 'morality'? They emphasize the urgent need for ethical reflection and international norms that keep pace with the speed of technological advancement.

These discussions are also actively underway in South Korea. Professor Park Gu-yong of Chonnam National University's Philosophy Department previously emphasized in a broadcast program that 'while AI can surpass humans in the realm of calculation, it still cannot replace human insight when it comes to making moral decisions.' On his argument, AI makes judgments and executes actions based on quantified data, but it can hardly weigh the ethical context that arises in the process. Distinguishing combatants from civilians, for instance, requires not only the technical ability to process images but also the human deliberation such situations demand.

Concerns about the tragic consequences that could follow from AI judgment errors are not mere conjecture. A UN report documented the use of the Turkish-made autonomous drone 'Kargu-2' in the Libyan civil war in 2020. The drone is known to be able to identify and track targets without direct human command, and some interpret the incident as the first instance of AI making a lethal decision on its own.
Such incidents have sparked significant debate in the international community over the potential use of autonomous weapons and the question of moral responsibility. Military experts have warned that 'a situation in which AI, lacking rational criteria for judgment, unintentionally threatens innocent lives could easily spiral into uncontrollable chaos.'

AI Judgment Errors and Moral Responsibility

Here, deeper philosophical questions emerge. The Aeon essay extends its consideration to the ontological issues that would arise if AI were endowed with 'consciousness.' If a future AI possessed self-awareness and consciousness beyond mere algorithms, could it be considered a moral agent? And if such an entity made lethal decisions in war, who would bear the responsibility? This is not merely a technical problem but a fundamental question about the essence of humanity, the nature of consciousness, and the origin of moral responsibility. The author worries that these questions remain insufficiently discussed while the technology advances rapidly.

Proponents of AI-based military technology, on the other hand, focus on intelligent efficiency rather than the potential for technical errors. Many military experts believe that introducing autonomous weapons will optimize military operations, reduce troop casualties, and give humanity an opportunity to build a better security environment. They argue that because AI operates on data and patterns, it can be more efficient than human judgment.