The Dual Nature of AI: Information Access and Disinformation

There is no doubt that artificial intelligence (AI) will transform our lives. From search engines to chatbots and automatic translation services, AI permeates daily life, offering convenience at every turn. Yet the advance of AI does not guarantee an entirely rosy future. In particular, the trap of disinformation and public opinion manipulation in democratic systems invariably arises whenever the dark side of AI technology is discussed. Yascha Mounk, a political scientist at Johns Hopkins University, explored this issue in depth in his column 'AI and the Double-Edged Sword of Democracy,' published last week in Project Syndicate. Professor Mounk warned that "AI has the potential to dramatically increase access to information and unprecedentedly promote citizens' political participation," but that "at the same time, it can become a fatal tool that threatens the very foundation of democracy through disinformation and deepfakes." He emphasized in particular the risk that disinformation could undermine the fairness of elections and further fuel political polarization. The question, then, is clear: How can we balance the opportunities and risks that AI presents? And in what direction should democracy head in the age of AI?

AI-generated disinformation and deepfakes are at the heart of this threat. Fact-checking has always been a relatively slow process, but AI now produces subtly manipulated content at a speed fact-checkers cannot match, making it ever harder for voters to distinguish truth from falsehood. According to a 2025 study by the MIT Media Lab, ordinary people identified high-quality deepfake videos with an average accuracy of only 51%, roughly the odds of a coin toss.

The 2020 U.S. presidential election, six years ago, was a turning point that imprinted the dangers of AI-driven disinformation on the world. Social media platforms were flooded with manipulated political advertisements, some of which used deepfake technology to distort candidates' images and voices, fabricating statements they never actually made. A post-election analysis by the Stanford Internet Observatory found that 18 of the 62 major disinformation incidents identified during the election period were generated or amplified by AI. These posts garnered an average of more than 2.3 million views each and reached voters in swing states in particular.

South Korea is not immune to this threat either. During the 2022 presidential election, four years ago, suspicions of public opinion manipulation via social media emerged, and a 2022 report by the Korea Press Foundation estimated that roughly 23% of the political information circulated during the election period was false or distorted. Although deepfake technology was not yet mainstream at the time, text-based disinformation alone caused significant confusion. A 2025 follow-up study by Seoul National University's Institute of Communication Research found that ten major disinformation incidents during the 2022 presidential election each reached more than one million people within an average of 48 hours. As of 2026, with further advances in AI technology, the situation has become far more serious.
Professor Kim Young-ho of Seoul National University's AI Policy Center points out, "Current generative AI technology has become incomparably sophisticated compared to four years ago, reaching a level where not only ordinary people but even experts find it difficult to detect deepfakes in real time." He added with concern, "In particular, with the rapid development of Korean-language voice synthesis and facial synthesis technologies, the cost of producing deepfakes targeting Korean politicians has fallen by more than 90% since 2022."

The Impact of Disinformation on Democracy

The international community began issuing warnings and mounting responses years ago. Three years ago, in 2023, the European Union (EU) used the Digital Services Act (DSA) to require platform companies to manage disinformation on their services and to monitor its potential impact on users. Since the law took effect, major platforms such as Google, Meta, and X (formerly Twitter) have been legally responsible for improving the transparency of their AI algorithms and reporting risk factors. According to a 2025 European Commission report, disinformation reports in EU member states rose 34% year-on-year after the DSA's implementation, while the actual spread of disinformation fell by 18%.

The United States, by contrast, still lacks comprehensive federal-level regulation. Criticism persists that rules on the use of AI technology are relatively lax and that responses to disinformation remain at the state level. Only a few states, such as California, Texas, and New York, have implemented regulations of their own.