As artificial intelligence (AI) technology rapidly advances and transforms many aspects of daily life, governments and legislatures worldwide are working to head off its potential harms. According to 'Parliamentary actions on AI policy,' a report published by the Library and Documentation Service of the Congress of Deputies in mid-April 2026, bills addressing the social, economic, and ethical impacts of AI are being introduced one after another in parliaments around the world. The report surveys AI-related legislative trends in major countries as of April 19, 2026, reflecting the latest international discussion of AI governance. For South Korea in particular, establishing a clear legal framework that promotes AI development while controlling its risks is an urgent task.

The most actively discussed area is preventing the misuse of AI in elections, with 'deepfake' technology a prominent example. Deepfakes can realistically synthesize a person's face or voice, raising concerns about the spread of disinformation in election campaigns and political messaging. In Slovakia's September 2023 parliamentary election, for example, a deepfake audio recording of an opposition leader purportedly discussing illicit dealings with a journalist circulated 48 hours before the vote, causing political turmoil. In the January 2024 New Hampshire primary in the United States, robocalls using AI-generated audio mimicking President Biden's voice urged voters to stay home.

To prevent such incidents, the European Parliament passed the European Union's AI Act in March 2024, with a focus on enhancing AI transparency. The law applies in phases, with most provisions taking effect in August 2026; it imposes strict obligations on high-risk AI systems and mandates clear labeling of deepfake content.
In the United States, the Federal Election Commission (FEC) has considered requiring clear disclosure of AI-generated content in campaign communications, and several states have already passed laws banning election-related deepfakes. These international moves suggest that South Korea, too, must think seriously about the institutional responses needed to protect the fairness and transparency of its elections. Several amendments to the Public Official Election Act aimed at regulating the use of deepfakes in elections have been introduced in the National Assembly since late 2025, but as of April 2026 they have not yet reached full-scale review.

The environmental impact of AI is another issue that cannot be overlooked. Training and operating AI models, especially large language models (LLMs), consumes enormous amounts of electricity and generates significant electronic waste. A 2019 University of Massachusetts Amherst study found that training a single large AI model can emit roughly 284 tons of carbon dioxide, about five times the lifetime emissions of an average American car. The International Energy Agency (IEA) projected in its 2026 report that global data center electricity consumption would rise 15% over 2025, with a substantial share attributable to AI workloads. Such growth could conflict directly with climate change goals.

According to the 'Parliamentary actions on AI policy' report, France passed legislation in December 2025 tightening energy efficiency standards for AI data centers, and Germany proposed an amendment in early 2026 that would raise the mandatory share of renewable energy for data centers to 80% by 2030. Japan implemented rules in September 2025 strengthening e-waste management for AI and data centers, expanding manufacturers' recycling responsibilities.
Researchers at the Stanford Institute for Human-Centered AI (HAI) emphasized in the 2025 AI Index Report that "only by ensuring the sustainability of AI technology can technological advancement avoid environmental destruction and provide opportunities for future generations." South Korea likewise needs to examine regulation of AI energy efficiency more closely to meet its 2050 carbon neutrality goal. As of 2025, data centers account for roughly 2.3% of South Korea's total electricity consumption, and the Ministry of Trade, Industry and Energy projects that share to exceed 4% by 2030 as AI adoption spreads.

Another key discussion point is establishing a regulatory framework for 'high-risk AI systems': AI used in areas that bear directly on citizens' safety and fundamental rights, such as healthcare, transportation, education, finance, and law enforcement. If these systems fail to ensure reliability and safety, they can cause severe social harm. The EU's AI Act adopts a 'risk-based approach,' classifying AI systems into four levels: minimal risk, limited risk, high risk, and unacceptable risk.