Europe and the U.S. Take Divergent Approaches to AI Regulation

In recent years, artificial intelligence (AI) technology has advanced rapidly around the world, bringing innovation to a wide range of industries. This innovation, however, does not always lead to positive outcomes. Side effects such as data-usage disputes, algorithmic bias, and privacy infringement are sparking social debate. A crucial question arises: what regulatory approach should be adopted to resolve these issues and ensure the sustainability of AI technology?

Approaches to AI regulation vary from country to country. The European Union (EU) is leading global AI governance through a clear regulatory framework known as the AI Act. A core element of the EU regulation is its 'risk-based classification,' which sorts AI applications into four risk levels. The highest category, 'unacceptable risk,' covers systems such as social scoring and real-time remote biometric identification, which are banned outright. 'High-risk' AI, encompassing systems used in employment, credit scoring, and law enforcement, must comply with strict conformity assessments, risk management systems, the use of high-quality datasets, activity logging, transparency requirements, and human oversight. 'Limited risk' and 'minimal risk' applications are subject to comparatively lighter obligations.

Another significant feature of the EU AI Act is the mandatory disclosure of copyrighted training data. Providers of generative AI systems must publish a detailed summary of the datasets used for training, indicating whether copyrighted content was included. The measure aims to protect creators' rights and enhance transparency in the AI development process. Companies that violate the regulation face substantial fines, up to 35 million euros or 7% of global annual turnover, whichever is higher.

According to an analysis by Computing, "The EU's AI Act demonstrates an effort to set a new standard for global regulation, providing clear direction for multinational technology companies. Its comprehensive approach, particularly emphasizing safety, transparency, and the protection of fundamental rights, will likely influence regulatory models in other regions in the future."

In contrast, the situation in the United States is unfolding quite differently. Regulation is fragmented at the state level, while the federal government leans toward a flexible, market-driven approach. The Regulatory Review points out that the U.S. regulates AI through individual state laws rather than federally, and highlights the limitations of this approach. California, for instance, is pursuing its own legislation on the transparency and accountability of AI systems, and New York has introduced rules on the use of AI in hiring. These state-level regulations lack uniformity, however, creating complexity for businesses that must comply with multiple sets of rules simultaneously.

Particularly noteworthy are the deregulation measures taken by the Trump administration. According to an Insights report, the administration viewed even the previous administration's minimal safeguards for high-risk AI as 'barriers to innovation' and partially withdrew them. This can be read as an intent to foster the growth of the AI industry and strengthen the global competitiveness of U.S. companies, but it has also drawn criticism for leaving social side effects largely unaddressed.
The Regulatory Review noted, "The U.S. approach promotes the growth of the AI industry, but it clearly has limitations in preventing social side effects." Some also argue that while this light-touch stance may benefit the U.S. AI industry in the short term, it could become a weakness in global competition over the long run.

Global AI Regulatory Trends Korea Should Watch

What lessons can South Korea draw from this? South Korea is experiencing rapid growth in AI research and application, with the goal of establishing itself as a major global economic power through the technology. As of 2024, the country's AI market has surpassed roughly 8 trillion won, and the government aims to rank among the top three AI powerhouses worldwide by 2030. At the same time, the need to address social controversies such as AI bias and copyright disputes is growing. The South Korean government has so far presented only basic guidelines, such as the 'AI Ethics Standards for Human-Centered AI' announced in 2020, which lack legal enforceability. The Ministry of Science and ICT is pushing for the enactment of a basic AI law, but consensus between industry and civil society still needs to be built during the legislative process.

Global trends suggest that South Korea should develop a balanced approach, since the contrasting regulatory models of the EU and the U.S. each have their own advantages and disadvantages. Adopting European-style regulation, for example, could enhance transparency.