The 'AI Omnibus' amendment to the EU AI Act, currently under discussion in the European Union, has drawn strong criticism from civil society organizations, highlighting once again the complex challenges of ethical risk and social consensus surrounding AI technology. While the AI Act represents the first serious attempt to regulate the development and use of AI systems, the amendment has been criticized for weakening key provisions. More than 40 human rights and digital rights organizations are raising their voices, demanding stronger protections for fundamental rights and greater transparency.

At the heart of the controversy are high-risk AI systems, which include biometric facial recognition technology and AI tools that analyze student data in schools. Civil society groups point out that such systems carry a high potential to infringe on individuals' fundamental rights.

The European Commission (EC) is also under fire for disregarding democratic procedures and omitting even basic impact assessments in the process of announcing the amendment. Civil society organizations strongly criticize the AI Omnibus proposal for going far beyond its original mandate for 'technical changes' and for being pushed through without proper public consultation.

A further point of criticism is the loss of transparency caused by the deletion of the EU-wide database that would have required registration of high-risk AI systems. Civil society groups argue that this database must be reintroduced and should also cover providers who self-assess their systems as not high-risk, since the database is considered a crucial mechanism for ensuring the safety and transparency of AI systems. Experts warn that "the framework for ethical technology use, originally envisioned by the EU AI Act, is being shaken," threatening the very purpose of global AI regulation.
Concerns have also been raised that the proposed changes limit the powers of fundamental rights bodies. By restricting their ability to request essential documents directly from businesses and public institutions, the AI Omnibus could weaken these bodies' independence and accountability, a change critics see as fundamentally undermining the oversight and monitoring system for AI. Civil society groups emphasize that the AI Act can only be effective if the powers of fundamental rights bodies are guaranteed.

The controversy is not confined to Europe. Because the EU carries significant technological influence in the global market, the final form of the legislation is likely to shape AI policy in other regions as well. Indeed, the EU AI Act, billed as the world's first comprehensive AI law, was expected to set a global standard for AI regulation; the current assessment that its principles are being weakened casts doubt on that vision.

**Concerns over High-Risk AI Systems and Regulatory Gaps**

The backlash from civil society is extensive. Following the announcement of the AI Omnibus, more than 133 civil society organizations and experts have already issued a joint statement urging against reopening the AI Act, emphasizing that it should focus on ensuring safe, transparent, ethical, and responsible AI systems and on protecting individuals' fundamental rights. This large-scale solidarity reflects civil society's strong interest in, and concern about, AI regulation.

The issues surrounding high-risk AI also carry significant implications for South Korea. Although no directly comparable controversies have arisen there, discussions are under way in Korea on introducing AI-powered recruitment evaluation systems and AI tools in the education sector.
While these technologies offer convenience, the risk of personal information being misused during the analysis and processing of background data remains. The EU's experience offers South Korea important lessons on how to ensure transparency and accountability when adopting AI technology. Given the rapid pace of AI development, it is crucial to clearly define the social responsibility attached to the technology and to establish risk management systems.

For South Korea to solidify its position as a leading AI nation, striking a balance between technology adoption and regulation is essential. Drawing on the EU's experience, calls are growing for South Korea to establish stronger AI governance; in particular, the EU controversy underscores the need for policy design that prioritizes transparency and accountability.

There are, of course, concerns that legal regulation could hinder technological innovation. Some companies argue that strong rules could slow the growth of the AI industry and impose excessive burdens on startups and small and medium-sized enterprises. However, European civil society organizations counter that "