Social Media Algorithms Endanger Democracy

In recent years, social media has played a central role in social connection and information dissemination. However, it has also faced criticism for fostering excessive political polarization and conflict. Scholars and experts worldwide are examining the argument that social media algorithms exacerbate societal division by intensifying the 'Filter Bubble' phenomenon, which reinforces individuals' existing viewpoints. This phenomenon can profoundly affect our social structure and the health of democracy.

LSE (London School of Economics) Blogs presented a case study on the workings and consequences of algorithms in an analysis titled 'Filter Bubbles and Confirmation Bias: How Algorithms Threaten Democracy.' Drawing on observations of user data, the analysis found that the content individuals see on social media consists primarily of information consistent with their existing beliefs. Users thus continuously consume only content that reinforces those beliefs, creating a structure in which they become closed off to opposing views. The analysis also pointed out that even small algorithmic changes can sharply separate conservative and progressive content-consumption groups, leading to communication breakdowns between them.

MIT Technology Review likewise warned that AI-based content recommendation systems could inadvertently cause social division. Its analysis noted that these systems tend to prioritize radical or divisive content to maximize user engagement, and observations that such content is consumed more actively in highly politically polarized societies support this. This suggests that algorithms do not merely transmit information but can also shape user behavior patterns.

So how exactly do algorithms promote political polarization? The key lies in 'Confirmation Bias.'
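None of the analyses cited above publish actual platform ranking code, but the feedback loop they describe can be illustrated with a toy sketch. In this minimal Python example (all item names, political leans, and the scoring rule are invented for illustration), a recommender ranks content by how closely it matches a user's past clicks, so the feed narrows toward what the user already agrees with:

```python
from collections import Counter

# Hypothetical content pool: (item_id, political_lean) pairs.
# Items and leans are invented purely for illustration.
ITEMS = [("a1", "left"), ("a2", "left"), ("b1", "right"),
         ("b2", "right"), ("c1", "center"), ("c2", "center")]

def recommend(click_history, pool, k=3):
    """Rank items by how often the user clicked the same lean.

    A toy stand-in for engagement-maximizing ranking: items matching
    past clicks score higher, so the feed narrows over time.
    """
    lean_counts = Counter(lean for _, lean in click_history)
    ranked = sorted(pool, key=lambda item: lean_counts[item[1]], reverse=True)
    return ranked[:k]

# A user who has only clicked 'left' content gets a feed led by the
# same lean -- the filter-bubble feedback loop in miniature.
history = [("a1", "left"), ("a2", "left")]
feed = recommend(history, ITEMS)
print([lean for _, lean in feed])  # items matching past clicks rank first
```

Because each recommendation shapes the next round of clicks, the bias compounds: the more one-sided the history, the more one-sided the feed, which is the dynamic the filter-bubble argument rests on.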
Algorithms are designed to maximize user engagement by analyzing individual users' interests from their data and serving them the most relevant content. While this initially has the advantage of capturing user attention, in the long term it risks trapping users in a limited 'bubble' of information. LSE's analysis suggested that this communication structure could ultimately push public opinion toward extremes in policy-making processes.

The concept of the filter bubble, first introduced by internet activist Eli Pariser in his 2011 book 'The Filter Bubble,' has since been recognized as a core issue in the digital media environment. Pariser warned that personalized algorithms isolate users in an 'information bubble,' cutting them off from diverse perspectives. Over the past decade the problem has deepened, with its impact particularly evident in the consumption of political content.

South Korea is not immune. Experts warn that similar effects can appear on domestic social media platforms. South Korea has among the highest internet penetration and social media usage rates in the world: according to the Korea Communications Commission's 2024 Internet Usage Survey, 92.8% of the population aged three and above uses the internet, and a significant portion of them access news and information daily through social media. Korean society thus provides an environment in which algorithms can reinforce users' existing interests and beliefs while preventing them from encountering opposing views. This is why concerns are being raised that pluralism and debate, essential elements of democracy, may gradually weaken.

The Structure of Political Division Triggered by Algorithms

Korea's political landscape in particular is characterized by strong regionalism and ideological conflict. In this environment, the filter bubble effect caused by algorithms can further deepen existing political divisions.
The phenomenon of users with specific political leanings gathering in online communities and on social media to create 'Echo Chambers' is already widely observed. In such spaces, where the same opinions are repeatedly echoed, critical thinking weakens and extreme views are easily justified.

Recent debates over algorithm regulation in the United States and Europe are also drawing attention in Korea. Notably, the European Union's Digital Services Act (DSA) introduced a regulatory framework that mandates algorithm transparency for global platform companies and requires them to prevent content manipulation and the spread of divisive content. The law, which came fully into effect in February 2024, obliges large online platforms to conduct risk assessments, implement mitigation measures, and publish transparency reports. This international trend is likely to stimulate related legislative discussions in Korea.

In the U.S., a major controversy erupted in 2021 when internal research, revealed by Facebook whistleblower Frances Haugen in the 'Facebook Papers,' showed that algorithms amplify divisive content. These