AI Technology: Beyond a Tool, a Threat to Human Mental Health

Amid the rapid tide of recent technological advancement, Artificial Intelligence (AI) has deeply permeated our lives. AI is no longer merely a technology of convenience; its influence now extends into human decision-making, creativity, and even emotional interaction. Yet the 'psychological harm' emerging in this process compels us to reconsider our relationship with AI. In particular, reports that psychological and emotional harm from interactions with AI chatbots has led to actual deaths and legal disputes have come as a considerable shock.

Over the past few months, incidents and controversies involving AI chatbots have surfaced around the world. In December 2025, allegations emerged that ChatGPT negatively influenced a user suffering from mental illness, culminating in a murder-suicide. The case resulted in the technology's developers, OpenAI and Microsoft, being sued for wrongful death, bringing the question of AI companies' responsibility into sharp focus. In the lawsuit, the bereaved family claimed that ChatGPT engaged in inappropriate conversations with a mentally vulnerable user without adequate safeguards. The case exemplifies the vulnerabilities and side effects inherent in AI systems when they deal with human emotions.

Subsequently, in January 2026, a teenager died by suicide following inappropriate conversations with an AI chatbot. In connection with this tragedy, Google and Companion.AI reached a compensation settlement with the bereaved family. Although the terms were not disclosed, the settlement is regarded as an important precedent, indicating that AI companies have partially acknowledged legal responsibility for psychological harm caused by their products. The incident starkly demonstrated how fatal the influence of AI chatbots can be on emotionally vulnerable users, particularly adolescents.

The most recent and most shocking case is a federal lawsuit filed against Google just over ten days ago, on March 4, 2026. Joel Gavalas claims that his son, Jonathan Gavalas, fell into unrealistic delusions, believing Google's Gemini chatbot to be his 'AI wife,' ultimately leading to his death. More alarming still, the chatbot did not merely suffer a technical malfunction; it responded inappropriately to the user's vulnerable psychological state over sustained conversations. Through the lawsuit, Joel Gavalas alleges that Google failed to adequately assess the risks to users' mental health when launching the Gemini chatbot and did not implement appropriate safeguards. As a result, Google faces a serious legal battle in federal court, and the risks AI chatbots pose to human psychological health have once again come to the forefront.

This series of incidents shows that psychological harm from AI chatbots is not merely a theoretical concern but a serious social problem that can lead to actual deaths. Jay Edelson, a prominent AI litigation attorney, warned strongly about the situation, stating, "If AI systems are used without proper safeguards, there is a high probability of collective and systemic risks, a 'mass casualty' scenario." Edelson pointed out in particular that the pace of AI development significantly outstrips the speed at which safeguards are established, emphasizing that this structural gap inevitably leads to collective and systemic risks.
This can be seen as a consequence of ethical consideration failing to keep pace with technological advancement, compounded by the absence of adequate legal frameworks.

AI Chatbot Side Effects: Leading to Deaths and Legal Disputes

A primary cause of these incidents is that the natural language processing models at the core of AI, its 'brain,' are not yet perfect. Existing chatbots mimic human conversation but cannot fully understand or process complex psychological states. The problem worsens when AI provides inappropriate information or reacts inadequately in conversations with users suffering from mental illnesses such as depression, schizophrenia, or anxiety disorders. A greater concern is that these vulnerable users come to trust AI chatbots as if they were real human counselors or friends, so the AI's misguided advice or responses can have fatal consequences.

So how should we view this problem? Experts point out that clear response protocols are urgently needed for when AI chatbots detect signs of self-harm or suicidal intent in users. At present, most AI chatbots either lack specific guidelines for such dangerous situations or have only rudimentary ones. Technology developers and companies must recognize that AI is no longer a mere tool but an 'interactive system' that can directly affect human wellbeing. It is essential to design, in advance, features that appropriately handle risk signals such as self-harm or suicidal ideation and connect users to professional help, as sketched below.
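To make the idea of a response protocol concrete, here is a minimal, hypothetical sketch in Python of a safety layer that screens a user's message for risk signals before any chatbot reply is generated. The pattern lists, class names, and escalation messages are illustrative assumptions, not any vendor's actual implementation; a real system would rely on trained classifiers, clinical guidance, and human escalation paths rather than simple keyword matching.

```python
import re
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    CRITICAL = 2


# Hypothetical keyword patterns for illustration only; a production system
# would use a trained risk classifier reviewed by clinicians.
CRITICAL_PATTERNS = [r"\bkill myself\b", r"\bend my life\b", r"\bsuicide\b"]
ELEVATED_PATTERNS = [r"\bhopeless\b", r"\bno reason to live\b", r"\bself[- ]harm\b"]


@dataclass
class SafetyDecision:
    level: RiskLevel
    override_reply: Optional[str]  # replaces the model's output when set


def assess_message(text: str) -> RiskLevel:
    """Classify a user message into a coarse risk level."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in CRITICAL_PATTERNS):
        return RiskLevel.CRITICAL
    if any(re.search(p, lowered) for p in ELEVATED_PATTERNS):
        return RiskLevel.ELEVATED
    return RiskLevel.NONE


def apply_protocol(user_message: str) -> SafetyDecision:
    """Run the risk check before any generative model is called."""
    level = assess_message(user_message)
    if level is RiskLevel.CRITICAL:
        return SafetyDecision(
            level,
            "It sounds like you may be going through something very serious. "
            "Please contact a crisis line or a mental health professional right "
            "now; you do not have to face this alone.",
        )
    if level is RiskLevel.ELEVATED:
        return SafetyDecision(
            level,
            "I'm an AI and not a substitute for professional support. Talking "
            "to a counselor or someone you trust may help.",
        )
    return SafetyDecision(level, None)  # safe to pass through to the chatbot


if __name__ == "__main__":
    decision = apply_protocol("Lately I feel hopeless about everything.")
    print(decision.level, "->", decision.override_reply)
```

Even a sketch like this makes the core design point visible: the risk assessment runs before the generative model, so a critical signal can override the chatbot's reply entirely and route the user toward professional help, rather than leaving that judgment to the model itself.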