The Impact of Language on AI Perception

When you tell your smartphone's voice assistant, "Tell me the weather," do you feel you have simply operated a machine, or that some 'entity' has fulfilled your request? This question reflects a new perspective emerging as artificial intelligence (AI) becomes increasingly integrated into our daily lives. We readily describe AI as 'smart' or say it 'knows,' but is this truly harmless, or is it a precarious linguistic choice that leads us to misunderstand AI?

A recent study from Iowa State University focuses on precisely this point. According to the research, the language used to describe AI, particularly expressions that attribute human-like characteristics, can significantly shape public understanding of the technology. Calling AI 'smart' or saying it 'knows' may sound harmless, but if phrases like 'AI understands' or 'AI thinks' are repeated without a clear sense of AI's limitations, people are likely to overestimate or misunderstand its abilities. The study is more than a linguistic exercise; it raises social and ethical questions that everyone living in the AI era should pay attention to.

Jo Mackiewicz, a professor of English, and researcher Jeanine Aune, who conducted the study, analyzed AI-related expressions in a dataset of over 20 billion words drawn from English-language news articles published in 20 countries. Their study, titled 'Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT,' was published in Technical Communication Quarterly. The research team focused on how frequently mental verbs such as 'learn,' 'mean,' and 'know' appear alongside terms like AI and ChatGPT. The results were somewhat unexpected.
News writers tended to use much more cautious and restrained language than speakers in everyday conversation, pairing AI-related terms with mental verbs far less often than expected and largely refraining from anthropomorphic expressions. Overall, the researchers found that the anthropomorphization of AI in news reporting was less frequent and more subtle than anticipated. "The anthropomorphization of AI is much less common and much more subtle than we might think," Professor Mackiewicz emphasized. This restraint appears to stem from an awareness that excessive anthropomorphism can cause unnecessary confusion in readers' perceptions of AI.

The study did not end there, however. Although anthropomorphic expressions were rarer than expected, some phrases, even subtle ones, were found to carry the risk of leading people to overestimate AI's capabilities. The research team showed that not all uses of mental verbs are equal: some come closer to implying human-like characteristics. For instance, the statement "AI needs to understand the real world" can carry expectations related to human reasoning, ethics, or perception. Such usage goes beyond mere description, suggesting that AI possesses human-like ethical thinking or reasoning abilities rather than being a simple calculating machine.

The Reality of AI Anthropomorphism in News Articles

Mackiewicz explained the duality of such language use: "We use mental verbs all the time in our daily lives, so it makes sense to use those terms when talking about machines. It helps us relate to machines." At the same time, Mackiewicz warned that applying mental verbs to machines risks blurring the lines between what humans and AI can do.
The research team emphasized, "These nuances are important for writers because the language we choose shapes how readers understand AI systems, their capabilities, and the humans responsible for them." Referring to the responsibility involved in using mental verbs, they added, "The language we use when anthropomorphizing machines can convey surprisingly powerful messages."

So what larger problems can these linguistic expressions cause? The biggest risk is that people come to believe AI thinks or acts like a human. This goes beyond simple misunderstanding: it also increases the likelihood of blindly trusting decisions made by AI, or of failing to clearly assign responsibility for errors AI generates.

Particularly interesting is a pattern of passive-voice usage identified by researcher Jeanine Aune. Aune explained that these instances of anthropomorphism are often written in the passive voice, shifting responsibility away from the human actors behind the technology and onto the technology itself. In other words, when expressions like "it was decided by AI" are used instead of "AI decided," the responsibility of the humans who designed and operated that AI becomes obscured.