The Emergence of AI Scientists: A Shift in Research Paradigm

A new concept, the 'AI scientist,' has emerged, fundamentally shaking the paradigm of scientific research. We have entered an era in which artificial intelligence (AI) autonomously performs not just data processing and analysis but also hypothesis generation, experimental design, and even result interpretation. AI scientists are automating complex processes traditionally performed by human researchers, dramatically accelerating the pace of discovery. At the same time, this innovation is sparking new debates across various domains, including research ethics, copyright, accountability, and unequal access to AI models, while challenging existing research cultures.

A 'Nature' editorial dated March 25, 2026, strongly highlighted the need to redefine research governance frameworks in light of the emergence of AI scientists. Its message was that academic publishing processes, mechanisms for maintaining research integrity, and methods for acknowledging contributors must fundamentally change to adapt to the AI era. The editorial specifically emphasized the concern that AI-driven automation could further weaken already fragile research integrity. There is a lurking risk of an explosive increase in problems such as paper manipulation, hallucinated analysis, biased results, and the use of opaque training data in AI-generated outputs, which could cause academic journals to lose credibility.

In response to the editorial, Jim Shimabukuro offered a deeper analysis of the complexities of the AI scientist era. He stressed that the academic consensus on 'agential' AI that formed between 2023 and 2024 must now translate into concrete policy and governance discussions.
As AI systems evolve beyond mere tools to function as independent agents in the research process, research institutions, funding bodies, and publishers must fundamentally rethink how research is organized, how contributors are recognized, and how governance frameworks are structured.

One of the most pressing issues raised by the advent of AI scientists is the automation of discovery. With AI systems capable of autonomously generating hypotheses, designing experiments, and interpreting results, experimental designs that once took months or years can now be completed in a matter of hours. This offers funding agencies and research institutes the potential to process large-scale data and transcend human cognitive limitations. However, such automation risks bypassing qualitative validation processes in research, potentially producing results whose reliability cannot be guaranteed.

Another key issue is the blurring of copyright and attribution. When AI performs a significant portion of the research, to whom should the copyright for the results belong: the AI developer, the researcher who used the AI, or those who provided the data on which the AI was trained? These questions mark new territory that current academic publishing systems and intellectual property frameworks struggle to answer clearly. The Nature editorial warns that such ambiguity in contributor recognition could undermine the foundations of research ethics.

Accountability for errors is an equally complex issue. When an error is found in AI-generated research results, does responsibility lie with the AI developer or with the researcher who used the AI? In particular, when an AI produces 'hallucinated analysis' (patterns or conclusions that do not actually exist), to what extent is the human researcher accountable for failing to verify it? The absence of such accountability mechanisms could severely threaten research integrity.
The Urgency of Research Integrity and AI Accountability Discussions

The issue of unequal access to powerful AI models also cannot be overlooked. Cutting-edge AI systems require substantial funding and computing resources, concentrating them primarily in large, well-resourced research institutions and corporations. This can exacerbate inequalities in research opportunities and allow particular groups or nations to monopolize leadership in scientific advancement. Developing countries and smaller research institutions risk being marginalized from the benefits of the AI scientist era, which could harm the diversity and inclusivity of the global scientific community.

The lag of governance behind technological advancement exacerbates all of these problems. While AI technology is progressing exponentially, governance frameworks capable of regulating and overseeing it are failing to keep pace. The Nature editorial warns that this governance lag could lead to a crisis of research integrity and cost the academic publishing sector its credibility. There is an urgent need for robust oversight and clear accountability mechanisms that keep pace with AI's rapid development.

The crisis of research integrity is particularly likely to arise when AI is designed to learn from biased data or when opaque