The outsourcing of thought to artificial intelligence
Are we eroding our ability to think for the sake of convenience?
Large language models (LLMs), a form of artificial intelligence (AI), have been introduced rapidly into medical education, with a positive reception overall, largely because they are marketed effectively as tools for efficiency. The alacrity of AI’s introduction, without stringent guidelines or restrictions, demands careful consideration of the potential consequences. We should think about how we are outsourcing our ability to learn, write, and think critically to AI without fully realizing the implications.
The technological capabilities of AI are growing rapidly, and it is becoming a constant feature of students’ lives. For example, GPT-4.5, released in February 2025, can act as a personal tutor, and OpenEvidence, a medical search engine for clinical decision support, reports that more than 40% of physicians in the United States log in daily.[1] OpenAI has released additional tools for ChatGPT, including ChatGPT Health, Emergency Medicine GPT, and study mode, stating on its website that “ChatGPT is becoming one of the most widely used learning tools in the world.”[2]
Anecdotally, AI is largely well received among my peers. I have seen some use ChatGPT in the library to answer medical questions, and many recommend putting learning objectives into ChatGPT to summarize and answer questions. I have heard only one concern: it can sometimes provide inaccurate information. As a result, I decided to try using it for a few months, and at first it seemed useful. It saved me time by expediting information searches, answering questions, and solving problems that I had trouble grasping. Then I started noticing something problematic. I was finding it difficult to remember what I had read a month prior so that I could apply it. I wondered whether I was having a harder time coming up with my own ideas, and I felt less creative. I needed to take a step back to reflect on what I had let happen in my life. I was unknowingly letting this tool take over my thinking. If a tool is something that helps you carry out a task, is AI even really a tool?
One recent study showed that among students tasked with writing an essay, those who used ChatGPT had the lowest brain engagement, displaying weaker executive function.[3] Another study showed that young learners exhibited lower critical thinking scores with frequent AI use, highlighting a “cognitive laziness” side effect, whereby learners offload their cognitive responsibilities to the AI tool.[4] In medicine, when a patient case is presented, the question of differential diagnosis arises. OpenEvidence and ChatGPT can offer a concise answer, and new functions allow LLMs to give step-by-step guidance on how to reach that answer. This instructs users how to think through a problem without having to pause and think for themselves. Without these tools, when an answer is not apparent, we face cognitive friction, having to evaluate information to reach a hypothesis. That uncomfortable pause can strengthen critical thought, forcing us to embrace the idea of inquiry, make sense of information, and question conclusions.[5] If we outsource our thinking to AI for the sake of ease, it can slowly erode our ability to think critically to understand medicine and the world around us.
Because AI can summarize information into easily digestible packets, we quietly lose our ability to find information across numerous sources, synthesize our own conclusions, and generate new ideas. Additionally, AI can undermine our ability to articulate our thoughts in discussions and in writing. Frankly, it is easier to prompt an LLM to write something than to actively think, consider how we want to communicate, coherently express our interpretations, organize our writing, and revise. Having AI replace our ability to write can make us gradually dependent. If two people have a discussion in which both use AI to communicate, are they communicating with each other or with AI?
As AI creeps into people’s lives, we may not grasp the resulting dependency or the loss of intellectual autonomy—the ability to use our mind to navigate situations and make decisions. Suppose that one day doctors simply ask AI for patient management steps and trust the results more than their own judgment, a behavior of overreliance already demonstrated in students.[6]
Currently, there is no oversight of the use of AI in medical education; there are no guidelines, rules, or warnings. Not only does AI hallucinate false information to appeal to users,[7] but it has also been demonstrated to possess capabilities of deceptive behavior.[8] As AI evolves at a pace that organizations cannot keep up with, how can students decipher what is accurate or not as they try to form baseline knowledge? Additionally, a study from Microsoft noted a correlation between a user’s confidence in AI and lower self-reported critical thinking,[9] raising concerns about how younger learners can gauge confidence amid the constant release of new LLMs.
Medical trainees are introducing AI into their education, regardless of whether it is directly incorporated into their curriculum. We need to promote discourse that helps students develop their critical thinking skills and question whether AI is an aid to learning or whether it will destroy our cognitive abilities and, eventually, pilot the replacement of learners. Many medical professionals and students deny the possibility that AI will replace doctors. What makes us so confident? And what are we prepared to do about it?
Competing interests
None declared.
This article has been peer reviewed.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
1. OpenEvidence. OpenEvidence, the fastest-growing application for physicians in history, announces $210 million round at $3.5 billion valuation [news release]. 15 July 2025. Accessed 15 October 2025. www.openevidence.com/announcements/openevidence-the-fastest-growing-application-for-physicians-in-history-announces-dollar210-million-round-at-dollar35-billion-valuation.
2. OpenAI. Introducing study mode. 29 July 2025. Accessed 15 October 2025. https://openai.com/index/chatgpt-study-mode.
3. Kosmyna N, Hauptmann E, Yuan YT, et al. Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv 2025:2506.08872. Accessed 15 October 2025. https://doi.org/10.48550/arXiv.2506.08872.
4. Gerlich M. AI tools in society: Impacts of cognitive offloading and the future of critical thinking. Societies 2025;15. https://doi.org/10.3390/soc15010006.
5. Glaser EM. An experiment in the development of critical thinking. New York, NY: Teachers College, Columbia University; 1941.
6. Zhai C, Wibowo S, Li LD. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments 2024;11:28. https://doi.org/10.1186/s40561-024-00316-7.
7. Kalai AT, Nachum O, Vempala SS, Zhang E. Why language models hallucinate. arXiv 2025:2509.04664. Accessed 15 October 2025. https://doi.org/10.48550/arXiv.2509.04664.
8. Huan H, Prabhudesai M, Wu M, et al. Can LLMs lie? Investigation beyond hallucination. arXiv 2025:2509.03518. Accessed 28 November 2025. https://doi.org/10.48550/arXiv.2509.03518.
9. Lee H-P(H), Sarkar A, Tankelevitch L, et al. The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 25 April 2025. https://doi.org/10.1145/3706598.3713778.
Ms Kaye is a third-year medical student in the Faculty of Medicine at the University of British Columbia.
Corresponding author: Ms Esther Kaye, esthergk@student.ubc.ca.