
ChatGPT and other AI chatbots are replacing the search engine. Instead of letting you suffer the laborious task of looking up sources of information, these powerful large language models will simply concoct an answer for you, with the minor risk that it might be totally made up.
It turns out there’s another hazard: while getting your answers this way may be quick, it isn’t great for actually learning, according to a new study published in the journal PNAS Nexus.
“When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search,” study co-lead author Shiri Melumad, a professor at the Wharton School of the University of Pennsylvania, wrote in an essay for The Conversation about her work.
The findings are based on an analysis of seven studies with more than 10,000 participants. The gist of the experiments went like this: participants were told to learn about a topic and were randomly assigned to research it using either an AI chatbot like ChatGPT or a standard search engine like Google. At the end, the participants were asked to write advice to a friend based on what they learned.
A clear pattern emerged. The participants who used AI to do their research wrote shorter advice, with generic tips and less factual information, while the people who used a Google search produced more detailed and thoughtful tips. The pattern held even when the researchers controlled for confounds, such as by showing both groups the same facts or having them use the same platform.
“The findings confirmed that, even when holding the facts and platform constant, learning from synthesized LLM responses led to shallower knowledge compared to gathering, interpreting and synthesizing information for oneself via standard web links,” Melumad wrote.
Scientists are still only beginning to grasp the long-term effects of AI usage on the brain, but the body of evidence so far suggests alarming risks. One major study that turned heads, conducted by researchers from Carnegie Mellon and Microsoft, found that people who trusted the accuracy of AI tools saw their critical thinking skills atrophy. Another study linked students' heavy reliance on ChatGPT to memory loss and tanking grades.
“One of the most fundamental principles of skill development is that people learn best when they are actively engaged with the material they are trying to learn,” Melumad explained. “When we learn about a topic through Google search, we face much more ‘friction’: We must navigate different web links, read informational sources, and interpret and synthesize them ourselves.”
“But with LLMs,” Melumad added, “this entire process is done on the user’s behalf, transforming learning from a more active to passive process.”
While scientists continue to explore AI’s risks, the tech is making rapid inroads into education, where it’s a popular tool for cheating on assignments. Leaders like OpenAI, Microsoft, and Anthropic are spending millions of dollars to train teachers on how to use their AI products, while universities partner with these same firms to create tailor-made chatbots to foist on their students, like Duke University’s creatively named collaboration with OpenAI, “DukeGPT.”
More on AI: As AI Reigns, Students’ Math and Reading Scores Just Hit an All-Time Low