By The Conversation
Publication Date: 2025-12-01 03:18:00
Since ChatGPT was released in late 2022, millions of people have started using large language models to access knowledge. And their appeal is easy to understand: ask a question, get a sophisticated summary, and move on—it feels like effortless learning.
However, a new paper I co-authored provides experimental evidence that this convenience may come at a price: when people rely on large language models to summarize information about a topic for them, they tend to develop shallower knowledge of it than when they learn through a standard Google search.
Co-author Jin Ho Yun and I, both marketing professors, reported this finding in a paper based on seven studies with more than 10,000 participants.
Most studies used the same basic paradigm: participants were asked to learn about a topic—how to grow a vegetable garden, for example—and were randomly assigned to do so using either an LLM such as ChatGPT or…