The Hogwarts bookshelf
Why AI answers change frequently

AI answers feel stable, even confident. Ask a chatbot a question today and you'll get a fluent, assured reply. But beneath that confident surface, something is shifting, and if you're a brand or a researcher, the shifting ground beneath the models and their answers should be of real interest.
Citation Drift
At 3RD, we are studying what has been termed citation drift: the ongoing replacement of the sources an AI cites in response to the same question. The citations shift significantly over time, even when neither the question nor the underlying facts have changed.
Ask the same AI chatbot the same question twice, a month apart, and half of the cited sources may be entirely different. Some AI search engines refresh their underlying source pools even more aggressively. Perplexity, for instance, updates its index approximately every 48 hours (Perplexity AI, 2024). The result is a system in which the answer you got last Tuesday is genuinely not the answer you'll get next Tuesday.
Probabilistic, not permanent
We all know the classic software excuse: “It’s not a bug, it’s a feature.” In this case, it’s true.
Why? Because LLMs do not retrieve facts the way a traditional search engine retrieves a cached page. Responses are generated probabilistically, weighting sources differently each time based on recency, relevance signals, and the statistical patterns baked into the model (Zhao et al., 2023, A Survey of Large Language Models).
Add to that the fact that retrieval-augmented systems like Perplexity or ChatGPT with search are continuously pulling from a live and shifting web, and you have a knowledge base that is anything but static.
Think of it less like a Google results page—where position 1 today is likely position 1 tomorrow—and more like the library at Hogwarts, where the books rearrange themselves when you're not looking. The information exists. It's just never quite in the same place twice.
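The probabilistic weighting described above can be made concrete with a toy sketch. This is not any vendor's actual decoding code; the candidate sources and scores below are invented purely to show how sampling, rather than always picking the top-scored option, produces different outputs from identical inputs.

```python
# Toy illustration of probabilistic selection: the same scores can yield
# different picks on different runs. All names and numbers are hypothetical.
import math
import random

def softmax(scores: list[float]) -> list[float]:
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical relevance scores for three candidate sources.
candidates = ["source_a", "source_b", "source_c"]
probs = softmax([2.0, 1.8, 0.5])

# Sampling instead of taking the argmax: source_a is most likely,
# but source_b appears often and source_c occasionally.
rng = random.Random()
draws = [rng.choices(candidates, weights=probs)[0] for _ in range(10)]
print(draws)
```

Run it twice and the list of picks will usually differ, even though nothing about the inputs changed. That, in miniature, is why the same question can surface different citations from one week to the next.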
One citation is not a victory
This has a direct and underappreciated implication for anyone working with AI visibility or brand presence: being cited once is not a win. It's a starting point.
Traditional SEO thinking rewarded you for claiming a position and holding it. A number one ranking, once earned, had a degree of inertia. Citation in AI systems has no such inertia. The probabilistic nature of LLM outputs means that visibility must be continuously earned, not simply established.
A source that appears prominently in AI answers this month may vanish next month—not because it became less credible, but because the system's weighting shifted.
This mirrors a broader truth about our media landscape: attention is not a fixed asset. It's a flow. And like all flows, it requires active management.
Practical implications for brands
For organisations thinking seriously about AI presence, citation drift demands a new kind of monitoring—one that tracks not just whether you appear in AI answers, but how consistently, in response to which queries, and against which competing sources. It's a continuous process, not a one-time audit.
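One way to picture that kind of monitoring is to compare the set of sources cited for a query across snapshots taken at different times. The sketch below is a minimal, hypothetical example (the domains and the two-snapshot setup are invented, not a real monitoring product); it uses Jaccard overlap as a simple stability score.

```python
# Minimal sketch of citation-drift monitoring: compare the domains cited
# for the same query in two snapshots. All data here is hypothetical.

def citation_overlap(then: set[str], now: set[str]) -> float:
    """Jaccard similarity between two citation sets (1.0 = identical)."""
    if not then and not now:
        return 1.0
    return len(then & now) / len(then | now)

# Two hypothetical monthly snapshots for one query.
march = {"example-review.com", "vendor-docs.io", "industry-blog.net", "news-site.org"}
april = {"example-review.com", "vendor-docs.io", "new-entrant.ai", "forum-thread.dev"}

overlap = citation_overlap(march, april)  # 2 shared of 6 distinct -> 0.33
print(f"overlap: {overlap:.2f}")
print(f"dropped: {sorted(march - april)}")
print(f"gained:  {sorted(april - march)}")
```

Tracked per query over many snapshots, a score like this turns "are we still being cited?" from an anecdote into a trend line, which is the kind of continuous measurement the paragraph above calls for.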
The bookshelf is always moving. The question is whether you're paying attention.
Sources
Perplexity AI product documentation (2024)
Zhao et al., A Survey of Large Language Models, arXiv:2303.18223 (2023)
3RD internal research on citation drift patterns