
Hyperpersonalisation: Why a Prompt Gives Different Answers

A common scenario right now: a business owner searches platforms such as Google AI Mode and ChatGPT and sees no link to their site. Then someone else checks and the link is clearly there. You might also test the exact same prompt across three people and get three different answers. This makes it difficult to optimise for answer engines.

That is not a bug. It is the direction search and AI are heading.

If you are trying to understand visibility in AI responses, you need to understand hyperpersonalisation. It is not just “personalised search” in the old Google sense. It is a stack of signals, retrieval steps, model behaviour, and UI experiments that can change what gets shown, what gets cited, and what gets clicked.

TL;DR

Hyperpersonalisation is when an AI response is shaped by context signals like location, language, account settings, and past activity. It can also be shaped by different models and experiments, even on the same platform. 

LLM answers are not a single ranking list. Some systems run multiple related searches to build one response. Google calls this a “query fan-out” technique, which helps find a wider set of supporting pages than classic search.

If you want to measure performance without bias, you need controlled testing. Fix the prompt set, location, language, device assumptions, and logged-in state. For traditional ranking baselines, tools like SE Ranking let you specify location and language for local tracking.


What an LLM is, and why different LLMs behave differently

An LLM is a “large language model”. In simple terms, it is a type of AI model trained on huge amounts of text so it can understand and generate language.

What matters for hyperpersonalisation is that you are rarely interacting with a raw LLM on its own. You are using a product built on top of one or more models, plus extra layers like web retrieval, personalisation, memory, and UI decisions. That is why two tools can answer the same question differently, even when both feel like “chatbots”.


Here are the common ones people will run into:

ChatGPT (OpenAI): A general purpose assistant that can optionally reference chat history and saved memories to personalise future chats.

Gemini (Google): A general purpose assistant that has introduced personalisation that can use your Google apps context, starting with Search history, when enabled.

Claude (Anthropic): A general purpose assistant that can search past chats and use memory when those capabilities are toggled on.

Perplexity: An “answer engine” style tool that combines live web search with AI models and returns answers backed by citations you can check. It also has a memory feature that can remember details between conversations.

Microsoft Copilot (especially Microsoft 365 Copilot): An assistant designed to work inside your organisation, grounding answers in work content you have permission to access, through Microsoft Graph.

So when people say “AI is inconsistent”, what they often mean is “different models, different retrieval, different memory, different interface”.

What is hyperpersonalisation

Hyperpersonalisation means AI answers change based on you, not just your prompt. It uses context like your location, language, account and search history, past chats, device, and what it thinks you are trying to achieve right now.

Google spells this out clearly for Search in general. Search results can vary between people, and not only because of classic personalisation. Language settings and localised results can also change what you see.

LLMs push this further, because they can combine multiple sources of context at once, then generate a single response that looks definitive.

That is why it feels more confusing than keyword rankings.

How hyperpersonalisation works under the hood

Here is the practical mental model. This is not one single algorithm. It is a pipeline. The key idea is simple: when two people type the same prompt, they are rarely sending the same request. Their context, the model path used, and even the interface can change what gets generated and what gets linked.
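
If it helps to picture it, here is a rough sketch of that pipeline in Python. None of this is any platform's real code. Every function is a stand-in for a stage, and the details inside are invented for illustration.

```python
# A rough sketch of the pipeline described in the steps below.
# Every function is a stand-in for a stage, not a real platform API.

def build_context_profile(user: dict) -> dict:
    # Step 1: gather signals such as location, language, and signed-in state
    return {
        "location": user.get("location"),
        "language": user.get("language"),
        "signed_in": user.get("signed_in", False),
    }

def retrieve_sources(prompt: str, context: dict) -> list[dict]:
    # Step 2: optionally run one or more searches and collect candidate pages
    return []  # placeholder: a real system would return retrieved pages here

def rank_sources(sources: list[dict], context: dict) -> list[dict]:
    # Step 3: weight the candidates for this user and task
    return sources

def generate_answer(prompt: str, context: dict, sources: list[dict]) -> tuple[str, list[dict]]:
    # Step 4: the model writes the answer from prompt + context + retrieved material
    return f"Answer to: {prompt}", sources

def render_response(answer: str, used_sources: list[dict]) -> dict:
    # Step 5: the interface decides which links are actually displayed, and how
    return {"answer": answer, "displayed_links": used_sources[:3]}

def answer_request(prompt: str, user: dict) -> dict:
    context = build_context_profile(user)
    ranked = rank_sources(retrieve_sources(prompt, context), context)
    answer, used = generate_answer(prompt, context, ranked)
    return render_response(answer, used)
```

The point is not the code. It is that the same prompt passes through several decision points before anything is displayed, and each one can differ between users.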

Step 1: The platform builds a context profile for this request

Before the model answers, the system gathers signals. This is where most “same prompt, different output” situations start.

Common signals include:

  • Location and language settings
  • Signed in state and activity based personalisation
  • Past chatbot conversations or “memory” features
  • Browser or ecosystem data sharing, depending on the platform. For example, Microsoft describes Microsoft 365 Copilot accessing user context like emails, chats, and documents through Microsoft Graph, with permission controls.

This is also where account context and history come in. If you are signed in, the system can use your past activity to adjust what it thinks you want.

Google Search personalisation is explicitly based on your activity and preferences, and it is something you can control in settings. 

Chat tools can do the same thing through conversation history. ChatGPT can reference past conversations when “Reference chat history” is turned on, and Gemini has introduced personalisation that can use Google apps context, starting with Search history.

Location plays a similar role. Even outside classic personalisation, results may vary due to localised results and language settings. For local queries, even a suburb level difference can change the recommendations and links that appear.

The important point is this.

Two people can type the same words, but their context profile is different. So the system is not answering the same question.
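
Here is a small, made-up example of what that difference looks like in practice. The field names are invented for illustration, not a real schema.

```python
# Two users, one prompt. The context attached to each request is different,
# so the system is not really answering the same question twice.
prompt = "best accountant near me"

user_a = {
    "location": "Brisbane, AU",
    "language": "en-AU",
    "signed_in": True,
    "memory": ["asked about small business tax last week"],
}

user_b = {
    "location": "Manchester, UK",
    "language": "en-GB",
    "signed_in": False,
    "memory": [],
}

request_a = {"prompt": prompt, "context": user_a}
request_b = {"prompt": prompt, "context": user_b}

assert request_a != request_b  # same words, different effective request
```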

Step 2: The system decides whether to retrieve live sources

For AI systems connected to search, retrieval is where things become less like classic rankings.

Google describes a “query fan-out” approach for AI Mode and AI Overviews. It issues multiple related searches across subtopics and data sources to develop a response, while identifying supporting pages to show a wider set of links.

That is a major difference from classic keyword rank tracking.

You are not measuring “position 1”. You are measuring whether your site is pulled into a bundle of supporting evidence across multiple sub queries.

This is also where variation can appear because the system may not take the same retrieval path every time. Google states that AI Overviews only show when their systems determine it is additive to classic Search, and as such, often do not trigger. So prompt timing, query category, and experimentation can change whether you even see the AI layer at all, even before the answer is generated.
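
As a rough illustration of what a fan-out could look like, here is a toy version. The sub-queries and the search call are placeholders, not Google's actual behaviour.

```python
# Toy sketch of a query fan-out: one prompt becomes several related searches,
# and the supporting pages from each are merged into a single evidence bundle.

def expand_to_subqueries(prompt: str) -> list[str]:
    # Placeholder expansion; a real system generates these dynamically.
    return [prompt, f"{prompt} reviews", f"{prompt} pricing", f"{prompt} comparison"]

def search(query: str) -> list[str]:
    # Placeholder for a live search call that returns candidate URLs.
    return []

def fan_out(prompt: str) -> list[str]:
    evidence: list[str] = []
    for sub_query in expand_to_subqueries(prompt):
        for url in search(sub_query):
            if url not in evidence:  # de-duplicate across sub-queries
                evidence.append(url)
    return evidence
```

Your site only needs to surface for one of those sub-queries to end up in the bundle, which is why this is measured so differently to a single keyword position.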

Step 3: The system ranks sources for this specific user and task

This is where hyperpersonalisation becomes real.

The same set of candidate sources might exist, but the weighting changes:

  • Local sources can be boosted if the intent looks local.
  • Sources can be filtered or replaced based on language, region, or what the system believes the user wants.
  • The system can choose different link sets because the underlying model and technique differ between AI Mode and AI Overviews.

This is the “different models and different techniques” factor. Google’s Search Central documentation says AI Mode and AI Overviews may use different models and techniques, and as a result, the set of responses and links will vary. So even if the prompt looks identical, the backend path might not be identical.
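
A hypothetical weighting pass might look like this. The boost and the filters are invented numbers to show the shape of the decision, not anything Google has published.

```python
# Illustrative only: re-score candidate sources for one user's context.

def score_source(source: dict, context: dict) -> float:
    score = source.get("base_relevance", 0.0)
    # Boost local sources when the intent looks local
    if context.get("local_intent") and source.get("region") == context.get("region"):
        score *= 1.5
    # Drop sources that do not match the user's language
    if source.get("language") != context.get("language"):
        score = 0.0
    return score

def rank_for_user(sources: list[dict], context: dict) -> list[dict]:
    scored = sorted(sources, key=lambda s: score_source(s, context), reverse=True)
    return [s for s in scored if score_source(s, context) > 0]
```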

Step 4: The model generates the answer

This is where most marketers focus, but it is not the beginning.

The model is given:

  • Your prompt
  • A context profile
  • Retrieved snippets or pages
  • System rules about safety and quality

Then it generates text.

If the platform supports citations, the system attaches links based on what it used and what it is willing to show in that UI. This is why you can sometimes be “in the answer” but not visibly cited the way you expect, depending on how the system chooses to display sources.
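
One way to picture the gap between “used as a source” and “visibly cited”. This is an invented example, not any platform's display rules.

```python
# Illustrative: the generation step can draw on more sources than the UI displays.
used_sources = [
    "https://competitor.example/guide",
    "https://your-site.example/blog/topic",
    "https://another-site.example/page",
    "https://your-site.example/services",
]

MAX_VISIBLE_LINKS = 2  # invented limit; real interfaces vary by surface and experiment
displayed = used_sources[:MAX_VISIBLE_LINKS]

in_answer = "https://your-site.example/blog/topic" in used_sources  # True
visibly_cited = "https://your-site.example/blog/topic" in displayed  # False
```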

Step 5: The UI decides what is visible

Even if links exist, they may be displayed differently.

Google has rolled out different ways of showing links for AI Overviews, including a right-hand link display on desktop and site icons on mobile. This is the “no link, but there is one” situation. Two users can be looking at the same answer, but one does not notice the link module, or their interface variant renders it differently.

This matters because user perception is not analytics. A business owner can genuinely feel “there is no link” even when a link exists.

Also worth noting.

Even when links exist, people click less when AI summaries appear. Pew Research found visits with an AI summary had lower click rates on traditional results than visits without an AI summary.

So visibility is not just about links existing. It is about being chosen as a cited source, and being the source users actually trust.


How to track this without bias

You cannot remove all bias. But you can control for the biggest variables.

1: Create a standard prompt set

Pick 20 to 50 prompts that reflect how customers actually ask.

Do not rewrite them each time. Keep them fixed. The point is trend tracking, not creative prompting.

2: Control the environment

For each test run, record:

  • Location you are testing from
  • Language settings
  • Signed in or signed out
  • Device type
  • Date and time

This turns “we got different results” into “we changed variables”.
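
One simple way to enforce this is to log every run of the fixed prompt set with the same fields. The structure below is only one way to lay it out, and the field names are our own.

```python
import csv
from datetime import datetime, timezone

# The fixed prompt set: write it once, keep it unchanged between runs.
PROMPTS = [
    "best accountant for small business in Brisbane",
    "is it worth hiring a bookkeeper",
    # ...the rest of your 20 to 50 prompts
]

FIELDS = ["prompt", "platform", "location", "language", "signed_in",
          "device", "timestamp", "answer_text", "cited_urls"]

def log_run(path: str, rows: list[dict]) -> None:
    # Append one row per prompt per platform, so runs can be compared over time.
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header once
            writer.writeheader()
        writer.writerows(rows)

example_row = {
    "prompt": PROMPTS[0],
    "platform": "ChatGPT",
    "location": "Brisbane, AU",
    "language": "en-AU",
    "signed_in": False,
    "device": "desktop",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "answer_text": "...",
    "cited_urls": "https://your-site.example/blog/topic",
}
```

When a result changes between runs, you can check the row and see exactly which variable moved with it.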

3: Use neutral rank tracking for the baseline

For traditional SEO, the best way to reduce personalisation noise is to use a tracking tool that runs queries from a consistent location and language.

SE Ranking’s position tracking allows you to specify location and language settings for local rank monitoring.

That does not solve LLM variability. But it gives you a stable baseline for classic search visibility.

4: Use Search Console for Google AI features impact

Google states that sites appearing in AI features like AI Overviews and AI Mode are included in overall search traffic in Search Console, reported in the Performance report under Web search type.

So if someone asks “why did clicks change after AI Mode”, you can at least ground the conversation in first party data.

5: Track citations, not just clicks

Because clicks are dropping when AI summaries appear, you need additional measures.

Track:

  • Whether the brand is mentioned
  • Whether the site is cited
  • Which page is cited
  • The wording used to describe the brand

That is how you measure visibility in the answer layer.
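
A small helper like the one below can turn a saved answer into those four data points. The brand name and domain are placeholders; swap in your own.

```python
import re

def audit_answer(answer_text: str, cited_urls: list[str],
                 brand: str = "Acme Accounting",              # placeholder brand
                 domain: str = "your-site.example") -> dict:  # placeholder domain
    # Check one saved AI answer for brand mentions, citations, and wording.
    brand_mentioned = brand.lower() in answer_text.lower()
    cited_pages = [url for url in cited_urls if domain in url]

    # Capture the sentences that mention the brand, to review the wording used.
    sentences = re.split(r"(?<=[.!?])\s+", answer_text)
    wording = [s for s in sentences if brand.lower() in s.lower()]

    return {
        "brand_mentioned": brand_mentioned,
        "site_cited": bool(cited_pages),
        "cited_pages": cited_pages,
        "wording": wording,
    }
```

Run it over every row you logged in the controlled tests above and you have a trend line for the answer layer, not just for classic rankings.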

Stop guessing. Measure AI visibility properly.

Hyperpersonalisation makes manual testing inconsistent because the same prompt can produce different answers depending on the user, device, location, and account history. 

A free visibility report gives you a repeatable baseline to track whether you are appearing, which pages are being cited, how you are being described, and what to improve next so your visibility is more consistent over time.
