
Google research points to a post-query future for search intent

Google is working toward a future where it understands what you want before you ever type a search query.

Now Google is pushing that capability onto the device itself, using smaller AI models that perform almost as well as larger ones.

What’s going on. In a research paper presented at EMNLP 2025, Google researchers show that one simple change makes this possible: breaking “intent understanding” down into smaller steps. When they do, small multimodal LLMs (MLLMs) become powerful enough to match systems like Gemini 1.5 Pro, while being faster, cheaper and keeping data on the device.

The push to surface intent. Large AI models can already infer intent from user behavior, but they usually run in the cloud. That creates three problems: they are slow, they are expensive, and they raise privacy concerns, because user actions can be sensitive.

Google’s solution is to break the task down into two simple steps that small, on-device models can handle well:

  • Step 1: Each screen interaction is summarized separately. The system records what was on the screen, what the user did, and a tentative guess about why they did it.
  • Step 2: Another small model merges the factual parts of those summaries, discards the speculation, and produces one short statement that describes the user’s overall intent for the session.
    • By keeping each step focused, the system avoids a common failure mode of small models: breaking down when asked to reason about long, messy interaction histories all at once. A minimal code sketch of the two steps follows this list.
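To make the two steps concrete, here is a minimal Python sketch of the decomposition. Everything in it is an assumption for illustration: the Interaction record, the prompt wording, and the call_small_mllm stub standing in for an on-device model. The paper does not publish its implementation.

```python
# Hypothetical sketch of the two-step intent decomposition (not Google's code).
from dataclasses import dataclass

@dataclass
class Interaction:
    screen: str  # what was on the screen
    action: str  # what the user did

def call_small_mllm(prompt: str) -> str:
    """Stand-in for a call to a small on-device multimodal LLM."""
    return f"[model output for: {prompt[:50]}...]"

def summarize_interaction(ix: Interaction) -> str:
    # Step 1: summarize each interaction on its own, with any guess
    # about the user's motive explicitly marked as speculation.
    prompt = (
        f"Screen: {ix.screen}\nAction: {ix.action}\n"
        "Summarize what happened. Mark any guess about why as SPECULATION."
    )
    return call_small_mllm(prompt)

def fuse_summaries(summaries: list[str]) -> str:
    # Step 2: keep the factual parts, drop the speculation, and emit
    # one short statement of the session-level intent.
    prompt = (
        "Per-interaction summaries:\n" + "\n".join(summaries) + "\n"
        "Ignoring anything marked SPECULATION, state the user's overall "
        "intent for this session in one sentence."
    )
    return call_small_mllm(prompt)

session = [
    Interaction("Flight results, NYC to Lisbon", "tapped the cheapest fare"),
    Interaction("Calendar, week of June 10", "checked which days are free"),
]
print(fuse_summaries([summarize_interaction(ix) for ix in session]))
```

The design point the sketch illustrates: each model call sees one small, focused prompt, never the whole session history at once.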

How researchers measure success. Instead of asking whether an intent summary “looks like” the correct answer, they use an evaluation method called Bi-Fact. On its primary quality metric, a fact-level F1 score, the step-by-step approach consistently outperforms other small-model methods:

  • Gemini 1.5 Flash 8B matches the performance of Gemini 1.5 Pro on mobile interaction data.
  • Speculative guesses are reduced, because they are filtered out before the final intent is written.
  • Even with the extra steps, the system runs faster and costs less than large cloud-based models.

How Bi-Fact works. An intent statement is broken down into small pieces of information, or facts. The evaluator then measures which facts are supported and which are missing. This:

  • Shows how intent understanding fails, not just that it fails.
  • Reveals where systems fabricate meaning or discard important information. (A toy scoring example follows this list.)
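As a rough illustration of that fact-level scoring, here is a toy Python example. It assumes both intents have already been decomposed into atomic facts; in Bi-Fact itself, the decomposition and matching are model-based, so the exact string matching below is a simplification.

```python
# Toy fact-level F1, assuming pre-decomposed facts (a simplification of Bi-Fact).
def fact_f1(predicted: set[str], reference: set[str]) -> float:
    supported = predicted & reference  # predicted facts that are actually correct
    precision = len(supported) / len(predicted) if predicted else 0.0  # penalizes fabricated facts
    recall = len(supported) / len(reference) if reference else 0.0     # penalizes missing facts
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

pred = {"user is booking a flight", "destination is Lisbon", "user wants business class"}
ref = {"user is booking a flight", "destination is Lisbon", "travel is in June"}
print(round(fact_f1(pred, ref), 2))  # 0.67: one fabricated fact, one missing fact
```

Low precision flags fabricated meaning; low recall flags discarded information, which is the “how it fails” signal described above.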

The paper also shows that noisy training data hurts large, end-to-end models more than it hurts this step-by-step approach. If labels are noisy, which is typical of real user behavior data, the decomposed system holds up better.

Why we care. If Google wants agents to suggest actions or answers before people search, it needs to understand intent from user behavior (how people move between apps, browsers and screens). This research brings that idea closer to reality. Keywords will still be important, but the query will become just one signal among many. In that future, you’ll need to design clear, meaningful user journeys, not just optimize for the words typed at the end.

Google Research blog post: Small models, big results: Achieving superior intent extraction through decomposition




Danny Goodwin

Danny Goodwin is the Editorial Director of Search Engine Land & Search Marketing Expo – SMX. He joined Search Engine Land in 2022 as a Senior Editor. In addition to reporting on the latest marketing news, he hosts Search Engine Land’s SME (Subject Matter Expert) program. He also helps organize US SMX events.

Goodwin has been editing and writing about the latest developments and trends in search and digital marketing since 2007. He was previously Editor-in-Chief of Search Engine Journal (from 2017 to 2022), managing editor of Momentology (from 2014 to 2016) and editor of Search Engine Watch (from 2007 to 2014). He has spoken at many major search conferences and virtual events, and has shared his knowledge in a variety of publications and podcasts.
