Here’s what stood out this month, how it’s reshaping the SEO and GEO landscape, and some thoughts, opinions and guidance to help you navigate.

The direction of travel is hard to miss. Search is becoming more synthesised, more interface-driven, more multimodal and more action-oriented. Visibility still matters. Rankings still matter. Technical foundations still matter. But the route from query to site visit is no longer the default path. In more and more cases, it's the fallback.
That doesn't mean SEO is dead, rebadged or somehow magically obsolete. Nor is it something that can be ignored. It means the surface area of search has changed, the weighting of signals is shifting, and lazy either/or arguments are becoming less useful by the month.
The loudest strategic story of the month was not a product launch. It was the unusually candid set of comments from Google, Microsoft and Perplexity on the GEO boom. Each weighed in on the now-exhausting GEO debate and actually said something substantive and insightful instead of repeating tired tropes.
Google's Danny Sullivan warned against chasing brittle tactics designed purely to please models in the moment. Microsoft’s Krishna Madhavan took a more operational line, pointing to structure, freshness, Q&A formatting and machine-readable cues that make content easier for AI systems to parse. Perplexity, refreshingly, said the quiet part out loud: there are people who benefit from claiming GEO is everything, and others who benefit from dismissing it as nonsense. In other words, there's 'grift' at both ends of that scale. The truth and the real value, obviously, live in the uncomfortable middle.
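Madhavan's point about machine-readable cues is concrete enough to sketch. Below is a minimal, illustrative example (the question, answer and wording are invented, not taken from any platform's guidance) of Q&A content expressed as schema.org FAQPage JSON-LD. Even where Google restricts the visible rich result, this kind of structure gives parsers an unambiguous question-answer pairing alongside the visible prose:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does schema markup still matter for AI search?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Even where rich results are restricted, structured markup helps machines resolve entities and extract answers with less ambiguity."
      }
    }
  ]
}
```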
That messy middle matters because the industry has spent far too long forcing a fake binary between “it’s just SEO” and “everything has changed”. Neither position survives contact with reality for very long. Hopefully people were taking notes.
AI retrieval and synthesis don't weight every signal in the same way as classic blue-link ranking. Information gain, extractability, disambiguation, answer formatting, explicit entity cues, multimodal usefulness, and brand confidence all appear to matter more (or differently) in these environments. That's where the serious conversation should be. The teams that navigate this well are not the ones shouting loudest about a new acronym, nor the ones pretending nothing important has changed. They are the ones doing the harder work of adapting established search principles to a messier, more mediated environment where ranking, citation, extraction, interface design and task completion increasingly overlap.
My Take: The search industry has become divided and polarised over the “GEO is just SEO” debate, to the point where several platforms I used to love for hosting open, curious, exploratory debate have, sadly, become quite toxic and tribal.
So, it felt reassuring—and dare I say vindicating—to see each platform's representative land exactly where we've been all along... GEO is neither snake oil nor a clean replacement for SEO. It's a different optimisation context built on overlapping foundations.
Good SEO is still the base layer. Crawlability, accessibility, structured meaning, brand signals, topical depth, internal architecture, evidence, and a site that doesn't feel like it was assembled by ten disconnected stakeholders all still matter. But if inclusion in AI-generated answers is part of the brief, then the optimisation lens changes. Not entirely. Not cosmetically. But materially and practically in approach.
The more revealing bit is what each company chose to emphasise. Google pushed back on gimmicks and short-term pattern chasing. Fair enough; also convenient when you control the shifting target. Microsoft leaned into parseability and structural clarity, which tells you a lot about how these systems still consume and operationalise content beneath the glossy interface. Perplexity was the most honest about how incentives distort the discourse.
The real mistake is treating AI visibility as either a philosophical argument or a bolt-on tactic. If AI answer inclusion matters commercially, then the logical strategic decision is to pragmatically optimise for retrieval, synthesis and citation behaviour on purpose. That doesn't mean chasing fads. It means understanding signal weightings, understanding user behaviour in AI, doing the research to understand your visibility in AI, and adapting your SEO strategy accordingly, even though the underlying principles and day-to-day activities may remain the same.
Another interesting take on what we've discussed many times in previous posts... swapping AI-written content for human-written content, by itself, won't necessarily trigger recovery.
John Mueller’s point was simple. The real issue isn't whether a human touched the prose on the way out. The issue is whether the site’s broader content strategy, usefulness and purpose are any good in the first place.
That matters because a lot of the market is still pretending the problem with scaled content is the use of AI, rather than the far more obvious issue that most scaled content programmes are strategically hollow. They are built to manufacture inventory, not value. AI merely made the bad habit cheaper.
My Take: An interesting nuance here goes beyond the “AI content is fine if it’s high quality” line. Everyone should really know that by now if they've been paying attention. What made my ears prick up was Google’s framing of recovery. This was not about editing passes or disclosure labels or sprinkling a bit more human seasoning on machine output. It was a reminder that once a site has trained Google to see it as low-value, changing the production method without changing the publishing logic is mostly theatre. As we've always known but rarely had confirmed by Google, this hints at the fact that black marks are hard to wash off. That's why it's so important to avoid spammy, low-value, soft-focus content tactics in the first place.
That has a direct GEO angle too. AI systems are even less tolerant of indistinct, derivative content because they are built to compress and synthesise. If your content contributes nothing unique, the model doesn't need you. In blue-link search, that might still scrape you a ranking here and there. In answer engines, it often gets you quietly abstracted out of the exchange.
Google spent much of the month threading Gemini 3 into Search.
First came the announcement that AI Mode was being powered by Gemini 3, with Google talking up more complex reasoning, multimodal understanding, generative layouts and dynamic experiences built on the fly. Then came the follow-on detail around automatic model routing, where tougher questions would be escalated to Gemini 3 Pro while simpler ones continue to use faster models.
In parallel, there was more industry chatter around ranking volatility and whether any of this was bleeding into the broader search landscape. The evidence for a neat causal line there is thin, and anyone claiming certainty is overselling it. But the broader point stands: Google is making Search more model-routed, more dynamic, and more selective about how much heavy AI it applies to any given query.
That's an important architectural signal. Search is no longer one uniform retrieval system with a single output pattern. It's increasingly a traffic controller, deciding when to serve links, when to serve synthesis, when to escalate to stronger reasoning, and when to generate a more bespoke interface altogether.
My Take: The real significance of Gemini 3 in Search isn't just “better answers”. That's the press release headline, but the deeper shift is Google becoming more comfortable letting model capability shape interface behaviour in real time.
That has two consequences. First, consistency goes down. The same query class won't necessarily produce the same answer experience over time, across users, or even across moments. Secondly, optimisation gets messier. We haven't had a single stable SERP archetype for a long time now; Maps Packs, Image Packs, Featured Snippets and other SERP features have been appearing and disappearing across query sets for years, with AI Overviews the latest addition. Historically, though, that variability has been more about whether a feature was present on the SERP, while the underlying delivery stack and algorithmic logic remained relatively stable. What is changing now is the degree of foundational model fluctuation underneath the experience itself.
That's exactly why the “just rank number one and job done” worldview keeps ageing badly. In a world of intelligent routing, dynamic layouts and query fan-out, you're not only competing for rank; you are competing for inclusion, extraction, citation, utility inside generated interfaces, and relevance across a wider set of downstream retrieval steps. Again, we find ourselves in the territory of everything and nothing changing all at once!
Fresh reporting suggested AI Overviews are continuing to drive down click-through rates, and not by a little.
The headline numbers were ugly. Organic CTR was reported as down 65% on pages with AI Overviews when a site was not cited, and still down 49% even when it was. Paid CTR also took a hit. More intriguingly, declines were also visible on pages without AI Overviews, suggesting the problem is bigger than one feature box and may reflect broader behavioural shifts in how users engage with modern SERPs.
That last point matters. We're not just seeing AI Overviews changing traffic where they visibly appear. AI chat experiences, more broadly, may also be retraining user expectations across search in general: more scanning, less clicking, more confidence in AI answers and in-platform resolution, and more tolerance for partial answers.
My Take: At this point, anybody still treating citation inside AI Overviews as some kind of magical click shield is clinging to a story the data does not support. Citation can help CTR. It is not nothing. But it doesn't restore the old economics of visibility either.
The bigger strategic mistake is to read this purely as a traffic loss story. It's also an attribution, forecasting and budget allocation story. If the click is no longer the clean unit of value, then SEO and GEO teams need a more mature way to talk about influence higher up the funnel, brand imprinting inside answer engines, assisted discovery, and downstream conversion paths that are no longer linear.
None of that means publishers should politely accept being hollowed out while being told exposure is the new reward. But it does mean measurement models built for ten blue links are increasingly unfit for the environment we're actually operating in. I cut my teeth in offline advertising (billboards, TV, radio) and it feels like we're coming full circle back to the days of 'brand impressions', 'salience' and 'brand lift'.
If November proved anything, it's that Google doesn't want AI search to stop at summarising information. It wants it to help users do things.
Across the month, Google rolled out or previewed a string of AI Mode and Shopping changes.
Together, these changes demonstrate a coherent product direction: Search is being pushed further down the journey, from discovery engine to task engine.
Commercially, that means Google is trying to reduce friction between intent, evaluation and action. The more it can keep users inside a guided, synthesised interface while still completing meaningful steps, the less it needs to send them outward until the moment it absolutely has to.
My Take: This is where the GEO conversation gets genuinely interesting. Not because “AI is changing everything”, which is a content-marketing bumper sticker, but because the mechanics of influence are shifting closer to the transaction.
In classic search, much of the optimisation game happened before the click. In this new layer, parts of merchandising, comparison, recommendation framing and even action initiation are happening before the site visit too. SEO/GEO is further encroaching on the parts of funnel optimisation previously reserved for UX and CRO teams. That changes what it means to be visible. It's not just about being discovered. It's about being legible and preferable inside a machine-mediated buying journey.
For ecommerce brands in particular, the practical implication is uncomfortable but clear: feed quality, product data completeness, review content, availability signals, pricing logic, merchant integrations, and comparative product framing are becoming even more central.
While Google kept expanding AI search depth, Microsoft made a subtler but strategically telling move: it gave Copilot a more dedicated search experience with more prominent citations, better clickability, a fuller reference list and clearer navigation links.
Microsoft is not being altruistic. It understands that if answer engines become too effective at consuming publisher value without visibly returning it, the ecosystem revolt becomes harder to ignore.
Whether this turns into a meaningful traffic advantage is another question. But as a product stance, it is notable. Microsoft is signalling that the answer-engine future doesn't necessarily have to hide the source layer in a tiny superscript buried behind three interaction states.
My Take: The contrast here isn't just aesthetic. It reflects a philosophical and commercial stance that differentiates Microsoft from the competition. Whether it proves to be a blunder or a power move remains to be seen.
Google often behaves as if the source is something to be acknowledged just enough to keep the lawyers and publishers at bay. Microsoft, at least in this iteration, seems more willing to make the source layer part of the user experience.
That matters because citation design is not neutral. It shapes what gets clicked, remembered and trusted. For brands, the opportunity isn't merely referral traffic. It is branded association at the exact moment an AI system constructs authority in front of the user. In a world where answer engines are going to mediate more and more consumer journeys, source prominence isn't merely a UI footnote. It's distribution economics. It's a critical ingredient in the creation of a sustainable ecosystem: one where publishers still feel incentivised to publish, and brands find the value exchange equitable enough to continue playing ball.
Google’s documentation says AI Overview clicks, impressions and positions are tracked in Search Console. In practice, the reporting picture remains muddy.
The problem isn't just a missing filter, though that remains maddening. It's that AI Overview citation cards can shift based on interaction, expansion and even rerunning the same query. Links move. Source order changes. The same result can present differently as the interface unfolds.
So yes, Google can count the events. But from an analyst’s point of view, the environment is dynamic enough that interpreting those counts with confidence becomes much harder.
My Take: This is the reporting problem in one sentence: Google is measuring a fluid interface using metrics inherited from a more static one.
That doesn't make the numbers useless. It does make them more dangerous in the hands of anyone pretending they are clean. Position, impression and click all become fuzzier once answer modules are interactive and expandable and citation order is conditional.
The likely outcome is that mature teams will have to build a blended measurement layer: Search Console where it helps, third-party tracking where it approximates well enough, manual SERP observation where stakes justify it, and a much stronger connection to downstream business outcomes. Waiting for a perfectly neat native GEO dashboard from Google feels optimistic.
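As a sketch of what that blended layer can look like in practice: the snippet below merges per-page metrics from three hypothetical sources into one view. The page paths, metric names and numbers are all invented for illustration; a real pipeline would pull from the Search Console API, a rank-tracking or AI-visibility tool, and your analytics or CRM export rather than inline dicts.

```python
# Minimal sketch of a blended measurement layer: one row per landing page,
# combining Search Console clicks, observed AI citations and downstream
# conversions. All data below is invented for illustration.

def blend(gsc_clicks, ai_citations, conversions):
    """Merge per-page metrics from three sources into one dict keyed by page."""
    pages = set(gsc_clicks) | set(ai_citations) | set(conversions)
    blended = {}
    for page in sorted(pages):
        blended[page] = {
            "gsc_clicks": gsc_clicks.get(page, 0),
            "ai_citations": ai_citations.get(page, 0),
            "conversions": conversions.get(page, 0),
        }
    return blended

# Hypothetical example data
gsc = {"/pricing": 420, "/guide": 310}       # Search Console export
citations = {"/guide": 12, "/faq": 7}        # e.g. manual SERP/AI observation
crm = {"/pricing": 18, "/faq": 2}            # downstream business outcomes

report = blend(gsc, citations, crm)
```

The point isn't the code; it's that no single source covers every page, so the join itself (and the gaps it exposes) becomes part of the reporting story.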
Google managed to create a minor panic this month by dropping support for more structured data types and search features. John Mueller later clarified that Google is not “killing schema”.
What's happening is more prosaic. Google is trimming low-value or lightly used features, deprecating some markup types, and continuing to support the schema that aligns with features it still actually wants to maintain.
Alongside that, Google updated review and aggregate rating documentation to reduce ambiguity around nested reviews. In short: be clearer about what's being reviewed and avoid redundant or conflicting ways of expressing it.
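As an illustrative sketch of what that clarity looks like in markup (the product name, ratings and review text are all invented), nesting the review and aggregate rating under a single Product entity states once what is being reviewed, instead of repeating or contradicting it through redundant properties:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  },
  "review": [
    {
      "@type": "Review",
      "author": { "@type": "Person", "name": "A. Reviewer" },
      "reviewRating": { "@type": "Rating", "ratingValue": "5" },
      "reviewBody": "Does exactly what it says."
    }
  ]
}
```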
So the direction isn't anti-structured data. It's anti-noise.
My Take: There's a temptation to treat schema news as either existential drama or clerical admin. It's neither.
What we're seeing is Google becoming more selective about which structured signals deserve long-term surface area. That shouldn't discourage implementation. It should make implementation more disciplined. The right question is no longer “how much schema can we add?” but “which markup materially improves machine understanding, eligibility or disambiguation in the systems that still matter?”
That's especially relevant for AI retrieval. Even where a markup type no longer maps to a shiny visual treatment, structured meaning still helps machines resolve entities, relationships and attributes with less ambiguity. The rich result may disappear. The interpretive value often does not.
A small but positive story: SEOs have reported that Google’s form for negative review extortion scams appears to be working.
It's hardly a glamorous industry milestone, but it is a meaningful operational one for local businesses dealing with fake one-star attacks and crude removal shakedowns.
The catch, naturally, is that proving extortion may require evidence that is awkward to gather. So while this isn't a perfect system, it is a step in the right direction.
My Take: This is one of those stories that won't dominate LinkedIn discourse, largely because it cannot be turned into a thread about “the future of discovery”. Yet for the businesses affected, it's far more tangible than most of the month’s hotter takes.
It's also a reminder that local trust systems remain highly gameable, and that review integrity is still a ranking, conversion and brand problem rolled into one. The form working is good news. The fact that businesses may need to dance close enough to extortion to produce evidence is, frankly, absurd. As anyone who has been on the receiving end of review extortion will know, though, any additional support will be appreciated.
There were signs of ranking volatility during the month, with some speculation about whether Gemini 3 or other infrastructure changes were involved.
Google appears to be surfacing more free Merchant Center listings inside AI Overviews. For retailers, that's another sign that product feed health is becoming part of visibility strategy well beyond the Shopping tab.
Event tickets, beauty bookings, wellness appointments, restaurant reservations, travel planning, price tracking, assisted checkout. None of these launches alone changes the whole market. Together, they paint a clear picture of what Google wants Search to become.
If you have any thoughts or questions, or would like to discuss how we can help you to optimise in light of these changes, please reach out!
