Here’s what stood out this month, how it’s reshaping the SEO and GEO landscape, and some thoughts, opinions and guidance to help you navigate.

Google is rolling out 'Query Groups' to Search Console Insights reports. Instead of a flat list of queries, you’ll see AI-clustered groups that represent the main topics your audience is finding you for, including “Top”, “Trending up” and “Trending down” filters. Rollout is limited to properties with substantial query data at this point, but we're hopeful that everyone will eventually get access.
My Take: This is Google handing you a first-party view of your topical footprint. Treat it as a north star for cluster planning and coverage debt: if a group is trending up but you've only covered 40% of its intent variants, prioritise that gap. It's also a sanity check for brand strategy: do your commercial priorities show up as meaningful groups? Even if smaller websites never get access to these reports, the deeper insight here is that Google is aware of topical authority and considers it important enough to shift its own reporting platform in that direction.
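If your property doesn't qualify for Query Groups yet, you can roughly approximate the idea yourself. Here's a minimal Python sketch, assuming you've exported your queries and clicks from the Search Console performance report to CSV; the clustering method (TF-IDF plus k-means) is our stand-in for illustration, not Google's actual approach:

```python
import csv
from collections import defaultdict

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical export: one Search Console query per row, with its click count.
with open("search_console_queries.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # expects "query" and "clicks" columns

queries = [row["query"] for row in rows]

# Vectorise queries and cluster them into rough topic groups.
# Word bigrams help keep related phrasings ("atlas browser", "atlas browser review") together.
vectors = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(queries)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(vectors)

# Roll clicks up per cluster so you can see which topics carry the traffic.
groups = defaultdict(lambda: {"queries": [], "clicks": 0})
for row, label in zip(rows, labels):
    groups[label]["queries"].append(row["query"])
    groups[label]["clicks"] += int(row["clicks"])

for label, group in sorted(groups.items(), key=lambda kv: -kv[1]["clicks"]):
    print(f"Group {label}: {group['clicks']} clicks, e.g. {group['queries'][:3]}")
```

It's crude next to what Google can do with its own query understanding, but even this level of grouping will surface topics where demand is growing faster than your coverage.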
OpenAI launched ChatGPT Atlas, an agentic browser that can take actions while you navigate the web, making it the first serious contender to Perplexity’s Comet browser, but with far wider reach thanks to OpenAI’s user base. The shift to agentic browsing is well underway.
My Take: OpenAI launching an agentic browser was expected, but it's still a hugely significant milestone. The truly eye-opening bit, though, is how heavily Atlas leans on Google: the browser itself is built on Chromium, the open-source browser project that Google leads.
Is Atlas' dependence on Google tech a sign of things to come, a marketing blunder, or a genius power move? Time will tell.
One notable shortcoming is that the built-in ChatGPT sidebar has no web connectivity. Yes, you read that correctly: a web browser, marketed on an AI-first UX brand platform, with built-in AI that can't browse the web. Wild. To clarify, the AGENTS can browse the web, but when you use the built-in chat sidebar, all of our testing suggested that no grounding was occurring. You can ask questions about your current webpage in that tab, but the answers only use the base GPT model, without live search and citations. Why does this matter? Imagine you're looking at an article about a significant global event and you ask the sidebar for more information: you won't be given anything that isn't either in ChatGPT's training data or the article you're already reading. No live updates. No fresh content. This feels like a massive oversight to me.
If there's one notable difference between Google and OpenAI over the last few years, it's that OpenAI tends to rush products into production to be first to market, then iterate and improve; Google, meanwhile, takes its time and launches products that genuinely impress straight out of the gate. If (when) Google launches its own fully agentic browser (Chrome or otherwise), I suspect it will take the crown.
Either way, I'm loving the browser functionality on offer in Atlas (teething security issues aside). I especially love the way I can have agents running in tabs ("Agent Mode") doing things for me, and the way I can recall and interact with my history conversationally ("Browser Memories"). It's just a nice UI, as well. As a browser shell, it feels mainstream-ready in a way Comet never quite did. This feels like something every generation of my family, from primary school to octogenarians, could quite easily get along with, and I'm increasingly using it as a daily tool. The limitations in search and AI chat will presumably, hopefully, be resolved soon. Surely!?
Google reiterated that links, site moves or technical tweaks won’t rescue rankings if overall site quality isn’t there. This latest reminder came at Search Central Live Dubai, where Google stressed that quality issues are a limiting factor for SEO.
To be clear, they didn't acknowledge a specific attribute or data point. However, the existence of tangible metrics for this isn't just speculation:
The site_quality attribute was seen in the wild some time ago when, for a brief period, Google was inadvertently printing various classifiers and scores in the network responses behind its SERPs. Site quality was scored on a 0-1 scale and correlated with SERP feature eligibility and ranking potential. Interesting side note: the scores were set at subdomain level, not for the domain as a whole. So, while we already suspected/knew this existed, it's interesting to hear Google talk about it so plainly, even if only in 'fluffy' and loose terms.
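To make the subdomain point concrete, here's a minimal sketch. The payload shape and the scores are entirely hypothetical (the leaked responses are no longer visible), but it shows why blog.example.com and shop.example.com would carry separate quality ceilings despite sharing a registrable domain:

```python
import tldextract  # splits a hostname into subdomain / domain / suffix

# Hypothetical scores, illustrative only: the leaked values keyed quality
# to the full subdomain, not to the registrable domain as a whole.
site_quality = {
    "blog.example.com": 0.81,
    "shop.example.com": 0.34,
}

def quality_for(url: str) -> float | None:
    parts = tldextract.extract(url)
    host = ".".join(p for p in (parts.subdomain, parts.domain, parts.suffix) if p)
    return site_quality.get(host)

# Same registrable domain (example.com), two different quality ceilings.
print(quality_for("https://blog.example.com/guide"))    # 0.81
print(quality_for("https://shop.example.com/product"))  # 0.34
```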
My Take: Great content, crisp UX and strong links are still the factors that move the needle directionally, but only within the ceiling your site-level quality score allows. If your domain's perceived quality is low, even world-class page-level work will not pass muster. The commercial play is dual-track: keep shipping the page-level work that wins rankings, and invest in the site-level signals that raise the ceiling.
In other words, make sure your site is slick, build links, and promote your content in all the ways you already know you should. That's what raises the ceiling by improving your quality score. This also gives more credence to metrics like Semrush's Authority Score, which try to factor in signals beyond simple PageRank. SEO tool metrics are still just estimations of Google's internal metrics (speak to us if you want to understand the differences between the metrics from Semrush, Ahrefs, Moz and others), but they are the best we have for analysis.
Liz Reid (VP of Google Search) clarified that 'AI-generated' is not, by default, a hallmark or indicator of spam. Google has broadened the notion of “spam” to include low-value content that adds nothing new, and is up-weighting content that brings genuine expertise and perspective – and interestingly this is being influenced by the stuff people actually click from AI Overviews. Critically, though, whether content was written by AI plays no direct role in that decision.
My Take: Stop optimising for word count or AI detection scores. Optimise for quality. Be original, add value, bring opinions, say something new. If AI helps you express deep, experience-led answers faster, great. But if you're using it to churn out summaries of what's already ranking, or you're avoiding it at the expense of content enrichment, then you're producing exactly the class of content that will be suppressed.
We've said this all along: AI is a tool that can be used well or poorly. If you use AI to help you produce high-quality, intent-matched, topically-authoritative content, then you can benefit from gains in both content performance and efficiency. A critical point to drive home here, though: the latter isn't really the point. If you are using AI primarily to save time, then you're already in the wrong mindset and you haven't fully grasped what's up for grabs. The real value in AI is that you can put the time you used to spend writing and proofing content into producing context-rich and highly targeted inputs for AI, and then fine-tuning, humanising, and adding value to the output. Those research and refinement phases were all too often rushed in the past, leading to thin and uninspiring content. Now, they are the battleground separating the mediocre and mundane from the engaging and visible.
Ever wondered why or how Google changes your headline in search results? A leaked internal system dubbed Goldmine appears to score heading/title candidates (e.g., your <title>, <h1>, internal anchors) and pick the one predicted to perform best in SERPs. Behavioural systems like NavBoost then rank pages by tracking goodClicks (engaged) and badClicks (pogo back to SERP), with performance feedback informing future Goldmine choices.
In short: Google generates multiple title candidates from your page (your <title>, H1, internal anchor text and so on), predicts which will perform best, serves the winner, and then lets real click behaviour confirm or overturn that choice.
My Take: Titles aren't a static metadata field; they're an experiment. That means your job isn't just to write a good title, it's to architect aligned signals: title ↔ H1 ↔ intro copy ↔ internal anchors. For anybody who has worked with RankBrain, you'll know how much emphasis we place on document outlines and internal link optimisation. This just gives us another reason. Reduce ambiguity and you reduce the chance that Google "improves" your headline in ways you didn't intend, with poor CTR and negative UX signals dragging down your quality scores as a result.
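To make the mechanism concrete, here's a deliberately simplified Python sketch of the loop the leak describes. Everything here (the names, the scoring model, the 0.3/0.7 blend) is our speculation for illustration, not Goldmine's or NavBoost's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TitleCandidate:
    text: str
    source: str             # e.g. "<title>", "<h1>", "internal anchor"
    predicted_score: float  # the system's prior: how well this should perform
    good_clicks: int = 0    # engaged clicks observed in the SERP
    bad_clicks: int = 0     # pogo-sticks straight back to the SERP

    def observed_score(self) -> float:
        shown = self.good_clicks + self.bad_clicks
        if shown == 0:
            return self.predicted_score  # no behavioural data yet: trust the prior
        # Blend the prior with observed click quality; the weighting is arbitrary.
        return 0.3 * self.predicted_score + 0.7 * (self.good_clicks / shown)

def pick_headline(candidates: list[TitleCandidate]) -> TitleCandidate:
    # Goldmine-style selection: serve whichever candidate currently scores best.
    return max(candidates, key=lambda c: c.observed_score())

candidates = [
    TitleCandidate("Ultimate Guide to Agentic Browsers", "<title>", 0.62),
    TitleCandidate("What Is an Agentic Browser?", "<h1>", 0.55),
    TitleCandidate("agentic browsers explained", "internal anchor", 0.40),
]

chosen = pick_headline(candidates)
print(f"Serving: {chosen.text!r} (from {chosen.source})")

# NavBoost-style feedback: pogo-sticking pulls the winner down until another candidate overtakes it.
chosen.bad_clicks += 8
chosen.good_clicks += 2
print(f"Next pick: {pick_headline(candidates).text!r}")
```

Run it and you'll see the <title> candidate win on the prior, then lose the slot to the H1 once the bad clicks come in. That feedback loop is exactly why misaligned titles are so costly: a headline that overpromises earns badClicks, which drag down every future selection.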
I saw Andrea Volpini’s testing on this in October and was so glad for the opportunity to clarify this point, which I'm asked about regularly on new business calls: structured data improves consistency and contextual relevance in AI snippets, despite not being a direct ranking signal.
Key nuance: LLMs don’t train on schema (they strip code so they can train purely on natural language). The tools they use to browse and ground answers do benefit from it, though.
My Take: Yes, think about schema. You should have been doing that even before AI. It's not a ranking factor for traditional search or LLMs, but it does guide the bots behind both, helping them understand what your content is and how to treat it. Prioritise Article/NewsArticle, Product/Offer, FAQPage, HowTo, and robust Organization/Brand graphs with sameAs. The KPI isn't "rich results earned"; it's snippet presence and persistence across AI surfaces.
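As a concrete example, here's a minimal sketch of the kind of Organization graph we mean, built in Python with the standard json module; the company name and URLs are placeholders to swap for your own:

```python
import json

# Placeholder details: swap in your own organisation's real name and URLs.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    # sameAs ties your brand entity to its profiles elsewhere on the web,
    # which helps crawlers and grounding tools disambiguate who you are.
    "sameAs": [
        "https://www.linkedin.com/company/example-agency",
        "https://x.com/exampleagency",
    ],
}

# Emit a JSON-LD block ready to drop into a <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```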
DeepMind researchers introduced BlockRank, a way to make LLM-based in-context ranking efficient by exploiting "block sparsity" and query-document relevance signals. In tests (e.g., BEIR, MS MARCO, NQ) a 7B model with BlockRank matched or beat fine-tuned rankers, hinting at democratised, high-quality ranking without heavy retraining. It's a research release and hasn't been confirmed in production.
In plain English? This technology can rank content very well and it's something nerds like us will be watching with great interest so you don't have to.
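For the nerds, here's a toy numpy sketch of the core idea: score the query against each document's block independently, rather than letting everything attend to everything. This is purely our illustration of the block-sparsity concept, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings: a query and three candidate documents, each a block of token vectors.
dim = 16
query_tokens = rng.normal(size=(4, dim))                       # 4 query tokens
doc_blocks = [rng.normal(size=(n, dim)) for n in (12, 8, 15)]  # 3 docs of varying length

def block_score(query: np.ndarray, block: np.ndarray) -> float:
    """Score one document block against the query in isolation.

    Full in-context ranking lets every token attend to every other token
    (quadratic in total context length). Scoring block-by-block keeps the
    cost linear in the number of documents: the sparsity BlockRank exploits.
    """
    attn = query @ block.T                # query-token to doc-token similarities
    weights = np.exp(attn - attn.max())   # softmax over each query token's row
    weights /= weights.sum(axis=1, keepdims=True)
    # Attention-weighted relevance per query token, averaged across the query.
    return float((weights * attn).sum(axis=1).mean())

scores = [block_score(query_tokens, block) for block in doc_blocks]
ranking = np.argsort(scores)[::-1]
print("Ranked doc indices:", ranking, "scores:", np.round(scores, 3))
```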
My Take: Two implications stand out. First, if a 7B model with BlockRank can match fine-tuned rankers, high-quality LLM-based ranking stops being the preserve of the biggest players, which fits neatly with AI platforms building out their own retrieval stacks. Second, rankers like this judge relevance from the content itself, in context, so clearly structured, genuinely relevant content matters even more.
I don’t usually sign off with this kind of thought piece, but in October we saw even more stories of sites being clapped by Google for using AI in lazy or ill-informed ways, including being too heavily focused on AI visibility at the expense of traditional SEO. I’ll probably write a full blog about this at some point, but here are the four main pitfalls we’ve seen again and again:
My Prediction: AI search platforms will build their own site-level quality and anti-spam systems soon enough. Many are already building their own indexes to minimise reliance on Google (spurred on by Google going out of its way to make scraping its results harder). Short-term thinking and lazy shortcuts will backfire as those systems mature, just as they did with Google from 2011 to 2014, when the Panda and Penguin algorithms closed the majority of the simple black-hat SEO shortcut loopholes.
If you have any thoughts or questions, or would like to discuss how we can help you to optimise in light of these changes, please reach out!
