Monthly Roundups
November 2, 2025

October 2025: ChatGPT Atlas, Query Groups, Site Quality, BlockRank & AI SEO Pitfalls

Here’s what stood out this month, how it’s reshaping the SEO and GEO landscape, and some thoughts, opinions and guidance to help you navigate.

New Query Groups reporting in Search Console confirms importance of topical authority

Google is rolling out 'Query Groups' to Search Console Insights reports. Instead of a flat list of queries, you’ll see AI-clustered groups that represent the main topics your audience is finding you for, including “Top”, “Trending up” and “Trending down” filters. Rollout is limited to properties with substantial query data at this point, but we're hopeful that everyone will eventually get access.

My Take: This is Google handing you a first-party view of your topical footprint. Treat it as a north star for cluster planning and coverage debt: if a group is trending up but you’ve only covered 40% of intent variants, prioritise that gap. It’s also a sanity check for brand strategy... do your commercial priorities show up as meaningful groups? Even if smaller websites never get access to these reports, the deeper insight here is that Google is aware of topical authority and feels it is important enough to shift its own reporting platform in that direction.

OpenAI launches agentic Atlas browser… and it leans heavily on Google!

OpenAI launched ChatGPT Atlas, an agentic browser that can take actions while you navigate the web, making it the first serious contender to Perplexity’s Comet browser, but with far wider reach thanks to OpenAI’s user base. The shift to agentic browsing is well underway.

My Take: OpenAI launching an agentic browser was expected, but still a hugely significant milestone. The truly eye‑opening bit, though, is how heavily Atlas leans on Google:

  • Atlas is built on Chromium (the open‑source core of Google Chrome). Lots of browsers are, including Microsoft Edge and Perplexity Comet, but it’s still a notable point given OpenAI’s direct AI competition with Google.
  • The built‑in search appears to lean on Google’s results, although without SERP Features currently. When using the browser's search functions, the results tabs (e.g., web, images, news) even provide links that direct the user to Google's search results (not Bing's).
  • Early server‑log observations from some sites suggest Atlas’s agent mode requests have been identified as Googlebot in certain setups.

Is Atlas' dependence on Google tech a sign of things to come, a marketing blunder, or a genius power move? Time will tell.

One notable downside is the built-in ChatGPT sidebar's lack of web connectivity. Yes, you read that correctly - a web browser, marketed on an AI-first UX brand platform, with built-in AI that can't browse the web. Wild. To clarify, the AGENTS can browse the web, but in all of our testing, no grounding occurred when we used the built-in chat sidebar. You can ask questions about your current webpage in that tab, but the answers only use the base GPT model, without live search or citations. Why does this matter? Imagine you're reading an article about a significant global event and you ask the sidebar for more information: you won't be given anything that isn't either in ChatGPT's training data or the article you're already reading. No live updates. No fresh content. This feels like a massive oversight to me.

If there's one notable difference between Google and OpenAI over the last few years, it's that OpenAI tend to rush products into production to be 'first-to-market', then they iterate and improve; meanwhile Google takes their time and launches products that genuinely impress straight out of the gate. If (when) Google launch their own fully agentic browser (Chrome or otherwise), I suspect it will take the crown.

Either way, I'm loving the browser functionality on offer in Atlas (teething security issues aside). I especially love the way I can have agents running in tabs ("Agent Mode") doing things for me, and the way I can recall and interact with my history conversationally ("Browser Memories"). It's just a nice UI, as well. As a browser shell, it feels mainstream‑ready in a way Comet never quite did. This feels like something every generation of my family, from primary school to octogenarians, could quite easily get along with and I'm increasingly using it as a daily tool. The limitations in search and AI chat will presumably, hopefully, be resolved soon. Surely!?

Google doubles down on site_quality as a gating factor

Google reiterated that links, site moves or technical tweaks won’t rescue rankings if overall site quality isn’t there. This latest reminder came at Search Central Live Dubai, where Google stressed that quality issues are a limiting factor for SEO.

To be clear, they didn't acknowledge a specific attribute or data point. However, the existence of tangible metrics for this isn't just speculation:

  1. There is a Google patent describing a “site quality score” that modifies how pages across a domain can rank. It frames quality as a kind of brand-footprint signal that influences eligibility for certain SERP features and the headroom individual pages have in the rankings. One factor influencing the system described in the patent was PageRank, alongside things like branded search volume, brand mentions and popularity, content coverage, and user engagement.
  2. The site_quality attribute was seen in the wild some time ago when, for a brief period, Google was inadvertently printing various classifiers and scores in the network responses behind its SERPs. Site quality was scored on a 0-1 scale and correlated with SERP Feature eligibility and ranking potential. Interesting side note: the scores were set at subdomain level, not for entire top-level domains.
  3. The recent DOJ vs Google antitrust trial in the US revealed a huge amount of information about the underpinnings of Google. One such revelation was the existence of a signal called Q* (pronounced "Q-Star"). There's a whole blog's worth of nuance around Q*, but in simple terms, this is the quality scoring algorithm in plain sight.

So, while we already suspected/knew this existed, it's interesting to hear Google talk about it so plainly, if only in 'fluffy' and loose terms.

My take: Great content, crisp UX and strong links are still the factors which move the needle directionally, but only within the ceiling your site-level quality score allows. If your domain’s perceived quality is low, even world-class page-level work will not pass muster. The commercial play is dual-track:

  1. Remedy systemic weak spots that depress site‑wide trust (thin sections for topical authority, stale content, UX drag, duplication, poor architecture).
  2. Invest in entity‑building (i.e., brand‑building) activity that grows branded demand and credible third‑party references.

In other words, make sure your site is slick, build links, and promote your content in all the ways you already know you should. That’s what raises the ceiling by improving quality score. This also gives more credence to metrics like Semrush's Authority Score which try to factor in signals beyond simple PageRank. SEO tool metrics are still just estimations of Google's internal metric (speak to us if you want to understand the differences between metrics from Semrush, Ahrefs, Moz and others), but they are the best we have for analysis.

Google VP of Search: High-quality AI content is fine; “low-value” gets down-weighted

Liz Reid (VP of Google Search) clarified that 'AI-generated' is not, by default, a hallmark or indicator of spam. Google has broadened the notion of “spam” to include low-value content that adds nothing new, and is up-weighting content that brings genuine expertise and perspective – and interestingly this is being influenced by the stuff people actually click from AI Overviews. Critically, though, whether content was written by AI plays no direct role in that decision.

My Take: Stop optimising for word count or AI detection scores. Optimise for quality. Be original, add value, bring opinions, say something new. If AI helps you express deep, experience-led answers faster—great. But if you’re using it to churn summaries of what’s already ranking, or you're avoiding it at the expense of content enrichment, then you’ve fallen into the very class of content that will be suppressed.

We've said this all along: AI is a tool that can be used well or poorly. If you use AI to help you produce high-quality, intent-matched, topically-authoritative content, then you can benefit from gains in both content performance and efficiency. A critical point to drive home here, though: the latter isn’t really the point. If you are using AI primarily to save time, then you’re already in the wrong mindset and you haven't fully grasped what's up for grabs. The real value in AI is that you can put the time you used to spend writing and proofing content into producing context-rich and highly targeted inputs for AI, and then fine-tuning, humanising, and adding value to the output. Those research and refinement phases were all too often rushed in the past, leading to thin and uninspiring content. Now, they are the battleground separating the mediocre and mundane from the engaging and visible.

Inside the indexing-ranking pipeline: Goldmine → NavBoost → feedback loop

Ever wondered why or how Google changes your headline in search results? A leaked internal system dubbed Goldmine appears to score heading/title candidates (e.g., your <title>, <h1>, internal anchors) and pick the one predicted to perform best in SERPs. Behavioural systems like NavBoost then rank pages by tracking goodClicks (engaged) and badClicks (pogo back to SERP), with performance feedback informing future Goldmine choices.

In short:

  • Goldmine (indexing): judges competing title candidates; selects the strongest for snippets.
  • NavBoost (ranking): scores satisfaction signals (good vs. pogo clicks).
  • Feedback loop: NavBoost outcomes fine-tune Goldmine’s future picks.
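The loop above can be made concrete with a purely illustrative Python sketch. To be clear, this is not Google's code: the class names, weights, and scoring maths are all invented for the sake of the illustration, and only the shape of the feedback loop reflects what the leak describes.

```python
# Illustrative sketch of a Goldmine-style candidate selector with
# NavBoost-style click feedback. All names and numbers are invented.

class TitleExperiment:
    def __init__(self, candidates):
        # Each candidate (title tag, H1, internal anchor text) starts
        # with a neutral predicted-performance score.
        self.scores = {c: 0.5 for c in candidates}

    def pick(self):
        # "Goldmine": serve the candidate predicted to perform best.
        return max(self.scores, key=self.scores.get)

    def record(self, candidate, good_clicks, bad_clicks):
        # "NavBoost": engaged clicks raise the score; pogo-sticking
        # back to the SERP lowers it. Feedback tunes future picks.
        total = good_clicks + bad_clicks
        if total:
            observed = good_clicks / total
            self.scores[candidate] = 0.7 * self.scores[candidate] + 0.3 * observed

exp = TitleExperiment(["<title> text", "H1 text", "anchor text"])
chosen = exp.pick()
exp.record(chosen, good_clicks=20, bad_clicks=80)  # mostly pogo clicks
print(exp.pick())  # a poorly-received candidate loses its slot
```

The point of the toy is the last line: a title that generates pogo clicks gets demoted, which is exactly why aligned signals (title, H1, intro, anchors) matter.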

My take: Titles aren’t a static metadata field; they’re an experiment. That means your job isn’t just “write a good title”—it’s architect aligned signals: title ↔ H1 ↔ intro copy ↔ internal anchors. For anybody who has worked with RankBrain, you'll know how much emphasis we place on document outlines and internal link optimisation. This just gives us another reason. Reduce ambiguity and you reduce the chance that Google “improves” your headline in ways you didn’t intend, resulting in poor CTR and negative UX signals dragging down your quality scores.

Structured data & AI search: not a ranking factor, still a force-multiplier

I saw Andrea Volpini’s testing on this in October and was so glad for the opportunity to clarify this point, which I'm asked about regularly on new business calls: structured data improves consistency and contextual relevance in AI snippets, despite not being a direct ranking signal.

Key nuance: LLMs don’t train on schema (they strip code so they can train purely on natural language). The tools they use to browse and ground answers do benefit from it, though.

My Take: Yes, think about schema. You should have been doing that even before AI. It’s not a ranking factor for traditional search or LLMs, but it does guide the bots behind both, helping them understand what your content is and how to treat it. Prioritise Article/NewsArticle, Product/Offer, FAQ, HowTo, and robust Organization/Brand graphs with sameAs. The KPI isn’t “rich results earned”; it’s snippet presence and persistence across AI surfaces.
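As a concrete starting point, here's a minimal sketch of the kind of Article-plus-Organization graph with sameAs described above, built as JSON-LD in Python. The brand name and URLs are placeholders, not recommendations for specific properties.

```python
import json

# Minimal JSON-LD sketch: an Article tied to an Organization with
# sameAs links. All names and URLs below are placeholders.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Example Brand",
            "sameAs": [
                "https://www.linkedin.com/company/example-brand",
                "https://en.wikipedia.org/wiki/Example_Brand",
            ],
        },
        {
            "@type": "Article",
            "headline": "October 2025 SEO Roundup",
            "author": {"@id": "https://example.com/#org"},
            "publisher": {"@id": "https://example.com/#org"},
        },
    ],
}

# Embed the output in the page head inside a
# <script type="application/ld+json"> block.
print(json.dumps(graph, indent=2))
```

Linking the Article back to a single Organization node via @id is what turns scattered markup into the "robust brand graph" the bots behind both traditional and AI search can actually use.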

Google’s BlockRank lands: scalable in-context ranking for everyone

DeepMind researchers introduced BlockRank, a way to make LLM-based in-context ranking efficient by exploiting “block sparsity” and query-document relevance. In tests (e.g., BEIR, MS MARCO, NQ) a 7B model with BlockRank matched or beat fine-tuned rankers, hinting at democratised, high-quality ranking without heavy retraining. It’s a research release; not confirmed in production.

In plain English? This technology can rank content very well and it's something nerds like us will be watching with great interest so you don't have to.

My Take: Two implications:

  1. Retrieval tech is getting good enough for startups and internal search to feel “Google‑ish” without Google‑scale infrastructure. That could be a commercial opportunity for publishers with big archives, ecommerce stores with huge catalogues, and LLM‑search players looking to rival Google (think ChatGPT, Apple, Perplexity, etc).
  2. If in‑context rankers become commonplace, content blocks (or sections, or chunks, or passages, call them what you will) with crisp intent targeting will gain even more value versus whole‑page optimisation. That already matters for Featured Snippets and AI grounding; BlockRank would likely push it further.
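BlockRank itself works inside an LLM's attention mechanism, but the passage-level idea in point 2 can be shown with a deliberately crude sketch: score each block of a page against the query and rank pages by their best block, rather than averaging across the whole page. The token-overlap scoring here is a stand-in for real relevance models, purely for illustration.

```python
# Crude illustration of block-level ranking: each page is split into
# blocks, and a page ranks on its best block, not its average.
def block_score(query, block):
    # Toy relevance: fraction of query terms present in the block.
    q_terms = set(query.lower().split())
    b_terms = set(block.lower().split())
    return len(q_terms & b_terms) / len(q_terms)

def rank_pages(query, pages):
    # pages: {url: [block, block, ...]}
    best = {url: max(block_score(query, b) for b in blocks)
            for url, blocks in pages.items()}
    return sorted(best, key=best.get, reverse=True)

pages = {
    "/guide": ["general intro to running shoes",
               "how to choose trail running shoes for wet weather"],
    "/blog":  ["our company history", "awards we have won"],
}
print(rank_pages("trail running shoes wet weather", pages))
# → ['/guide', '/blog']
```

Notice that /guide wins on the strength of one tightly targeted block, which is the practical argument for crisp, intent-focused sections over diffuse whole-page optimisation.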

Google AI experiences continue to evolve

  • Sticky citations in AI Overviews keep sources pinned as you scroll, likely improving source recall and click propensity.
  • Agentic AI Mode is expanding globally, with opt‑in capabilities for AI Ultra subscribers; broader access is rolling out to 180+ countries.
  • More visual answers: additional images, multimodal content, and “fan‑out” image panels for more immersive results.

Thought Piece: More sites get clapped for dangerous AI use

I don’t usually sign off with this kind of thought piece, but in October we saw even more stories of sites being clapped by Google for using AI in lazy or ill-informed ways, including being too heavily focused on AI visibility at the expense of traditional SEO. I’ll probably write a full blog about this at some point, but here are the four main pitfalls we’ve seen again and again:

  1. Using low-effort inputs to generate AI content: This inevitably leads to low-quality, low-value outputs. The time invested into prompting and context-building for AI content generation should be seen as akin to the financial investment you’d have made in a freelancer before LLMs were readily available. You wouldn’t have used the cheapest writer you could possibly find because you’d have expected a low-quality product in return. You need to do the research, provide context, guide the content, and put the effort in if you want a high-quality product from an LLM.
  2. Not reviewing or adding a human touch: There is a common and persistent fallacy that, just because content is written by an AI, it is somehow naturally optimised for AI visibility. We are always asked whether it’s OK to edit content from an AI. The answer is not just “yes, you can”… it’s “yes, you absolutely should!”. Not because you have to make it “less AI” (see the next pitfall below), but because you need to add your expertise, perspective, insight, brand, USPs, and creative stamp in order to make it original, unique, and valuable. If you’ve successfully avoided pitfall #1, then some of this may already be included in your raw output because you provided it as context, but even then, why would you shy away from adding more of your brand, more value, more commercial focus and more creative flair to your content? You wouldn’t copy and paste content directly into your site from even the most expensive and experienced copywriter—don’t treat AI content any differently.
  3. Trying to evade “AI Detection” scores: Three points here. Firstly, Google have repeatedly said they don’t care if you use AI so long as your content is good. They couldn't be clearer on this. Secondly, either Google can’t tell whether content is written by AI, as proven by the unreliability of all the leading AI detection tools, or they have technology that is far more advanced than those AI detection tools so you have no chance of pulling the wool over their eyes. Either way, it's a waste of time. Finally, in almost every case I’ve seen, content quality has been eroded by tweaking to evade AI detection. It's death by a thousand paper cuts (or a thousand rewrites). Given these three points, you should spend your time adding value and improving the quality of your content – and in doing so, you’ll be adding more ‘humanisation’ anyway. Forget the AI detection scores, save yourself the subscription cost, and optimise for the real-world outcome that truly matters – quality.
  4. Thinking AI is a strategist: Unless you’ve built a custom-trained strategy GPT that is contextually aware of every element of your brand, tone of voice, marketing strategy, commercial strategy, competitive landscape, financial position, risk sensitivity, and every other factor you should consider when crafting strategies—including all of the nuances of SEO, PPC, social, email, and every other channel you might market through—don’t expect to get a good answer when you ask ChatGPT, “How can I make this page rank better for this keyword?”. It will almost always provide surface-level advice that may help you to achieve one set of goals (albeit usually not in the most efficient or scalable way) but will hurt you in ten other ways which it never considered. The most common things I see are the creation of duplicate content or messy internal linking, both of which may have been suggested by AI based on perfectly sound guidance when blinkered by limited context, but which ultimately introduce issues that negatively impact overall site quality and optimisation!

My Prediction: AI search platforms will build their own site-level quality and anti-spam systems soon enough. Many are already building their own indexes to minimise reliance on Google (spurred on by Google going out of their way to make scraping its results harder). Short-term thinking and lazy shortcuts will backfire as those systems mature, just as they did with Google from 2012 to 2014, when the Penguin and Panda algorithms closed the majority of simple, black-hat SEO shortcut loopholes.

If you have any thoughts or questions, or would like to discuss how we can help you to optimise in light of these changes, please reach out!

Find out how we can help you to scale your brand