Monthly Roundups
August 6, 2025

July 2025: June 2025 Core Update Analysis, HCU Recoveries, & AI Mode Launches in the UK

Here’s what stood out this month, how it’s reshaping the SEO and GEO landscape, and some thoughts, opinions and guidance to help you navigate it.

June 2025 Core Update Impact Analysis

Initially, we thought the June 2025 Core Update was shaping up to be a low-impact, run-of-the-mill quality update. However, the volatility ramped up and up. The final week of the update proved to be one of the biggest shake-ups we've witnessed in a very long time.

The official rollout ran from June 30 to July 17, quicker than the three weeks Google initially estimated. However, volatility remained incredibly high for several days after the rollout officially ended - much higher than the usual 'tremors' we see once an update is over.

The key takeaways from the update are:

1. Reviews System & HCU Classifiers

Correlation data strongly suggests that the classifiers (read: marks against penalised domains) from both the Reviews system and the former Helpful Content Update (HCU) system were significantly adjusted in this update.

Evidence of a "Stealth" Reviews Update

Before the core update was even announced, there was major ranking volatility through late June. Analysis of this suggests it was due to a significant, unannounced update to the Reviews system. Many sites previously impacted by Google's various reviews-focused updates saw major fluctuations starting around June 21st, well before the core update's official start on June 30th. This aligns with Google's previous statement that the Reviews system would be improved on a "regular and ongoing basis" without announcements. The evidence points to a substantial tweak to this system happening just before the main core update began, and a lot of the sites impacted during this pre-Core phase were also impacted during the core update itself.

Helpful Content Update (HCU) Classifier Adjustments

One of the most significant findings from the core update was the partial recovery of many sites that had previously been decimated by HCU updates: especially those hit by the now-infamous September 2023 HCU, but also sites that had been carrying these silent penalties ever since the very first HCU back in August 2022 (and even some from the HCU precursor updates a year earlier, in August 2021!).

  • Clue #1 - Timing: The timing of the recovery for these specific sites wasn't random or dispersed; it began happening, en masse, on July 8th and 9th. This indicates that Google refined the signals related to content "helpfulness" during the core update's rollout, as opposed to simply reassessing and rescoring sites one by one. The update appears to have recalibrated the system, leading to a reversal of fortunes for some sites and a worsening of the situation for others.
  • Clue #2 - Core System Integration: Google officially confirmed in March 2024 that the standalone HCU system had been retired and its function folded into the main core algorithm. As such, any adjustments to its classifiers now happen during a core update.
  • Were historic, lingering classifiers simply deprecated? One theory is that the old HCU classifiers were a lingering echo of the standalone HCU days, and this core update finally saw those markers fully removed, handing 100% of the work over to the replacement system inside the core algorithm, which possibly leans more heavily on machine learning (i.e. pattern-matching).

2. Broad Impact, Intent-Matching & YMYL Sites Hit Again

The update had global reach, affecting all types of content, across all Google surfaces, including web search, image search, video search, Google News, AI Overviews, and Google Discover. As with many core updates, sites in the "Your Money or Your Life" (YMYL) category experienced more significant fluctuations.

Another thing we noticed was that intent-matching and topicality seemed to play a major role in the pre-update volatility, and these shifts then played directly into the new separation of traditional search results and AI Mode (more on that below). Throughout late May and early June, articles on client sites which were previously nailed-on for certain keywords suddenly had competition in the SERPs from other articles on the same site. These were old, unchanged, high-quality articles that sat within what had, until this point, been a well-optimised site architecture with good keyword separation and low internal competition. This wasn't the introduction of new competing factors on the pages; it was a shift in the way Google was assigning topical relevance and user intent, as though the algorithm's peripheral vision for these keywords got slightly broader.

Here's the really interesting bit... our SERP analysis showed that many of these specific keywords remained largely informational in intent, and the article groups which started competing against each other for them tended to perform well in AI Mode. By contrast, commercial and transactional intent pages became stronger in traditional SERPs. Correlation and causation are two very different things, but there is a simple and logical divergence in these outcomes. Informational intent shifted to AI Mode, and traditional SERPs shifted towards commercial and buying intent. Is this a new normal, Google testing, a quirk in our limited dataset, or simply a downstream effect of the HCU and Reviews classifier changes noted above? We'll keep digging and we'd love to hear any insights, anecdotes or data study findings you come across.

3. The Role of Recent Backlinks

According to Google's John Mueller, recent links are not likely to be a significant factor in core update logic. He emphasised that core updates tend to rely on long-term data and signals, not short-term changes like a recent influx of backlinks.

This isn't new information, but it's a good reminder. Reading between the lines, Google's systems are pattern-matching, not applying penalties based on single metrics or the acquisition of specific backlinks. That kind of targeted judgement is left to Google's human reviewers, who can apply manual action penalties.

My Take: It's clear that Google is still aggressively fine-tuning the way it deciphers and determines what "helpful" looks like. The fact that some sites have finally recovered after YEARS of being throttled by an HCU classifier shows that Google might have realised its "helpfulness" algorithm was too blunt and wrongly penalised legitimate sites.

It's time to double down on what works: creating content that genuinely helps a user solve a problem or make a decision. For an ecommerce site, this means going beyond basic product descriptions to include detailed buying guides, video demonstrations and authentic user reviews. For a SaaS brand, it means creating fair comparison pages (not just sales fluff) and detailed case studies.

We can also see that Google is looking at the long-term authority and trustworthiness of your entire domain. Don't get distracted... keep your time and energy focused on creating useful content, optimising UX, and building quality links over time. Build your brand online. If an AI is now judging your content's helpfulness, it's looking for patterns of expertise and trust at a massive scale. This means your entire footprint needs to scream "we are experts." It's no longer about a few good blog posts and a handful of tactical backlinks; it's about a consistent, site-wide demonstration of E-E-A-T using content that is intent-matched and semantically optimised for the right topics. Nothing new for those in the know.

AI Mode is Now Live in the UK

This might have been pushed halfway down the roundup by the core update analysis, but it's nonetheless huge news. Google began rolling out AI Mode in the UK and updated the core AI Mode systems and functionality. It now offers a more advanced, multimodal search experience powered by a custom version of Gemini 2.5 Pro (arguably the leading consumer-facing AI model right now, according to most benchmark tests), with the ability to upload files (images, PDFs, etc.) and a "Canvas" feature for planning and editing.

As a pertinent reminder, Google's Gary Illyes stated that "normal SEO" practices are sufficient to appear in AI answers, with no need for specialised "GEO" (Generative Engine Optimisation) or "LLMO" (Large Language Model Optimisation) tactics. This is what we've said all along... the strategy might be framed differently (using new language to describe the evolving ways people search and answers are presented), but it's all still very much based on tried-and-tested SEO principles.

My Take: AI Mode is a great experience. I didn't want to admit it, but there it is. I use it all the time to research. But when I'm ready to buy, I still prefer a good old "10 blue links" list rather than having to read an essay to get where I'm going. That could change, of course, as AI Mode matures and the outputs get sharper.

What’s already clear is that website traffic downstream of AI experiences often converts better than traffic driven purely by traditional search. Users arrive better educated and ready to purchase, enquire, sign up, whatever. The window to engage them from AI surfaces is narrow, though, as CTRs are low (for now), and traditional SERPs will likely keep the “last-click” crown because they skew more commercial and transactional. Making those final, bottom-of-funnel steps simple and frictionless is, arguably, more critical than ever, because users are not as likely to immerse themselves in your website as they once were - they are there, card in hand, looking for an easy and painless checkout or enquiry form. Looking further ahead, AI agents will soon be taking actions on your site on users’ behalf, which is another reason to start optimising conversion as much as possible right now. Good user experience already matters for CRO, brand loyalty and SEO, but now you have another couple of reasons to prioritise that work.

Freshness, Citations & GEO Tactics

Google's Chief Scientist, Jeff Dean, highlighted that index freshness is a core strength of Google's AI-powered search. This real-time index gives Google a significant advantage over closed Large Language Models (LLMs) that often rely on stale data from static training runs. While this may be true in terms of getting brands or information featured in base model responses, we live in an age of RAG-based (retrieval-augmented generation) 'grounding', where those base model answers are updated with live web content found specifically to improve the quality of the response. The point Jeff Dean appears to be making is that Google's AI can benefit from a more current base model as well as having that RAG layer. OpenAI have also talked about aiming towards a more continual training layer, updating base model weights in real-time as they crawl the web, so this can probably be seen as a bit of posturing on Google's part – but it does hint at an interesting paradigm shift.
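To make that 'grounding' idea concrete, here is a minimal sketch of the retrieve-then-generate loop. The search_index() and generate() functions are hypothetical stand-ins for illustration only, not any vendor's actual API:

```python
# Minimal sketch of RAG-style grounding (illustrative only - search_index()
# and generate() are hypothetical placeholders, not a real search or LLM API).

def search_index(query: str) -> list[str]:
    # In a real system this would query a live web index for fresh documents.
    return [f"Fresh snippet about {query}, published this week."]

def generate(prompt: str) -> str:
    # In a real system this would call the base language model.
    return f"[model answer conditioned on]: {prompt[:120]}..."

def grounded_answer(query: str) -> str:
    snippets = search_index(query)          # retrieval: pull live content
    context = "\n".join(snippets)
    prompt = (
        "Use the context below to answer.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
    return generate(prompt)                 # generation: answer grounded in fresh data

if __name__ == "__main__":
    print(grounded_answer("latest Google core update"))
```

The takeaway is that a base model's knowledge cut-off matters less once a retrieval step like this injects current content, which is why index freshness is the battleground Dean is pointing at.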

On a related theme, more data studies have shown that recent content is more likely to be surfaced in AI experiences, including ChatGPT and AI Overviews. In these tests, old content was given a more recent date (on the frontend but also, crucially, in the lastmod field of the XML sitemap), and those pages generally saw immediate positive impacts.

Sounds too good to be true? Yeah, we thought so as well, but then, to the surprise of everyone in the industry, Bing officially confirmed it!

My Take: Am I suggesting you should try to dupe ChatGPT, Copilot and Google AI experiences by constantly refreshing your lastmod for every page in your sitemap? No, because this trick won't last forever, it may even be penalised at some point, and there are more sustainable and ethical ways of optimising: focusing on content quality, topical coverage and structured data. It's worth knowing, though, if only to get an understanding of where these AI models are at in terms of the complexity (or simplicity) of their crawling, indexing and content extraction methods.
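For reference, the lastmod signal in question lives in the XML sitemap. Here is a minimal sketch of populating it honestly, from genuine modification dates rather than a blanket refresh; the URLs and dates below are placeholders, not a real site:

```python
# Minimal sketch: emit <lastmod> from genuine modification dates rather than
# artificially refreshing every URL. The URLs and dates are placeholders; in
# practice you would pull real modification times from your CMS or filesystem.
from datetime import date

PAGES = {
    "https://www.example.com/buying-guide/": date(2025, 7, 8),
    "https://www.example.com/reviews/": date(2024, 11, 2),
}

def sitemap_xml(pages: dict[str, date]) -> str:
    entries = []
    for url, modified in pages.items():
        entries.append(
            "  <url>\n"
            f"    <loc>{url}</loc>\n"
            f"    <lastmod>{modified.isoformat()}</lastmod>\n"  # W3C date format
            "  </url>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(entries)
        + "\n</urlset>"
    )

print(sitemap_xml(PAGES))
```

Keeping lastmod tied to real content changes means your freshness signals stay accurate if and when these AI crawlers get stricter about verifying them.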

Other Industry News & Updates

  • Microsoft Bing's New Layout: Microsoft is restructuring Bing's interface to prioritise its AI. The "Copilot" search is now the first tab, aligning with Google's AI Mode, and the traditional search results tab, formerly "All," has been renamed to "Web."
  • AI-Generated Content & SEO: An Ahrefs study confirmed that hybrid, humanised AI-generated content doesn't harm rankings. Quality is still king, but precisely how a piece of high-quality content is produced is irrelevant in search. Google wants to serve the best content, plain and simple.
  • Google Search Console Bug: A bug in Google Search Console's performance report caused a reported drop in average position for many users, which did not reflect their actual rankings. This coincided precisely with the rollout of AI Mode in the UK, though Google have confirmed nothing. Without the AI experience filters in Search Console (which Google are still being very quiet about), it's difficult to investigate this bug further as a user. If you saw that drop, though, you're not alone – just manually check rankings and traffic data from other sources to verify that everything is OK.

If you have any thoughts or questions, or would like to discuss how we can help you to optimise in light of these changes, please reach out!

Find out how we can help you to scale your brand