Here’s what stood out this month, how it’s reshaping the SEO and GEO landscape, and some thoughts, opinions and guidance to help you navigate.
Initially, we thought the June 2025 Core Update was shaping up to be a low-impact, run-of-the-mill quality update. However, the volatility kept ramping up, and the final week of the update proved to be one of the biggest shake-ups we've witnessed in a very long time.
The official rollout was June 30 to July 17, which was quicker than the initial estimate of three weeks provided by Google. However, the volatility remained incredibly high for several days after the update was officially over - much higher than the normal 'tremors' we usually see after an update.
The key takeaways from the update are:
Correlation data strongly suggests that the classifiers (read: black marks against penalised websites) from both the Reviews system and the former Helpful Content Update (HCU) system were significantly adjusted in this update.
Before the core update was even announced, there was major ranking volatility through June. Analysis of this suggests it was due to a significant, unannounced update to the Reviews system. Many sites previously impacted by Google's various reviews-focused updates saw major fluctuations starting around June 21st, well before the core update's official start on June 30th. This aligns with Google's previous statement that the Reviews system would be improved on a "regular and ongoing basis" without announcements. The evidence points to a substantial tweak to this system happening just before the main core update began, and a lot of the sites impacted then were also impacted during the core update.
One of the most significant findings from the core update was the partial recovery of many sites which had previously been decimated by HCU updates, especially the September 2023 HCU but also sites that had been carrying these silent penalties ever since the very first HCU in August 2022 (and even some from the HCU precursor a year earlier in August 2021!).
The update had global reach, affecting all types of content, across all Google surfaces, including web search, image search, video search, Google News, AI Overviews, and Google Discover. As with many core updates, sites in the "Your Money or Your Life" (YMYL) category experienced more significant fluctuations.
Another thing we noticed was that intent-matching and topicality seemed to play a major role in the pre-update volatility, and these shifts played directly into the new separation of traditional search results and AI Mode (more on that below). Throughout late May and early June, articles on client sites which were previously nailed-on for certain keywords suddenly had competition in the SERPs from other articles on the same site. These were old, unchanged, high-quality articles that sat within what had, until this point, been an optimised site architecture that was working well, with good keyword separation and low internal competition. This wasn't an introduction of new competing factors on the pages; it was a shift in the way Google was assigning topical relevance, as though the algorithm's peripheral vision for these keywords got slightly broader.
Here's the really interesting bit... SERP analysis showed that some of those keywords remained largely informational in intent, and the article groups which started competing against each other for them tended to perform well in AI Mode. By contrast, commercial and transactional intent pages became stronger in traditional SERPs. Correlation and causation are two very different things, but there is a simple and logical divergence in these outcomes. Informational intent shifts to AI Mode, and traditional SERPs shift towards commercial intent. Is this a new normal, Google testing, or just a quirk in our limited dataset? Time will tell.
According to Google's John Mueller, recent links are not likely to be a significant factor in core update logic. He emphasised that core updates tend to rely on long-term data and signals, not short-term changes like a recent influx of backlinks.
This isn't new information, but it's a useful reminder. Reading between the lines, Google's systems are pattern-matching, not applying penalties based on single metrics or the acquisition of specific backlinks. That kind of work is saved for Google's Quality Raters, who can apply manual action penalties.
My Take: It's clear that Google is still aggressively fine-tuning the way it deciphers and determines what "helpful" looks like. The fact that some sites have finally recovered after YEARS of being throttled by an HCU classifier shows that Google might have realised its "helpfulness" algorithm was too blunt and wrongly penalised legitimate sites. It's time to double down on what works: creating content that genuinely helps a user to solve a problem or make a decision. For an ecommerce site, this means going beyond basic product descriptions to include detailed buying guides, video demonstrations, and authentic user reviews. For a SaaS brand, it means creating fair comparison pages (not just sales fluff) and detailed case studies. And so on.

We can also see that Google is looking at the long-term authority and trustworthiness of your entire domain. Don't get distracted... keep your time and energy focused on creating useful content, optimising UX, and building quality links over time. If an AI is now judging your content's helpfulness, it's looking for patterns of expertise and trust at a massive scale. This means your entire site needs to scream "we are experts." It's no longer about a few good blog posts; it's about a consistent, site-wide demonstration of E-E-A-T using content that is intent-matched and semantically optimised for the right topics.
Google began rolling out AI Mode in the UK and updated the core AI Mode systems and functionality. It now offers a more advanced, multimodal search experience powered by a custom version of Gemini 2.5 Pro (currently, arguably, the leading consumer-facing AI model according to most benchmark tests), with the ability to upload files (images, PDFs) and a "Canvas" feature for planning (currently a paid feature).
As a pertinent reminder, Google's Gary Illyes stated that "normal SEO" practices are sufficient to appear in AI Overviews, with no need for specialised "GEO" or "LLMO" (Large Language Model Optimisation) tactics.
My Take: AI Mode is a great experience. I didn't want to admit it, but there it is. I use it all the time to research. When I'm ready to buy, though, I still prefer a good old 10 blue links list rather than having to read an essay to get where I'm going. We've seen evidence (first-hand and in studies) showing that conversion rates from traffic coming from AI platforms, including AI Mode, are much higher than traditional search. The user has done their research and they're clicking to a website to act... to purchase, enquire, sign-up, etc. The window to engage users from AI surfaces is narrow, and the traditional search results are likely to skew more toward commercial and transactional intent, so making those final conversion steps simple and frictionless could very well become an important SEO signal in the future. Looking a little further down the road, AI agents will soon be taking actions on your site on behalf of users, so that's another reason to start optimising conversion right now.
Google's head of AI, Jeff Dean, highlighted that index freshness is a core strength for Google's AI-powered search. This real-time index gives Google a significant advantage over closed Large Language Models (LLMs) that often rely on stale data from static training runs. While this may be true in terms of getting brands or information featured in base model responses, we live in an age of RAG (retrieval augmented generation) where those base model answers are updated by live web content found specifically to improve the quality of the response. The point Jeff Dean appears to be making is that Google's AI can benefit from a more current base model as well as having that RAG layer.
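The RAG pattern described above can be sketched in a few lines. This is a toy illustration, not Google's (or anyone's) actual implementation: the retriever is a naive keyword-overlap scorer over a small in-memory corpus, whereas real systems use embeddings and a live index, and the function names and corpus strings are invented for the example.

```python
# Toy sketch of retrieval augmented generation (RAG): fresh documents are
# retrieved at query time and prepended to the prompt, so the model answers
# from current data rather than its static training snapshot.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (illustrative only)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use the context below to answer.\nContext:\n{context}\nQuestion: {query}"

# Invented example corpus standing in for live web content.
corpus = [
    "June 2025 core update rollout finished on July 17.",
    "AI Mode launched in the UK with Gemini 2.5 Pro.",
    "Sitemap lastmod signals page freshness to crawlers.",
]

print(build_prompt("When did the core update rollout finish?", corpus))
```

The augmented prompt would then be sent to the base model; the fresher the retrieval index, the less the answer depends on the model's training cut-off, which is exactly the advantage being claimed.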
On a related theme, more data studies have shown that recent content is more likely to be surfaced in AI experiences, including ChatGPT and AI Overviews. In these tests, old content was given a more recent date (in the frontend but also crucially in the lastmod field in the sitemap), and the pages generally saw immediate positive impacts.
Sounds too good to be true? Yeah, we thought so as well, but then, to the surprise of everyone in the industry, Bing officially confirmed it!
My Take: Am I suggesting you should try to dupe ChatGPT, Copilot and Google's AI experiences by constantly refreshing the lastmod for every page in your sitemap? No: this trick won't last forever, it may even be penalised at some point, and there are more sustainable and ethical ways of optimising, like focusing on content quality, topical coverage and structured data. It's worth knowing, though, if only to get an understanding of where these AI models are at in terms of the complexity of their crawling, indexing and content extraction methods.
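For reference, the lastmod field those studies manipulated lives in the XML sitemap, one entry per URL. The sketch below renders a single legitimate entry; the URL and date are illustrative, and the helper function name is invented for the example.

```python
# Render one <url> element for an XML sitemap, with the <lastmod> freshness
# field in the W3C date format (YYYY-MM-DD) the sitemap protocol expects.
from datetime import date
from xml.sax.saxutils import escape

def sitemap_entry(url: str, last_modified: date) -> str:
    """Build a single sitemap <url> element with an escaped loc and a lastmod date."""
    return (
        "<url>"
        f"<loc>{escape(url)}</loc>"
        f"<lastmod>{last_modified.isoformat()}</lastmod>"
        "</url>"
    )

print(sitemap_entry("https://example.com/guide", date(2025, 7, 17)))
# → <url><loc>https://example.com/guide</loc><lastmod>2025-07-17</lastmod></url>
```

The sustainable use of this field is to update it only when the page genuinely changes, so crawlers can trust it as a freshness signal rather than learning to ignore it.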
If you have any thoughts or questions, or would like to discuss how we can help you to optimise in light of these changes, please reach out!