Google June 2025 Core Update Done Rolling Out – Search Engine Roundtable

Google’s June 2025 core update is now done rolling out; it took 16 days and 18 hours to complete. The core update started on June 30, 2025 at around 10:37 am ET and ended on July 17, 2025 at around 4:18 am ET. Google posted saying, “The rollout was complete as of July 17, 2025.”
This was the second core update of the year. We were expecting more core updates, more often, this year, but so far we have only had two.
This update seemed much more significant, with a lot of volatility and even a nice amount of reported partial recoveries from the old September 2023 helpful content update.
There still seems to be volatility today, but maybe it will calm down tomorrow. I will say, I believe when Google says a core update is “done,” it means it is mostly done. The super tail end of these core updates can linger on. But most of the ranking fluctuations for most of your pages should be visible by now.

Google June 2025 Core Update Quick Facts

Here are the most important things that we know right now in short form:

  • Name: Google June 2025 Broad Core Update
  • Launched: June 30, 2025 at around 10:37 am ET
  • Completed: July 17, 2025 at around 4:18 am ET
  • Targets: It looks at all types of content
  • Penalty: It is not a penalty; it promotes or rewards great web pages
  • Global: This is a global update impacting all regions, in all languages.
  • Impact: The normal core update – updating some of the “core systems”. Google said this June update is a “regular update.”
  • Discover: Core updates impact Google Discover and other features, including featured snippets and more.
  • Recover: If you were hit by this, then you will need to look at your content and see if you can do better with Google’s core update advice.
  • Refreshes: Google will do periodic refreshes to this algorithm but may not communicate those updates in the future. Maybe this is what we saw the past couple of weeks or all those unconfirmed Google updates.

Google June 2025 Core Update Completion Video

Here is a video I put together a bit after posting this story:

Google June 2025 Core Update Volatility

Like most core updates, when the June 2025 core update was announced, we didn’t see much of any volatility. But then we saw volatility touch down on July 2nd.
Then on July 10th we reported on folks noticing partial recoveries from previous core updates and helpful content updates (again, not everyone). And even through today, the volatility has been heated throughout, and it still has not calmed down.

Google June 2025 Core Update Details

There wasn’t much unique information to this update outside of Google saying it may take up to three weeks, versus the typical two weeks.
And also this line, “This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.” That line makes it sound not super unique and just a normal core update.
John Mueller from Google said on Bluesky that this is one of the “bigger Google Search updates.” I mean, all core updates are, but he said it.

Google Tracking Tools On June 2025 Core Update

As you can see, many of the tools are still pretty heated, even this morning. Maybe they will calm down tomorrow?
  • Semrush
  • Advanced Web Rankings
  • Mozcast
  • Sistrix
  • Cognitive SEO
  • SimilarWeb
  • Accuranker
  • Mangools
  • Wincher
  • Data For SEO
  • SERPstat
  • Algoroo

Wiredboard’s Aggregator of Tools – this report shows the aggregate of the tools above and plots them on one chart.
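For readers curious how an aggregator like this can combine tools that report on different scales, here is a minimal Python sketch of one plausible approach (normalize each tool’s series, then average). The tool readings are hypothetical, and Wiredboard’s actual methodology is not described here.

```python
import statistics

def normalize(series):
    """Scale one tool's volatility series to the 0..1 range so tools
    with different scales (e.g., 0-10 vs 0-100) can be averaged."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in series]

def aggregate(tool_series):
    """Average the normalized series across tools, day by day."""
    normalized = [normalize(s) for s in tool_series.values()]
    return [statistics.mean(day) for day in zip(*normalized)]

# Hypothetical daily readings from three trackers during the rollout:
readings = {
    "semrush": [2.1, 5.4, 8.9, 9.6, 7.2],
    "mozcast": [65, 80, 110, 95, 90],
    "accuranker": [30, 55, 70, 82, 60],
}
print(aggregate(readings))  # one combined volatility curve
```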

SEO Chatter on the June 2025 Core Update

Here is some of the more recent chatter from this site and WebmasterWorld:

Big changes started on Monday night. Bots took over, google traffic is -70%, sales are down, user engagement is zero. All the current websites have one thing in common: very, very, very low content. One low res picture + h1 + lots of keyword stuffing. This current state is just the opposite of what Google says it wants websites to be

Search traffic has been lower since Monday, reversing the generally higher trend. A rollback of the latest update in progress? My site has been steadily gaining top three ranking terms (if that really means anything anymore). But it is bleeding backlink count at GSC. Either Google is discounting links now, or just not bothering to devote resources to count them. AI is far more important now than calculating backlinks perhaps…

My sales are effectively zero since April. But I think that this has equally as much to do with the current unstable political/economic climate driven by Washington. My sales were zooming from November to end of March. Odd how that worked. So yes Google is definitely always impacting sales, but for those of us selling higher priced discretionary goods it’s all about economic and political sentiment right now. I see even businesses in my field that are not as dependent on online sales and search doing very poorly now. Sales have cratered.

What on earth is going on?
My global widget site has lit up today like a Xmas tree with US traffic so far at 60.3%
This is how things used to be. I wonder if this will continue through the night?

Worst traffic figures I have seen in years, that’s what’s going on. GA realtime is glitched out but still the numbers are record low.
You would never tell they are low by looking at server logs – the server is as busy as it has ever been. But record high share of that is various scraper bots, scraping gigabytes of data daily.

June core update ’25 has finished. My sites are decimated for the time being, it seems. :(

Decimated…first in many many years operating.

Me too. Google banned my website. Traffic dead, Adsense dead.

Since July 8 traffic is steadily up 20% (1.0K UVs up to 1.2K daily average)

Core update notes (cont’d): Like I explained earlier, not all HCU(X) sites surged back… Some actually dropped even more, and they were down heavily already. Google clearly adjusted something on its end with regard to its systems that evaluate the helpfulness of content (to… pic.twitter.com/7RVR8hnEkY

Previous Broad Core Updates

Here is a list of the most recent core updates we’ve seen since Google started to confirm them. Previously we nicknamed them Phantom updates or unconfirmed updates.

Previous Helpful Content Update Impact

Here is the list of the previous Google helpful content updates:

Recent Unconfirmed Google Updates

While we didn’t see more core updates, more often, we did have a lot of unconfirmed updates, including a big one on Saturday, June 28th. Before that there was June 25th and 26th, then June 18th, June 9th, June 4th, May 29th, May 21st, May 16th, May 12th and 13th, and May 8th (I didn’t cover May 1st; I probably should have). Then it was quiet for a couple of weeks, back to April 25th, before that the April 22nd and 23rd volatility, then around April 16th, and before that around April 9th.
So that is all, it is over. I hope some of you saw some nice, lasting, recoveries.
Forum discussion at WebmasterWorld.

source

OrangeSky Takes Off with SEO Solutions for Charter Operators, Elevating Aviation Marketing Strategy – Asbury Park Press

source

IAS identifies AI-generated slop sites as major ad quality threat – PPC Land

Advertising verification company warns marketers about low-quality content sites driven by artificial intelligence tools, with quality inventory delivering 91% higher conversion rates.
Integral Ad Science published analysis on July 17, 2025, identifying AI-generated “slop sites” as a critical threat to digital advertising effectiveness. The company classifies these problematic websites as “ad clutter” due to their aggressive monetization strategies and artificially generated content designed primarily to capture advertising revenue rather than provide genuine user value.
The proliferation of AI-generated content creates unprecedented challenges for programmatic advertising. EMarketer forecasts that as much as 90% of web content may be AI-generated by 2026, with some artificial intelligence-driven sites producing up to 1,200 articles daily to maximize ad revenue through sheer volume.
Current web supply patterns demonstrate the scale of automated content generation. Analysis reveals that 41% of available web supply was published this week, 26% of available web supply was published today, and 6% represents content published this hour. These statistics highlight the rapid pace at which artificial intelligence tools generate new advertising inventory.
IAS employs machine learning models to identify ad clutter sites through specific technical and content characteristics. The primary indicators include high ad-to-content ratios that prioritize advertising space over editorial material. Large total numbers of advertisements create high ad density across individual pages, fundamentally altering user experience in favor of revenue generation.
Auto-refresh advertisements represent a key technical marker of ad clutter operations. These sites implement high refresh rates designed to inflate impression counts without genuine user engagement. The presence of autoplay video advertisements further compromises user experience while maximizing revenue opportunities through forced content consumption.
Templated design structures indicate automated website creation processes. Ad clutter sites typically display standardized layouts that optimize advertising placement rather than editorial presentation or user navigation. These templates enable rapid deployment across multiple domains while maintaining consistent monetization strategies.
Content analysis reveals systematic patterns in artificial intelligence-generated material. Plagiarized content represents a fundamental characteristic of ad clutter operations, with automated systems aggregating and republishing existing material without attribution or editorial oversight. AI-generated content displays characteristic linguistic patterns including formulaic structures, repeated phrases, and logical inconsistencies that distinguish it from human-authored material.
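As an illustration of how such linguistic markers might be turned into features, here is a minimal sketch in Python. It is not IAS’s proprietary model, just a crude repeated-phrase heuristic of the kind the description above suggests:

```python
from collections import Counter
import re

def repeated_phrase_score(text, n=3):
    """Fraction of n-word phrases that appear more than once --
    a crude proxy for the 'repeated phrases' marker described above."""
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    repeated = sum(c for c in Counter(ngrams).values() if c > 1)
    return repeated / len(ngrams)

# Formulaic, keyword-stuffed text scores higher than natural prose:
article = ("Top ten tips for saving money. Tips for saving money fast. "
           "Saving money fast is easy with tips for saving money.")
print(f"{repeated_phrase_score(article):.2f}")  # higher = more formulaic
```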
When ad clutter sites conduct ad arbitrage operations – purchasing cheap traffic and aggressively monetizing resulting page views – IAS classifies these properties as Made-For-Advertising sites. This classification indicates the most problematic category of inventory that combines multiple concerning characteristics.
The financial implications for advertisers prove substantial and measurable. IAS analysis spanning over 40 global agencies and brands found traffic served on quality sites achieves 91% higher conversion rates compared to traffic served on ad clutter sites. This performance differential represents significant revenue impact for campaigns that inadvertently purchase ad clutter inventory.
Cost efficiency analysis demonstrates additional advantages of quality inventory. Quality sites deliver lower cost-per-conversion by 25% relative to ad clutter properties, indicating superior return on advertising investment. These metrics suggest that ad clutter sites not only fail to drive conversions but actively increase campaign costs through inefficient spending allocation.
Brand safety concerns extend beyond immediate performance metrics. According to IAS’s State of Brand Safety report, 57% of consumers consider spammy sites inappropriate content for brand advertising. Another 70% say they trust brands less when advertising appears near inappropriate content, creating long-term reputation risks that compound initial performance problems.
Attention measurement data reinforces quality inventory advantages. Ad Quality concerns rise as AI-Generated content drives surge in MFA sites reveals that Made-For-Advertising sites deliver 7% lower attention for display advertisements and 28% lower attention for video advertisements compared to quality inventory. These attention deficits correlate directly with reduced campaign effectiveness.
The advertising verification industry has developed sophisticated detection mechanisms for identifying artificial intelligence-generated content. Network of AI-generated fake news sites uncovered in advertising fraud scheme demonstrates how verification companies identify networks comprising hundreds of fraudulent properties using AI-generated content.
Technical patterns serve as primary indicators for automated content generation. Repetitive formatting structures across multiple articles indicate template-based production systems. Chatbot-generated text within articles displays characteristic linguistic markers including unnatural phrasing, contextual inconsistencies, and formulaic sentence structures that distinguish machine-generated content from human writing.
Placeholder content represents another detection method for identifying ad clutter sites. Automated content generation systems often insert generic text, incomplete sentences, or formatting artifacts that reveal minimal human editorial oversight. These technical markers enable verification systems to identify problematic inventory at scale.
Content aggregation analysis provides additional detection capabilities. AI-generated sites frequently plagiarize material from established publishers, modifying text minimally to avoid direct duplication detection. Machine learning algorithms can identify these modification patterns and trace content origins to detect unauthorized republishing activities.
Domain analysis reveals systematic patterns in ad clutter operations. Ads.txt fraud cases exceed 100 as AI schemes manipulate digital advertising demonstrates how fraudulent operations create deceptive domains like espn24.co.uk, nbcsportz.com, and cbsnewz.com designed to mimic legitimate news organizations while distributing AI-generated content.
Consumer sentiment research demonstrates significant concerns about artificial intelligence-generated content quality. Raptive study shows AI content cuts reader trust by half reveals that suspected AI content reduces reader trust by 50% and hurts brand advertisement performance by 14%. These findings indicate fundamental problems with AI-generated content effectiveness for advertising purposes.
Raptive implemented comprehensive countermeasures against ad clutter by banning AI slop content across its publisher network in 2023. The company subsequently rejected thousands of creators and removed dozens of sites that adopted AI-generated content strategies, demonstrating industry recognition of quality concerns.
Platform-level challenges complicate ad clutter prevention efforts. Analysis of leading demand-side platform blocklists revealed that over 90% of known AI-generated sites remained unlisted, indicating significant gaps in current prevention methodologies. This limitation necessitates dynamic detection systems capable of identifying new ad clutter operations in real-time.
Economic incentives driving ad clutter creation stem from platform monetization programs. Platform payments fuel AI slop flood across social media illustrates how TikTok Creator Fund, Meta’s Creator Bonus Program, YouTube Partner Program, and X’s revenue sharing create lucrative opportunities for exploiting generative AI tools to produce low-quality content at scale.
IAS offers specialized ad clutter and Made-For-Advertising pre-bid avoidance segments within leading demand-side platforms. These segments provide Quality Sync and Context Control Avoidance clients with filtering capabilities at no additional cost, enabling advertisers to exclude problematic inventory before campaign deployment.
Pre-bid filtering technology analyzes multiple data points in real-time to identify ad clutter characteristics. Machine learning algorithms evaluate domain reputation, content quality indicators, advertising density metrics, and technical implementation patterns to determine inventory suitability for brand advertising campaigns.
The company’s machine learning model processes various site characteristics simultaneously to generate ad clutter classifications. High ad-to-content ratios, large advertisement inventories, auto-refresh implementations, high refresh rates, autoplay video presence, templated designs, and AI-generated content indicators contribute to overall site scoring mechanisms.
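To make the idea concrete, here is a toy scoring function in Python that weights the indicators listed above. The weights, thresholds, and field names are invented for illustration and are not IAS’s actual model:

```python
from dataclasses import dataclass

@dataclass
class SiteSignals:
    ad_to_content_ratio: float  # ad area relative to content area
    ads_per_page: int           # total ad slots on a page
    auto_refresh: bool          # ads reload to inflate impressions
    autoplay_video: bool        # forced video ad playback
    templated_design: bool      # standardized, ad-optimized layout
    ai_content_score: float     # 0..1 output of a content classifier

def ad_clutter_score(s: SiteSignals) -> float:
    """Combine the indicators into a 0..1 clutter score.
    Weights are illustrative only, not IAS's scoring mechanism."""
    score = 0.0
    score += 0.30 * min(s.ad_to_content_ratio, 1.0)
    score += 0.20 * min(s.ads_per_page / 20, 1.0)
    score += 0.15 * s.auto_refresh
    score += 0.10 * s.autoplay_video
    score += 0.10 * s.templated_design
    score += 0.15 * s.ai_content_score
    return score

site = SiteSignals(0.8, 25, True, True, True, 0.9)
print(f"clutter score: {ad_clutter_score(site):.2f}")  # exclude above a threshold
```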
Post-campaign analysis provides additional insights into ad clutter impact on advertising performance. Verification systems track conversion rates, attention metrics, and engagement patterns across different inventory types, enabling advertisers to optimize future campaigns based on historical performance data from quality versus ad clutter sites.
Geographic analysis capabilities identify concentration patterns for ad clutter operations. Many AI-generated sites originate from specific regions where labor costs enable large-scale content production, while domain registration patterns often show bulk purchases and systematic naming conventions that facilitate automated content distribution.
Legitimate publishers face revenue displacement from artificial intelligence-generated ad clutter operations. According to the ANA’s Programmatic Media Supply Chain Transparency study, advertisers waste an estimated $10 billion annually on Made-For-Advertising sites. This misallocated spending could instead be redirected to higher-performing quality publishers who invest in editorial content and user experience.
The computational demands of AI content generation create substantial infrastructure requirements that affect the broader digital ecosystem. Each video generation, image creation, or text-to-speech conversion requires processing power from data centers operated by cloud infrastructure providers. Platform monetization programs essentially subsidize increased energy consumption through creator payments while contributing to content saturation that disadvantages quality publishers.
Quality publishers must differentiate their inventory from ad clutter sites through enhanced measurement and verification capabilities. Attention measurement, brand safety verification, and fraud detection become critical competitive advantages for publishers seeking to demonstrate inventory value to sophisticated advertisers.
Content authenticity verification represents an emerging requirement for premium publisher inventory. Publishers implementing editorial oversight, fact-checking processes, and human content creation can leverage these quality indicators to distinguish their offerings from AI-generated ad clutter sites in programmatic marketplaces.
Late 2018: DoubleVerify identifies first major ads.txt exploitation scheme involving bot-generated traffic and content scraping
2023: Raptive implements AI Slop ban across publisher network, rejecting thousands of creators using AI-generated content
June 14, 2024: DoubleVerify releases Global Insights Report showing 19% year-over-year increase in MFA impression volume
January 15, 2025: DoubleVerify publicizes findings about Synthetic Echo network comprising over 200 AI-generated websites
April 14, 2025: DoubleVerify launches pre-bid video controls for TikTok brand safety
May 22, 2025: DoubleVerify issues comprehensive industry alert documenting over 100 cases of ads.txt manipulation since 2017
June 17, 2025: Meta announces generative AI advertising capabilities at Cannes Lions
June 23, 2025: HBO’s Last Week Tonight highlights platform monetization driving AI slop epidemic
July 10, 2025: WordStream publishes AI accuracy study findings
July 15, 2025: Vodafone increases news inventory 10% with AI brand suitability technology
July 17, 2025: IAS publishes comprehensive analysis of AI-generated slop sites and ad clutter impact on programmatic advertising
Ad Clutter: Low-quality websites characterized by high advertisement-to-content ratios, aggressive monetization strategies, and minimal editorial value designed primarily to generate advertising revenue. Ad clutter sites typically implement auto-refresh advertisements, autoplay videos, templated designs, and AI-generated content to maximize impression volume while providing poor user experiences. These properties represent problematic inventory for advertisers because they deliver significantly lower conversion rates, reduced attention levels, and potential brand safety risks. Ad clutter classification helps verification companies and advertisers identify and avoid inventory that consumes budgets without providing genuine business value or positive brand associations.
AI-Generated Slop Sites: Websites that utilize artificial intelligence tools to produce high-volume, low-quality content with minimal human oversight, designed specifically to capture advertising revenue rather than provide genuine user value. These sites employ automated content generation systems to create hundreds or thousands of articles daily, often plagiarizing existing material or producing formulaic content that lacks editorial standards. AI-generated slop sites represent a growing threat to advertising effectiveness because they flood programmatic marketplaces with inventory that appears legitimate but delivers poor performance outcomes. Detection methods focus on identifying linguistic patterns, content quality indicators, and technical characteristics that distinguish machine-generated content from human-authored material.
Made-For-Advertising (MFA) Sites: Websites designed exclusively to display advertisements rather than provide genuine editorial content or user value, representing the most problematic category of ad clutter inventory. MFA sites combine aggressive advertising implementations with content strategies optimized for impression generation rather than reader engagement. These properties typically conduct ad arbitrage operations, purchasing inexpensive traffic and monetizing it through excessive advertising density. MFA sites deliver significantly lower attention levels compared to quality inventory, with display advertisements receiving 7% lower attention and video advertisements receiving 28% lower attention. Advertisers waste an estimated $10 billion annually on MFA inventory that could be redirected to higher-performing quality publishers.
Pre-bid Filtering: Technology systems that enable advertisers to evaluate and exclude specific types of inventory before participating in real-time bidding auctions, providing protection against ad clutter and low-quality sites. Pre-bid filtering analyzes domain reputation, content classification, fraud indicators, technical implementation patterns, and brand safety parameters in real-time to determine inventory suitability. This approach prevents advertisers from purchasing problematic inventory rather than detecting issues after advertising spend has been committed. Pre-bid filtering operates through integrations between verification companies and demand-side platforms, providing automated analysis and exclusion capabilities that protect advertising investments while maintaining campaign efficiency and preventing exposure to ad clutter sites.
Content Classification: Automated systems that analyze website content, including text, images, and videos, to determine appropriateness for brand advertising and identify potential ad clutter characteristics. Content classification employs natural language processing, machine learning algorithms, and pattern recognition to evaluate editorial quality, detect AI-generated material, and assess advertising density relative to content volume. These systems provide real-time analysis capabilities that enable pre-bid filtering and post-campaign verification for programmatic advertising campaigns. Advanced content classification can identify plagiarized material, formulaic content structures, and technical indicators that distinguish quality editorial inventory from ad clutter sites designed primarily for revenue generation.
Attention Measurement: Advanced advertising measurement methodology that evaluates genuine consumer engagement with advertising content beyond traditional metrics, particularly valuable for distinguishing quality inventory from ad clutter sites. Attention measurement combines eye-tracking technology, machine learning analysis, and behavioral data to assess how effectively advertisements capture and maintain audience focus across different inventory types. This approach considers viewability duration, interaction patterns, scroll behavior, and engagement depth to provide insights into content effectiveness. Attention measurement reveals significant performance differences between quality sites and ad clutter sites, with ad clutter inventory consistently delivering lower attention levels that correlate with reduced conversion rates and campaign effectiveness.
Domain Reputation Analysis: Systematic evaluation of website domains to assess their legitimacy, content quality, and suitability for brand advertising, critical for identifying ad clutter operations and AI-generated slop sites. Domain reputation analysis examines registration patterns, content consistency, traffic sources, technical implementation, and historical performance data to identify potentially problematic inventory. This methodology can detect bulk domain purchases, systematic naming conventions, and geographic concentration patterns that indicate automated content operations. Domain reputation systems maintain databases of known ad clutter sites while identifying new properties that display similar characteristics, enabling real-time filtering and campaign protection across programmatic advertising platforms.
Traffic Quality Assessment: Comprehensive evaluation of website visitor behavior, engagement patterns, and interaction authenticity to identify legitimate audiences versus artificial traffic generation associated with ad clutter sites. Traffic quality assessment analyzes bounce rates, session duration, page views per session, geographic distribution, device characteristics, and behavioral consistency to detect non-human or incentivized traffic. Ad clutter sites often purchase low-quality traffic through ad arbitrage operations, resulting in visitor behavior patterns that differ significantly from organic audience engagement. Traffic quality metrics provide crucial insights for advertisers seeking to avoid inventory that generates impressions without genuine consumer interest or conversion potential.
Inventory Verification: Technology-driven processes that authenticate advertising inventory quality, detect ad clutter characteristics, and ensure brand safety compliance before and after campaign deployment. Inventory verification combines multiple analysis methodologies including content classification, domain reputation assessment, traffic quality evaluation, and technical implementation review to provide comprehensive inventory quality scores. These systems operate in real-time to support pre-bid filtering while providing post-campaign analysis for optimization purposes. Inventory verification helps advertisers distinguish between quality publisher inventory that supports campaign objectives and ad clutter sites that consume budgets without delivering meaningful business outcomes or positive brand associations.
Programmatic Supply Chain: The interconnected ecosystem of technology platforms, data providers, and verification services that facilitate automated buying and selling of digital advertising inventory, increasingly challenged by ad clutter proliferation. The programmatic supply chain includes supply-side platforms representing publishers, demand-side platforms representing advertisers, ad exchanges facilitating transactions, and verification companies providing quality assessment services. Ad clutter sites exploit supply chain dynamics by creating inventory that appears legitimate within automated systems while delivering poor performance outcomes. Understanding supply chain transparency becomes critical for advertisers seeking to avoid ad clutter inventory and redirect spending to quality publishers that provide genuine value.
Who: Integral Ad Science, programmatic advertisers, quality publishers, and digital marketing agencies confronting the proliferation of AI-generated ad clutter sites that compromise campaign effectiveness and waste advertising budgets.
What: Ad clutter sites represent low-quality websites utilizing artificial intelligence tools to generate high-volume content with aggressive advertising monetization strategies, characterized by high ad-to-content ratios, auto-refresh mechanisms, templated designs, and AI-generated content; quality inventory delivers 91% higher conversion rates than these sites.
When: The announcement occurred on July 17, 2025, amid accelerating artificial intelligence content generation with EMarketer forecasting 90% of web content may be AI-generated by 2026, while current data shows 41% of web supply published this week and 6% published within the current hour.
Where: Ad clutter proliferation affects global programmatic advertising markets, with particular concentration in regions enabling low-cost automated content production, while fraudulent networks like Synthetic Echo operate over 200 properties using deceptive domains designed to mimic legitimate news organizations.
Why: Economic incentives drive ad clutter creation as demand-side platforms treat supply as a commodity, platform monetization programs reward high-volume content production, and artificial intelligence tools enable unprecedented content generation rates at minimal cost, creating inventory that appears legitimate but delivers poor advertising performance.

source

Google’s Chrome Emergency Patch Fixes a High-Severity Bug – Users Should Update Immediately – TechRepublic

source

Samsung Galaxy A56 now available in the US for $499 with 6 years of updates – 9to5Google

Months after its siblings, the Galaxy A56 is finally going on sale in the US today, Samsung says, though you might still need to wait a little bit.
The Galaxy A56 was announced early this year as a $499 top-tier device in Samsung’s mid-range lineup. It launched internationally in late March, but Samsung was pretty vague about its US debut.
Now that day has finally arrived.
Samsung has confirmed that, starting today, the Galaxy A56 is available in the US. The device is available from Samsung directly, as well as through Amazon and Best Buy. From either retailer you’ll get a $50 gift card, and the device ships immediately. Samsung’s website offers up to $150 in trade-in credits.
Galaxy A56 has a 6.7-inch FHD+ AMOLED display, a 50MP rear camera, Android 15 out of the box with One UI 7, six years of Android OS and security updates, and IP67 dust/water resistance. It also has 128GB of storage, a 5,000 mAh battery, and 45W charging. Under the hood, it features Exynos 1580 paired with 6GB of RAM.
Colors include Graphite, “Lightgray,” and Amazon-exclusive Olive.

source

Google Core Update 2025 Analysis: Which Sites Were Affected? – Revista Merca2.0

The Google algorithm update, known as the July 2025 Core Update, has officially concluded. Over a period of more than two weeks, this update triggered intense shifts in the ranking of websites. While it’s still early to determine the full scope of this Core Update, based on the insights of some renowned SEO experts, we provide a useful analysis for marketers.
A Core Update is a substantial update to Google’s core algorithm, typically rolled out four times a year. According to Google, its purpose is to enhance the search engine’s ability to present relevant and satisfying content to users, regardless of the site type. These adjustments do not target specific websites but rather recalibrate the global ranking of all pages based on multiple signals.
For the digital marketing and SEO sector, these updates are crucial because they can completely redefine a brand’s visibility in search results. Every Core Update presents a new opportunity or challenge depending on how each site has evolved in terms of content quality, user experience, site architecture, and compliance with Google’s guidelines.
The July 2025 core update began on June 30, 2025, at 10:37 a.m. ET and concluded on July 17, 2025, at 4:18 a.m. ET, completing in a span of 16 days and 18 hours, shorter than the up-to-three-weeks window Google had anticipated.
The impact was soon noticeable: ranking volatility was detected around July 2 and intensified between July 11 and 14, according to experts such as Barry Schwartz, CEO of RustyBrick, and SEO analyst Glenn Gabe. Both agreed it was a “massive” update, and even after its conclusion, the algorithm might still show minor fluctuations.
One of the most notable phenomena after the implementation of the Google July 2025 Core Update was the partial recovery of some sites that had been heavily hit by the September 2023 Helpful Content Update (HCU). This recovery became noticeable between July 6 and 9, right in the middle of the core update’s rollout.
However, not all sites shared the same luck. According to Barry Schwartz, “many sites did not experience recoveries, and some even suffered further drops.” This situation shows that while certain content improvements can be rewarded in future updates, the recovery process is neither automatic nor guarantees immediate results.
In contrast, the sites that managed to improve their position reported increases in traffic and rankings, even securing spots in prominent Google features such as the Top Stories carousel and AI Overviews, which are becoming increasingly important in the search experience.
It is also worth noting that although Google officially declared the rollout completed on July 17, several analysts warn that the algorithm may not be fully stabilized yet, as some SEO monitoring tools still detect high volatility in rankings—a classic sign of “post-update residue.”
For SEO and marketing professionals, the first step after a Core Update is to thoroughly analyze traffic and rankings, as in the sketch below.
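As a starting point, a minimal Python sketch like the following compares average daily organic traffic before and after the rollout window. The CSV file and its column names are hypothetical, standing in for an export from whatever analytics tool you use:

```python
import csv
from datetime import date

ROLLOUT_START = date(2025, 6, 30)
ROLLOUT_END = date(2025, 7, 17)

def split_traffic(path):
    """Collect daily organic sessions before the rollout began and
    after it ended. Assumes a CSV with 'day' (YYYY-MM-DD) and
    'organic_sessions' columns -- names invented for this sketch."""
    before, after = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["day"])
            if d < ROLLOUT_START:
                before.append(int(row["organic_sessions"]))
            elif d > ROLLOUT_END:
                after.append(int(row["organic_sessions"]))
    return before, after

before, after = split_traffic("traffic.csv")
avg = lambda xs: sum(xs) / len(xs) if xs else 0
print(f"daily avg before: {avg(before):.0f}, after: {avg(after):.0f}")
```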
Google reiterates the same advice it has shared during previous updates: create useful, reliable, and people-centered content. As the company itself stated:
“There’s nothing new or special that creators need to do for this update as long as they’ve been making satisfying content meant for people. For those that might not be ranking as well, we strongly encourage reading our creating helpful, reliable, people-first content help page.”
A key aspect revealed by this update is that full recovery can take months, since Google’s systems require time to validate a site’s long-term quality. While minor improvements may be noticeable between updates, the most significant changes typically come with each new core update.
Although Google announced in December 2024 that core updates would become more frequent to enable faster recoveries, so far, only two updates have been deployed in 2025: one in March and the June-July update. However, there is an expectation that at least one more will occur before the year ends.
In Google’s words:
“This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.”
Therefore, it is essential for brands and SEO and marketing professionals to stay alert. Best practices remain unchanged: optimize content quality, improve user experience, and pay close attention to Google’s quality guidelines.
Google’s core updates not only affect traditional search rankings but also products like Google Discover, which curates news and articles based on user interests. In fact, Google has confirmed that its Core Updates impact Discover, featured snippets, and other content visibility features.
This is critical for digital marketing strategies, as appearing in Discover or featured snippets can result in a substantial increase in organic traffic, especially on mobile devices.

source

Android Earthquake Alerts: A global system for early warning – Google Research

July 17, 2025
Marc Stogaitis, Principal Software Engineer, Android
Using aggregated measurements from a global network of Android smartphones, we developed a system that detects earthquakes, delivers early warnings to users, and builds user trust with each successful alert.
Earthquakes are a constant threat to communities around the globe. While we’ve gotten good at knowing where they’re likely to strike, we still face devastating consequences when they do. What if we could give people a few precious seconds of warning before the shaking starts? Those seconds can be enough time to get off a ladder, move away from dangerous objects and take cover. For years, that’s been the goal of earthquake early warning (EEW) systems. But the expensive seismic networks on which they rely just don’t exist in many of the world’s most earthquake-prone regions.
In “Global earthquake detection and warning using Android phones”, published in Science, we show how we’ve turned the global network of Android smartphones into a powerful, pocket-sized earthquake detection system to help supplement official warnings systems. Over the last four years, the Android Earthquake Alerts system has detected thousands of earthquakes and sent alerts to millions of people in nearly 100 countries, often giving them the crucial moments they need to take cover before the shaking arrives. Evaluation of thousands of earthquakes, analysis of specific earthquake examples and direct user feedback allows the system to continuously improve its performance in key areas, like magnitude estimation, making the alerts more effective over time.
The accelerometer in an Android phone, the same sensor that flips the screen when it’s turned sideways, can also detect the ground shaking from an earthquake. If a stationary phone detects the initial, faster-moving P-wave of an earthquake, it sends a signal to our earthquake detection server, along with a coarse location of where the shaking occurred.
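The post doesn’t detail the on-device trigger, but a classic detector in seismology is the STA/LTA ratio (short-term average energy over long-term average). The Python sketch below shows that generic technique purely as an illustration of how a stationary phone might pick a P-wave; it is not Android’s actual algorithm:

```python
import random

def sta_lta_trigger(samples, sta_n=50, lta_n=1000, threshold=4.0):
    """Fire when short-term average energy exceeds the long-term
    average by `threshold` -- a generic P-wave pick, not the
    Android system's actual detector."""
    if len(samples) < lta_n:
        return False
    energy = [s * s for s in samples]
    sta = sum(energy[-sta_n:]) / sta_n
    lta = sum(energy[-lta_n:]) / lta_n
    return lta > 0 and sta / lta > threshold

quiet = [random.gauss(0, 0.01) for _ in range(2000)]          # sensor noise
shake = quiet + [random.gauss(0, 0.2) for _ in range(100)]    # P-wave arrives
print(sta_lta_trigger(quiet), sta_lta_trigger(shake))         # False True
```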
The system then quickly analyzes data from many phones to confirm that an earthquake is happening and estimate its location and magnitude. The goal is to warn as many people as possible before the slower, more damaging S-wave of an earthquake reaches them. The system sends out two types of alerts: a Be Aware alert that notifies users when lighter shaking is expected, and a Take Action alert that takes over the phone’s screen and plays a loud sound when stronger shaking is expected.
To receive alerts, users must have Wi-Fi and/or cellular data connectivity, and both Android Earthquake Alerts and location settings enabled. Alerts are sent based on a privacy-preserving, coarse location of the device. Users who do not wish to receive these alerts can turn off Earthquake Alerts in device settings.
Light green areas show the countries where the Android Earthquake Alerts System is currently detecting and delivering alerts. The areas alerted in individual earthquakes are shown in red where there was strong shaking (Modified Mercalli Intensity (MMI) 5+) and yellow for lighter shaking (MMI 3-4). The gray circles indicate other Android detections in regions where alerts were not issued. Android also delivers alerts generated by ShakeAlert in California, Oregon and Washington (dark green).
In April 2021, we began rolling out alerts generated by Android detections, starting in New Zealand and Greece. By the end of 2023, the system was active in 98 countries.
The system has now detected over 18,000 earthquakes, from small tremors of M1.9 to major quakes reaching M7.8. For the events significant enough to warn people, alerts were issued for over 2,000 earthquakes, culminating in 790 million alerts being sent to phones worldwide.
The impact has been a ~10x increase in the number of people with access to EEW systems. In 2019, only about 250 million people had access. Today, thanks in large part to the Android system, that number has increased to 2.5 billion.
The global reach of EEW has expanded dramatically with the introduction of the Android system.
One of the trickiest parts of an EEW system is estimating the magnitude of an earthquake in real-time. The magnitude tells us how big the earthquake is, which in turn determines how far the shaking will travel and who needs to be alerted. Getting this right is crucial — underestimate, and you might not warn people in danger; overestimate, and you risk sending out false alarms that erode public trust.
The challenge lies in the trade-off between speed and accuracy. The first few seconds of an earthquake provide limited data, but every second you wait to issue an alert is a second less of warning for those in the path of the shaking.
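To see why every second matters, here is the back-of-the-envelope arithmetic in Python, assuming typical crustal wave speeds (roughly 6 km/s for P-waves and 3.5 km/s for S-waves; these are textbook values I am assuming, not figures from the paper):

```python
P_WAVE_KM_S = 6.0   # typical crustal P-wave speed (assumed)
S_WAVE_KM_S = 3.5   # typical crustal S-wave speed (assumed)

def warning_time(distance_km, detect_delay_s):
    """Seconds between the alert arriving (near-instant over the
    network, after the detection delay) and the S-wave reaching
    a user at the given epicentral distance."""
    s_arrival = distance_km / S_WAVE_KM_S
    return max(s_arrival - detect_delay_s, 0.0)

# Using the 18.3 s first-alert latency reported for the Philippines quake:
for km in (50, 100, 200):
    print(km, "km ->", round(warning_time(km, 18.3), 1), "s of warning")
```

The numbers line up with the pattern described below: people near the epicenter get only a few seconds, while those farther out can get a minute or more.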
Over the last three years, we’ve continuously improved our magnitude estimation. The median absolute error of our first magnitude estimate has dropped from 0.50 to just 0.25. When we compare our system to established, traditional seismic networks, our accuracy is similar, and in some cases, even better.
Evolution of the Android Earthquake Alerts System’s earthquake magnitude error over the last three years. For one earthquake, the system does several detections as new data is gathered. The first estimate is important as it provides the maximum warning time, whereas the maximum magnitude estimate generates an alert for the largest area.
So, how well does it work in a real earthquake? Let’s look at three examples.
During a magnitude 6.7 earthquake in the Philippines in November 2023, our system sent out the first alert just 18.3 seconds after the quake started. People closest to the epicenter, who experienced the most intense shaking, received up to 15 seconds of warning. Those farther away, who still felt moderate shaking, got up to a minute of warning. In total, nearly 2.5 million people were alerted.
In a magnitude 5.7 earthquake in Nepal in November 2023, the first alert was issued 15.6 seconds after the earthquake began. People who experienced moderate to strong shaking had a warning time of 10 to 60 seconds. In this event, over 10 million alerts were delivered.
These charts show the warning time people received based on how far they were from the epicenter and the intensity of the shaking they experienced.
Moreover, in a magnitude 6.2 earthquake in Turkey in April 2025, the first alert was issued 8.0 seconds after the earthquake began. People who experienced moderate to strong shaking had a warning time of a few to 20 seconds. In this event, over 16 million alerts were delivered.
Animation showing phones detecting shaking as the 6.2 earthquake in Turkey progressed. Yellow dots are phones that detect shaking. The yellow circle is the P-wave’s estimated location and the red circle is for the S-wave. Note that phones can detect shaking for reasons other than an earthquake, which is a source of noise the system needs to handle. UTC time during the event is displayed in the upper left.
The true test of any alert system is if people find it helpful. We included a simple survey in our alerts, and the feedback has been overwhelmingly positive. Of the more than 1.5 million people who responded, 85% found the alerts to be “very helpful.”
Here are some of the key takeaways:
A snapshot of what over 1.5 million users told us about their experience with the earthquake alerts.
What’s most exciting is that our system is constantly learning and improving. The data we collect is helping us to better understand earthquakes and build more accurate prediction models. In the future, this system could not only provide warnings but also deliver rapid post-earthquake information to emergency responders, helping them to quickly assess the areas most in need.
We’re excited to continue to show how the devices in so many of our pockets can be used to create a more informed and safer world.

source

New Android 16 Beta Update Fixes 9 Big Bugs on Pixel – Droid Life

Google released the Android 16 QPR1 Beta 3 update today for its line-up of Pixel devices. This latest update brings us one step closer to a stable release (likely in September). As we get this 3rd beta build, remember that this is the big quarterly update of Android 16 that brings Material 3 Expressive to phones, and for many, is the true big Android 16 update with all of the changes to get excited about.
Android 16 QPR1 Beta 3 download: The new Android 16 QPR1 Beta 3 build will download to your device as BP31.250610.004, unless you own a Pixel 6 or Pixel 6 Pro. After mysteriously being left out of the July update, they are getting their own QPR1 Beta 3 build of BP31.250610.004.A1.
Android 16 QPR1 Beta 3 bug fixes: There are at least 9 “top” resolved issues with this update. Those 9 items actually cover numerous issues that Google was tracking from the launcher not completely displaying to notification issues and full phone restarts. This seems like a big bug fixer – here’s to hoping it improves the battery drain too.
Like previous QPR1 updates, this new Beta 3 is available for Pixel 6, 6 Pro, 6a, 7, 7 Pro, 7a, Fold, 8, 8 Pro, 8a, 9, 9 Pro, 9 Pro XL, 9 Pro Fold, 9a, and Pixel Tablet series devices. We weren’t sure about the future of the Pixel 6a, as it seemed up in the air following its battery performance update. However, Google added it back to the QPR1 program today.
You can find factory images (here) and OTA files (here), but most of you should just wait and pull the over-the-air build through the Android Beta Program.
We’ll update this as we learn more.
// Google

source

Android 16 QPR1 Beta 3 is here to install with Google's latest bug fixes – Android Authority

July 17, 2025

Google just changed the way we’re going to be looking at new Android features with the introduction of its public Canary program, and while that’s going to have a major impact on how Android 17 reveals itself to us, the company’s existing Beta programs are rolling along like always. This summer we’ve already seen lots of progress toward what we’ll be getting from the next Pixel Drop in the form of Google’s Android 16 QPR1 releases, first with Beta 1 in May, and then Beta 2 in early June. Now it’s time for one of the final updates in this series, as we get our hands on Android 16 QPR1 Beta 3.
Google shares that Android 16 QPR1 Beta 3 is now available for users registered with the Android Beta for Pixel program. Like other recent releases, this one supports devices from the Pixel 6 series on up. The build ID is BP31.250610.004, and it includes a fresh batch of bug fixes.
If you’re curious about what’s new in QPR1 Beta 3, check out this article for all the details.

source