Google Chrome is reportedly downloading a massive 4GB AI model onto users’ devices automatically — without asking permission, notifying users, or offering a straightforward opt-out.
According to security researcher Alexander Hanff, Chrome has been silently installing Gemini Nano, Google’s lightweight on-device AI model, as a file named weights.bin. The file is stored inside the OptGuideOnDeviceModel directory within Chrome user profiles and is automatically downloaded when Chrome detects compatible hardware.
There are no permission prompts, installation notices, or warnings. For most users, the download simply happens in the background.
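Curious users can check whether the model is already on their machine. The sketch below searches the Chrome data directories typically used on each platform for the `OptGuideOnDeviceModel` directory and `weights.bin` file named in Hanff's report; the base paths are common defaults, not something the report specifies, so adjust them for your own setup.

```python
from pathlib import Path

# Typical Chrome data locations per platform (assumed defaults; adjust as needed).
CANDIDATE_DIRS = [
    Path.home() / ".config/google-chrome",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome",  # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data",      # Windows
]

def find_on_device_models(dirs=CANDIDATE_DIRS):
    """Return (path, size_in_bytes) for each weights.bin found under an
    OptGuideOnDeviceModel directory, the location named in Hanff's report."""
    hits = []
    for base in dirs:
        if not base.is_dir():
            continue
        for model_dir in base.rglob("OptGuideOnDeviceModel"):
            for weights in model_dir.rglob("weights.bin"):
                hits.append((weights, weights.stat().st_size))
    return hits

if __name__ == "__main__":
    hits = find_on_device_models()
    if not hits:
        print("No on-device model found.")
    for path, size in hits:
        print(f"{path}  ({size / 1e9:.2f} GB)")
```

Note that, per the reports above, deleting the file is unlikely to be a lasting fix, since Chrome apparently re-downloads it later.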
What Is Gemini Nano?
Gemini Nano is Google’s compact AI model designed to run directly on local devices rather than relying entirely on cloud processing. It powers several Chrome AI features, including:
- “Help me write” text assistance
- On-device scam detection
- Chrome’s Summarizer API for websites
Some of these features are already enabled by default in newer Chrome versions.
What makes the situation more frustrating for users is that deleting the downloaded model doesn’t solve the issue. Reports indicate Chrome simply downloads the file again later.
Why Users Are Concerned
For many people, a silent 4GB download is anything but trivial.
Users with unlimited high-speed internet may never notice it. However, those using metered data plans, mobile hotspots, or limited broadband connections could end up paying real costs for a feature they never requested.
In countries where internet access remains expensive or bandwidth is limited, a background download of this size can consume a large portion of a monthly data allowance almost instantly.
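To make the bandwidth cost concrete, a quick sketch of how much of a monthly allowance a single 4GB download consumes. The plan sizes used here are illustrative examples, not figures from the report:

```python
MODEL_GB = 4.0  # reported size of the Gemini Nano download

def share_of_cap(cap_gb: float) -> float:
    """Fraction of a monthly data allowance consumed by one model download."""
    return MODEL_GB / cap_gb

# Illustrative plan sizes; real caps vary widely by market and carrier.
for cap in (10, 25, 100):
    print(f"{cap} GB cap: {share_of_cap(cap):.0%} used by a single download")
```

On a 10GB mobile plan, one background download of this size eats 40% of the month's data before the user has opened a single page.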
Environmental Impact Raises Additional Questions
Hanff also highlighted the environmental impact of distributing such a large AI model at scale.
He estimates that if Gemini Nano were pushed to around 1 billion Chrome users — roughly 30% of Chrome’s global user base — the downloads alone could consume approximately 240 gigawatt-hours of energy and generate around 60,000 tons of CO2 emissions.
That estimate only accounts for downloading the model, not actually running it.
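Hanff's figures can be reproduced with back-of-envelope arithmetic. The per-gigabyte energy intensity (~0.06 kWh/GB) and grid carbon intensity (~0.25 kg CO2/kWh) below are assumptions chosen here so the calculation lands on the reported numbers; the article itself does not state which intensity values he used:

```python
USERS = 1_000_000_000   # ~30% of Chrome's user base, per the estimate
MODEL_GB = 4            # reported download size
KWH_PER_GB = 0.06       # assumed energy intensity of data transfer (kWh/GB)
KG_CO2_PER_KWH = 0.25   # assumed average grid carbon intensity (kg/kWh)

energy_kwh = USERS * MODEL_GB * KWH_PER_GB       # 240 million kWh
energy_gwh = energy_kwh / 1e6                    # -> 240 GWh
co2_tons = energy_kwh * KG_CO2_PER_KWH / 1000    # -> 60,000 tons

print(f"Energy: {energy_gwh:.0f} GWh, CO2: {co2_tons:,.0f} tons")
```

Under those assumptions the downloads alone come to 240 GWh and 60,000 tons of CO2, matching the estimate above.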
A Growing Pattern of Silent AI Installations?
This is not the first time concerns have been raised about software quietly deploying AI-related components onto users’ systems.
Hanff previously reported similar behavior involving Anthropic’s Claude Desktop application, which allegedly installed browser integration files across multiple Chromium-based browsers without clear user disclosure. In some cases, the files reportedly reinstalled themselves after removal.
These incidents are fueling broader concerns about whether tech companies are crossing a line by treating user devices as automatic deployment targets for AI features.
Possible Privacy and Legal Concerns
Hanff argues that these silent installations may conflict with European privacy regulations, including:
- The EU ePrivacy Directive
- GDPR transparency and consent requirements
While no court has ruled on these claims yet, the controversy raises important questions about user consent, transparency, and device ownership.
Critics argue that companies should not be able to install large software components onto personal devices without clear approval simply because the user installed the browser itself.
Local AI… But Not Entirely Local
Google could argue that running AI models locally improves privacy compared to cloud-based AI systems.
In theory, that’s true.
However, Hanff claims Chrome’s most visible AI feature — the “AI Mode” option appearing in the address bar — still routes requests to Google’s cloud servers instead of processing them entirely on-device.
This creates confusion for users who may reasonably assume that a locally stored AI model means their data remains fully private.
Users Want Transparency and Control
At the center of the backlash is a simple issue: user control.
People expect to decide:
- What gets installed on their devices
- How their storage is used
- How their bandwidth is consumed
- Whether removed software stays removed
Many users are questioning why modern software increasingly installs AI-related features automatically rather than asking for explicit permission first.
As AI becomes more deeply integrated into everyday software, the debate around consent, transparency, and user ownership is only likely to grow louder.
