The author set Location History to auto-delete on the shortest interval and even toggled it off at times, yet Google still produced multi-year historical location data in response to an ICE legal request. He argues the 2023 on-device Timeline migration was framed as a privacy upgrade but left behind server-side backups, derived data, and signals from linked services (Ads Data Hub, Find My Device, Fi, YouTube) that remained subpoenable in 2026.
The EFF post was submitted to HN, where it reached 1,351 points and roughly 900 comments, amplifying the argument that Google's user-facing controls and its actual data retention diverged materially. The submission framing treats the gap between promise and practice as the newsworthy core of the story.
The editorial argues the developer reflex to read this as a Google-specific scandal misses the point. Every product built on retention-policy promises accumulates backups, derived data, and cross-service signals that outlive the user-facing deletion control — meaning the gap between marketing language and data-lifecycle reality is now being measured in deportation proceedings industry-wide.
Citing Google's transparency report, the editorial notes geofence warrants have narrowed since the 2023 Timeline migration while conventional account-targeted requests continue a decade-long climb. This suggests the on-device shift did remove one attack surface but simply redirected law enforcement toward the server-side residue that the privacy framing obscured.
An EFF staffer published a first-person account on Deeplinks describing how Google — despite years of public commitments around Location History auto-delete and the 2023 shift toward on-device Timeline storage — still had enough of his historical location data on its servers to respond to a law-enforcement request from U.S. Immigration and Customs Enforcement. The author, a U.S. citizen and longtime Google customer, had set Location History to auto-delete on the shortest available interval and had, at various points, toggled the feature off entirely. He received a notification that his account data had been produced in response to legal process. The recipient was ICE.
The post is narrow in its factual claims and wide in its implications. Google's 2023 announcement that Timeline data would move to the device was framed as a privacy upgrade; the EFF account argues, with receipts, that the migration left behind a multi-year server-side trail that was still subpoenable in 2026. The author is careful not to allege that Google violated a specific contract term — Location History's settings page has always carried caveats about backups, linked services, and legal obligations. The argument is that the marketing promise ("you're in control," "auto-delete means deleted") and the engineering reality (backups, derived data, Ads Data Hub, Find My Device, Fi, YouTube watch history with location signals) were never the same thing, and that the gap is now being measured in deportation proceedings.
Google has not publicly responded to the specific account at time of writing. The company's most recent transparency report shows U.S. government requests for user data continuing their decade-long climb, with geofence warrants narrowing after the 2023 Timeline migration but conventional account-targeted requests rising.
The developer reflex is to read this as a Google problem. It isn't. Every product built on "we'll delete it after N days" is one well-drafted discovery request away from breaking that promise, because retention policy is a business process, not a cryptographic property. Backups exist. Replicas exist. Derived features — the anonymized heatmaps, the "popular times" graphs, the ML training corpora — exist and are often exempt from the user-facing retention slider because they're considered de-identified. Legal process doesn't care about your marketing copy. It cares about what's on disk somewhere your lawyers can reach.
The second-order point is about threat-model drift. In 2018, when Google first rolled out auto-delete, the implicit threat model was advertiser creepiness and the occasional jealous spouse with a subpoena. In 2026, the threat model is an executive branch that has publicly committed to large-scale removal operations and is actively shopping for location, travel, and relationship data at every major platform. A privacy feature designed against a 2018 threat model is not a privacy feature against a 2026 one, and nobody at the vendor is obligated to tell you when the threshold quietly moves.
The third point is structural. On-device processing — Apple's differential privacy, Google's 2023 Timeline move, the general "edge ML" drumbeat — is genuinely better than server-side collection, but only when the migration is complete and the server-side copies are actually destroyed. The EFF account suggests Google's migration was partial: new Timeline writes went to the device, but the historical server-side corpus lingered long enough to be produced years later. That's the worst of both worlds from a user's perspective: the cognitive comfort of "it's on my phone now" plus the legal exposure of a server-side archive you didn't know still existed.
Community reaction on the HN thread (1,351 points, ~900 comments) split predictably. One camp argued the author should have known better — use GrapheneOS, de-Google your phone, don't opt into surveillance infrastructure and then act surprised. The more interesting camp, including several current and former Google engineers posting under their real names, pointed out that the retention plumbing inside Google is genuinely complicated, that the settings UI massively understates what "delete" means across 200+ internal systems, and that the honest answer to "is my data really gone?" is "probably mostly, eventually, for most definitions of gone." That's not a quote you can put on a settings page.
If you ship a product that stores user data and offers any kind of retention or deletion control, this story is your next postmortem waiting to happen. Three concrete moves:
Audit what "delete" actually does in your system. Write down every place a user record can live: primary DB, read replicas, analytics warehouse, search index, cache, object storage, backups, logs, third-party processors, ML training sets, derived aggregates. For each, document the actual deletion latency and whether deletion is cryptographic (key destruction), logical (tombstone), or physical (overwrite). Most teams discover the honest answer is "we tombstone the primary, the warehouse copy persists for 90 days, backups persist for 7 years, and the ML features were already baked into a model we can't un-bake." Put that on the settings page, not a marketing claim.
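The audit doesn't need tooling to start; a structured inventory of every store, its deletion mode, and its worst-case latency is enough to surface the honest claim. A minimal sketch in Python — every store name and latency figure below is a placeholder assumption, not a description of any real system:

```python
from dataclasses import dataclass
from enum import Enum

class DeletionMode(Enum):
    CRYPTOGRAPHIC = "key destruction"
    LOGICAL = "tombstone"
    PHYSICAL = "overwrite"

@dataclass
class DataStore:
    name: str
    mode: DeletionMode
    latency_days: int        # worst-case days until the record is actually gone; -1 = never
    on_settings_page: bool   # does the user-facing copy mention this store?

# Hypothetical inventory for a typical SaaS stack; replace with your own systems.
INVENTORY = [
    DataStore("primary_db",          DeletionMode.LOGICAL,     0, True),
    DataStore("read_replicas",       DeletionMode.LOGICAL,     1, False),
    DataStore("analytics_warehouse", DeletionMode.PHYSICAL,   90, False),
    DataStore("cold_backups",        DeletionMode.PHYSICAL, 2555, False),  # ~7 years
    DataStore("ml_training_set",     DeletionMode.LOGICAL,    -1, False),  # baked into a model
]

def honest_latency(inventory: list[DataStore]) -> int:
    """Worst-case days until every copy is unrecoverable; -1 means never."""
    days = [s.latency_days for s in inventory]
    return -1 if -1 in days else max(days)

if __name__ == "__main__":
    d = honest_latency(INVENTORY)
    print("some derived copies are never deleted" if d < 0
          else f"fully deleted within {d} days")
```

The output of this exercise is the sentence that belongs on your settings page, and for most teams it reads nothing like the one that's there now.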
Treat legal process as a first-class threat in your design reviews. If your architecture's answer to a subpoena is "we'd have to comply," then your users' answer to "is my data safe" is "no, it isn't" — and the only durable fix is to not hold the data, or to hold it under keys you can't produce. Client-side encryption with user-held keys, zero-knowledge architectures, and aggressive minimization aren't privacy theater; they're the only designs that survive an adversarial legal environment. Signal's sealed-sender and Apple's Advanced Data Protection are the reference implementations. Everyone else is running on trust.
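For illustration, a minimal sketch of the user-held-keys pattern in Python, using the `cryptography` package's scrypt and Fernet primitives. This is the shape of the design, not Signal's or Apple's actual protocol: the key is derived on the device from a user secret and never leaves it, so the server can only ever produce ciphertext.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_user_key(passphrase: bytes, salt: bytes) -> Fernet:
    """Runs on the client only: the derived key never leaves the device."""
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(passphrase)))

# Client: encrypt before upload.
salt = os.urandom(16)  # stored alongside the ciphertext; not a secret
blob = derive_user_key(b"user passphrase", salt).encrypt(
    b'{"lat": 37.77, "lng": -122.41}')

# Server: persists (salt, blob). A subpoena can only ever yield
# ciphertext, because the decryption key exists solely on the client.
server_record = {"salt": salt, "blob": blob}

# Client, later: re-derive the key and decrypt locally.
plaintext = derive_user_key(
    b"user passphrase", server_record["salt"]).decrypt(server_record["blob"])
assert plaintext == b'{"lat": 37.77, "lng": -122.41}'
```

The design cost is real: lose the passphrase and the data is gone, with no account-recovery escape hatch. That cost is the feature.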
Stop writing retention promises you can't cryptographically enforce. "Auto-delete after 3 months" is a commitment your ops team makes to your users that your legal team can override on a Tuesday. If the product value requires the promise, build the promise into the data model — encrypt each record under a per-period key, rotate and destroy the keys on schedule, and the data is mathematically unrecoverable when the timer expires. This is more work than a cron job that runs `DELETE FROM events WHERE created_at < NOW() - INTERVAL '90 days'`. It is also the only version of the promise that's actually true.
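A sketch of that crypto-shredding pattern, assuming one symmetric key per retention period; the in-memory key store, monthly granularity, and event format are all placeholders (in production the keys would live in a KMS with audited destruction):

```python
from datetime import datetime, timezone

from cryptography.fernet import Fernet

period_keys: dict[str, bytes] = {}  # stand-in for a real KMS

def current_period() -> str:
    """One retention bucket per calendar month."""
    return datetime.now(timezone.utc).strftime("%Y-%m")

def encrypt_event(payload: bytes) -> tuple[str, bytes]:
    """Every record is encrypted under its period's key before it is stored."""
    period = current_period()
    if period not in period_keys:
        period_keys[period] = Fernet.generate_key()
    return period, Fernet(period_keys[period]).encrypt(payload)

def expire_period(period: str) -> None:
    """The retention promise: destroy the key, and every ciphertext from that
    period (primary, replicas, backups) becomes dead data."""
    period_keys.pop(period, None)

period, blob = encrypt_event(b'{"event": "page_view"}')
expire_period(period)
# Decrypting blob now fails everywhere: the key no longer exists, so no
# backup, replica, or subpoena can bring the plaintext back.
```

The trade-off is granularity: deletion happens per period rather than per record, which is exactly what "auto-delete after 3 months" promises anyway.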
The next 18 months will produce more of these stories, not fewer. Geofence warrants were the 2020–2023 flashpoint; server-side retention archives are the 2026 one, and the pattern will repeat for every "we moved it to the device" announcement that didn't also come with a verifiable destruction ceremony for the server-side history. The practitioner move is to assume every SaaS retention slider is, at best, advisory — and to build your own products so that the slider is backed by math, not by policy. Users won't read your privacy page. They will, eventually, read the EFF post about you.
The linked Google policy states: "We won’t give notice when legally prohibited under the terms of the request." The post states that his lawyer has reviewed the subpoena, but doesn't mention whether or not it contained a non-disclosure order. That's an important detail to address.
Weird everyone's focusing on privacy and Google, not the actual insanity of a government targeting people who are legally allowed to be in the US. You can try to find a way to keep things private, and many of the people on HN likely have the capability to do so. But hiding from your government...
I still don't understand. Who gave ICE such power, and who is ordering them to do all this? To me, ICE's actions are similar to those of a private army.
> While ICE “requested” that Google not notify Thomas Johnson, the request was not enforceable or mandated by a court

Sounds like Google stopped caring. But... why on earth do the people filing an administrative subpoena not have to notify the interested parties too? Why is it Google's responsibility?
The First Amendment applies to everyone on US soil, not just citizens. That’s settled law. The government can revoke visas for legitimate immigration violations, but it’s not allowed to use immigration machinery as a pretext to punish political expression. That’s exactly what they are doing.