Mozilla has formally designated Google's Prompt API as "harmful" in its standards-positions repository — the official channel where Firefox's engineering team records whether proposed web specifications are good or bad for the open web. The Prompt API, part of Chrome's broader built-in AI initiative, exposes an on-device language model to web pages through a JavaScript interface, letting developers run inference directly in the browser without calling external APIs.
The position was filed as [issue #1213](https://github.com/mozilla/standards-positions/issues/1213) in Mozilla's standards-positions repo, drawing significant attention on Hacker News with 229+ upvotes and 95 comments. Mozilla's objection isn't about AI being bad — it's about what happens when an API's behavior is fundamentally defined by its implementation rather than its specification.
Google has been pushing its built-in AI APIs through Chrome origin trials, giving developers early access to capabilities like `window.ai.languageModel.create()` to prompt a bundled model (currently Gemini Nano). The pitch is compelling on the surface: zero-latency, offline-capable AI inference with no API keys, no costs, no data leaving the device.
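As a concrete sketch — assuming the `window.ai.languageModel` surface the origin trial currently exposes, which is not standardized and may change — feature detection can be kept to a small pure helper:

```javascript
// Hypothetical helper: feature-detect the origin-trial Prompt API surface
// (an `ai` object exposing `languageModel`) without throwing where it's absent.
// Takes the global object as a parameter so it can be exercised outside a browser.
function supportsPromptAPI(root = globalThis) {
  const ai = root.ai;
  return typeof ai === 'object' && ai !== null && 'languageModel' in ai;
}
```

In a browser you would call `supportsPromptAPI(window)` before attempting `window.ai.languageModel.create()`, so non-Chrome browsers fall through cleanly.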
The technical objection from Mozilla cuts deeper than typical standards disagreements. Web APIs work because `document.querySelector()` returns the same result in Chrome, Firefox, and Safari — the Prompt API, by definition, cannot make that guarantee. A prompt that generates a useful summary in Chrome's Gemini Nano might produce garbage in a hypothetical Firefox implementation running a different model, or fail entirely in Safari. The API's output is non-deterministic not just across runs, but across vendors.
This breaks something fundamental about how the web platform works. Every existing web API specifies *behavior*: given input X, produce output Y (or at minimum, output conforming to format Z). The Prompt API specifies a *capability* — "run this text through a language model" — without being able to specify what the output should be. Mozilla's argument is that this isn't a standards failure that can be fixed with better spec writing; it's inherent to the nature of the API.
Google's counter-position has merit too. The Chrome team argues that on-device AI represents an important capability gap: developers currently have to choose between expensive cloud API calls (with latency and privacy tradeoffs) and shipping their own models via WASM or WebGPU (with enormous download sizes and complexity). A built-in model solves both problems. They point to use cases like real-time translation, smart compose, content summarization, and accessibility features that benefit from instant, free, private inference.
The deeper tension is about who gets to define the AI layer of the web platform. If the Prompt API becomes widely adopted despite Mozilla's objection, Google effectively sets the model, the capabilities, and the behavioral baseline. Other browsers either reverse-engineer compatibility with Gemini Nano's quirks or ship their own models and accept that web pages will behave differently. Neither outcome looks like the interoperable web that standards bodies exist to protect.
The Hacker News discussion surfaced a nuanced middle ground that neither side has fully addressed: what about a *constrained* AI API? Rather than exposing free-form prompting, a standardized API for specific tasks — classification, summarization, translation with defined quality metrics — could potentially be specified in a way that's testable and interoperable. Mozilla's Translation API position, notably, is more favorable, suggesting the objection really is about the open-ended nature of prompting rather than AI capabilities in browsers generally.
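To make that distinction concrete — this is a hypothetical shape, not any proposed spec — a task-scoped API could pin down a testable output contract even while the text itself stays model-dependent:

```javascript
// Hypothetical output contract for a constrained summarization API: the spec
// can specify the *shape* (a non-empty string under a length budget, plus a
// truncation flag) even though the wording varies by model.
function isValidSummaryResult(result, maxChars) {
  return typeof result === 'object' && result !== null &&
    typeof result.summary === 'string' &&
    result.summary.length > 0 &&
    result.summary.length <= maxChars &&
    typeof result.truncated === 'boolean';
}
```

A conformance test can assert this shape in every browser; no equivalent assertion exists for free-form prompting.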
If you're building with Chrome's Prompt API today, treat it the way you'd treat any Chrome-only feature: useful for progressive enhancement in controlled environments (Chrome extensions, kiosk apps, internal tools), but not something to depend on for cross-browser web applications. Any code targeting `window.ai` or the Prompt API will not work in Firefox and almost certainly not in Safari, with no timeline for that to change.
The practical architecture hasn't changed: if you need client-side AI inference that works everywhere, you're still looking at WebAssembly + WebGPU with models like ONNX Runtime Web, Transformers.js, or MediaPipe. These approaches are heavier (model downloads range from 50MB to several GB) but they're standards-based and cross-browser. For most production use cases, a thin API layer calling a cloud model remains the pragmatic choice.
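That decision tree can be sketched as a small selection function — the capability flags and backend names here are illustrative placeholders, not real library identifiers:

```javascript
// Illustrative backend selection: prefer the built-in model when present,
// fall back to a WASM/WebGPU bundle, then to a cloud API.
function chooseInferenceBackend({ hasPromptAPI, hasWebGPU, allowNetwork }) {
  if (hasPromptAPI) return 'built-in';   // Chrome's bundled model (non-standard)
  if (hasWebGPU) return 'wasm-webgpu';   // e.g. Transformers.js or ONNX Runtime Web
  if (allowNetwork) return 'cloud';      // thin API layer over a hosted model
  return 'unavailable';
}
```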
For teams evaluating on-device AI strategies, the Mozilla position is a strong signal that the Prompt API won't become a universal web standard anytime soon. Plan your abstraction layers accordingly. If you do use it, wrap it behind a capability-detection pattern that falls back to your own inference or cloud calls:
```javascript
async function summarize(text) {
  if ('ai' in window && 'languageModel' in window.ai) {
    const session = await window.ai.languageModel.create();
    return session.prompt(`Summarize: ${text}`);
  }
  return fallbackSummarize(text); // your API or WASM model
}
```
Build the fallback first, treat the Prompt API as an optimization, and you won't get burned regardless of how the standards fight plays out.
This is shaping up to be one of the defining standards battles of the AI era. Google has the leverage of 65%+ browser market share and a working implementation that developers can use today. Mozilla has the moral authority of the standards process and a technically sound argument. The most likely outcome: the Prompt API ships in Chrome, a subset of developers adopt it for Chrome-specific use cases, and the standards community eventually coalesces around task-specific AI APIs (translation, classification, summarization) that can actually be specified interoperably. The free-form prompt box in the browser? That stays a Chrome feature, not a web standard.