By surfacing this story, kevcampb highlights that Atlassian changed a settings default without a consent dialog or feature announcement — burying it in admin controls most teams never touch. The framing emphasizes this as a silent opt-in rather than a transparent policy change.
Atlassian has quietly enabled a default-on setting across its Cloud products — Jira, Confluence, Trello, and Bitbucket — that allows the company to use customer data for training its AI models. The change, which affects organizations on all Cloud plan tiers, means that unless an admin actively navigates to the product settings and toggles the data collection off, your team's tickets, wiki pages, code comments, and project boards are now fair game for Atlassian's AI training pipeline.
The critical detail: this isn't a new feature announcement with a consent dialog. It's a settings default that changed, buried in admin controls most teams never touch. The setting lives under the Atlassian Intelligence configuration in the organization admin panel. For companies with multiple Atlassian products, each product's setting may need to be disabled individually.
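Because each product's toggle must be disabled individually, it is easy to miss one. A minimal tracking sketch, using hypothetical field names (Atlassian does not expose a public API for this setting, so the states below are recorded manually from the admin panel):

```python
from dataclasses import dataclass

# Hypothetical per-product record of the AI data-collection toggle.
# Field names are illustrative; verify the actual setting in admin.atlassian.com.
@dataclass
class ProductSetting:
    product: str
    ai_data_collection_enabled: bool  # state observed in the admin panel
    verified_by: str = ""             # who last checked the toggle

def pending_opt_outs(settings: list[ProductSetting]) -> list[str]:
    """Return products whose AI data collection is still enabled."""
    return [s.product for s in settings if s.ai_data_collection_enabled]

audit = [
    ProductSetting("Jira", False, "alice"),
    ProductSetting("Confluence", True),
    ProductSetting("Trello", True),
    ProductSetting("Bitbucket", False, "bob"),
]
print(pending_opt_outs(audit))  # → ['Confluence', 'Trello']
```

Keeping a record like this alongside the change ticket also gives you an audit trail showing when each opt-out decision was made and by whom.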
The story rapidly gained traction on Hacker News, accumulating 273 points — a signal that this isn't just a niche privacy concern but a structural frustration among the developer community that uses these tools daily.
### The Default-On Pattern Is Now an Industry Playbook
This is not the first time a major SaaS vendor has tried this. In August 2023, Zoom updated its terms of service to claim rights to use customer data for AI training, triggering a firestorm that forced a rapid policy reversal. In June 2024, Adobe's Creative Cloud terms update implied that uploaded content could be used for machine learning, leading to a similar backlash and clarification cycle.
Atlassian's move is arguably worse than both predecessors because it targets developer infrastructure — the repositories, project plans, internal documentation, and issue trackers that constitute an organization's operational brain. A designer's Photoshop files are sensitive; your company's entire Jira history, Confluence runbooks, and Bitbucket code review threads are a different category of exposure entirely.
The pattern is consistent: ship the default quietly, wait for someone to notice, then frame the opt-out as evidence of user choice. The problem is that "choice" is meaningless when the default is set to benefit the vendor and the toggle is hidden behind three navigation levels.
### The Compliance Angle Is Real
For any organization subject to GDPR, HIPAA, SOC 2 Type II, or FedRAMP requirements, a default-on data sharing setting is not just inconvenient — it's a potential compliance violation. GDPR in particular requires a lawful basis (in practice, explicit consent) for each new data processing purpose, and quietly changing a default does not meet that bar.
If you're a Confluence customer storing internal security documentation, architecture decision records, or incident postmortem details, those documents may now be flowing into a training pipeline you never approved. The question isn't whether Atlassian anonymizes or aggregates the data — it's whether your data governance policy permits any external processing beyond the core SaaS functionality you contracted for.
Enterprise customers on Atlassian's Premium and Enterprise plans often have specific data processing agreements (DPAs) in place. Whether this new default conflicts with those DPAs is a question every affected organization's legal team should be asking immediately.
### What the Community Is Saying
The Hacker News discussion (273 points and climbing) reveals several recurring themes. The most common reaction is fatigue — developers are tired of auditing SaaS vendor policy changes that unilaterally expand data usage. Several commenters noted they've started maintaining opt-out checklists for every SaaS tool in their stack, treating vendor AI settings the way they treat cookie banners: assume hostile defaults.
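The "assume hostile defaults" stance the commenters describe can be made concrete: any vendor in your stack without an explicitly recorded opt-out decision is treated as opted in. A minimal sketch, with hypothetical vendor names and record format:

```python
# Opt-out checklist under an "assume hostile defaults" policy:
# a vendor counts as opted in unless an opt-out decision is on record.
recorded_opt_outs = {
    "atlassian": {"decision": "opted_out", "date": "2026-02-10"},
    "zoom": {"decision": "opted_out", "date": "2023-08-15"},
}

stack = ["atlassian", "zoom", "adobe", "slack"]

def needs_review(stack: list[str], recorded: dict) -> list[str]:
    """Vendors with no recorded opt-out; treat their defaults as hostile."""
    return [tool for tool in stack
            if recorded.get(tool, {}).get("decision") != "opted_out"]

print(needs_review(stack, recorded_opt_outs))  # → ['adobe', 'slack']
```

The point of the explicit record is the same as with cookie banners: absence of a decision is not consent, it is an unreviewed default.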
Others pointed to the asymmetry of the relationship. Atlassian charges $7.75-15.25 per user per month for Cloud plans. Customers are already paying for the product. Using their data to train AI models that Atlassian will then monetize (via Atlassian Intelligence features that may eventually carry premium pricing) means customers are subsidizing their vendor's AI roadmap with both money and data.
A smaller but notable group questioned whether Atlassian's AI ambitions are even worth the trade. Atlassian Intelligence features — AI-powered search, summarization, and suggestions across Jira and Confluence — have received mixed reviews since launch, with many users finding them too generic to be useful for technical workflows.
### Immediate Action Items
Step 1: Check your settings now. Log into your Atlassian admin console at `admin.atlassian.com`. Navigate to each product's settings and look for the Atlassian Intelligence or AI data collection toggle. Disable it for every product if your organization hasn't explicitly decided to participate.
Step 2: Audit your DPA. If you have a Data Processing Agreement with Atlassian, check whether the new default conflicts with the agreed-upon data processing scope. If it does, you have grounds to escalate with your Atlassian account team.
Step 3: Add this to your SaaS governance process. Every SaaS vendor in your stack is going to try this at some point. Build a quarterly review into your security team's workflow that checks AI/ML data settings across your critical tools. This isn't paranoia — it's operational hygiene in 2026.
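The quarterly review in Step 3 is easy to automate in spirit: keep a log of when each vendor's AI/ML data settings were last checked and flag anything older than 90 days. A sketch with hypothetical vendors and dates:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # quarterly cadence

# Hypothetical log of when each vendor's AI/ML data settings were last reviewed.
last_reviewed = {
    "atlassian": date(2026, 1, 5),
    "github": date(2025, 9, 1),
    "slack": date(2025, 6, 20),
}

def overdue(log: dict, today: date) -> list[str]:
    """Vendors whose AI data settings haven't been checked this quarter."""
    return sorted(v for v, d in log.items() if today - d > REVIEW_INTERVAL)

print(overdue(last_reviewed, date(2026, 2, 1)))  # → ['github', 'slack']
```

Wire the output into whatever ticketing system you already use so overdue reviews create work items rather than sitting in a forgotten spreadsheet.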
### The Broader Architectural Question
This incident reinforces a trend that's been building for two years: the SaaS trust model is degrading. When your project management tool, your wiki, and your code review platform can unilaterally decide to use your data for purposes beyond the core service, the implicit contract of cloud software — "we host it, you control it" — breaks down.
For teams with strict data requirements, this is accelerating interest in self-hosted alternatives: Gitea and Forgejo for code hosting, Outline or BookStack for documentation, and Plane for project management. (Linear, though still a hosted service, has made explicit no-training-data commitments.) The migration cost is real, but so is the cost of perpetual vendor surveillance.
For teams that stay on Atlassian — which will be most teams, because switching costs in enterprise tooling are brutal — the lesson is to treat your SaaS admin panels like your firewall rules: review them regularly, assume defaults will change in the vendor's favor, and document every opt-out decision.
The SaaS industry is running a slow-motion experiment to find out how much data harvesting customers will tolerate before they leave. So far, the answer appears to be "quite a lot," because migration costs are high and institutional inertia is real. But each incident like this moves the needle. Atlassian will likely issue a clarification or make the opt-out more prominent — that's the playbook — but the default will have already done its work for the organizations that didn't catch it in time. The real question is whether regulators, particularly in the EU, will eventually force the industry to a default-off standard for AI training data. Until then, the burden falls on you.
Top 10 dev stories every morning at 8am UTC. AI-curated. Retro terminal HTML email.