The New Yorker published a deeply reported profile of Sam Altman — "Sam Altman May Control Our Future — Can He Be Trusted?" — timed to the magazine's April 13, 2026 issue. The piece landed on Hacker News and racked up 1,355 points, making it one of the highest-scoring non-technical stories on the platform this year.
The profile arrives at a specific inflection point. OpenAI completed its conversion from a nonprofit research lab to a for-profit entity in early 2026, a move that had been contested by state attorneys general, former board members, and Elon Musk's legal team. The conversion effectively removed the last structural constraint on Altman's authority that wasn't appointed by Altman or aligned with his financial interests. The New Yorker piece examines not just the man but the architecture of power around him — who checks Sam Altman, and whether those checks are load-bearing or decorative.
The timing matters. OpenAI reportedly crossed $10 billion in annualized revenue in late 2025, with a valuation north of $300 billion. ChatGPT has over 200 million weekly active users. The company that started as a research nonprofit in 2015 with a mission to ensure AI "benefits all of humanity" is now one of the most valuable private companies in history, and its CEO survived a board firing that would have ended most executives' careers.
The Hacker News score is the story within the story. Developer communities don't typically send governance profiles to 1,300+ points. Product launches, yes. Security disclosures, yes. A magazine piece asking whether a CEO can be trusted? That's new territory. It suggests the developer community's relationship with OpenAI has shifted from evaluating capabilities to evaluating counterparty risk.
The profile reportedly catalogs a pattern that practitioners have observed in fragments: Altman's removal and reinstatement by the OpenAI board in November 2023, the subsequent reconstitution of that board with members more aligned to the company's commercial trajectory, the quiet unwinding of the profit cap that was supposed to limit investor returns to 100x, and the final conversion that dissolved the nonprofit's control entirely. Each step, taken individually, had a plausible business rationale. Taken together, they describe a systematic removal of constraints.
Critics — including former OpenAI researchers, the board members behind the November 2023 firing, and external governance experts — have argued that Altman's consolidation of control is precisely the scenario OpenAI was designed to prevent. The nonprofit structure existed because the founders believed AGI development was too consequential to be governed by market incentives alone. That structure is now gone. The new board, while populated with credentialed individuals, was assembled after the coup failed, by the person the coup targeted.
Altman's defenders — and there are many, including Microsoft CEO Satya Nadella and the majority of OpenAI employees who threatened to follow Altman to Microsoft during the 2023 crisis — make a different case. They argue that Altman is among a tiny number of people with both the technical intuition and operational skill to navigate AGI development at scale. The defense is essentially that the right benevolent dictator is better than a committee, especially when the committee already proved it couldn't execute a leadership transition without nearly destroying the company. This is a coherent position. It's also unfalsifiable until the moment it isn't.
For developers building on OpenAI's APIs, the governance question has a concrete dimension that the New Yorker profile doesn't fully explore. When your production stack depends on a single AI provider, you're not just making a technical bet — you're making a governance bet. You're betting that the company's priorities will remain aligned with your needs, that pricing will stay rational, that API terms won't shift under you, and that the company won't pivot in ways that break your product.
OpenAI has already demonstrated willingness to make abrupt strategic shifts: the GPT Store launch and quiet de-emphasis, the DALL-E pricing restructuring, the shifting API rate limits and model deprecation timelines. None of these were malicious. All of them imposed real costs on developers who had built around assumptions that changed. In a company with strong independent governance, these decisions would face institutional friction. In a company where the CEO's authority is functionally unchecked, the speed of strategic pivots is limited only by the CEO's attention span.
This isn't an argument that Altman is making bad decisions. It's an argument that the error-correction mechanism is gone. Every system — software, organizational, democratic — needs feedback loops that can override the primary decision-maker. OpenAI's original structure provided one. The current structure provides shareholder lawsuits and public opinion, neither of which operate on a useful timescale for a company moving as fast as OpenAI.
The practical takeaway isn't "stop using OpenAI." Their models remain best-in-class for many workloads, and switching costs are real. But the governance shift should change how you architect.
Treat OpenAI as a vendor, not a platform. Abstraction layers between your code and any single AI provider aren't premature optimization anymore — they're risk management. If you haven't built a provider-switching capability, the New Yorker profile is your reminder that the company you're depending on just removed its last internal guardrail.
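As a minimal sketch of what that abstraction can look like, assuming a Python stack: the `ChatProvider` interface, the self-hosted endpoint URL, and the model names below are illustrative placeholders, while the OpenAI request shape follows its public chat completions API.

```python
# A minimal sketch of a provider abstraction layer. Class and method
# names are illustrative, not from any particular SDK.
from abc import ABC, abstractmethod
import os
import requests


class ChatProvider(ABC):
    """Application code depends on this interface, never on a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": "gpt-4o",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


class SelfHostedProvider(ChatProvider):
    """Adapter for an open-weight model behind an OpenAI-compatible
    server (e.g. vLLM); URL and model name here are placeholders."""

    def complete(self, prompt: str) -> str:
        resp = requests.post(
            "http://localhost:8000/v1/chat/completions",
            json={"model": "local-model",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


def get_provider() -> ChatProvider:
    # The only place vendor identity appears; switching is a config change.
    if os.environ.get("LLM_PROVIDER") == "self_hosted":
        return SelfHostedProvider()
    return OpenAIProvider()
```

Everything downstream calls `get_provider().complete(...)`. The day you need to switch vendors, the blast radius is one function, not your whole codebase.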
Price your counterparty risk. When evaluating build-vs-buy or OpenAI-vs-alternatives, the governance structure of your AI provider should be a weighted factor, the same way you'd evaluate the bus factor of an open-source dependency. A $300B company controlled by one person with no effective board check has a different risk profile than an open-weight model you can self-host.
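One way to make that weighting explicit, as a back-of-the-envelope sketch: the factors, weights, and scores below are illustrative assumptions, not an established methodology or actual measurements.

```python
# Hypothetical weighted-scoring sketch for pricing counterparty risk.
# All factors, weights, and scores are illustrative assumptions.

def counterparty_risk(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average on a 0-10 scale; higher means riskier dependency."""
    total = sum(weights.values())
    return sum(weights[f] * scores[f] for f in weights) / total

weights = {
    "governance_concentration": 0.35,  # is one unchecked person in control?
    "pricing_stability":        0.25,  # history of abrupt repricing?
    "deprecation_churn":        0.20,  # how fast do models and APIs sunset?
    "exit_cost":                0.20,  # how hard is it to switch or self-host?
}

# Example inputs for two hypothetical options under comparison.
hosted_vendor = {"governance_concentration": 9, "pricing_stability": 6,
                 "deprecation_churn": 7, "exit_cost": 7}
self_hosted_open_weights = {"governance_concentration": 2, "pricing_stability": 1,
                            "deprecation_churn": 3, "exit_cost": 2}

print(f"hosted vendor:     {counterparty_risk(hosted_vendor, weights):.1f}/10")
print(f"open-weight model: {counterparty_risk(self_hosted_open_weights, weights):.1f}/10")
```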
Watch the talent signals. The New Yorker piece reportedly details several high-profile departures from OpenAI's safety and alignment teams. When senior researchers leave a company because they believe governance is insufficient, that's not gossip — it's a leading indicator. The same way you'd monitor GitHub commit velocity on a critical dependency, monitor the departure rate of OpenAI's safety-focused staff.
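There's no API for staff departures, so that half stays manual. But the commit-velocity half of the analogy is scriptable. A minimal sketch against GitHub's public commits API, with the repo choice as an illustrative stand-in for whatever dependency is critical to you:

```python
# Illustrative sketch: watching commit velocity on a critical dependency
# via GitHub's public commits API. Reads only the first page (up to 100
# commits), which is enough for a coarse health signal.
from datetime import datetime, timedelta, timezone
import requests

def commits_last_30_days(owner: str, repo: str) -> int:
    since = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"since": since, "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return len(resp.json())

print(commits_last_30_days("openai", "openai-python"))  # example repo
```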
The 1,355 Hacker News points on a trust profile — not a model release, not a benchmark, not a security bug — mark a phase transition in how the developer community relates to AI companies. We've moved from the "what can it do" era to the "who controls it" era. That shift is healthy, overdue, and likely irreversible. The question isn't whether Sam Altman can be trusted. The question is whether "trust this person" is an acceptable architecture for the most consequential technology since the internet. For practitioners, the answer should be the same one we'd give for any other single point of failure: probably not, so build accordingly.