NetSecOps researchers documented how the Phantom Pulse RAT required no special permissions beyond what every Obsidian community plugin already receives: unrestricted filesystem access, network access, and arbitrary code execution. Their analysis traces the attack to Obsidian's architectural decision to run community JavaScript in the same Electron process with no sandbox, no permission model, and no capability restrictions.
The editorial frames this as the 'original sin of every plugin ecosystem' — granting all-or-nothing access by default. It emphasizes that a plugin claiming to render Mermaid diagrams has the same access as one that explicitly asks to read the filesystem, because both get everything, making malicious behavior indistinguishable from legitimate plugin operation at the permissions level.
The editorial argues that compromising Obsidian is more damaging than typical supply chain attacks because developers store architecture decisions, API keys, interview notes, startup ideas, and personal journals there. The point is that this attack doesn't just give adversaries code — it gives them the full context around how and why systems are built, plus sensitive personal data.
The Hacker News submission gained 124 points and 66 comments, reflecting significant community engagement. The editorial characterizes the developer reaction as 'a mix of alarm and resignation,' with the consensus being that a supply chain attack through an unsandboxed plugin ecosystem like Obsidian's was only a matter of time.
Security researchers at NetSecOps discovered that an Obsidian community plugin was abused in a targeted campaign to deploy the Phantom Pulse RAT — a remote access trojan designed for persistent, stealthy access to compromised machines. The attack was documented after unusual command-and-control traffic was traced back to an Obsidian plugin running inside a developer's note-taking environment.
The campaign leveraged Obsidian's community plugin ecosystem, which allows users to install third-party JavaScript plugins that run with full Node.js privileges. The malicious payload didn't require any special permissions beyond what every Obsidian community plugin already gets: unrestricted filesystem access, network access, and the ability to execute arbitrary code. The Phantom Pulse RAT established persistence, exfiltrated data, and opened a reverse shell — all from inside a Markdown editor.
The story picked up significant traction on Hacker News (score: 124), where the developer community reacted with a mix of alarm and resignation. The consensus: this was inevitable.
This is not the first plugin supply chain attack, and it won't be the last. But it's notable for *where* it happened. Obsidian has become the default second brain for a substantial slice of the developer population. It's where people store architecture decisions, API keys (yes, really), interview notes, startup ideas, and personal journals. Compromising Obsidian doesn't just give you code — it gives you context.
The attack surface here is architectural. Obsidian's plugin system runs community JavaScript in the same Electron process as the main app. There is no sandbox. There is no permission model. There is no capability restriction. A plugin that claims to render Mermaid diagrams has the same access as a plugin that explicitly asks to read your filesystem — because both get everything by default.
This is the original sin of every plugin ecosystem that chose developer convenience over security boundaries: VS Code extensions, Chrome extensions (pre-Manifest V3), npm packages, and now Obsidian plugins. The pattern is always the same: start with an open, permissive model to bootstrap an ecosystem, accumulate millions of users, and then retrofit security after the first major incident.
The Phantom Pulse RAT itself is a capable piece of malware. Remote access trojans in this class typically include keylogging, screen capture, credential harvesting, and reverse shell capabilities. Deployed inside a knowledge management tool that developers leave running all day, the dwell time potential is enormous. Most developers don't monitor their note-taking app for outbound C2 traffic.
What makes this attack particularly effective is the trust model. Developers who would never run an unknown npm package without auditing it will casually install an Obsidian plugin with 200 GitHub stars because "it's just a note-taking plugin." The cognitive gap between 'this is a development tool' and 'this is my personal notes app' is exactly the gap attackers exploit.
Let's be precise about the failure modes across plugin ecosystems, because this isn't an Obsidian-specific problem — it's a design pattern problem:
- **VS Code extensions** run in a separate extension host process but still get broad filesystem and network access. Microsoft added a workspace trust model in 2021 after multiple malicious extensions were found on the marketplace. The trust model helps, but extensions can still exfiltrate data from trusted workspaces.
- **Chrome extensions** went through a painful migration to Manifest V3, which restricts background scripts and enforces declarative permissions. The transition took years and broke legitimate extensions. But it meaningfully reduced the attack surface.
- **npm packages** remain the Wild West. Every `node_modules` directory is a supply chain liability, and the ecosystem has seen repeated attacks (event-stream, ua-parser-js, colors/faker). The difference: npm attacks target build pipelines. Obsidian plugin attacks target the developer's personal machine directly.
- **Obsidian's community plugin model** currently sits where Chrome extensions were pre-Manifest V3: powerful, permissive, and relying primarily on community review rather than technical controls. Obsidian does review plugins submitted to their community directory, but the review process is human-driven and cannot catch sophisticated obfuscated payloads or plugins that update themselves post-installation.
The lesson for platform builders: if your plugin model grants ambient authority — full access by default, no permission prompts, no sandboxing — you have built a malware delivery platform. The only question is when it gets used as one.
Audit your Obsidian plugins now. Open Settings → Community Plugins and review every installed plugin. For each one: Is the source repo still active? Does the maintainer have a track record? Is there a recent, unexplained update? If you have plugins you installed once and forgot about, disable them. The attack surface of a plugin you don't use is pure downside.
Check for IOCs. Look for unexpected outbound network connections from the Obsidian process. On macOS, `lsof -i -P | grep Obsidian` will show active connections. On Linux, `ss -tp | grep -i electron` can help. Any connection to an IP or domain you don't recognize warrants investigation. If you find something suspicious, isolate the machine and rotate credentials — especially SSH keys, cloud provider tokens, and API keys stored in your vault.
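To triage that command output programmatically, a small filter helps. This sketch is mine, not from the NetSecOps report, and the `ESTABLISHED` check is a heuristic:

```javascript
// Sketch: reduce `lsof -i -P` style output to lines that look like live
// Obsidian connections worth investigating. The ESTABLISHED filter is a
// heuristic; listening sockets and transient DNS lookups are ignored.
function obsidianConnections(lsofOutput) {
  return lsofOutput
    .split("\n")
    .filter((line) => /obsidian/i.test(line) && /ESTABLISHED/i.test(line));
}
```

Each surviving line still needs a human look: a connection to Obsidian's own sync or update hosts is expected, while an unfamiliar IP is what warrants isolating the machine.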
Treat your notes app as part of your threat model. If you store any sensitive information in Obsidian — and most developers do — it deserves the same scrutiny as your CI/CD pipeline. Consider:

- Keeping secrets out of your vault entirely (use a proper secrets manager)
- Running Obsidian with network restrictions (Little Snitch on macOS, firewall rules on Linux)
- Pinning plugin versions and reviewing changelogs before updating
- Using Obsidian's restricted mode for vaults containing sensitive data
For teams, this incident is a reminder that developer workstation security is supply chain security. A compromised developer laptop with access to production credentials, internal wikis, and deployment pipelines is a far more valuable target than a compromised build server. And the entry point was a note-taking plugin.
Obsidian will likely respond to this incident by tightening their plugin review process, and they should. But the real fix is architectural: plugins need a permission model, a sandbox, or both. The Electron ecosystem makes this hard — sandboxing arbitrary JavaScript that needs filesystem access is a genuinely difficult engineering problem. But Chrome solved it (painfully) with Manifest V3, and the developer tool ecosystem needs to follow. Until then, every extensible developer tool with an open plugin model is one social engineering campaign away from becoming a malware vector. The attackers know where developers spend their time. Your notes app is no longer neutral territory.
I don't know when Obsidian gathered the hate I'm seeing here, but 'bad plugins' is a failure mode of most everything that has plugins. Personally it feels similar to being mad at Windows if you were to install an exe someone emailed you and it turned out to be a virus. You can inst
This is a misleading headline. It makes it seem like another supply chain attack where some good plugin was taken over and used to deliver malware. That's not the case here. Victims are invited to collaborate on a synced vault which comes preloaded with a non-official plugin that delivers the RAT.
I really like Obsidian. I use it every day and I don't use any community plugins because the permissions aren't up to snuff. I hope for a day where a plugin defines what it will need and that gets presented to me as a user. I have to imagine the Obsidian team is going to respond seriously t
> The victim is prompted to enable the "Installed community plugins" synchronization feature.

Obsidian has the proper protections in place to prevent this type of attack, and the victims are being convinced to ignore them. This is just a successful social engineering event. I hate to see
Obsidian CEO here. There is a major update coming soon for plugin security. I think it will address many of the concerns people have raised in this thread. It's a hard problem but we are working on it. That said, the headline is misleading. This article is about a social engineering attack that