Tweede Golf's viral post argues that async Rust never graduated beyond its initial minimum viable product. Six years after async/await stabilized, critical features like AsyncDrop are still absent, forcing developers into workarounds that undermine Rust's core RAII guarantees and ownership model.
The editorial argues that the real damage is philosophical: Rust's ownership model was built to eliminate explicit resource cleanup, yet every production async service must resort to manual `.close().await` patterns — essentially recreating Java's try-finally in a language designed to make it unnecessary. The workarounds (block_on in drop, tokio::spawn, must_use guards) all fail to compose and none are zero-cost.
The editorial distinguishes between AsyncDrop as a missing feature and cancellation unsafety as an active footgun. When a future is dropped at any .await point, partial work can be silently lost — a class of bug that hits every team using tokio::select! and that teams only discover in production, not through the compiler.
The Tweede Golf post drew hundreds of points on Hacker News, indicating strong agreement from the practitioner community. The editorial notes that every Rust team independently discovers the same workarounds for AsyncDrop, suggesting the post named a widely-felt but previously uncoalesced frustration with async Rust's stalled evolution.
Every production Rust service with a database connection pool contains some version of this pattern:
```rust
let conn = pool.acquire().await?;
// ... use conn ...
conn.close().await?; // If you forget this, or if ? early-returns above, cleanup is skipped
```
This is explicit resource cleanup. In 2026. In a language whose entire ownership model was designed to eliminate it. The reason it exists: Rust has no `AsyncDrop`. The `Drop` trait's signature is `fn drop(&mut self)` — synchronous, no `.await` allowed. Any type that needs to send a graceful-close frame, flush a buffer, release a distributed lock, or return a connection to a pool simply cannot do async work in its destructor.
Tweede Golf's viral post (352 HN points) calling async Rust a permanent MVP resonated because it named what practitioners already knew: six years after async/await stabilized, the language still forces you to choose between RAII and async I/O.
The workarounds are all ugly:
- `block_on` inside `drop()` — panics if called from within a tokio runtime, deadlocks on single-threaded executors
- `tokio::spawn` inside `drop()` — fire-and-forget with no error handling, no completion guarantee before process exit, requires `take()`-ing ownership out of `self`
- Explicit `.close().await` methods — the Java `try-finally` pattern that Rust's type system was built to make unnecessary
- `#[must_use]` destructor guards — lint-level warnings, not compiler-enforced
None of these compose. None of them are zero-cost. And every Rust team discovers them independently.
AsyncDrop is a missing feature. Cancellation unsafety is an active footgun. In Rust's async model, a future can be dropped at any `.await` point, and if it has done partial work, that work may be silently lost.
The canonical example hits every team that uses `tokio::select!`:
```rust
loop {
    tokio::select! {
        Some(msg) = rx.recv() => {
            process(msg).await;
            // If cancelled here, msg was consumed
            // from the channel but never processed.
            // It's gone.
        }
        _ = interval.tick() => {
            do_periodic_work().await;
        }
    }
}
```
The message is pulled from the channel, the other branch wins, and the message vanishes. No error. No log. No panic. Just silent data loss in production.
Buffered I/O is worse:
```rust
tokio::select! {
    _ = buf_writer.write_all(data) => {},
    _ = shutdown_signal.recv() => {
        // write_all was cancelled mid-write. Some bytes made it to the
        // underlying writer, some are in the internal buffer, some are
        // lost entirely. BufWriter's internal state is now inconsistent.
    }
}
```
`write_all` is not cancellation-safe — it consumes bytes from the input, and if cancelled, those bytes are gone. Tokio now documents cancellation safety for every async method in its API docs, which is genuinely helpful. But it shifts the entire burden to the developer. There's no `CancellationSafe` trait, no compiler lint, no type-system enforcement. You read the docs for every method in your `select!`, or you ship bugs.
Tokio's default multi-threaded runtime requires futures to be `Send`. This means every type held across an `.await` point must be thread-safe. When it isn't, you get errors like:
```
error: future cannot be sent between threads safely
  --> src/routes.rs:45:48
   |
45 |     Router::new().route("/api/data", get(handler))
   |                                          ^^^^^^^
   = help: within `impl Future ...
```
You're staring at `*mut ()` inside a transitive dependency's source code. The actual cause — maybe a `Cell`, maybe an `Rc`, maybe a tracing span capturing non-Send data — is buried layers deep. The fix might be restructuring your code to not hold the offending value across an await, but identifying *which* value and *where* is the real time sink.
This compounds at the API design level. Every trait method returning a future must decide: require `Send` (excluding single-threaded runtimes) or don't (preventing `tokio::spawn`). The ecosystem is bifurcated. Libraries pick tokio or accept a compatibility tax.
Credit where due — progress has happened:
- Async closures + `AsyncFn` traits — stabilized in Rust 1.85 (February 2025), resolving a real pain point for higher-order async code
- Async traits (native) — finally stable after years of the `async-trait` proc macro boxing futures and erasing performance benefits
- Return-position `impl Trait` in traits — stabilized in 1.75, unblocking native async trait methods
But the gaps remain structural:
| Feature | Status (May 2026) |
|---|---|
| AsyncDrop | No accepted RFC. Deep design challenges around panic unwinding and implicit `.await` insertion |
| Async iterators (`Stream` in std) | `async gen` blocks progressing, but `AsyncIterator` trait not in stable std |
| Cancellation safety enforcement | No type-system solution proposed |
| Runtime-agnostic async I/O traits in std | Absent — `tokio::io` remains the de facto standard |
The honest assessment: async closures took roughly four years from initial exploration to stabilization. AsyncDrop is architecturally harder. If it ships before 2028, that would be fast by Rust's standards.
Rust's async model does something no other systems language achieves: truly zero-cost futures. A Rust future is a state machine compiled to a struct — no heap allocation per task, no runtime thread pool required, no garbage collector involvement. A tokio server can handle millions of concurrent connections with memory usage that would make a Go runtime blush.
The defenders argue the pain is a deliberate tradeoff, not an accidental one. Rust chose to expose the hard problems (cancellation, Send bounds, pinning) rather than hide them behind runtime magic that would cost performance. The other languages that make async "easy" — Go, Kotlin, JavaScript — all pay for it in ways their users don't see until they hit scale.
This is true. It's also increasingly beside the point.
The question isn't whether Rust's async model *can* be zero-cost. It's whether shipping the syntax in 2019 and the semantics in 2026-and-counting was the right sequencing. The async/await syntax made async Rust *look* approachable while the underlying model remained expert-only. Teams adopted it based on the syntax's promise, then discovered the gaps the hard way.
Tweede Golf's "MVP" framing sticks because it names the pattern precisely: ship the minimum, promise the rest, iterate slowly while your users absorb the cost. Six years in, the iteration is real, but so is the cost.