> In Erlang, it feels so elegant – you write your code in a linear way, no coloring of functions, no async/await, and it still works in a high performant, I would say, scheduled way with [inaudible]. My idea was “how can I have this experience in Rust?”.
I think long term, we will view async, like we view the near/far/huge pointers - something that was required in the past for performance but is now obsolete.
With Java’s Project Loom, kernel improvements in the efficiency of threading, and projects like this, using async for concurrency will not be worth it for the vast majority of programmers.
I tend to agree. While async/await (and callbacks and promises before it) solves an efficiency problem via compiler magic and language extensions, it's not obvious whether it's necessarily the best solution to the problem. The approach that Lunatic seems to take is building another runtime in userspace which takes care of scheduling while allowing methods to continue to look synchronous. A stackful coroutine framework would probably have allowed for the same without requiring WASM, so I'm kind of wondering whether that was considered too.
But ideally we would just use kernel threads, and maybe look into fixing performance for the use cases where they don't work that well at the moment (which are not that many, actually!). It's so much simpler for most programmers to understand, tools (like debuggers, profilers, etc.) just work, and it suffers much less from interoperability issues.
> While async/await (and callbacks and promises before it) solves an efficiency problem via compiler magic and language extensions, it's not obvious whether it's necessarily the best solution to the problem.
The nice thing about Rust is that if kernel threads solve your problem, you can just use kernel threads. Some common reasons for using async, however, include:
1. Handling a larger number of connections than the kernel can support using threads.
2. Writing co-routines that operate over streams. This is sort of the in-process version of Unix CLI tools communicating by pipelines. For various reasons, it often seems to be much better handled in async frameworks.
3. Using networking libraries that happen to be async. This is actually fairly common in Rust, and it's arguably not a good reason to use async. On the other hand, Rust has tools for wrapping async code in a sync API, and several of the most popular async libraries do this.
Rust, however, does force you to choose between kernel threads and async coroutines. Rust does not support transparent green threads. This was an explicit design decision, largely driven by two goals:
- Rust lives extremely "close to the machine".
- Rust prefers to make the costs of operations explicitly visible.
Both of these concerns pushed heavily towards removing Rust's original green thread runtime support. However, these two concerns are very specific to languages like Rust that focus on making expensive things visible. Other languages which are more forgiving of runtime overhead would probably be better served by supporting transparent green threads.
I don't think you should add something as complicated as async/await purely for efficiency reasons, but it's good for expressivity too. Coroutines are good; I think you'd end up inventing coroutines and CPS (aka calling a callback at the end of your function instead of returning) just because they're useful. But once you have them, they're unsafe and unstructured, i.e. you can forget to call them when you meant to.
Async/await does more than just concurrency; it explodes your function into a rather complicated state machine, but at the same time gives you structured coroutine control flow that prevents some kinds of bugs.
> once you have them, they're unsafe and unstructured i.e. you can forget to call them when you meant to.
At least in the case of asymmetric coroutines, such as what Lua supports, this is absolutely not the case. They're very structured--the caller invokes the callee, which always yields back to the caller. It's almost the same as simply calling another function. Yes, objects can persist beyond the invocation lifetime, but their lifetime is scoped to the lifetime of the coroutine object itself, which the caller manifestly holds. Async Rust is effectively built on asymmetric coroutines, AFAIU. The difficulties and differences lie in stack management--specifically, where and how that occurs, and how those details leak.
> Async/await does more than just concurrency; it explodes your function into a rather complicated state machine
All compilers explode functions into state machines. The question is how and where that state is managed. Non-async Rust, like many compiled languages, heavily relies on the underlying environment (hardware, operating system, ABI, etc) to set and restore stack and instruction pointers. A language like Lua uses virtual stacks, which are VM data structures that operate independently from the underlying "C stack". Async Rust sits somewhere in the middle.
Async Rust is fundamentally a practical compromise and has little to do with safety, per se. If anything, managing lifetime guarantees might have been easier if Rust was able to push stack management down further in the runtime model, separating concerns more fully. Doing so would have complicated FFI interoperability, however, so it was a non-starter. Much the same is true for Swift. But if you completely control the runtime environment, such as through a VM or gating FFI access through a bridge, then it's difficult to not choose coroutines.
My comment was about async/await as an evolution of other "async models", compared to just using native threads - which is mostly about efficiency.
It's not so much about comparing an async callback-based solution to async/await - which, as far as I understand, is what your comment is about. I agree that directly standardizing async/await or coroutines instead of a lower-level abstraction (callbacks, promises, etc) can improve both efficiency and ease of use.
- The syntax isn't as good. (this is more like a lot of minor annoyances than one big point, but you can see it discussed in the Swift pitches)
- It's not visible to compiler optimizations, so the compiler can't optimize the coroutines.
- It's not visible to the OS scheduler, or a green thread scheduler. Any pattern where you start an async task and then later wait on it leads to priority inversions, because until your more important thread is waiting on it, the scheduler doesn't know it's going to, so it can't inherit the priority.
> A stackful coroutine framework would probably have allowed for the same without requiring WASM
Interestingly Rust actually had this built in originally, but it was taken out of the language since it could be added back in as a library, and the devs thought it was important that Rust did not hide any costs from the user.
Which I can certainly buy. Especially once you consider use cases like embedded development. That said, it'd also suuuuure be nice to be able to opt-in to something like that and avoid the async headache.
I fully agree with the decision to remove the mandatory runtime - which allows Rust to run everywhere.
The quoted sentence was more about the solution that Lunatic took to enable "synchronously looking code" - they apparently opted for WASM as an additional runtime layer to accommodate user-space scheduling. But it might have been possible to achieve similar goals by using stackful coroutines/fibers (similar to boost::coroutine, boost::context, etc).
I guess there might be a win in using WASM due to them being able to insert cooperative yielding instructions into the generated code. But the tradeoff is that WASM raw execution speed is lower than native.
It increases efficiency, in the same sense as prior solutions to the same problem increased efficiency. E.g. if you write a webserver that intends to handle LOTS of active connections (5-7 digits of them), you might be able to handle N connections with a classic OS thread-per-request model (as e.g. original Apache used).
Going to a fully async model - as used e.g. in Nginx - might let you handle 2xN connections on the same hardware, in exchange for having to deal with a lot more complexity during implementation. Which means you can either serve 2x as much traffic, or pay 50% of the hardware and energy cost for the same amount => this is the increase in efficiency.
Going from a "low level async" (callbacks, reactor/proactor/etc) model to async/await intends to mostly keep that increase in efficiency, while making the programming model a bit more bearable and less error-prone.
Loom looks cool, and this comparison comes up a lot, but I think async/await covers cases Loom doesn't. When dealing with situations where you do need to share a single OS thread (GPU commands) or want cooperative multitasking (very common in UI), you still probably want something like async/await.
Maybe that just means you pass data between executors in those contexts but I think that will end up looking like async/await anyway.
I can make my own library, or modify the library given above, or write a wrapper - I have more choice than "use this concrete syntax rule for that kind of thing".
Moreover, values with effects are first-class values: I can pass them, extend them, combine them, in some rare cases inspect them, abstractly interpret them, and last but not least, evaluate them. More often than not, this is not the case when they are a language feature.
This seems like a very cool idea. The concurrency model in Erlang/Elixir/... eliminates the issues of shared mutable state and the supervision trees catch unexpected data-related problems, etc.
FWIW, the Elixir/Phoenix community seems to have settled on LiveView, which performs most of the processing at the server, assisted by some small JS routines which handle updates, etc.
I think the Elixir/Phoenix community is very excited about LiveView, but I wouldn't say settled. I'd guess only a very small fraction of serious Phoenix apps use it, and most new ones are probably still going to go with some sort of SPA, at least for the time being.
Do you know of any projects that seek to run elixir on web assembly in the browser in such a way that it can interact with a standard elixir beam backend?
One under-appreciated aspect of async is that it gives you a nice API for composing operations.
Yes, async can be harder to introspect, especially with a debugger, and it introduces function colouring.
But in return it gives you a clear type-system level indication that computations are not pure and don't resolve immediately.
You also get simple tools to combine, map/modify, join, and potentially cancel individual asynchronous tasks. You can in theory get a similar API with threads and/or channels, but I've never seen anything that was anywhere near as convenient as async.
>But in return it gives you a clear type-system level indication that computations are not pure and don't resolve immediately.
Is that actually useful? You can have "BlockingLongCall()" without async and it could block your thread from progressing for minutes - or you could have "await FunctionThatResumesImmediatelyOnTheCaller()" and it doesn't even context switch.
> You can in theory get a similar API with threads and/or channels, but I've never seen anything that was anywhere as convenient as async.
I mean, you can use the exact same interfaces in .NET and treat Tasks as Threads with a custom scheduler implementation (one that fires up a thread for each Task you start). And Tasks in .NET implement blocking operations as well as async ones.
> Is that actually useful ? You can have "BlockingLongCall()" without async and it could block your thread from progressing for minutes - or you could have "await FunctionThatResumesImmediatelyOnTheCaller()" and it doesn't even context switch.
Very much so. Async is - I would guess 95% of the time - a proxy for IO operations.
Error handling is different: a pure function can and should be bug-free, while an IO function can always crash due to external factors. Performance tuning is different: a slow pure function needs optimization, parallelization, and worst case a faster CPU, while a slow async function usually needs you to dive into the plumbing part of your code to see if you're using an API wrong or something like that.
But in imperative OO style you're touching IO all over the place - this is why people complain about Task/Promise proliferation. And especially if you have something like interfaces, where some implementations could require IO while others might not. In my experience you can't really use it to narrow down anything.
My first thought: oh no, some Erlang lunatic trying to make WebAssembly even worse. After reading about this newly reinvented wheel: what a relief, WebAssembly itself remains untouched by esoteric-language lunatics.
Lunatic is an Erlang-inspired runtime for WebAssembly - https://news.ycombinator.com/item?id=28008737 - July 2021 (40 comments)
Lunatic – An Erlang-Inspired Runtime for WebAssembly - https://news.ycombinator.com/item?id=26403879 - March 2021 (2 comments)
Launch HN: Lunatic (YC W21) – An Erlang Inspired WebAssembly Platform - https://news.ycombinator.com/item?id=26367029 - March 2021 (39 comments)
Show HN: Lunatic – Actor System for Rust/WebAssembly - https://news.ycombinator.com/item?id=25160474 - Nov 2020 (47 comments)