
async: How does cross-task communication work? #459

Open
alexcrichton opened this issue Feb 28, 2025 · 4 comments

Comments

@alexcrichton
Collaborator

I've talked with @lukewagner and @dicej to varying degrees about this problem before, but I wanted to try to write up an issue with my thoughts on something concrete, at least. At a high level I'm curious how an "async mutex" would work within a component, but I'm going to attempt to make this more specific as well.

Let's say I have a hypothetical setup like this:

  • Component A has an export, run.
  • An external entity (host or other component), calls run to create task T1.
  • Task T1 acquires an async mutex, but then decides it has taken too long and performs an async yield.
  • An external entity then calls run again to create task T2.
  • Task T2 attempts to acquire the mutex, sees that it's locked, and needs to block.
    • I'm assuming the callback ABI here, so this would return. Is this allowed? Returning while blocking on nothing?
  • The yield for T1 finishes and then it drops the lock.
    • How does this "wake up" T2? How is the "current task" context switched?

My vague understanding is that T2 traps when it returns while waiting for the lock, but I'm also still booting up on all the canonical ABI pieces.
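
For concreteness, here is a minimal sketch of what this scenario might look like from the guest's side in Rust. The futures-crate mutex, the yield_now placeholder, and the shape of the exported run are all illustrative assumptions, not an existing binding; the point is only to show where T2 has to suspend.

// Illustrative sketch only: assumes a futures-style async mutex and some
// yield primitive exposed by the bindings; names here are made up.

use std::sync::OnceLock;

use futures::lock::Mutex;

static LOCK: OnceLock<Mutex<u32>> = OnceLock::new();

// Each call to the exported `run` becomes a new component-model task.
async fn run() {
    let lock = LOCK.get_or_init(|| Mutex::new(0));

    // T1 acquires the lock immediately. When T2 reaches this point the lock
    // is held, so the `.await` returns `Poll::Pending` -- and under the
    // callback ABI that means returning to the host without having anything
    // registered in a waitable-set, which is the open question above.
    let mut guard = lock.lock().await;
    *guard += 1;

    // T1 decides it has taken too long and cooperatively yields, giving the
    // host a chance to start T2.
    yield_now().await;

    // Dropping the guard unlocks the mutex. The futures-level waker for T2
    // fires here, but nothing has told the component model that T2 was
    // waiting or how to switch back to it.
    drop(guard);
}

// Placeholder for whatever yield-backed primitive the bindings expose.
async fn yield_now() {}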

@alexcrichton
Collaborator Author

I wrote up some other somewhat related thoughts at bytecodealliance/wit-bindgen#1182

@lukewagner
Member

This is a great question. Indeed, this is the last thing on the "todo" list to add to the 0.3.0 proposal; I've just been waiting for someone to hit the concrete question so we could consider the solution in that context.

So one observation: if we didn't care about using callbacks, or about use cases where we want to wait on an async mutex alongside other I/O operations, we could just use memory.atomics.{wait,notify}. The C-M doesn't say anything about these instructions atm, but I was recently realizing it should; basically, we should treat memory.atomics.wait as a cooperative yield point (just like waitable-set.wait or yield), noting that memory.atomics.wait already "hooks" into the host in a way that is used by browser embeddings.
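
For the non-callback case, a blocking lock built directly on those two instructions is the familiar futex pattern. A minimal sketch in Rust, assuming a threads-enabled wasm32 build where the core::arch::wasm32 wait/notify intrinsics are available (the FutexMutex type itself is made up for illustration):

// memory.atomics.wait would act as the cooperative yield point described
// above; memory.atomics.notify wakes a waiter from plain core wasm code.

use core::arch::wasm32::{memory_atomic_notify, memory_atomic_wait32};
use core::sync::atomic::{AtomicI32, Ordering};

const UNLOCKED: i32 = 0;
const LOCKED: i32 = 1;

pub struct FutexMutex {
    state: AtomicI32,
}

impl FutexMutex {
    pub const fn new() -> FutexMutex {
        FutexMutex { state: AtomicI32::new(UNLOCKED) }
    }

    pub fn lock(&self) {
        while self
            .state
            .compare_exchange(UNLOCKED, LOCKED, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            // memory.atomics.wait: park while the state is still LOCKED;
            // a negative timeout means "wait forever".
            unsafe {
                memory_atomic_wait32(self.state.as_ptr(), LOCKED, -1);
            }
        }
    }

    pub fn unlock(&self) {
        self.state.store(UNLOCKED, Ordering::Release);
        // memory.atomics.notify: wake at most one waiter parked on this address.
        unsafe {
            memory_atomic_notify(self.state.as_ptr(), 1);
        }
    }
}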

If we do care about callbacks and being able to wait on multiple things, I think we want something analogous to what the browser did with Atomics.waitAsync() as a canon built-in:

canon memory.atomics.wait-async $t $shared? $memory : [addr:i32 expected:$t timeout:i64] -> [result:i32]

where $t is either i32 or i64. Symmetric with Atomics.waitAsync, the result is packed with the possible results:

  • not-equal: the expected value was not equal to *addr.
  • timed-out: *addr was equal to expected but the timeout was 0.
  • blocked: containing a packed index into the waitables table.

Like Atomics.waitAsync, notification could use the regular memory.atomics.notify instruction from core wasm code. Lastly, just like how in the browser memory.atomics.wait traps on the main thread, memory.atomics.wait could trap under async callback, thereby requiring wait-async.
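
To make the shape of that built-in concrete, here is a hypothetical guest-side wrapper in Rust. The import module and name, the discriminant values, and the bit layout of the packed result are all invented for illustration; the comment above only says the result is packed.

#[link(wasm_import_module = "canon")]
extern "C" {
    // canon memory.atomics.wait-async, i32 variant (hypothetical binding).
    #[link_name = "memory.atomics.wait-async32"]
    fn atomics_wait_async32(addr: i32, expected: i32, timeout_ns: i64) -> i32;
}

pub enum WaitAsync {
    // The expected value was not equal to *addr; nothing to wait for.
    NotEqual,
    // *addr was equal to expected but the timeout was 0.
    TimedOut,
    // Now blocked: the payload is an index into the waitables table that can
    // be added to a waitable-set and waited on like any other waitable.
    Blocked(u32),
}

pub fn wait_async32(addr: *mut i32, expected: i32, timeout_ns: i64) -> WaitAsync {
    let packed = unsafe { atomics_wait_async32(addr as i32, expected, timeout_ns) };
    // Assumed packing: low two bits are the discriminant, remaining bits
    // carry the waitable index in the blocked case.
    match packed & 0b11 {
        0 => WaitAsync::NotEqual,
        1 => WaitAsync::TimedOut,
        _ => WaitAsync::Blocked((packed as u32) >> 2),
    }
}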

My hope here is that the symmetry (with core wasm and the browser) would make it easier to integrate in core parts of the toolchain. But happy to discuss alternatives; it seems like there are a number of solutions that could work. I think the main thing is some new built-in that produces a waitable-table index that can be waited on like normal and signaled from wasm without requiring calling an import.

@alexcrichton
Collaborator Author

At least for the use case I was thinking of (Rust's support) we'd definitely want to support the callback ABI. While memory.atomics.wait-async could work, I'd also want to clarify that the blocking operation of an async mutex isn't actually blocking the current thread; it's more that it did some synchronization to figure out it needs to block, and after that it's going to suspend. In Rust parlance, the future returns Poll::Pending and is arranged to receive a notification when the lock is unlocked. Later, when another task unlocks the mutex, it'll send a signal to the blocked task (via Rust APIs), and that's the part that needs to interact with the component model.
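
To spell out what that looks like at the futures level (standard Rust Future machinery, independent of any canonical-ABI built-in; the AsyncMutex type here is a simplified illustration):

use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::Mutex as SyncMutex;
use std::task::{Context, Poll, Waker};

struct LockState {
    locked: bool,
    waiters: VecDeque<Waker>,
}

pub struct AsyncMutex {
    state: SyncMutex<LockState>,
}

pub struct LockFuture<'a> {
    mutex: &'a AsyncMutex,
}

impl AsyncMutex {
    pub fn new() -> AsyncMutex {
        AsyncMutex {
            state: SyncMutex::new(LockState { locked: false, waiters: VecDeque::new() }),
        }
    }

    pub fn lock(&self) -> LockFuture<'_> {
        LockFuture { mutex: self }
    }

    pub fn unlock(&self) {
        let mut state = self.state.lock().unwrap();
        state.locked = false;
        if let Some(waker) = state.waiters.pop_front() {
            // This runs on the task doing the unlock (T1). Turning this wake
            // into "resume T2" is exactly the cross-task signal that needs a
            // component-model primitive.
            waker.wake();
        }
    }
}

impl<'a> Future for LockFuture<'a> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut state = self.mutex.state.lock().unwrap();
        if !state.locked {
            state.locked = true;
            Poll::Ready(())
        } else {
            // "Did some synchronization, figured out it needs to block":
            // remember this task's waker and suspend without blocking the
            // thread by returning Pending.
            state.waiters.push_back(cx.waker().clone());
            Poll::Pending
        }
    }
}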

That's why memory.atomics.wait-async could work in theory, but it's also not a strict parallel to memory.atomics.wait. In that sense there are other possible designs, too, which aren't dependent on memory. I haven't fully thought this through, but something could be along the lines of:

  • canon task.id : [] -> [i32]
  • canon task.wake_sibling : [i32] -> []

where, when a task blocks on an async mutex, its "to be notified" handle knows the original task's ID (by calling task.id). When the lock is unlocked, that notification will see that it's unlocking from a separate task and use task.wake_sibling. Although as I write this down I realize that this still doesn't give the CM knowledge of the original task blocking; it just called task.id and then didn't block on anything and also didn't return...
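
A rough sketch of how those two built-ins might be threaded through the mutex's wait queue, with the caveat that both imports are hypothetical (they exist only in this comment) and the gap noted above remains:

#[link(wasm_import_module = "canon")]
extern "C" {
    // Hypothetical built-ins from the bullet list above.
    #[link_name = "task.id"]
    fn task_id() -> i32;
    #[link_name = "task.wake_sibling"]
    fn task_wake_sibling(id: i32);
}

// Stored in the mutex's wait queue when a task decides it has to block.
struct BlockedWaiter {
    // Captured via task.id by the blocking task before it suspends.
    task: i32,
}

impl BlockedWaiter {
    fn capture() -> BlockedWaiter {
        BlockedWaiter { task: unsafe { task_id() } }
    }

    // Called by whichever task releases the lock; wakes the original task
    // even though the unlock is running under a different current task.
    fn wake(&self) {
        unsafe { task_wake_sibling(self.task) }
    }
}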

@lukewagner
Member

Agreed we could do something custom that is perhaps simpler. My initial thoughts were exactly along the lines of what you wrote. The question that put me on the wait-async line of thinking was: "when I task.wake_sibling, do I always want to wake-one or wake-all or maybe wake-N?" and then I started to feel like I was re-designing memory.atomics.notify.
