Add riscv-peripheral crate #164

Merged: 29 commits merged into master from add-peripheral on Feb 14, 2024
Conversation

@romancardenas (Contributor):
Crate for standard RISC-V peripherals.

I've been using these peripheral APIs myself on an E310x and they work properly.
Let me know if you want any extra changes before merging it to master.

@romancardenas romancardenas requested a review from a team as a code owner December 7, 2023 11:57
#[inline]
fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
    // Pending until MTIME has advanced n_ticks past t0.
    match self.mtime.read().wrapping_sub(self.t0) < self.n_ticks {
        true => Poll::Pending,
        false => Poll::Ready(()),
    }
}
@Dirbaio (Member), Dec 12, 2023:
on Pending you should store the waker somewhere, and wake it when the timer expires. Otherwise the future will hang (except with very dumb block_on-like executors that poll in a hot loop, not waiting for wakes)
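For illustration, here is a minimal sketch of the pattern being suggested: store a clone of the waker when returning `Poll::Pending`, and wake it from the machine-timer interrupt handler. This is not the PR's code; the `critical-section` dependency, the `MTIME_WAKER` static, and the `Delay` fields are assumptions made for the example.

```rust
use core::cell::RefCell;
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll, Waker};

use critical_section::Mutex; // assumed dependency for interrupt-safe sharing

// Single slot for the pending delay's waker (assumes at most one delay at a time).
static MTIME_WAKER: Mutex<RefCell<Option<Waker>>> = Mutex::new(RefCell::new(None));

struct Delay {
    t0: u64,
    n_ticks: u64,
    now: fn() -> u64, // stand-in for reading MTIME
}

impl Future for Delay {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if (self.now)().wrapping_sub(self.t0) < self.n_ticks {
            // Not expired yet: store a clone of the waker so the interrupt can wake us,
            // then return Pending. The executor may not poll again until wake() is called.
            // (A robust implementation would re-check the deadline after storing the
            // waker to avoid a missed wake if the timer expires in between.)
            critical_section::with(|cs| {
                MTIME_WAKER.borrow(cs).replace(Some(cx.waker().clone()));
            });
            Poll::Pending
        } else {
            Poll::Ready(())
        }
    }
}

// Called from the machine-timer (MTIMECMP) interrupt handler once the deadline passes.
fn on_machine_timer_interrupt() {
    critical_section::with(|cs| {
        if let Some(waker) = MTIME_WAKER.borrow(cs).take() {
            waker.wake();
        }
    });
}
```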

@romancardenas (Contributor, Author):
Thanks for the comment! I left the asynchronous stuff for later, until Rust 1.75 was out. I'll address it ASAP.

@romancardenas (Contributor, Author), Dec 14, 2023:

Hey, @Dirbaio I addressed your comments.

I'm pretty new to asynchronous programming, but if I'm not wrong this implementation is just soft polling, as the executor must poll the Future from time to time. I was wondering what an interrupt-driven approach would look like.

I guess that I would need a static placeholder to store all the pending contexts and wake them when the MachineTimer interrupt is triggered. However, it seems quite difficult to dimension this, as I don't know the number of contexts that may live concurrently in an asynchronous application. Can you send me a few pointers to get more insight into this?

@Dirbaio (Member):

> if I'm not wrong this implementation is just soft polling, as the executor must poll the Future from time to time

If you read the docs for Future, they say you must wake the waker when the future is ready to make progress:

> When a future is not ready yet, poll returns Poll::Pending and stores a clone of the Waker copied from the current Context. This Waker is then woken once the future can make progress.

And they say executors are allowed to never poll again (and shouldn't!) until the waker gets woken:

> The poll function is not called repeatedly in a tight loop – instead, it should only be called when the future indicates that it is ready to make progress (by calling wake()).

"soft polling" is not really a thing in async Rust, wakers are mandatory. You can make a Future implementation that ignores wakers, and it'll still work with a dumb "poll in a loop" executor, but that implementation is still "wrong" because it's not fulfilling the Future contract, which says waking the waker is mandatory.

> However, it seems quite difficult to dimension this, as I don't know the number of contexts that may live concurrently in an asynchronous application

Yes. If you make this require an owned peripheral that isn't cloneable, you have the guarantee that only one delay is running at a time, so you know one waker is enough.

If you do want an unbounded number of delays, then you need some kind of "timer queue" that stores all pending timers, keeps setting the interrupt to fire at the next expiring timer, and wakes the wakers as they expire.

embassy-time has a "reusable" timer queue because I found that reinventing this in every HAL is not maintainable; it has a driver trait that the HAL can implement to get a full timer queue "for free".
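To make the "timer queue" idea concrete, here is a rough fixed-capacity sketch of the data structure being described. The capacity, names, and MTIMECMP handling are assumptions for illustration, not embassy-time's or this PR's implementation: futures register a `(deadline, waker)` pair, and the interrupt handler wakes expired entries and reports the next deadline so the comparator can be reprogrammed.

```rust
use core::task::Waker;

const CAPACITY: usize = 8; // arbitrary; a real queue must be sized per application

struct TimerQueue {
    slots: [Option<(u64, Waker)>; CAPACITY],
}

impl TimerQueue {
    const fn new() -> Self {
        const EMPTY: Option<(u64, Waker)> = None;
        Self { slots: [EMPTY; CAPACITY] }
    }

    /// Register `waker` to be woken once MTIME reaches `deadline`.
    /// Returns false if the queue is full.
    fn schedule(&mut self, deadline: u64, waker: Waker) -> bool {
        for slot in self.slots.iter_mut() {
            if slot.is_none() {
                *slot = Some((deadline, waker));
                return true;
            }
        }
        false
    }

    /// Wake every expired entry and return the earliest remaining deadline,
    /// which the caller would write to MTIMECMP to schedule the next interrupt.
    fn on_interrupt(&mut self, now: u64) -> Option<u64> {
        let mut next: Option<u64> = None;
        for slot in self.slots.iter_mut() {
            let expired = matches!(slot, Some((deadline, _)) if *deadline <= now);
            if expired {
                if let Some((_, waker)) = slot.take() {
                    waker.wake();
                }
            } else if let Some((deadline, _)) = slot {
                let d = *deadline;
                next = Some(next.map_or(d, |n| n.min(d)));
            }
        }
        next
    }
}
```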

@romancardenas (Contributor, Author):

So... async stuff was scarier than I expected 😮‍💨

My latest push provides a few extern "Rust" functions that must be implemented for pushing/popping Timers to/from a timer queue. If I'm not wrong, my approach would be compatible with embassy-time or any other interface by properly implementing the _riscv_peripheral* functions.

@romancardenas romancardenas force-pushed the add-peripheral branch 2 times, most recently from c915335 to cc8be16 Compare December 15, 2023 11:45
@romancardenas (Contributor, Author):
I reimplemented the asynchronous delay for the (A)CLINT peripheral. Now it relies on three extern "Rust" fn _riscv_peripheral_aclint_* functions that allow us to push/pop wakers to/from a timer queue such as the one provided by embassy-time.
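For readers unfamiliar with the mechanism, the extern "Rust" hook pattern looks roughly like the sketch below. The function name and signature are invented for illustration (the real hooks are the _riscv_peripheral_aclint_* functions in this PR), and the two modules stand in for two separate crates.

```rust
// In the peripheral crate: declare a hook it calls but does not define.
mod peripheral_crate {
    extern "Rust" {
        // Hypothetical name and signature, for illustration only.
        pub fn _example_push_timer(deadline: u64, waker: core::task::Waker);
    }
}

// In the application or HAL crate: provide the implementation. The linker
// resolves the peripheral crate's call to this definition.
mod application {
    #[no_mangle]
    fn _example_push_timer(deadline: u64, waker: core::task::Waker) {
        // Push (deadline, waker) into whatever timer queue the application
        // uses, e.g. one backed by embassy-time.
        let _ = (deadline, waker);
    }
}
```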

@romancardenas (Contributor, Author):
Now it finally points to embedded-hal 1.0 🥳

@rust-embedded/riscv could you please take a look at this PR?

@romancardenas romancardenas mentioned this pull request Jan 10, 2024
@romancardenas (Contributor, Author):
ping @rust-embedded/riscv

@almindor (Contributor) left a review:
I like this approach, LGTM.

@romancardenas romancardenas added this pull request to the merge queue Feb 14, 2024
Merged via the queue into master with commit ddf131f Feb 14, 2024
95 checks passed
@romancardenas romancardenas deleted the add-peripheral branch February 14, 2024 14:13