Hey, first I want to say this crate is great. It removes a lot of pain from starting with OS dev.
I ran into a minor annoyance: mapping a large range of memory is pretty slow. In my code I'm trying to identity map all the machine's physical memory. It takes a few seconds in a release build and quite a bit longer in a debug build. I suspect a lot of this is from traversing the same tables over and over when repeatedly calling map_to.
I think methods for mapping entire ranges could be faster: instead of a single loop that walks down all four table levels on every iteration, there could be nested loops, one per level, so an upper-level table is only looked up again when its index actually changes.
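To put rough numbers on that idea, here is a toy model (not the crate's code) that counts table-entry visits when identity mapping 1 GiB with 4 KiB pages. The naive loop pays a full four-level walk per page; the level-aware loop only revisits a level when an index above it changes:

```rust
// Toy model of page-table traversal cost, not the crate's actual code.
// table_indices splits a virtual address into the four 9-bit
// page-table indices used by x86_64 4-level paging.
fn table_indices(addr: u64) -> [usize; 4] {
    [
        ((addr >> 39) & 0x1ff) as usize, // level 4 (P4) index
        ((addr >> 30) & 0x1ff) as usize, // level 3 (P3) index
        ((addr >> 21) & 0x1ff) as usize, // level 2 (P2) index
        ((addr >> 12) & 0x1ff) as usize, // level 1 (P1) index
    ]
}

fn main() {
    let pages = (1u64 << 30) / 4096; // 262_144 pages to map 1 GiB

    // Naive loop: a full four-level walk for every page.
    let naive = pages * 4;

    // Level-aware loop: revisit a level only when its index changes,
    // mirroring one nested loop per level.
    let mut batched = 0u64;
    let mut prev = [usize::MAX; 4];
    for page in 0..pages {
        let idx = table_indices(page * 4096);
        for lvl in 0..4 {
            if idx[lvl] != prev[lvl] {
                batched += 1;
                // A change here invalidates the cached path below it.
                for p in prev[lvl..].iter_mut() {
                    *p = usize::MAX;
                }
                prev[lvl] = idx[lvl];
            }
        }
    }

    // 1_048_576 naive entry visits vs 262_658 batched: the upper
    // levels almost vanish from the cost.
    println!("naive: {naive}, batched: {batched}");
}
```

In this model the savings are about 4x in entry visits alone; in the real mapper the per-call overhead (re-reading the root table, bounds checks, flag handling) should make the gap larger.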
Is this a reasonable request? I'm interested in writing up a PR if it's something you'd accept.
This honestly seems quite reasonable (especially if it gives a huge speedup). Algorithmically, I'm not sure about the best way to go about things, but improvements can definitely be made.
My preferred way to do this would be a new method, map_range_with_table_flags, which has a default implementation based on map_to_with_table_flags but can be overridden by mappers to be more efficient.
Then the following methods can be provided:
map_range
identity_map_range
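As a sketch of the trait shape (the names, signatures, and toy types below are my guesses, not the crate's real API), the default implementation would just loop over the single-page method, so every existing mapper gets the range methods for free, while MappedPageTable and friends could override it with a level-aware walk:

```rust
use std::ops::Range;

#[derive(Debug)]
struct MapError;

// Hypothetical sketch: RangeMapper, map_to, and map_range are
// stand-ins for the crate's Mapper methods, with u64 in place of
// the Page and PhysFrame types.
trait RangeMapper {
    // Stand-in for Mapper::map_to_with_table_flags: one mapping,
    // one full table walk.
    fn map_to(&mut self, page: u64, frame: u64) -> Result<(), MapError>;

    // Provided default built on map_to; an implementor can override
    // it to walk each table level only once per 512-entry run.
    fn map_range(&mut self, pages: Range<u64>, first_frame: u64) -> Result<(), MapError> {
        for (i, page) in pages.enumerate() {
            self.map_to(page, first_frame + i as u64)?;
        }
        Ok(())
    }
}

// A trivial mapper that records mappings, just to exercise the default.
struct RecordingMapper {
    mapped: Vec<(u64, u64)>,
}

impl RangeMapper for RecordingMapper {
    fn map_to(&mut self, page: u64, frame: u64) -> Result<(), MapError> {
        self.mapped.push((page, frame));
        Ok(())
    }
}

fn main() {
    let mut m = RecordingMapper { mapped: Vec::new() };
    m.map_range(100..104, 500).unwrap();
    println!("{:?}", m.mapped); // [(100, 500), (101, 501), (102, 502), (103, 503)]
}
```

map_range and identity_map_range would then be thin wrappers over map_range_with_table_flags, matching how the existing single-page convenience methods are layered.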
EDIT: As a short-term workaround, using larger 2 MiB or 1 GiB pages lets you identity map the entire space very quickly.
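To put numbers on that workaround (the 8 GiB figure is just an example size): each mapped page costs one leaf entry and one map call, so moving up a page-table level cuts the work by a factor of 512:

```rust
// Number of pages (and hence map calls) needed to cover a region,
// rounded up. The 8 GiB total is an assumed example, not from the issue.
fn pages_needed(total_bytes: u64, page_size: u64) -> u64 {
    (total_bytes + page_size - 1) / page_size
}

fn main() {
    let phys = 8u64 << 30; // example: 8 GiB of physical memory
    println!("4 KiB pages: {}", pages_needed(phys, 4 << 10)); // 2_097_152
    println!("2 MiB pages: {}", pages_needed(phys, 2 << 20)); // 4_096
    println!("1 GiB pages: {}", pages_needed(phys, 1 << 30)); // 8
}
```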
> My preferred way to do this would be to have a new method: map_range_with_table_flags which has a default implementation based on map_to_with_table_flags, but is overridden by mappers to be more efficient.
Sounds reasonable to me. This method would then behave like the Mapper::map method introduced in #136 and allocate the target frames from the frame allocator, right?