MMIO regions are artificially constrained to 2^24 bytes #182
Comments
There's a slight disagreement between the linker and loader here. The loader reserves the top 8 bits for future expansion; the linker treats bits 24-27 as part of the length, so that assigning them back to the length does not require a toolchain revision (assigning them to permissions would, but that will require more fields in the JSON report anyway). There's no reason that we couldn't support larger sizes in the loader, but I wanted to reserve space in case we want to add more fine-grained permissions.

For the PLIC specifically, it appears that you have a PLIC with over 32K HART contexts. This is almost certainly a lot more than you need. The PLIC that we have has a lot fewer contexts and so is much smaller.

The masking happens here: https://github.com/microsoft/cheriot-rtos/blob/1f097b4aa2fe96b5c751fee7da33a9f43c4137a7/sdk/core/loader/types.h#L929. This is called here to build the capability: https://github.com/microsoft/cheriot-rtos/blob/1f097b4aa2fe96b5c751fee7da33a9f43c4137a7/sdk/core/loader/boot.cc#L473. Masking the low 24 bits gives a size() of 0, so we construct a 0-length capability.

Do you have a use case for MMIO regions over 2^24 bytes? On systems with a lot of memory, we may want the heap to be in that space. We could potentially reclaim some of the unused permission bits. I'd like to keep at least 1-2 bits in reserve, but I don't think we will need all of the 4 that I've currently reserved. If we introduced a new relocation, we should keep the size plus an exponent in a fixed number of bits, but that's a bit more work.
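To make the arithmetic concrete, here is a minimal stand-alone sketch (not the real loader code; the mask constant and field layout are assumptions based on the description above) of why a region larger than 2^24 bytes decodes to a zero length:

```cpp
#include <cstdint>
#include <cstdio>

// Assumed encoding, per the description above: the import-table word keeps
// the MMIO region length in its low 24 bits and reserves the upper bits.
constexpr uint32_t LengthMask = 0xffffff;

int main()
{
	// A 64 MiB region (for example, a very large PLIC) has only bit 26 set,
	uint32_t requestedLength = 0x4000000;
	// so masking off everything above bit 23 leaves a length of 0, and the
	// loader ends up building a zero-length capability.
	uint32_t decodedLength = requestedLength & LengthMask;
	printf("requested 0x%x, decoded 0x%x\n",
	       (unsigned)requestedLength,
	       (unsigned)decodedLength);
	return 0;
}
```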
On Mon, Mar 18, 2024 at 6:26 AM David Chisnall ***@***.***> wrote:
> There's a slight disagreement between the linker and loader here. The
> loader reserves the top 8 bits for future expansion, the linker treats bits
> 24-27 as part of the length, so that assigning them back to the length does
> not require a toolchain revision (assigning them to permissions is, but
> will require more fields in the JSON report anyway).
> There's no reason that we couldn't support larger sizes in the loader, but
> I wanted to reserve space if we wanted to add more fine-grained permissions.
Not supporting a size is fine, but the behaviour (silently trapping) is not.
> For the PLIC specifically, it appears that you have a PLIC with over 32K
> HART contexts. This is almost certainly a lot more than you need. The PLIC
> that we have has a lot fewer contexts and so is much smaller.
This is real hw. I will double-check the size.
> The masking happens here
> <https://github.com/microsoft/cheriot-rtos/blob/1f097b4aa2fe96b5c751fee7da33a9f43c4137a7/sdk/core/loader/types.h#L929>.
> This is called here
> <https://github.com/microsoft/cheriot-rtos/blob/1f097b4aa2fe96b5c751fee7da33a9f43c4137a7/sdk/core/loader/boot.cc#L473>
> to build the capability. Masking the low 24 bits gives a size() of 0, so
> we construct a 0-length capability.
> Do you have a use case for MMIO regions over 2^24 bytes? On systems with a
> lot of memory, we may want the heap to be in that space. We could
> potentially reclaim some of the unused permission bits. I'd like to keep at
> least 1-2 bits in reserve, but I don't think we will need all of the 4 that
> I've currently reserved. If we introduced a new relocation, we should keep
> the size plus an exponent in a fixed number of bits, but that's a bit more
> work.
At the moment I'm just trying to understand the issues of porting code from
an existing platform to one with just CHERI(oT) added in the simplest way. A
real system with CHERI support will likely be rather different, so it's hard
to say whether or not 2^24 is sufficient.
Completely agreed on the second part: we should at least have an error here. I expect that anyone doing bring-up on a new board will run with loader debug turned on (as you are doing), at least until first successful boot, and the loader should complain about invalid things rather than just giving out 0-length / untagged capabilities.
The PLIC has different regions for different HART contexts. My expectation was that a multi-core scheduler (note: ours is not currently multi-HART-aware) would import the different contexts as different MMIO capabilities. Our existing targets that use a RISC-V standard PLIC import it with a 0x400000-byte length.
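As a rough illustration of importing each context separately, here is a sketch (an assumption-laden example, not code from the RTOS: `PlicContext` is an invented register layout and `plic_context0` / `plic_context1` are hypothetical device names that the board description would have to define; it also assumes the SDK's `MMIO_CAPABILITY` macro works here as it does for other devices):

```cpp
#include <cstdint>
// Also requires the CHERIoT RTOS SDK headers that provide MMIO_CAPABILITY.

// Invented layout for one PLIC HART-context block.
struct PlicContext
{
	uint32_t priorityThreshold;
	uint32_t claimComplete;
};

void configurePlicContexts()
{
	// Each import is a separate, bounded MMIO capability, so no single
	// capability has to cover the whole (potentially >2^24-byte) PLIC.
	auto *context0 = MMIO_CAPABILITY(PlicContext, plic_context0);
	auto *context1 = MMIO_CAPABILITY(PlicContext, plic_context1);
	context0->priorityThreshold = 0;
	context1->priorityThreshold = 0;
}
```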
Please also file issues against the book if there are areas where the documentation is lacking (I am aware that there are quite a few, but help prioritising the ones that people hit first would be very useful). This is still quite work-in-progress, but it's a priority for me to kick it into a usable state in the next few months.
This addresses the 'shouldn't silently fail' part of #182. A subsequent change may allow it to also work, depending on use cases.
I've merged a fix that stops this being a silent failure (if you compile with loader-debug enabled), but I'm leaving this open to track whether we should relax these requirements.
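For reference, the general shape of such a check is roughly the following (a sketch only, not the merged change: the condition, message, and function name are assumptions, and `Debug` is assumed to be the loader's usual conditional-debug instance from the SDK):

```cpp
#include <cstddef>
// Also requires the CHERIoT RTOS SDK debug helpers that provide Debug::Invariant.

// Sketch: with loader debugging enabled, verify that the length decoded from
// the import table matches the length the board description requested,
// instead of silently handing out a zero-length capability.
void checkMMIOImportLength(size_t requestedLength, size_t decodedLength)
{
	Debug::Invariant(decodedLength == requestedLength,
	                 "MMIO region length {} does not fit in the import table "
	                 "encoding (decoded as {})",
	                 requestedLength,
	                 decodedLength);
}
```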
Updating the title to reflect the unaddressed part of this.
Specify a large mmio region, e.g. in the board spec:
The result generates an import table entry for the builtin plic support:
But the loader reports:
which results in an invalid cap/ptr being constructed for plicPrios & co in StandardPlic::StandardPlic.
Looks like this is due to:
but I don't see where the data are written, so I can't tell why the length/size reads back as zero (I assume they are read directly on startup).