pub trait MapSize: PageSize {
    // Required methods
    fn ensure_chain_for<A: FrameAlloc, M: PhysMapper>(
        aspace: &AddressSpace<'_, M>,
        alloc: &mut A,
        va: VirtualAddress,
        nonleaf_flags: VirtualMemoryPageBits,
    ) -> Result<PhysicalPage<Size4K>, MapSizeEnsureChainError>;
    fn set_leaf<M: PhysMapper>(
        aspace: &AddressSpace<'_, M>,
        leaf_tbl_page: PhysicalPage<Size4K>,
        va: VirtualAddress,
        pa: PhysicalAddress,
        leaf_flags: VirtualMemoryPageBits,
    );
}
§Page-size–directed mapping behavior
MapSize encodes, at the type level, how to:
- ensure the non-leaf page-table chain exists for a given virtual address, and
- install the correct leaf entry for that page size.
Implementations for Size1G, Size2M, and Size4K decide where to
stop the walk and which entry to write, so callers don’t branch at
runtime. This keeps the mapping code zero-cost and compile-time checked.
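The type-level dispatch can be seen in a self-contained sketch: each page-size marker type carries its own behavior as associated items, so a generic caller is monomorphized per size and never branches at runtime. The trait and bodies below are illustrative assumptions that only mirror the shape of MapSize, not the real implementation.

```rust
// Illustrative marker types; the real Size4K/Size2M/Size1G live in the crate.
struct Size4K;
struct Size2M;
struct Size1G;

// Hypothetical stand-in for the PageSize/MapSize pattern: each size knows,
// at compile time, its extent and which table level holds its leaf entry.
trait PageSizeExt {
    const BYTES: u64;
    const LEAF_TABLE: &'static str;
}

impl PageSizeExt for Size4K { const BYTES: u64 = 1 << 12; const LEAF_TABLE: &'static str = "PT"; }
impl PageSizeExt for Size2M { const BYTES: u64 = 1 << 21; const LEAF_TABLE: &'static str = "PD"; }
impl PageSizeExt for Size1G { const BYTES: u64 = 1 << 30; const LEAF_TABLE: &'static str = "PDPT"; }

// Generic caller: resolved entirely at compile time per instantiation.
fn describe<S: PageSizeExt>() -> String {
    format!("{} bytes, leaf in {}", S::BYTES, S::LEAF_TABLE)
}
```

Because the size is a type parameter, an incorrect pairing (say, writing a 2 MiB leaf into a PT) is unrepresentable rather than a runtime error.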
§What ensure_chain_for returns
It returns the target table frame (4 KiB page) into which you will write
the leaf entry for Self:
- For 1 GiB pages (Self = Size1G): returns the PDPT frame (you will write a PDPTE with PS=1).
- For 2 MiB pages (Self = Size2M): returns the PD frame (you will write a PDE with PS=1).
- For 4 KiB pages (Self = Size4K): returns the PT frame (you will write a PTE with PS=0).
Newly created non-leaf entries are initialized with nonleaf_flags
(e.g., present | writable), and any conflicting huge leaves are split
on demand by allocating and linking the next-level table.
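The walk that ensure_chain_for performs follows the standard x86-64 four-level scheme: each level consumes 9 bits of the virtual address, and the walk stops at the level that holds the leaf for the chosen size. A minimal sketch of the index arithmetic (plain u64 in place of VirtualAddress, which is an assumption):

```rust
// x86-64 4-level paging: bits 47..39 index the PML4, 38..30 the PDPT,
// 29..21 the PD, and 20..12 the PT; each table has 512 entries.
fn pml4_index(va: u64) -> usize { ((va >> 39) & 0x1ff) as usize }
fn pdpt_index(va: u64) -> usize { ((va >> 30) & 0x1ff) as usize }
fn pd_index(va: u64)   -> usize { ((va >> 21) & 0x1ff) as usize }
fn pt_index(va: u64)   -> usize { ((va >> 12) & 0x1ff) as usize }
```

For Size1G the walk stops after the PML4 lookup (the PDPT holds the leaf); for Size2M after the PDPT lookup; for Size4K after the PD lookup.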
§Typical flow
// Decide size with the type parameter S; no runtime branching:
let leaf_table = S::ensure_chain_for(aspace, alloc, va, nonleaf_flags)?;
S::set_leaf(aspace, leaf_table, va, pa, leaf_flags);
§Safety & alignment
- Physical alignment is asserted (in debug builds) by callers via pa.offset::<S>() == 0.
- The mapper (PhysMapper) must yield writable views of table frames.
- If you mutate the active address space, perform the required TLB maintenance (invlpg per page, or a CR3 reload).
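The choice between per-page invlpg and a full CR3 reload is typically made on the number of pages touched: invlpg scales linearly, while a CR3 reload has a flat cost but discards unrelated TLB entries. A hedged sketch of that decision; the threshold value is an illustrative assumption, not a measured crossover point, and real kernels tune it empirically:

```rust
// Hypothetical strategy type; the crate's actual TLB interface may differ.
#[derive(Debug)]
enum TlbFlush {
    PerPage(usize), // issue invlpg for each changed page
    FullReload,     // reload CR3, flushing all non-global entries
}

fn flush_strategy(pages_changed: usize) -> TlbFlush {
    // Assumed crossover point for illustration only.
    const INVLPG_THRESHOLD: usize = 32;
    if pages_changed <= INVLPG_THRESHOLD {
        TlbFlush::PerPage(pages_changed)
    } else {
        TlbFlush::FullReload
    }
}
```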
Required Methods§
fn ensure_chain_for<A: FrameAlloc, M: PhysMapper>(
    aspace: &AddressSpace<'_, M>,
    alloc: &mut A,
    va: VirtualAddress,
    nonleaf_flags: VirtualMemoryPageBits,
) -> Result<PhysicalPage<Size4K>, MapSizeEnsureChainError>
 
Ensure that the non-leaf chain for va exists down to the table that
holds the leaf for Self, allocating and linking intermediate
tables as needed.
§Returns
The 4 KiB frame (as PhysicalPage<Size4K>) of the table where the
leaf for Self must be written:
- Size1G → PDPT frame
- Size2M → PD frame
- Size4K → PT frame
§Behavior
- Initializes newly allocated non-leaf tables to a zeroed state and links them with nonleaf_flags.
- If a conflicting huge leaf is encountered at a higher level, it is split by allocating the next-level table and relinking.
 
§Errors
"oom: pdpt" / "oom: pd" / "oom: pt"if allocating an intermediate table frame fails.
fn set_leaf<M: PhysMapper>(
    aspace: &AddressSpace<'_, M>,
    leaf_tbl_page: PhysicalPage<Size4K>,
    va: VirtualAddress,
    pa: PhysicalAddress,
    leaf_flags: VirtualMemoryPageBits,
)
 
Install the leaf entry for va → pa in the leaf_tbl_page
returned by ensure_chain_for, with the given leaf_flags.
- Size1G: writes a PDPTE (PS=1) into the PDPT at va.
- Size2M: writes a PDE (PS=1) into the PD at va.
- Size4K: writes a PTE (PS=0) into the PT at va.
Callers should assert (in debug builds) that pa is aligned to Self:
debug_assert_eq!(pa.offset::<Self>().as_u64(), 0).
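The alignment precondition is plain power-of-two arithmetic: a physical address is suitable for a given page size iff its low log2(size) bits are zero. A minimal sketch using raw u64 in place of PhysicalAddress (an assumption):

```rust
const SIZE_4K: u64 = 1 << 12;
const SIZE_2M: u64 = 1 << 21;
const SIZE_1G: u64 = 1 << 30;

// For power-of-two sizes, masking with (size - 1) isolates the offset bits;
// an aligned address has no offset within its page.
fn is_aligned(pa: u64, size: u64) -> bool {
    pa & (size - 1) == 0
}
```

This is the same check pa.offset::<Self>() == 0 expresses, written out on raw integers.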
Dyn Compatibility§
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.