#
b9eff432 |
|
19-Jun-2018 |
Joel Beeren <joel.beeren@data61.csiro.au> |
x86: extract createSafeMappingEntries_* into separate functions for ease of verification
|
#
1051e550 |
|
18-Jun-2018 |
Joel Beeren <joel.beeren@data61.csiro.au> |
x86: Unify page map/remap logic with existing verified ARM behaviour
|
#
f5725b0e |
|
22-May-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Check mapped ASID is correct. Checks that the ASID present in the passed PML4 object matches the global ASID table. This check ensures that the object has not actually been unmapped due to some other operation that could not update the field in the capability.
|
#
3f15e6ca |
|
10-May-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Different perform for Map and Remap of huge pages. Previously Map and Remap of huge (1GiB) pages, which are mode-specific to x64, would ultimately call the same perform function, which would update the cap and then set the hardware mapping. Setting the cap (or re-setting, as the case may be) for remap is both redundant and not how other verified platforms implement their remap invocation. For simplicity of verification it is easier to change perform to not update the cap in the case of remap.
|
#
7b8f6106 |
|
08-May-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Consistently compare against PPTR_USER_TOP. PPTR_USER_TOP is the last valid user address, so any address greater than it should not be used. Due to pages being at least 4K aligned it would also be correct to do a >= comparison, but it is clearer and more consistent with verification to perform just >.
|
#
30738188 |
|
13-Mar-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: modeUnmapPage returns whether or not TLB needs invalidation. Whilst it is always correct to invalidate the TLB even if no page was unmapped, this provides a small optimization and makes the modeUnmapPage behaviour mirror the general arch behaviour.
|
#
050f3884 |
|
05-Mar-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Explicitly copy from TLBBITMAP_PPTR. PPTR_USER_TOP is not necessarily the start of the kernel mappings, as it could be well before the start of the kernel. As a result we can avoid copying known-empty entries by copying from the actual kernel bottom, which is TLBBITMAP_PPTR.
|
#
374da850 |
|
16-Jan-2018 |
Bamboo <bamboo@keg.ertos.in.nicta.com.au> |
[STYLE_FIX]
|
#
29695d26 |
|
07-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: SKIM window to mitigate Meltdown (CVE-2017-5754) on x86-64. Introduces a kernel option that, when enabled, reduces the kernel window in a user address space to just the Static Kernel Image and Microstate (SKIM), instead of the full kernel address space. This isolates the important kernel data from the user, preventing a Meltdown-style attack from violating secrecy. The kernel text and read-only data, i.e. anything that is static from boot, is not secret and can be allowed in the SKIM window and potentially read by the user. Additionally, to switch to and from the actual kernel address space, a small amount of state also needs to be in the SKIM window. This is only an implementation for x86-64, although the same design is applicable to ia32.
|
#
03dfde52 |
|
07-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Use already defined X86_GLOBAL_VSPACE_ROOT. This #define already provides an abstraction over the name of the variable that contains the global mappings and removes the need to assume that KernelPML4 is the global one.
|
#
6d1c8883 |
|
05-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Rename global paging structures to kernel. Currently the kernel address space and paging structures are used as the global ones. This commit renames the paging structures from Global to Kernel to reflect this and allows for separate global structures to be introduced in the future.
|
#
96842a2c |
|
05-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: If PCIDs are not enabled, forbid them in any CR3 values. When PCIDs are not enabled the low 12 bits of CR3 are not all ignored, as bits 3 and 4 control the memory typing of the translation. To prevent setting these to 'random' values we should leave the low bits of CR3 as 0 when not using PCIDs. A new makeCR3 wrapper is defined that is used instead of cr3_new and that, based on CONFIG_SUPPORT_PCID, fills in either the PCID or 0 as required.
|
#
4edbbbd4 |
|
03-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Separate notion of current vspace root from current user vspace root. This introduces functions for specifically manipulating the user vspace root, either as an abstract root or as the specific CR3 value, in a way that is separated from the current vspace root. Currently they are one and the same, but this separation allows for having the kernel translation (and hence the active vspace root) be different from the currently active user one.
|
#
3a7fdde8 |
|
05-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Define per-node arch global state. Global state in this context is state/data structures that need to be available to the hardware at all times, both in user and kernel mode, for correct operation. The purpose of creating a separate per-node structure for it is so that there is the option of treating it differently from other per-node state in the future.
|
#
a9dc424a |
|
05-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Remove IRQ stack from per-core data structure. The IRQ stack represents state that needs to be available in all contexts, both when user code is running and when kernel code is running. Separating its definition from all the other mode state provides the option of treating it differently in the future.
|
#
3d998667 |
|
12-Oct-2017 |
Bamboo <bamboo@keg.ertos.in.nicta.com.au> |
[STYLE_FIX]
|
#
574331e3 |
|
12-Oct-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Define and use missing makeUserXXX functions for paging structures. This provides a consistent style where all paging-structure entries for user-requested mappings are created via makeUserXXX-style functions, instead of the current mix of wrappers and direct invocations of bitfield _new functions.
|
#
ca6b006a |
|
12-Oct-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Use existing makeUserPDPTEInvalid function
|
#
490c743b |
|
12-Oct-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Declare makeUserPDPTEInvalid instead of makeUserPDPTEHugePageInvalid. This is clearer as we do not specifically want an invalid huge-page mapping; rather, this function is meant to create a null/invalid entry in the PDPTE.
|
#
9c6fb53c |
|
11-Oct-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Have single makeUserPDEInvalid function. makeUserPDEPageTableInvalid and makeUserPDELargePageInvalid were both just creating the same kind of thing, an invalid mapping, which really should be thought of as the third type of PDE entry.
|
#
9539538c |
|
10-Oct-2017 |
Bamboo <bamboo@keg.ertos.in.nicta.com.au> |
[STYLE_FIX]
|
#
e83090f4 |
|
10-Oct-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Return slot directly from lookupPML4Slot. lookupPML4Slot can never fail, so this change removes the return struct wrapper and directly returns the looked-up slot without the needless `status`. Whilst the extra error handling was not harmful or incorrect, it is superfluous and creates a mismatch with verification.
|
#
128829a9 |
|
11-Sep-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Rationalize names of paging structures. Changes the names of paging-structure entries to more clearly indicate the kind of object they map to. Generally this changes `pde_small` to `pde_pt` to indicate that the mapping refers to a page table, removing an inconsistency where `pde_large` indicated that the mapping was for a large page. For the same reason the `ept_pde_4k` type is changed to `ept_pde_pt` to reflect what is present in the actual entry. `pde_large` is left as 'large' and not explicitly given a size because code common to ia32 and x86-64 manipulates these entries, and 'large' is already a used abstraction over the two potential page sizes, so there is no need to introduce a formal abstraction layer and make the names in the structures more specific.
|
#
f85fb62f |
|
21-Aug-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
SELFOUR-1062: Hide all VT-x related code behind #ifdef guards. The VT-x implementation is not going to be verified at the moment, and so the code for it needs to be hidden from verification, which we do by #ifdef'ing it out when VT-x is not enabled. As a result the VT-x configuration depends on a non-verification target.
|
#
ee28936d |
|
18-Jun-2017 |
Hesham Almatary <hesham.almatary@data61.csiro.au> |
SMP: Introduce ENABLE_SMP_SUPPORT - Make it more readable and less confusing compared to the 'CONFIG_MAX_NUM_NODES > 1' check
|
#
40c61e5c |
|
18-Jun-2017 |
Anna Lyons <Anna.Lyons@data61.csiro.au> |
Fix licenses (the rest)
|
#
958239a9 |
|
13-Jun-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Restructure performASIDPoolInvocation. Swap the `if` cases in this function so that the default else case becomes one that always happens, with an `assert`, and the vtx case becomes guarded with a config_set check. This ensures both that all code paths have either a check or an assertion on having a correct cap type, and that the compiler can see that all code paths correctly set `asid_map` before using it.
|
#
bc380859 |
|
21-May-2017 |
Bamboo <bamboo@keg.ertos.in.nicta.com.au> |
[STYLE_FIX]
|
#
bac7826a |
|
21-May-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
Avoid using bitfield-generated *_ptr_new functions. These functions can cause slowdown during verification C parsing and do not currently have any proofs about them. More importantly, there is no performance impact from just calling the regular *_new functions instead.
|
#
9ca253a3 |
|
07-May-2017 |
Anna Lyons <Anna.Lyons@data61.csiro.au> |
SELFOUR-879: expose index and entry constants
|
#
eccaae51 |
|
20-Feb-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
s/D61/DATA61/ in license headers for consistency
|
#
017d7863 |
|
05-Dec-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: VT-x related cap and object definitions. * Adds object and cap definitions for VT-x structures (VCPU and EPT). * Extends the asid_map implementation to support ASIDs in the EPT. * Adds size definitions for VCPU and EPT objects.
|
#
3ba3f2de |
|
04-Dec-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Add BSS regions for BOOT and PHYS code. Uninitialized data structures in BOOT and PHYS code currently get placed in sections that are allocated in the file of the final image. Whilst these sections will get reclaimed during kernel boot, so no runtime memory is being wasted, it results in kernel images that are much larger to load and transport than necessary. This change adds explicit BSS regions for both BOOT and PHYS code and moves all appropriate data structures into them.
|
#
a1eddc75 |
|
23-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Make boot paging structures `VISIBLE`. These structures are used from the boot code in traps.S and must be `VISIBLE` to prevent the compiler removing them.
|
#
d6d8fb54 |
|
14-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Multicore TLB coherency
|
#
a0cb9e67 |
|
09-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Support multiple kernel stacks. Adds support for per-core kernel stacks through the use of thread-local storage and swapgs. In addition to the main kernel stack, the IRQ stack also needs to be made per-core.
|
#
fc56575e |
|
08-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Define TLB bitmap region. Defines the TLB bitmap to be located, virtually, just below the kernel window. To get an initialized bitmap in each new address space we change copyGlobalMappings to copy from USER_TOP (which includes the initialized TLB bitmap in the global address space) instead of just the kernel window base.
|
#
d14a973a |
|
21-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Pass address space when invalidating ASID. For efficient TLB coherency on x86-64 it will be useful to know the address space that the ASID being invalidated was last assigned to.
|
#
6f908324 |
|
06-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Access core local state correctly
|
#
056dbf81 |
|
02-Nov-2016 |
amrzar <azarrabi@nicta.com.au> |
SELFOUR-634: support for TLB and cache management
|
#
da409659 |
|
27-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Match ASID changes on ia32
|
#
580ecbc1 |
|
12-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Use explicit interrupt stack. This commit uses the Interrupt Stack Table introduced in x86-64 to force interrupts to *always* use a new explicit stack. The legacy behaviour is for a new stack to be used only when switching privilege levels, meaning an interrupt in user mode would get a new stack (prior to pushing registers), but an interrupt in kernel mode would push registers directly onto the current (kernel) stack. Using the IST allows interrupts from kernel mode to also switch to a new stack prior to pushing registers, allowing for much more consistent semantics when dealing with interrupts.
|
#
235a02ec |
|
12-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Use FLAGS_* defines instead of magic numbers
|
#
f062dcdc |
|
11-Oct-2016 |
Bamboo <bamboo@keg.ertos.in.nicta.com.au> |
[STYLE_FIX]
|
#
6d07c443 |
|
10-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x64: Partial hardware breakpoint support. This only implements debug support when using the SYSCALL kernel invocation method; it will not yet work with SYSENTER.
|
#
6294225c |
|
10-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Rename [ER]FLAGS to FLAGS. Having a different name for the FLAGS register creates an unnecessary difference between ia32 and x86_64 code, since regardless of the name/size the bits in the register mean exactly the same thing.
|
#
7f9970e5 |
|
20-Dec-2015 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
x64: Add x86_64 support
|