#
512a0200 |
|
19-Mar-2020 |
Qian Ge <qian.ge@data61.csiro.au> |
replacing all ifndef with pragma once All the kernel header files now use pragma once rather than ifndef include guards, as the pre-processed C files do not change when header files are protected with pragma once. This will also solve any naming issues caused by ifndef guard macros.
|
#
79da0792 |
|
01-Mar-2020 |
Gerwin Klein <gerwin.klein@data61.csiro.au> |
Convert license tags to SPDX identifiers This commit also converts our own copyright headers to directly use SPDX, but leaves all other copyright headers intact, only adding the SPDX identifier. As far as possible this commit also merges multiple Data61 copyright statements/headers into one for consistency.
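For context, a converted header in the kernel looks roughly like this (the exact copyright line varies per file; the seL4 kernel itself is GPL-2.0-only):

```c
/*
 * Copyright 2020, Data61, CSIRO (ABN 41 687 119 230)
 *
 * SPDX-License-Identifier: GPL-2.0-only
 */
```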
|
#
554f812d |
|
08-Nov-2016 |
Anna Lyons <Anna.Lyons@data61.csiro.au> |
mcs: scheduling context donation over ipc After this commit, threads blocked on an endpoint can receive a scheduling context from the thread that wakes the blocked thread.
|
#
3207abee |
|
20-Mar-2019 |
Curtis Millar <curtis.millar@data61.csiro.au> |
RFC-3: Update context for x86 to use FS and GS.

The TLS_BASE virtual register is replaced with FS_BASE and GS_BASE virtual registers. The FS_BASE and GS_BASE virtual registers are moved to the end of the context so they need not be considered in the kernel exit and entry implementation.

Removed tracking of ES, DS, FS, and GS segment selectors on kernel entry and exit. ES and DS are clobbered on kernel entry with the RPL 3 selector for a DPL 3 linear data segment. FS is clobbered on exit with the RPL 3 selector for the DPL 3 segment with FS_BASE as the base; this is done on exit to reload the value from the GDT. GS is clobbered on exit with the RPL 3 selector for the DPL 3 segment with GS_BASE as the base; this is likewise done on exit to reload the value from the GDT.

Kernel entry and exit code is refactored, simplified, and improved in light of the above changes.

x64: update verified config to use fsgsbase instr The verification platform for x64 relies on the fsgsbase instruction.
|
#
7fc45c4e |
|
18-Mar-2019 |
Anna Lyons <Anna.Lyons@data61.csiro.au> |
style: set code width to 120
|
#
d0930f67 |
|
18-Mar-2019 |
Anna Lyons <Anna.Lyons@data61.csiro.au> |
style: consistently attach return type Add attach-return-type to astyle
|
#
3d10ef0c |
|
18-Mar-2019 |
Anna Lyons <Anna.Lyons@data61.csiro.au> |
style: correct parenthesis padding Use astyle's unpad-paren to unpad all parentheses that are not included by pad-header, pad-oper, and pad-comma.
|
#
b1e799a4 |
|
28-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Config option for RSB flush on context switch This option can be enabled to prevent a user from performing a Spectre-like attack on another user by polluting the RSB (return stack buffer).
|
#
2423c620 |
|
28-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Config option for branch prediction barrier on context switch This option can be enabled to prevent a user from performing a Spectre-like attack on another user by polluting the indirect branch predictor.
|
#
f0594ac9 |
|
28-Jan-2018 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Implement IBRS-based Spectre mitigations Provides the ability to enable the IBRS hardware Spectre mitigation strategies, and completes the software mitigation by disabling jump tables in compilation. The hardware mitigations are largely provided "for completeness" in the hope that they eventually become less expensive. For the moment there is no reason to turn on any of them beyond STIBP if running multicore.
|
#
57fa0e0f |
|
07-Aug-2017 |
Hesham Almatary <hesham.almatary@data61.csiro.au> |
Share linker.h between architectures
|
#
3e57e647 |
|
19-Oct-2016 |
Hesham Almatary <hesham.almatary@data61.csiro.au> |
SELFOUR-501: x86 - Remove PAE support
|
#
df977382 |
|
09-Jan-2017 |
Donny Yang <work@kota.moe> |
x64: Rearrange endpoint_cap structure to improve fastpath speed This looks like we're just swapping the positions of capEPBadge and capEPPtr, but it turns out that the bitwise op performed on capEPPtr to set the high bits was part of the data-dependency critical path, so this actually does improve speed by moving the bitwise op to capEPBadge (albeit it's now an AND instead of an OR). I initially set the field size to 32 bits, but it turns out that causes gcc to emit an instruction (mov r32, r32) that causes the instruction decoder to switch to the legacy decode path for the rest of the fastpath, for some reason.
|
#
c68a69f8 |
|
19-Dec-2016 |
Donny Yang <work@kota.moe> |
x64: Rearrange cnode_cap structure to improve fastpath speed
|
#
cca128ea |
|
04-Jan-2017 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
ia32: Always use IRET instead of sysexit when single stepping

Previous code to return to user level performed:

    popf
    sysexit

The popf was just before the sysexit, as there is a one-instruction delay before the trap flag takes effect, and this ensured we did not attempt to single-step the kernel. Unfortunately there is no one-instruction delay on enabling the interrupt flag, and as a result an interrupt can be taken prior to executing the sysexit instruction. It is possible to exploit this to escalate a user-level thread such that it is running at CPL 0.

This commit changes the restore paths to perform:

    sti
    sysexit

which will correctly delay interrupts until the completion of sysexit. As the popf is now being done earlier, to prevent single-stepping the kernel we return via an iret, instead of sysexit, for threads that have single stepping enabled. To achieve this we:
* When loading debug state, if we enable the Trap flag we also manipulate the register state such that the iret return path will be picked.
* As fastpath_restore does not have an iret return path, we forbid the fastpath from switching to threads that have single stepping enabled.
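The corrected exit sequence can be sketched as follows (a simplified sketch, not the actual seL4 code). The key property is that `sti` only takes effect after the following instruction retires, so no interrupt can arrive between it and `sysexit`:

```asm
    ; sketch of the corrected ia32 slowpath exit
    popf            ; restore EFLAGS early (interrupt flag kept clear);
                    ; a user TF bit set here would single-step the
                    ; remaining kernel code, hence single-stepping
                    ; threads take the iret path instead
    ; ... restore remaining user registers ...
    sti             ; interrupts re-enabled only after the next
    sysexit         ; instruction, so sysexit completes first
```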
|
#
564b9839 |
|
05-Dec-2016 |
Donny Yang <work@kota.moe> |
x86: Avoid writing the fs/gs base if we don't have to
|
#
fbafb777 |
|
28-Nov-2016 |
Donny Yang <work@kota.moe> |
x64: Always set the high bits of certain pointers in the fastpath seL4 is always in the top of memory, so the high bits of pointers are always 1. The autogenerated unpacking code doesn't know that, however, so will try to conditionally sign extend (in 64-bit mode), which wastes cycles in the fast path. Instead, we can do the unpacking ourselves and explicitly set the high bits.
|
#
d73d0e8f |
|
24-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Write FS and GS base when restoring user context This commit moves the write to FS and GS base, allowing for a much more efficient write to GS base under x86-64 SMP. When writing GS base was in Arch_switchToThread it was necessary to write to an MSR so that when swapgs was performed on kernel exit the new value of GS base would be retrieved. Unfortunately writing to an MSR is very expensive and we would much prefer to use the writegsbase instructions instead. By moving this code to restore user context we are able to call swapgs earlier and then use the normal writegsbase instruction.
|
#
1c312610 |
|
23-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Switch to NULL FPU state if we suspect no one is using it Adds a heuristic to switch to a NULL FPU state if we think the FPU is not presently in use. A NULL FPU state is more efficient, as we do not have to enable/disable the FPU when switching threads.
|
#
f0d599f5 |
|
23-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Add FORCE_INLINE to some fastpath functions The compiler fails to realize that inlining these functions is a performance benefit, due to the fact that after inlining their bodies can be optimized together with other inlined functions.
|
#
edc811a8 |
|
17-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Add likely to isValidVTableRoot_fp Although this function is called from the fastpath inside of an `unlikely` macro and the function itself gets inlined, the knowledge that this conjunction is unlikely is somehow lost. Explicitly putting a `likely` here fixes it.
|
#
2c49729d |
|
08-Nov-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Refactor tlb_bitmap to be mode generic Refactors the TLB bitmap code to be generic across ia32 and x86-64.
|
#
a228c492 |
|
30-Oct-2016 |
amrzar <azarrabi@nicta.com.au> |
Include TLBBitmap in the PD to keep track of cores currently accessing this PD
|
#
25bb9437 |
|
24-Oct-2016 |
amrzar <azarrabi@nicta.com.au> |
SELFOUR-635: support for TCB operations This updates TCB invocations to consider the multicore environment. This includes:
- adding the affinity invocation to transfer a TCB between different cores, and updating the TCB structure with a core ID
- checking the thread/core state before performing a TCB operation, e.g. when deleting a runnable TCB
|
#
66dfc2e7 |
|
29-Jul-2016 |
Kent McLeod <kent.mcleod@nicta.com.au> |
Change ia32 to use fs register for IPC buffer The gs register is used by gcc for TLS, and the IPC buffer gets in the way.
|
#
7fbde1bb |
|
14-Jun-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
SELFOUR-287: 32-bit vt-x implementation This is an implementation of vt-x for x86 kernels running in ia32 mode.
|
#
5f7fa2fc |
|
19-Oct-2016 |
Hesham Almatary <hesham.almatary@data61.csiro.au> |
Benchmark: Pack arch-independent benchmark-related files into separate directories
|
#
59f50c71 |
|
19-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: use vspace_root_t type switchToThread_fp is passed variables of type vspace_root_t, not pde_t; this commit brings the two into line.
|
#
b01cf7f0 |
|
12-Oct-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Make stack.h a mode-specific header The functionality of setKernelEntryStackPointer is all ia32-specific; this commit moves it to a mode-specific include location.
|
#
3f9eb7c8 |
|
06-Oct-2016 |
amrzar <azarrabi@nicta.com.au> |
SELFOUR-632: implement cores' non-architecture-dependent structures
|
#
bebfcf6d |
|
23-Jun-2016 |
Kofi Doku Atuah <kofi.dokuatuah@nicta.com.au> |
SELFOUR-499: X86, ARM: Add userspace invocations for hardware debugging

This commit implements the body of SELFOUR-499. The API exposes the x86 DR0-7 and ARM coprocessor 14 features to userspace by virtualizing them as context-switched registers in the TCB, implemented as TCB invocations. This feature is only built when CONFIG_HARDWARE_DEBUG_API is selected.

* Add low-level support routines for setting, unsetting, getting, enabling and disabling breakpoints.
* Add support for single-stepping as well.
  ^ Single-stepping is not supported on ARMv6 since the hardware doesn't support it.
  ^ ARM implements single-stepping as instruction breakpoints configured to fault on every instruction -- this is achieved through the "mismatch" mode, which is only supported from ARMv7 onwards.
* Also support explicit software break requests, a la "BKPT" and "INT $3".
* New invocations:
  * seL4_TCB_SetBreakpoint()
  * seL4_TCB_GetBreakpoint()
  * seL4_TCB_UnsetBreakpoint()
  * seL4_TCB_ConfigureSingleStepping()
* New constants:
  ^ Event types: seL4_InstructionBreakpoint, seL4_DataBreakpoint, seL4_SoftwareBreakRequest.
  ^ Access types: seL4_BreakOnRead, seL4_BreakOnWrite, seL4_BreakOnReadWrite.
  ^ Exports: seL4_NumHWBreakpoints, seL4_NumExclusiveBreakpoints, seL4_NumExclusiveWatchpoints, seL4_NumDualFunctionMonitors, seL4_FirstBreakpoint, seL4_FirstWatchpoint, seL4_FirstDualFunctionMonitor.

See documentation in the seL4 API manual.
|
#
2cbc7123 |
|
28-Sep-2016 |
amrzar <azarrabi@nicta.com.au> |
SELFOUR-630: preliminary booting of application processors
- update core detection code and Kconfig file
- update kernel stack management so that the BSP does not use the boot stack before the IPI to the APs
- move arch-dependent data to a single structure
- add cache line size to Kconfig
- add cpu indexing and apic id mapping
- boot APs to halting state
- add a guard for the kernel stack if there is only one core
|
#
4044e204 |
|
21-Sep-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
Revert "Merge pull request #358 in SEL4/sel4 from ~AZARRABI/sel4:multicore to master" This reverts commit ce2f666bb811c5e4c779829fcb09d5a189ebcdbb, reversing changes made to dc183f96b81f2344d7d0d910fc430f924eaae940.
|
#
fbc071b4 |
|
12-Sep-2016 |
amrzar <azarrabi@nicta.com.au> |
SELFOUR-630: preliminary booting of application processors
- update core detection code and Kconfig file
- update kernel stack management so that the BSP does not use the boot stack before the IPI to the APs
- move arch-dependent data to a single structure
- add cache line size to Kconfig
- add cpu indexing and apic id mapping
- boot APs to halting state
- add a guard for the kernel stack if there is only one core
|
#
a217102b |
|
27-Jul-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
Consolidate benchmark entry/exit Move the benchmark pre/post ambles into the now existing entry/exit hook functions
|
#
3c05b79a |
|
27-Jul-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
Provide generic C entry/exit hook routines It can be desirable to run code before/after user mode without having to write it in assembly. This commit adds such stubs, which get called as the first/last C code when coming into and out of the kernel.
|
#
d33e4854 |
|
13-Jul-2016 |
Kofi Doku Atuah <kofi.dokuatuah@nicta.com.au> |
Use UNREACHABLE() instead of while(1)
|
#
2576eef3 |
|
13-Jul-2016 |
Kofi Doku Atuah <kofi.dokuatuah@nicta.com.au> |
x86: POPF immediately before SYSEXIT.
* Change restore_user_context() and fastpath_restore() to call POPF just before SYSEXIT.
* Also change the way registers are loaded such that we don't access memory that is not guarded by the current position of the stack pointer.

The reason for this change is that EFLAGS.TF, the Trap Flag, only takes effect on the instruction AFTER the instruction that sets EFLAGS.TF. The reason Intel/AMD did it this way is to allow the kernel to enable EFLAGS.TF for userspace without it taking effect on kernel instructions BEFORE the CPU has actually returned to userspace. EFLAGS.TF enables single-stepping. So the full picture is that since we executed other instructions between POPF and SYSEXIT, those instructions were triggering single-stepping IN the kernel. To solve this, we must put POPF immediately before SYSEXIT.
|
#
c6247d36 |
|
27-Jul-2016 |
Hesham Almatary <hesham.almatary@data61.csiro.au> |
SELFOUR-526: Refactor benchmark/debug syscall kernel entry
|
#
09358f9b |
|
23-Jun-2016 |
Hesham Almatary <Hesham.Almatary@nicta.com.au> |
SELFOUR-448 Benchmark: Track thread's CPU utilisation time
|
#
4d76c71e |
|
01-Jun-2016 |
Adrian Danis <Adrian.Danis@data61.csiro.au> |
x86: Calculate register offset from structures The address being calculated is the end of the user context array. There is no need for this to be done as a magic number offset from the tcb_t, this commit takes an index into the actual array, using the constant that is defined as the length of that array.
|
#
e61a1056 |
|
03-Feb-2016 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
SELFOUR-56: Remove diminish rights from IPC Diminish rights existed to prevent a user from sending a writeable cap over a read-only endpoint. It turns out this 'security' can be worked around without difficulty (by putting caps in a cnode and sending the cnode), making the current diminish rights implementation functionally useless. Removing diminish rights has the benefit of simplifying all the IPC paths.
|
#
dd593539 |
|
06-Dec-2015 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
x86: More portable user-mode IO port restriction For x86-64, disabling IO instructions in user mode requires an IO permission map set up properly in the TSS. Setting the IO map base field of the TSS to a value larger than the TSS works for 32-bit, but not 64-bit. This commit sets up an IO permission map usable for both the 32-bit and 64-bit kernel and changes the TSS to use the mapping. The IO permission bitmap is appended to the bitfield-generated tss_t, resulting in the tss_io_t structure.
|
#
d20ca20a |
|
13-Jan-2016 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
x86: Rename ia32->x86 This is a stylistic commit to make names of variables/constants and functions in the kernel more consistent. That is, things that are not IA32 specific, but are generic x86, get renamed to having an x86 name
|
#
eb5b792b |
|
09-Feb-2016 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
trivial: style
|
#
be6b6be1 |
|
24-Nov-2015 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
x86: FS/GS base MSRs When the FS/GS_BASE MSRs are used to set the base addresses, user applications should not touch the FS/GS registers; so the kernel should load proper selectors once, establishing limits and other attributes for the segments.
|
#
210fc1f3 |
|
24-Nov-2015 |
Adrian Danis <Adrian.Danis@nicta.com.au> |
x86: Factor out 32-bit specific parts of the fastpath
|