#
3d8ee5c5 |
|
14-Sep-2018 |
Nick Maniscalco <maniscalco@google.com> |
[kernel][arm64][mp][timer] Make arch_spinloop_pause a no-op on arm64 This change removes arch_spinloop_signal and changes arm64's arch_spinloop_pause to issue YIELD rather than WFE. The purpose of this change is to eliminate potential bugs that may arise from WFE with no corresponding SEV. In two places (timer.cpp and mp.cpp) arch_spinloop_pause, in conjunction with arch_spinloop_signal, is used to create a kind of condition variable, suspending execution until another CPU calls arch_spinloop_signal. Elsewhere (like some UART drivers), it is simply used as a hinting no-op in busy loops. On x64, arch_spinloop_pause is PAUSE and arch_spinloop_signal is empty. On arm64, arch_spinloop_pause is WFE (Wait For Event) and arch_spinloop_signal is SEV (Send Event). WFE suspends execution until an event is signaled (via SEV, global monitor transition, etc.). This means that any use of WFE without a corresponding SEV (or other mechanism like Load-Exclusive) could potentially suspend the CPU for an indefinite period of time. Test: On VIM2 and Eve, ran the following: - k ut sync_ipi_tests - k ut timer - k timer_stress 60 ZX-2562 #comment followup Change-Id: If2b8facef4845865d5bfe7a4d0089cd5aef791a6
|
#
e633bdc3 |
|
20-Aug-2018 |
Mark Seaborn <mseaborn@google.com> |
[kernel] Rename "in_int_handler" to "blocking_disallowed" The name "in_int_handler" is misleading because the field is only set to true for part of the interrupt handler -- the part during which blocking operations are disallowed. Rename the field to reflect its actual meaning. Bug: none Test: build Change-Id: Iab41d3650ea805b8de0b13de76b64c5819f48b2f
|
#
35a1ba79 |
|
27-Feb-2018 |
David Stevens <stevensd@google.com> |
[kernel][sched][x86] Use monitor/mwait when idle When rescheduling idle cpus, use monitor/mwait instead of relying on IPIs. This change adds some reschedule-specific arch hooks, instead of relying on the arch IPI hooks. The x86 percpu state includes a variable that tracks whether the cpu is running the idle thread. The idle thread monitors and mwaits on that variable. Then other threads can reschedule the idle cpu by clearing the monitored variable. ZX-1713 ZX-1293 #done Change-Id: I5f7bf073e3e5b6e1e5fa4febc412f52f40773e2d
|
#
048443c0 |
|
27-Feb-2018 |
Mark Seaborn <mseaborn@google.com> |
[kernel] Move arch_in_int_handler() definition to an arch-neutral file This will make it easier to add more fields to the percpu struct while minimising duplication between x86 and ARM64. ZX-1690 Change-Id: Ibd1fd7f5c5b339fd29fecb73fdbcbd20466a8122
|
#
c5cca972 |
|
25-Jan-2018 |
Aaron Green <aarongreen@google.com> |
[vdso] Add system_get_cpu_features This CL adds a VDSO call to get CPU feature bits on arm64. __get_cpuid should be used instead on x86-64. ZX-1552 Change-Id: I71247a289c318ce84d90528b0079f31463af4a19
|
#
773c5842 |
|
29-Jan-2018 |
Mark Seaborn <mseaborn@google.com> |
[kernel][interrupts] Add comment to document role of arch_in_int_handler() Also add the prototype for arch_set_in_int_handler() to this arch-neutral header to act as additional documentation. ZX-1490 Change-Id: I01b1ba932ac8005ff2e33f377dbe929b7811a9c3
|
#
a17f89f6 |
|
16-Jan-2018 |
Mark Seaborn <mseaborn@google.com> |
[kernel] Fix #include cycle between arch/ops.h and arch/$ARCH/mp.h There was an #include loop involving these headers: kernel/include/arch/ops.h kernel/arch/x86/include/arch/arch_ops.h (or arm64 version) kernel/arch/x86/include/arch/x86/mp.h (or arm64 version) The cycle makes it difficult to change these headers without something going wrong. We can break the cycle by removing mp.h's #include of arch/ops.h: * Do #include <kernel/cpu.h> to get cpu_num_t. * Move __CPU_ALIGN out of arch/ops.h to a separate header. ZX-1490 Change-Id: I7971a2e0bfa957d292e5a3bb2c6df903f5ffd14c
|
#
94c28cc6 |
|
03-Nov-2017 |
Travis Geiselbrecht <travisg@google.com> |
[zircon] replace the global ASSEMBLY define with compiler emitted __ASSEMBLER__ Change-Id: I186098dfacb67f8ade7c4052cd477ef0e384c0e1
|
#
afe7dab1 |
|
27-Oct-2017 |
Travis Geiselbrecht <travisg@google.com> |
[kernel] remove CACHE_LINE since it's now fully dynamic on both architectures Move the few places where we statically allocate data to use MAX_CACHE_LINE, which is defined as the largest known cache line size on any given architecture. Change-Id: I819cb3bbc16e02de03db521e37e36e8b89dd6c18
|
#
0601e9df |
|
30-Aug-2017 |
Travis Geiselbrecht <travisg@google.com> |
[kernel][mp] add new header with types and routines to deal with cpu numbers Add a few more types and switch some apis to using those. No functional change. Change-Id: I67add1247cf36d9e6a55f15dd809ffe4bafe06fd
|
#
458677b0 |
|
25-Sep-2017 |
George Kulakowski <kulakowski@google.com> |
[arch] Remove unused <kernel/atomic.h> inclusion Change-Id: Idccd3341028292ddba4ecd643192b22ea687c2bf
|
#
f3e2126c |
|
12-Sep-2017 |
Roland McGrath <mcgrathr@google.com> |
[zx] Magenta -> Zircon The Great Renaming is here! Change-Id: I3229bdeb2a3d0e40fb4db6fec8ca7d971fbffb94
|
#
a3d4dbfb |
|
17-Jul-2017 |
George Kulakowski <kulakowski@google.com> |
[kernel][arch] Remove old assertion about per-arch atomics. Everything we support now uses atomics derived from the compiler builtins, whether C11, mxtl, or the kernel's atomic.h. Change-Id: I1b734f0ebf45941ca83798276b371954f7242859
|
#
8aaf4e9d |
|
17-Jul-2017 |
George Kulakowski <kulakowski@google.com> |
[kernel][atomics] Move private/magenta/atomic.h into the kernel This is now only used by kernel C code. Everything else is either C++ and is using mxtl::atomic, or is in userspace and can use the standard C11 versions. Change-Id: I113d3e1c7fe6a1bbe3d0e54cd6cbac8b62ccd194
|
#
a2681dcb |
|
02-Jun-2017 |
Travis Geiselbrecht <travisg@google.com> |
[kernel] restructure some separate per cpu structures into a single kernel structure Allows for better cache locality since the structure is guaranteed to not be aliased between cpus. Also sets the kernel up for per cpu scheduler queues and other goodies. Change-Id: I5c90c1571018fa56e695a06eae6c1c50b7fbc4b7
|
#
e12874d1 |
|
11-Apr-2017 |
Aaron Green <aarongreen@google.com> |
[kernel][arch] 64 bit arch_cycle_count arch_cycle_count previously returned a uint32_t, presumably for arm32 support. All target platforms are now 64 bit, and the 32 bit counter can only measure limited durations due to overflows: at 1 GHz it wraps around every ~4.3 seconds. This CL moves it to 64 bits. Change-Id: Idf68f94ef0e617ba5a6b5ff05cc1f5f28492dcfe
|
#
9b8b4555 |
|
08-Dec-2016 |
John Grossman <johngro@google.com> |
[atomic] Hoist atomic wrappers up. Hoist atomic wrappers up to the public magenta level so they can be used with user-mode code as well as kernel code. Change-Id: I3e1159999a215ceb8b5229ea713a7c33ddcf2aca
|
#
c501a5a8 |
|
17-Sep-2016 |
Travis Geiselbrecht <travisg@google.com> |
[kernel] add per arch optimized routines for zeroing a page buffer Change-Id: I2edf8d12f8c568a455fa97504727340c02382ad8
|
#
f8aaa788 |
|
29-Sep-2016 |
Roland McGrath <mcgrathr@google.com> |
[kernel] Use compiler's types and values for <stdint.h> The compiler provides predefined macros for what it thinks all the types should be. Just do what the compiler says rather than second-guessing. However, the arm-eabi GCC target uses 'long' for 'int32_t' et al, which is unlike all other targets (including arm-linux), so override the compiler's choices for those. GCC doesn't give direct assistance in getting the <inttypes.h> PRI* macros right, though Clang does. So we have conditionals for that, defining the same macros that Clang predefines. This requires cleaning up lots of printf-style formats that were sloppily using "whatever works", to use the proper <inttypes.h> macro for the types being used. Also some declarations of functions and typedefs using 'long long' are changed to use 'int64_t', etc. Change-Id: I35e303510d06f48548b958f844790a3acfbf2eea
|
#
9b7c44af |
|
25-Aug-2016 |
John Grossman <johngro@google.com> |
[system] Merge kernel compiler.h with global compiler.h Change-Id: Ia9f35fdb5321c82a3f844510ca93aada81d2f8c9
|
#
8dd90f76 |
|
08-Aug-2016 |
Carlos Pizano <cpu@google.com> |
[kernel][magenta] Add handle count to Dispatchers Now dispatchers have two counts, the traditional reference count and now the handle count. Lifetime still works as usual, but now we have knowledge of when the handle count reaches zero. At zero handles a virtual function is called that dispatchers can override. This is useful for freeing state that is user-mode facing or for providing the right semantics. Change-Id: I3a01b2e050b43a8487c90fac4996a9b3c958d4ae
|
#
1eec14f3 |
|
21-Jul-2016 |
Carlos Pizano <cpu@google.com> |
[kernel][arch] implement 64-bit atomic ops The new ops have both signed and unsigned long long versions. Remove the downlevel (armv6 and arm-m) atomic support which was gated by "ARCH_IMPLEMENTS_ATOMICS 1". Now the atomics are only implemented by compiler builtins, which work with the architectures that we care about. Change-Id: I6acd305808d6af765803ed49d53b5912f12ddaa7
|
#
5b74afea |
|
10-Jun-2016 |
Todd Eisenberger <teisenbe@google.com> |
[cpu hotplug] Add support for shutting down a CPU This adds the kernel framework for shutting down CPUs, and support on x86-64 for actually doing so. Change-Id: Iabedda8dfac8d179401541ff2e5ebffcd629d4b9
|
#
53b9e1c8 |
|
15-Jun-2016 |
The Fuchsia Authors <authors@fuchsia.local> |
[magenta] Initial commit
|