History log of /openbsd-current/sys/arch/powerpc64/include/pmap.h
Revision	Date	Author	Comments
# 1.19 11-Dec-2023 kettenis

Implement per-CPU caching for the page table page (vp) pool and the PTE
descriptor (pted) pool in the arm64 pmap implementation. This
significantly reduces the side-effects of lock contention on the kernel
map lock that is (incorrectly) translated into excessive page daemon
wakeups. This is not a perfect solution but it does lead to significant
speedups on machines with many CPU cores.

This requires adding a new pmap_init_percpu() function that gets called
at the point where the kernel is ready to set up the per-CPU pool caches.
Dummy implementations of this function are added for all non-arm64
architectures. Some other architectures can probably benefit from
providing an actual implementation that sets up per-CPU caches for
pmap pools as well.

ok phessler@, claudio@, miod@, patrick@


Revision tags: OPENBSD_7_1_BASE OPENBSD_7_2_BASE OPENBSD_7_3_BASE OPENBSD_7_4_BASE
# 1.18 12-Oct-2021 kettenis

Add (minimal) accounting for wired pages in userland pmaps.
This enables enforcing of RLIMIT_MEMLOCK on powerpc64.

ok mpi@


Revision tags: OPENBSD_7_0_BASE
# 1.17 30-May-2021 visa

Include <sys/mutex.h> and <sys/queue.h> earlier in powerpc* pmap.h
to avoid hidden header dependencies.

OK jsg@ deraadt@


# 1.16 11-May-2021 kettenis

A Data Segment Interrupt does not indicate whether it was the result
of a read or a write fault. Unfortunately that means we can't call
uvm_fault(), as we have to pass the right access_type. In particular,
passing PROT_READ for write access on a write-only page will fail.
Fix this issue by inserting an appropriate SLB entry when a mapping
exists at the fault address. A subsequent Data Storage Interrupt
will call uvm_fault() to insert a mapping for the page into the
page tables.

Fixes the sys/kern/fork-exit regress test.

Debugging done by bluhm@ and patrick@
ok bluhm@


Revision tags: OPENBSD_6_8_BASE OPENBSD_6_9_BASE
# 1.15 25-Aug-2020 kettenis

Clear user SLB upon context switch.


# 1.14 17-Aug-2020 kettenis

Switch to a per-proc SLB cache. Seems to make GENERIC.MP kernels
(much more) stable. Probably because we could restore an incoherent
SLB cache since there was no locking in the trap return path.


# 1.13 23-Jul-2020 kettenis

Use per-pmap lock to protect userland SLB handling.


# 1.12 21-Jul-2020 kettenis

Make pmap ready for GENERIC.MP.


# 1.11 06-Jul-2020 kettenis

Hide most of the contents behind #ifdef _KERNEL. Reorganize the file a
bit to achieve this with a single #ifdef/#endif pair.


# 1.10 02-Jul-2020 kettenis

Make the copyin(9) functions work when crossing a segment boundary.


# 1.9 01-Jul-2020 kettenis

Switch to using a fixed segment for the copyin(9) functions.


# 1.8 25-Jun-2020 kettenis

Include <machine/pte.h>.


# 1.7 22-Jun-2020 kettenis

Handle data storage and data segment interrupts from userland as well.


# 1.6 22-Jun-2020 kettenis

Make return-to-user and kernel re-entry work. This adds a per-pmap SLB
cache. We might want to turn that into a per-proc cache at some point, but
this gets us to the point where we can successfully have init(1) do its
first system call.


# 1.5 21-Jun-2020 kettenis

Implement copyin(9), copyout(9), copyinstr(9) and copyoutstr(9).


# 1.4 17-Jun-2020 kettenis

More pmap bits, mostly from powerpc and arm64.


# 1.3 12-Jun-2020 gkoehler

Teach powerpc64 ddb to x, w, break, step, trace.

Copy and adapt db_memrw.c from amd64, so ddb can read and write kernel
memory. It can now insert breakpoints in the kernel text. Change
__syncicache() to prevent an infinite loop when len isn't a multiple
of cacheline_size.

Get breakpoints and single-stepping to work.
Single-stepping uses msr bit PSL_SE (single-step trace enable).

Adapt db_trace.c db_stack_trace_print() from 32-bit powerpc, but without
all its features. For now, powerpc64 trace doesn't print function
arguments and doesn't recognize traps.

"go for it" kettenis@


# 1.2 06-Jun-2020 kettenis

Bootstrap a kernel pmap and enable translations.


# 1.1 16-May-2020 kettenis

Planting the first seed for OpenBSD/powerpc64.

