History log of /freebsd-9.3-release/sys/sparc64/sparc64/mp_exception.S
Revision Date Author Comments
# 267654 19-Jun-2014 gjb

Copy stable/9 to releng/9.3 as part of the 9.3-RELEASE cycle.

Approved by: re (implicit)
Sponsored by: The FreeBSD Foundation

# 225736 22-Sep-2011 kensmith

Copy head to stable/9 as part of 9.0-RELEASE release cycle.

Approved by: re (implicit)


# 223720 02-Jul-2011 marius

Don't waste a delay slot.


# 222827 07-Jun-2011 marius

Fix a problem with r222813; given that we may only operate on the interrupt
globals here but clobber %y, save and restore the latter.


# 222813 07-Jun-2011 attilio

Retire the cpumask_t type and replace it with cpuset_t usage.

This is intended to fix the bug where CPU mask objects are
capped at 32 CPUs. MAXCPU can now be bumped arbitrarily to whatever
value is desired. However, as long as several structures in the kernel
are statically allocated and sized by MAXCPU, it is suggested to keep
it as low as possible for the time being.

Technical notes on this commit itself:
- More functions for handling cpuset_t objects are introduced.
The most notable are cpusetobj_ffs() (which calculates ffs(3)
for a cpuset_t object), cpusetobj_strprint() (which prepares a string
representing a cpuset_t object) and cpusetobj_strscan() (which
creates a valid cpuset_t starting from a string representation); an
illustrative sketch of this style of interface follows this entry.
- pc_cpumask and pc_other_cpus are targeted for removal soon.
With the move from cpumask_t to cpuset_t they are now inefficient
and not really useful. For the time being, please note that
access to pcpu data is protected by sched_pin() in order to avoid
migrating the CPU while reading more than one (possible) word.
- Please note that the size of cpuset_t objects may differ between kernel
and userland. While this is not directly related to the patch itself,
it is good to understand that concept and possibly use the patch
as a reference on how to deal with cpuset_t objects in userland when
accessing kernel members.
- KTR_CPUMASK is changed and is now represented as a string, to be
set as in the example reported in NOTES.

Please also note that MAXCPU is not bumped in this patch, but
private testing has been done up to MAXCPU=128 on a real 8x8x2 (HTT)
machine (amd64).

Please note that the FreeBSD version is not yet bumped because of
the upcoming pcpu changes. However, note that this patch is not
targeted for MFC.

People to thank for the time spent on this patch:
- sbruno, pluknet and Nicholas Esborn (nick AT desert DOT net) tested
several revisions of the patches and really helped improve the
stability of this work.
- marius fixed several bugs in the sparc64 implementation and reviewed
patches related to ktr.
- jeff and jhb discussed the basic approach followed.
- kib and marcel made targeted reviews of some specific parts of the
patch.
- marius, art, nwhitehorn and andreast reviewed the MD specific parts of
the patch.
- marius, andreast, gonzo, nwhitehorn and jceel tested MD specific
implementations of the patch.
- Other people have made contributions on other patches that have been
already committed and have been listed separately.

Companies that should be mentioned for having participated to various
degrees:
- Yahoo! for having offered the machines used for testing with large
CPU counts.
- The FreeBSD Foundation for having sponsored my devsummit attendance,
which has been instrumental.
- Sandvine for having offered offices and infrastructure during
development.

(I really hope I didn't forget anyone; if I did, I apologize in
advance.)
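
Purely as an illustration of the kind of interface described in this
entry, here is a small, hypothetical userland sketch of a fixed-size
CPU set with ffs/print/scan helpers in the spirit of cpusetobj_ffs(),
cpusetobj_strprint() and cpusetobj_strscan(). The type, the names and
the comma-separated hex string format are assumptions made for this
sketch only and do not reproduce the actual sys/cpuset.h code.

/*
 * Illustrative sketch only: a fixed-size CPU set with ffs/print/scan
 * helpers.  Names, layout and string format are assumptions.
 */
#include <stdio.h>
#include <string.h>

#define EX_MAXCPU  128
#define EX_NBITS   (sizeof(unsigned long) * 8)
#define EX_NWORDS  ((EX_MAXCPU + EX_NBITS - 1) / EX_NBITS)

struct ex_cpuset {
	unsigned long bits[EX_NWORDS];
};

/* 1-based index of the lowest set CPU, 0 if the set is empty. */
static int
ex_cpuset_ffs(const struct ex_cpuset *set)
{
	size_t i, b;

	for (i = 0; i < EX_NWORDS; i++)
		for (b = 0; b < EX_NBITS; b++)
			if (set->bits[i] & (1UL << b))
				return ((int)(i * EX_NBITS + b) + 1);
	return (0);
}

/* Render the set as comma-separated hex words, highest word first. */
static char *
ex_cpuset_strprint(char *buf, size_t len, const struct ex_cpuset *set)
{
	size_t i, off;

	off = 0;
	for (i = EX_NWORDS; i-- > 0 && off < len; )
		off += (size_t)snprintf(buf + off, len - off, "%lx%s",
		    set->bits[i], i != 0 ? "," : "");
	return (buf);
}

/* Parse the format produced above; returns 0 on success. */
static int
ex_cpuset_strscan(struct ex_cpuset *set, const char *buf)
{
	size_t i;

	for (i = EX_NWORDS; i-- > 0; ) {
		if (sscanf(buf, "%lx", &set->bits[i]) != 1)
			return (-1);
		buf = strchr(buf, ',');
		if (buf == NULL)
			return (i == 0 ? 0 : -1);
		buf++;
	}
	return (0);
}

int
main(void)
{
	struct ex_cpuset set;
	char buf[128];

	memset(&set, 0, sizeof(set));
	set.bits[0] = 0x5UL;	/* CPUs 0 and 2 */
	printf("ffs=%d mask=%s\n", ex_cpuset_ffs(&set),
	    ex_cpuset_strprint(buf, sizeof(buf), &set));
	return (0);
}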


# 211071 08-Aug-2010 marius

- As it is not possible for sched_bind(9) to context switch with
td_critnest > 1 when not already running on the desired CPU, read the
TICK counter of the BSP via a direct cross trap request in that case
instead.
- Treat the STICK based timecounter the same way as the TICK based one
regarding its quality and obtaining the counter value from the BSP.
Like the TICK timers, the STICK ones are also only synchronized during
their startup (which might not result in good synchronicity in the
first place) but not afterwards, and might drift over time, causing
problems when the time is read from different CPUs (see r135972).


# 182877 08-Sep-2008 marius

USIII and beyond CPUs have stricter requirements when it comes
to synchronization needed after stores to internal ASIs in order
to make side-effects visible. This mainly requires the MEMBAR #Sync
after such stores to be replaced with a FLUSH. We use KERNBASE as
the address to FLUSH as it is guaranteed to not trap. Actually,
the USII synchronization rules also already require a FLUSH in
pretty much all of the cases changed.
We're also hitting an additional USIII synchronization rule which
requires stores to AA_IMMU_SFSR to be immediately followed by a DONE,
FLUSH or RETRY. Doing so triggers a RED state exception, though, so
leave the MEMBAR #Sync in place. Linux apparently has also gotten away
with doing the same for quite some time now, apart from the fact that
it's not clear to me why we need to clear the valid bit from the
SFSR in the first place.

Reviewed by: nwhitehorn


# 182689 02-Sep-2008 marius

- USIII-based machines can consist of CPUs having different cache
sizes (and running at different frequencies), so move the cacheinfo
to the PCPU data; a simplified sketch of this layout follows this
entry. While at it, remove some redundant and/or unused
members from struct cacheinfo.
- In sparc64_init don't assume the first CPU node we find in the OFW
device tree is the BSP.
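
Below is a rough, hypothetical sketch of what moving the cache
parameters from a single global into per-CPU data looks like; the field
names and layout are assumptions for illustration, not the real struct
cacheinfo or struct pcpu definitions.

/*
 * Illustrative sketch only: keep the cache description in per-CPU data
 * rather than in one global, so CPUs with different cache sizes can
 * each record their own values.  Names and fields are assumptions.
 */
#include <stdint.h>

struct ex_cacheinfo {
	uint32_t dc_size;	/* data cache size in bytes */
	uint32_t dc_linesize;	/* data cache line size */
	uint32_t ec_size;	/* external (L2) cache size */
};

struct ex_pcpu {
	int			pc_cpuid;
	struct ex_cacheinfo	pc_cache;	/* per-CPU, not a global */
};

/* Each CPU fills in its own copy (e.g. from its OFW node) and callers
 * consult the per-CPU structure instead of a shared global. */
static uint32_t
ex_dcache_size(const struct ex_pcpu *pc)
{
	return (pc->pc_cache.dc_size);
}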


# 166105 19-Jan-2007 marius

Convert the remainder of the low hanging fruit regarding including
headers in .S files directly rather than getting to their macros through
genassym.c/assym.s, so there are fewer headers genassym.c has to be
kept in sync with.
While at it fix some style(9) bugs (indentation, prototype format,
header sorting, etc.) and remove trailing whitespace.


# 116567 19-Jun-2003 jake

- Rename the IPI_WAIT macro to IPI_DONE.
- Don't require all receivers of ipis to wait for all other receivers,
only that the sender wait for all receivers. This should reduce the
amount of time spent with interrupts disabled, which may be a cause
of ipi timeouts.

Discussed with: tmm
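
A hypothetical sketch of the acknowledgement scheme described in this
entry: each receiver simply clears its own bit and returns, and only
the sender spins until every targeted CPU has acknowledged. The C11
atomics and the names used here stand in for, and do not reproduce,
the kernel's actual IPI plumbing.

/*
 * Illustrative sketch only: sender-side wait for IPI completion.
 * Receivers acknowledge and return immediately, so they spend less
 * time with interrupts disabled; only the sender spins.
 */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t ipi_pending;	/* one bit per targeted CPU */

/* Sender: post the request, deliver the IPIs, wait for all receivers. */
static void
ipi_send_and_wait(uint32_t target_mask)
{
	atomic_store(&ipi_pending, target_mask);
	/* ... deliver the IPI to each CPU in target_mask here ... */
	while (atomic_load(&ipi_pending) != 0)
		;	/* spin until every receiver has checked in */
}

/* Receiver (runs in the IPI handler on CPU 'cpuid'): ack and leave,
 * without waiting for any other receiver. */
static void
ipi_handler_done(int cpuid)
{
	atomic_fetch_and(&ipi_pending, ~(UINT32_C(1) << cpuid));
}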


# 114188 28-Apr-2003 jake

- Fix placement of cvs ids in previous commit to match .S files in libc.
- gcc uses 32 byte alignment for functions regardless of profiling, so
follow suit.


# 114085 26-Apr-2003 obrien

I was wrong, the ENTRY bits in asm.h did have a purpose -- for userland.
Restore the bits and remove them from asmacros.h. *.S will now be asm.h
consumers.

Approved by: jake


# 112399 19-Mar-2003 jake

- Remove unused cache flushing routines. These will not necessarily work
on future UltraSPARC cpus for which the data cache is not direct mapped.
- Move UltraSPARC I and II (spitfire, blackbird, sapphire, sabre) specific
functions to spitfire.c, and add cheetah.c for UltraSPARC III specific
functions. Initially just cache flushing, but there are a few other
functions that will need to move here.
- Add an ipi handler for data cache flushing on UltraSPARC III.
- Use function pointers to select the right cache flushing functions based
on cpu_impl; a simplified sketch of this dispatch follows this entry.

With this it is possible to boot single user from an mfs root on UltraSPARC
III systems, including spinning up secondary processors. There is currently
no support for the host to pci bridge, and no documentation for it is
publicly available.

Thanks to Oleg Derevenetz for providing access to a system with UltraSPARC
III+ cpus.
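
A simplified, hypothetical sketch of the function pointer selection
described in this entry: the cache flush routines are chosen once at
startup based on the CPU implementation, and callers always go through
the pointer. The ex_* names and the impl constants are placeholders,
not the real spitfire.c/cheetah.c symbols.

/*
 * Illustrative sketch only: select cache flushing implementations once,
 * based on the CPU implementation, instead of branching on every call.
 */
#include <stddef.h>
#include <stdint.h>

#define EX_IMPL_SPITFIRE	0x10	/* placeholder value */
#define EX_IMPL_CHEETAH		0x14	/* placeholder value */

typedef void ex_dcache_page_inval_t(uintptr_t pa);

/* UltraSPARC I/II: direct-mapped data cache. */
static void
spitfire_dcache_page_inval(uintptr_t pa)
{
	(void)pa;
}

/* UltraSPARC III: different cache organization, may need IPIs. */
static void
cheetah_dcache_page_inval(uintptr_t pa)
{
	(void)pa;
}

/* Set once during startup; callers go through the pointer. */
static ex_dcache_page_inval_t *ex_dcache_page_inval = NULL;

static void
ex_cache_init(int cpu_impl)
{
	if (cpu_impl >= EX_IMPL_CHEETAH)
		ex_dcache_page_inval = cheetah_dcache_page_inval;
	else
		ex_dcache_page_inval = spitfire_dcache_page_inval;
}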


# 108379 28-Dec-2002 jake

Use the meaningful mnemonics for ancillary state registers now that gas
is invoked properly to understand them.

%asr19 -> %gsr
%asr20 -> %set_softint
%asr21 -> %clear_softint


# 100718 26-Jul-2002 jake

Remove the tlb argument to tlb_page_demap (itlb or dtlb), in order to better
match the pmap_invalidate api.


# 98033 08-Jun-2002 jake

Remove test code.


# 97001 20-May-2002 jake

Add SMP aware cache flushing functions, which operate on a single physical
page. These send IPIs if necessary in order to keep the caches in sync on
all cpus.


# 92211 13-Mar-2002 jake

Fix braino.


# 92199 13-Mar-2002 jake

Make IPI_WAIT use a bit mask of the cpus that a pmap is active on and only
wait for those cpus, instead of all of them by using a count. Oops.
Make the pointer to the mask that the primary cpu spins on volatile, so
gcc doesn't optimize out an important load. Oops again.
Activate tlb shootdown ipi synchronization now that it works. We have
all involved cpus wait until all the others are done. This may not be
necessary; it is mostly for sanity.
Make the trigger level interrupt ipi handler work.

Submitted by: tmm
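
A minimal sketch of why the volatile qualifier mentioned in this entry
matters: without it, the compiler may legally hoist the load out of the
spin loop and never observe the other CPUs clearing their bits. The
function and parameter names are hypothetical.

/*
 * Illustrative sketch only: the primary CPU spins until the others have
 * cleared their bits.  The volatile qualifier forces a fresh load of
 * *mask on every iteration of the loop.
 */
#include <stdint.h>

static void
ipi_wait(volatile uint32_t *mask)
{
	while (*mask != 0)
		;	/* re-read the mask each time around */
}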


# 91783 07-Mar-2002 jake

Implement delivery of tlb shootdown ipis. This is currently more fine-grained
than the other implementations; we have complete control over the tlb, so we
only demap specific pages. We take advantage of the ranged tlb flush api
to send one ipi for a range of pages, and due to the pm_active optimization
we rarely send ipis for demaps from user pmaps.

Remove now unused routines to load the tlb; this is only done once outside
of the tlb fault handlers.
Minor cleanups to the smp startup code.

This boots multi user with both cpus active on a dual ultra 60 and on a
dual ultra 2.
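
A hedged sketch of the batching idea from this entry: a single request
describes a whole range of pages, so one IPI can replace a per-page IPI
storm, and nothing is sent at all when the pmap is not active on any
other CPU. The structure and function names are invented for
illustration and are not the kernel's actual interface.

/*
 * Illustrative sketch only: demap a range of pages with one cross-CPU
 * request instead of one IPI per page.
 */
#include <stdint.h>

#define EX_PAGE_SIZE	8192U		/* sparc64 base page size */

struct ex_tlb_range_req {
	uintptr_t start;		/* first VA to demap */
	uintptr_t end;			/* last VA + 1 */
};

/* Placeholder for whatever actually delivers the cross-CPU request. */
static void
ex_ipi_send(uint32_t cpus, const struct ex_tlb_range_req *req)
{
	(void)cpus;
	(void)req;
}

/* Sender: one IPI covers the whole range; skip it entirely when the
 * pmap is not active on any other CPU (the pm_active optimization). */
static void
ex_tlb_range_demap(uint32_t other_cpus, uintptr_t start, uintptr_t end)
{
	struct ex_tlb_range_req req = { start, end };

	if (other_cpus != 0)
		ex_ipi_send(other_cpus, &req);
}

/* Receiver: walk the range page by page and demap each page from the
 * local TLB (the demap operation itself is omitted here). */
static void
ex_tlb_range_demap_local(const struct ex_tlb_range_req *req)
{
	uintptr_t va;

	for (va = req->start; va < req->end; va += EX_PAGE_SIZE)
		;
}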


# 89051 08-Jan-2002 jake

Add initial smp support. This gets as far as allowing the secondary
cpu(s) into the kernel, and sync-ing them up to "kernel" mode so we can
send them ipis, which also work.

Thanks to John Baldwin for providing me with access to the hardware
that made this possible.

Parts obtained from: bsd/os