Searched hist:162211 (Results 1 - 5 of 5) sorted by relevance
/freebsd-11-stable/usr.bin/calendar/
    sunpos.c, dates.c, parsedata.c, io.c
    diff 251647 | Wed Jun 12 05:54:41 MDT 2013 | grog

    Handle some expression regressions. Explicitly use GNU cpp for
    preprocessing. Remove explicit debugging code. Change some variable
    names to be less confusing. Improve some comments. Improve
    indentation.

    PR:        162211 168785
    MFC after: 2 weeks
/freebsd-11-stable/sys/x86/x86/
    busdma_bounce.c
    diff 162211 | Mon Sep 11 04:48:53 MDT 2006 | scottl

    The run_filter() procedure is a means of working around DMA engine
    bugs in old/broken hardware. Unfortunately, it adds cache pressure
    and possible mispredicted branches to the fast path of the
    bus_dmamap_load collection of functions. Since it's meant for
    slow-path exception processing, de-inline it and allow its
    conditions to be pre-computed at tag_create time and thus
    short-circuited at runtime.

    While here, cut down on the size of _bus_dmamap_load_buffer() by
    pushing the bounce page logic into a non-inlined function. Again,
    this helps with cache pressure and mispredicted branches.

    According to the TSC, this shaves off a few cycles on average.
    Unfortunately, the data varies quite a bit due to interrupts and
    preemption, so it's hard to get a good measurement. Real-world
    measurements of network PPS are welcomed. A merge to amd64 and
    other arches is pending more testing.