Lines matching "and" in /barrelfish-2018-10-04/doc/009-notifications/:

43 memory circular buffers and a polling mechanism which is designed to
47 each channel. This document describes the design and implementation of
51 most of the traces I took of Tim's IDC and THC test program (see
57 messages. Each polls for a while and then yields, and with 3 domains
58 it takes up to 10000 cycles to notice a message, and obviously the
65 \subsection{Polling and cache coherence}
68 cache line starts in shared (S) mode in the cache of both sender and
70 transition to the owned (O) state and an invalidation of the copy in
79 When sender and receiver threads are the only things running on each
82 domains, the message latency is determined by kernel- and user-mode
83 scheduling policies and is typically a function of the kernel clock
84 interrupt rate and the number of domains (and channels) in the system.
103 it is common to see $O(N)$ and even $O(N^2)$ URPC channels between
107 of cache lines), the number of channels can grow rapidly and this will
108 have an effect on polling costs and message latency.
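
The excerpts above discuss polling a per-channel cache line and the way costs grow with the number of channels. The following is a minimal sketch, not taken from the Barrelfish tree, assuming a one-cache-line receive slot whose last word is a sequence flag written by the sender after the payload. It illustrates why an idle poll pass costs one cache-line read per channel, so the work per pass grows linearly with the number of open channels; all names here are illustrative.

\begin{verbatim}
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE 64

/* Assumed slot layout: seven payload words plus a sequence flag that
 * the sender writes last, filling one 64-byte cache line. */
struct urpc_slot {
    uint64_t payload[7];
    volatile uint64_t flag;
} __attribute__((aligned(CACHELINE)));

struct urpc_chan {
    struct urpc_slot *rx;   /* receive slot shared with the sender */
    uint64_t expected;      /* sequence number of the next message */
};

/* Poll one channel: reads the shared cache line; while no message is
 * pending the line stays in the Shared state, so the check is cheap. */
static bool urpc_poll_one(struct urpc_chan *c, uint64_t out[7])
{
    if (c->rx->flag != c->expected) {
        return false;
    }
    for (int i = 0; i < 7; i++) {
        out[i] = c->rx->payload[i];
    }
    c->expected++;
    return true;
}

/* One pass over all channels: one cache-line read per channel, so an
 * idle domain pays O(nchans) per pass. */
static int urpc_poll_all(struct urpc_chan *chans, size_t nchans,
                         uint64_t out[7])
{
    for (size_t i = 0; i < nchans; i++) {
        if (urpc_poll_one(&chans[i], out)) {
            return (int)i;
        }
    }
    return -1;
}
\end{verbatim}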
116 could efficiently dispatch new messages, and if the FIFO overflowed
128 sending timely notifications to a remote kernel, domain and thread.
139 of head pointers and tail pointers (need only 1 byte per entry x
150 If the entry is zero then it writes the dest$\_$chanid and increments the private head pointer.
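
Line 150 above gives the sender-side step for the notification FIFO. The sketch below is a hypothetical rendering of that step, assuming a per-core ring whose entries are zero when free and non-zero when they carry a pending channel id; dest\_chanid is the name used in the excerpt, while the struct and function names are illustrative rather than the actual Barrelfish identifiers.

\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

#define NOTIFY_FIFO_ENTRIES 512    /* illustrative size */

/* Per-core incoming notification FIFO: an entry value of zero means
 * the slot is free; a non-zero value is a pending dest_chanid. */
struct notify_fifo {
    volatile uint64_t entry[NOTIFY_FIFO_ENTRIES];
};

struct notify_sender {
    struct notify_fifo *dest;   /* FIFO of the destination core  */
    unsigned head;              /* sender's private head pointer */
};

/* The step from the excerpt: if the entry at the private head is zero,
 * write dest_chanid into it and increment the private head pointer.
 * dest_chanid must be non-zero for the empty-slot convention to work. */
static bool notify_send(struct notify_sender *s, uint64_t dest_chanid)
{
    unsigned h = s->head % NOTIFY_FIFO_ENTRIES;
    if (s->dest->entry[h] != 0) {
        return false;                /* slot not yet drained: FIFO full */
    }
    s->dest->entry[h] = dest_chanid;
    s->head++;
    return true;
}
\end{verbatim}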
159 between Shared and Modified state on both sender and receiver. I
162 notification (for a shared L3) and 450 cycles cross-package. Note
168 aren't always polling, and in any event this would scale as O(N
182 dest$\_$core is unused and could be treated as 512 flag-bits cf. Simon's
187 the IRQ and acking it on the receiver is probably between 500 and 1000
189 (I tried Richard's HLT in Ring0 with interrupts disabled trick and it
193 hyperthread so that interrupt latency and polling costs were
194 interleaved with normal processing...and only interrupt the
213 already propagates a notification cap between client and server. I
214 hand-edited the bench.if stubs to allocate the caps and invoke the
218 destination core's incoming notification FIFO and sends an IPI with a
221 notification FIFO and does a cswitch to the most recently notified
222 domain. The domain will get an activation and poll its URPC channels
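
Lines 221-222 describe the receiving kernel draining its notification FIFO and switching to the most recently notified domain. A matching receiver-side sketch, under the same assumptions as the sender sketch above (zero means a free slot, names are illustrative), might look like this; the caller would then dispatch the domain owning the returned channel id.

\begin{verbatim}
#include <stdint.h>

#define NOTIFY_FIFO_ENTRIES 512    /* same illustrative size as above */

struct notify_fifo {
    volatile uint64_t entry[NOTIFY_FIFO_ENTRIES];  /* 0 == empty slot */
};

/* Drain all pending notifications: clear each consumed slot back to
 * zero so senders can reuse it, and return the channel id of the most
 * recent notification (0 if nothing was pending). */
static uint64_t notify_drain(struct notify_fifo *f, unsigned *tail)
{
    uint64_t last = 0;
    for (;;) {
        unsigned t = *tail % NOTIFY_FIFO_ENTRIES;
        uint64_t chanid = f->entry[t];
        if (chanid == 0) {
            break;                   /* no more pending entries   */
        }
        f->entry[t] = 0;             /* free the slot for senders */
        (*tail)++;
        last = chanid;               /* most recent notification  */
    }
    return last;
}
\end{verbatim}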
233 the currently running DCB, and ideally the time at which it will next
235 therefore tell if it's worth sending an IPI and return immediately if
238 it isn't the currently running domain and so sends a notification IPI.
239 The monitor on core2 is preempted and the receiver domain gets to run.
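
Lines 233-239 describe deciding whether a notification IPI is worth sending at all. The fragment below is a speculative sketch of just that check, assuming the notifying side can observe which DCB is currently running on the destination core; ipi\_send and the type names are stand-ins for illustration, not the real kernel interfaces.

\begin{verbatim}
#include <stdint.h>

struct dcb;   /* dispatcher control block; opaque in this sketch */

/* Illustrative view of a destination core's scheduler state. */
struct core_state {
    struct dcb *current;        /* DCB currently running there     */
    uint64_t    next_preempt;   /* when it will next be preempted  */
};

/* Stub for illustration; a real kernel would program the APIC here. */
static void ipi_send(unsigned target_core)
{
    (void)target_core;
}

/* Send a notification IPI only when the target dispatcher is not the
 * one already running (and hence polling) on the destination core. */
static void maybe_notify(const struct core_state *remote,
                         unsigned target_core, const struct dcb *target)
{
    if (remote->current == target) {
        return;                 /* already running: IPI is pure cost */
    }
    ipi_send(target_core);      /* wake the remote kernel so it can
                                 * switch to the notified domain     */
}
\end{verbatim}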
244 without excessive penalty, which in turn allows the client and server
245 to remain in the polling loop and notice messages before they yield to
255 \chapter{Testing and Debugging}