Lines Matching defs:cyclic

50  *  The cyclic subsystem has been designed to take advantage of chip
56 * The cyclic subsystem is a low-level kernel subsystem designed to provide
58 * with existing terms, we dub such an interval timer a "cyclic"). Cyclics
60 * optionally bound to a CPU or a CPU partition. A cyclic's CPU or CPU
61 * partition binding may be changed dynamically; the cyclic will be "juggled"
62 * to a CPU which satisfies the new binding. Alternatively, a cyclic may
68 * The cyclic subsystem has interfaces with the kernel at-large, with other
70 * resume subsystem) and with the platform (the cyclic backend). Each
74 * The following diagram displays the cyclic subsystem's interfaces to
76 * the large arrow indicating the cyclic subsystem's consumer interface.
106 * cyclic_add() <-- Creates a cyclic
107 * cyclic_add_omni() <-- Creates an omnipresent cyclic
108 * cyclic_remove() <-- Removes a cyclic
109 * cyclic_bind() <-- Change a cyclic's CPU or partition binding
110 * cyclic_reprogram() <-- Reprogram a cyclic's expiration
115 * cyclic_offline() <-- Offlines cyclic operation on a CPU
119 * cyclic_suspend() <-- Suspends the cyclic subsystem on all CPUs
120 * cyclic_resume() <-- Resumes the cyclic subsystem on all CPUs
124 * cyclic_init() <-- Initializes the cyclic subsystem
135 * The cyclic subsystem is designed to minimize interference between cyclics
136 * on different CPUs. Thus, all of the cyclic subsystem's data structures
139 * Each cyc_cpu has a power-of-two sized array of cyclic structures (the
147 * heap is keyed by cyclic expiration time, with parents expiring earlier
153 * compares the root cyclic's expiration time to the current time. If the
155 * cyclic. Upon return from cyclic_expire(), the cyclic's new expiration time
158 * examines the (potentially changed) root cyclic, repeating the
160 * cyclic has an expiration time in the future. This expiration time
163 * shortly after the root cyclic's expiration time.
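
In outline, the expiry processing described above behaves like the following sketch. This is not the actual cyclic_fire() body (that function is excerpted further below); the heap is indexed through the real cyp_heap/cyp_cyclics fields, but downheap() and reprogram_backend() are placeholder names for the heap and backend operations.

static void
expiry_sketch(cyc_cpu_t *cpu)
{
        hrtime_t now = gethrtime();

        for (;;) {
                cyc_index_t ndx = cpu->cyp_heap[0];        /* root of the heap */
                cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
                hrtime_t exp = cyclic->cy_expire;

                if (exp > now)
                        break;                  /* root expires in the future; done */

                cyclic_expire(cpu, ndx, cyclic);           /* call or post the handler */
                cyclic->cy_expire = exp + cyclic->cy_interval;
                downheap(cpu, 0);               /* placeholder: restore the heap property */
        }

        /* Program the backend to interrupt again shortly after the new root expires. */
        reprogram_backend(cpu, cpu->cyp_cyclics[cpu->cyp_heap[0]].cy_expire);
}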
263 * the cyclic at cyp_cyclics[cyp_heap[number_of_elements]], incrementing
281 * To insert into this heap, we would just need to fill in the cyclic at
289 * because the cyclic does not keep a backpointer into the heap. This makes
296 * CY_HIGH_LEVEL to expire a cyclic. Cyclic subsystem consumers are
297 * guaranteed that for an arbitrary time t in the future, their cyclic
299 * there must be a one-to-one mapping between a cyclic's expiration at
308 * CY_HIGH_LEVEL but greater than the level of a cyclic for a period of
309 * time longer than twice the cyclic's interval, the cyclic will be expired
313 * number of times a cyclic has been expired and the number of times it's
314 * been handled in a "pending count" (the cy_pend field of the cyclic
316 * expired cyclic and posts a soft interrupt at the desired level. In the
317 * cyclic subsystem's soft interrupt handler, cyclic_softint(), we repeatedly
318 * call the cyclic handler and decrement cy_pend until we have decremented
331 * The producer (cyclic_expire() running at CY_HIGH_LEVEL) enqueues a cyclic
332 * by storing the cyclic's index to cypc_buf[cypc_prodndx] and incrementing
334 * CY_LOCK_LEVEL or CY_LOW_LEVEL) dequeues a cyclic by loading from
339 * enqueues a cyclic if its cy_pend was zero (if the cyclic's cy_pend is
341 * cyclic_softint() only consumes a cyclic after it has decremented the
400 * When cyclic_softint() discovers a cyclic in the producer/consumer buffer,
401 * it calls the cyclic's handler and attempts to atomically decrement the
407 * - If the cy_pend was decremented to 0, the cyclic has been consumed;
412 * to be done on the cyclic; cyclic_softint() calls the cyclic handler
422 * having cyclic_expire() only enqueue the specified cyclic if its
423 * cy_pend count is zero; this assures that each cyclic is enqueued at
427 * cyclic. In part to obey this constraint, cyclic_softint() calls the
428 * cyclic handler before decrementing cy_pend.
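
Putting those two rules together, the consumer side behaves roughly like the sketch below: the handler is always called before cy_pend is decremented, and the compare-and-swap loop mirrors the cas32() usage in the real cyclic_softint() (excerpted further below). Here cyclics, buf and consmasked stand for the same quantities as in that code, and the resize and removal cases are omitted.

cyclic_t *cyclic = &cyclics[buf[consmasked]];   /* dequeue from the pcbuffer */

for (;;) {
        uint32_t pend, npend;

        (*cyclic->cy_handler)(cyclic->cy_arg);  /* call before decrementing cy_pend */

        do {
                pend = cyclic->cy_pend;
                npend = pend - 1;
        } while (cas32(&cyclic->cy_pend, pend, npend) != pend);

        if (npend == 0)
                break;          /* consumed; the consumer index may now advance */
}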
440 * on the CPU being resized, but should not affect cyclic operation on other
507 * the cyclic subsystem: after cyclic_remove() returns, the cyclic handler
510 * Here is the procedure for cyclic removal:
515 * 3. The current expiration time for the removed cyclic is recorded.
516 * 4. If the cy_pend count on the removed cyclic is non-zero, it
518 * 5. The cyclic is removed from the heap
525 * The cy_pend count is decremented in cyclic_softint() after the cyclic
530 * until cyclic_softint() has finished calling the cyclic handler. To let
531 * cyclic_softint() know that this cyclic has been removed, we zero the
534 * caught during a resize (see "Resizing", above) or that the cyclic has been
536 * cyclic handler cyp_rpend - 1 times, and posts on cyp_modify_wait.
540 * At first glance, cyclic juggling seems to be a difficult problem. The
541 * subsystem must guarantee that a cyclic doesn't execute simultaneously on
542 * different CPUs, while also assuring that a cyclic fires exactly once
545 * multiple CPUs. Therefore, to juggle a cyclic, we remove it from its
547 * in "Removing", above). We then add the cyclic to the new CPU, explicitly
549 * leverages the existing cyclic expiry processing, which will compensate
554 * Normally, after a cyclic fires, its next expiration is computed from
555 * the current time and the cyclic interval. But there are situations when
557 * is using the cyclic. cyclic_reprogram() allows this to be done. This,
558 * unlike the other kernel at-large cyclic API functions, is permitted to
559 * be called from the cyclic handler. This is because it does not use the
562 * When cyclic_reprogram() is called for an omni-cyclic, the operation is
563 * applied to the omni-cyclic's component on the current CPU.
565 * If a high-level cyclic handler reprograms its own cyclic, then
566 * cyclic_fire() detects that and does not recompute the cyclic's next
567 * expiration. However, for a lock-level or a low-level cyclic, the
568 * actual cyclic handler will execute at the lower PIL only after
572 * expiration to CY_INFINITY. This effectively moves the cyclic to the
575 * "one-shot" timers in the context of the cyclic subsystem without using
578 * Here is the procedure for cyclic reprogramming:
581 * that houses the cyclic.
583 * 3. The cyclic is located in the cyclic heap. The search for this is
586 * 4. The cyclic expiration is set and the cyclic is moved to its
589 * 5. If the cyclic move modified the root of the heap, the backend is
593 * the serialization used has to be efficient. As with all other cyclic
595 * during reprogramming, the cyclic must not be juggled (regular cyclic)
596 * or stopped (omni-cyclic). The implementation defines a per-cyclic
600 * an omni-cyclic is reprogrammed on different CPUs frequently.
603 * the responsibility of the user of the reprogrammable cyclic to make sure
604 * that the cyclic is not removed via cyclic_remove() during reprogramming.
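
As a concrete (hypothetical) example of the one-shot pattern described above, assume a cyclic was added with cyt_when and cyt_interval both set to CY_INFINITY, and its ID saved in oneshot_id:

/* Arm the timer to fire once, 'delta' nanoseconds from now. */
(void) cyclic_reprogram(oneshot_id, gethrtime() + delta);

/* Disarm it again by pushing its expiration back out to infinity. */
(void) cyclic_reprogram(oneshot_id, CY_INFINITY);

Because cyclic_reprogram() may be called from cyclic handler context, the handler itself can perform the second call once the one-shot work is done.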
606 * some sort of synchronization for its cyclic-related activities. This
607 * little caveat exists because the cyclic ID is not really an ID. It is
663 panic("too many cyclic coverage points");
847 cyclic_expire(cyc_cpu_t *cpu, cyc_index_t ndx, cyclic_t *cyclic)
850 cyc_level_t level = cyclic->cy_level;
853 * If this is a CY_HIGH_LEVEL cyclic, just call the handler; we don't
857 cyc_func_t handler = cyclic->cy_handler;
858 void *arg = cyclic->cy_arg;
861 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
865 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
876 if (cyclic->cy_pend++ == 0) {
881 * We need to enqueue this cyclic in the soft buffer.
883 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expire-enq", cyclic,
895 if (cyclic->cy_pend == 0) {
896 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "expire-wrap", cyclic);
897 cyclic->cy_pend = UINT32_MAX;
900 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expire-bump", cyclic, 0);
903 be->cyb_softint(be->cyb_arg, cyclic->cy_level);
911 * cyclic_fire() is the cyclic subsystem's CY_HIGH_LEVEL interrupt handler.
912 * Called by the cyclic backend.
923 * of the cyclic subsystem does not rely on the timeliness of the backend.
941 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
960 cyclic = &cyclics[ndx];
962 ASSERT(!(cyclic->cy_flags & CYF_FREE));
964 CYC_TRACE(cpu, CY_HIGH_LEVEL, "fire-check", cyclic,
965 cyclic->cy_expire);
967 if ((exp = cyclic->cy_expire) > now)
970 cyclic_expire(cpu, ndx, cyclic);
973 * If the handler reprogrammed the cyclic, then don't
978 if (exp != cyclic->cy_expire) {
980 * If a hi level cyclic reprograms itself,
988 if (cyclic->cy_interval == CY_INFINITY)
991 exp += cyclic->cy_interval;
994 * If this cyclic will be set to next expire in the distant
997 * a) This is the first firing of a cyclic which had
1000 * b) We are tragically late for a cyclic -- most likely
1014 hrtime_t interval = cyclic->cy_interval;
1022 cyclic->cy_expire = exp;
1027 * Now we have a cyclic in the root slot which isn't in the past;
1034 cyclic_remove_pend(cyc_cpu_t *cpu, cyc_level_t level, cyclic_t *cyclic)
1036 cyc_func_t handler = cyclic->cy_handler;
1037 void *arg = cyclic->cy_arg;
1040 ASSERT(cyclic->cy_flags & CYF_FREE);
1041 ASSERT(cyclic->cy_pend == 0);
1045 CYC_TRACE(cpu, level, "remove-rpend", cyclic, cpu->cyp_rpend);
1053 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
1057 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
1072 * cyclic_softint() is the cyclic subsystem's CY_LOCK_LEVEL and CY_LOW_LEVEL
1073 * soft interrupt handler. Called by the cyclic backend.
1096 * at the mercy of its cyclic handlers. Because cyclic handlers may block
1112 * cpu_lock or any lock acquired by any cyclic handler or held across
1144 cyclic_t *cyclic = &cyclics[buf[consmasked]];
1145 cyc_func_t handler = cyclic->cy_handler;
1146 void *arg = cyclic->cy_arg;
1149 CYC_TRACE(cpu, level, "consuming", consndx, cyclic);
1152 * We have found this cyclic in the pcbuffer. We know that
1166 * to call the cyclic rpend times. We will take into
1174 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
1178 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
1181 pend = cyclic->cy_pend;
1187 * This cyclic has been removed while
1190 * found this cyclic in the pcbuffer).
1197 cyclic_remove_pend(cpu, level, cyclic);
1211 cyclic = &cyclics[buf[consmasked]];
1212 ASSERT(cyclic->cy_handler == handler);
1213 ASSERT(cyclic->cy_arg == arg);
1218 cas32(&cyclic->cy_pend, pend, npend)) != pend) {
1224 * pend count on this cyclic. In this
1232 * (c) The cyclic has been removed by an
1235 * CYS_REMOVING, and the cyclic will be
1248 (cyclic->cy_flags & CYF_FREE)))));
1347 * to CY_HIGH_LEVEL. This CPU already has a new heap, cyclic array,
1604 * (a) We have a partition-bound cyclic, and there is no CPU in
1610 * (b) We have a partition-unbound cyclic, in which case there
1650 cyclic_t *cyclic;
1671 cyclic = &cpu->cyp_cyclics[ndx];
1673 ASSERT(cyclic->cy_flags == CYF_FREE);
1674 cyclic->cy_interval = when->cyt_interval;
1681 cyclic->cy_expire = (gethrtime() / cyclic->cy_interval + 1) *
1682 cyclic->cy_interval;
1684 cyclic->cy_expire = when->cyt_when;
1687 cyclic->cy_handler = hdlr->cyh_func;
1688 cyclic->cy_arg = hdlr->cyh_arg;
1689 cyclic->cy_level = hdlr->cyh_level;
1690 cyclic->cy_flags = arg->cyx_flags;
1693 hrtime_t exp = cyclic->cy_expire;
1695 CYC_TRACE(cpu, CY_HIGH_LEVEL, "add-reprog", cyclic, exp);
1734 * actually add our cyclic.
1757 cyclic_t *cyclic;
1771 cyclic = &cpu->cyp_cyclics[ndx];
1774 * Grab the current expiration time. If this cyclic is being
1776 * will be used when the cyclic is added to the new CPU.
1779 arg->cyx_when->cyt_when = cyclic->cy_expire;
1780 arg->cyx_when->cyt_interval = cyclic->cy_interval;
1783 if (cyclic->cy_pend != 0) {
1785 * The pend is non-zero; this cyclic is currently being
1790 * that we have zeroed out pend, and will call the cyclic
1792 * softint has completed calling the cyclic handler.
1799 ASSERT(cyclic->cy_level != CY_HIGH_LEVEL);
1800 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "remove-pend", cyclic->cy_pend);
1801 cpu->cyp_rpend = cyclic->cy_pend;
1802 cyclic->cy_pend = 0;
1811 cyclic->cy_flags = CYF_FREE;
1819 panic("attempt to remove non-existent cyclic");
1876 cyclic = &cpu->cyp_cyclics[heap[0]];
1881 be->cyb_reprogram(bar, cyclic->cy_expire);
1891 cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
1892 cyc_level_t level = cyclic->cy_level;
1910 * If the cyclic we removed wasn't at CY_HIGH_LEVEL, then we need to
1912 * for all pending cyclic handlers to run.
1921 * remove this cyclic; put the CPU back in the CYS_ONLINE
1943 * If cyclic_reprogram() is called on the same CPU as the cyclic's CPU, then
1945 * an X-call to the cyclic's CPU.
1955 cyclic_t *cyclic;
1978 panic("attempt to reprogram non-existent cyclic");
1980 cyclic = &cpu->cyp_cyclics[ndx];
1981 oexpire = cyclic->cy_expire;
1982 cyclic->cy_expire = expire;
1998 cyclic = &cpu->cyp_cyclics[heap[0]];
1999 be->cyb_reprogram(bar, cyclic->cy_expire);
2031 * cyclic_juggle_one_to() should only be called when the source cyclic
2042 cyclic_t *cyclic;
2051 cyclic = &src->cyp_cyclics[ndx];
2053 flags = cyclic->cy_flags;
2056 hdlr.cyh_func = cyclic->cy_handler;
2057 hdlr.cyh_level = cyclic->cy_level;
2058 hdlr.cyh_arg = cyclic->cy_arg;
2063 * expansion before removing the cyclic. This is to prevent us
2064 * from blocking while a system-critical cyclic (notably, the clock
2065 * cyclic) isn't on a CPU.
2074 * Prevent a reprogram of this cyclic while we are relocating it.
2081 * Remove the cyclic from the source. As mentioned above, we cannot
2082 * block during this operation; if we cannot remove the cyclic
2087 * cyclic handler is blocked on a resource held by a thread which we
2107 if (delay > (cyclic->cy_interval >> 1))
2108 delay = cyclic->cy_interval >> 1;
2111 * Drop the RW lock to avoid a deadlock with the cyclic
2120 * Now add the cyclic to the destination. This won't block; we
2122 * CPU before removing the cyclic from the source CPU.
2129 * Now that we have successfully relocated the cyclic, allow
2140 cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
2148 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2150 if ((dest = cyclic_pick_cpu(part, c, c, cyclic->cy_flags)) == NULL) {
2152 * Bad news: this cyclic can't be juggled.
2169 cyclic_t *cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
2174 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2175 ASSERT(cyclic->cy_flags & CYF_CPU_BOUND);
2177 cyclic->cy_flags &= ~CYF_CPU_BOUND;
2189 (!res && (cyclic->cy_flags & CYF_PART_BOUND)));
2199 cyclic_t *cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
2209 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2210 ASSERT(!(cyclic->cy_flags & CYF_CPU_BOUND));
2212 dest = cyclic_pick_cpu(part, d, NULL, cyclic->cy_flags | CYF_CPU_BOUND);
2216 cyclic = &dest->cyp_cyclics[idp->cyi_ndx];
2219 cyclic->cy_flags |= CYF_CPU_BOUND;
2239 * If we're on a CPU which has interrupts disabled (and if this cyclic
2319 * cyclic subsystem for this CPU is prepared to field interrupts.
2460 cyclic_t *cyclic = &cpu->cyp_cyclics[cpu->cyp_heap[0]];
2461 hrtime_t exp = cyclic->cy_expire;
2463 CYC_TRACE(cpu, CY_HIGH_LEVEL, "resume-reprog", cyclic, exp);
2524 * Prevent a reprogram of this cyclic while we are removing it.
2537 * CPU -- the definition of an omnipresent cyclic is that it runs
2549 * Remove the cyclic from the source. We cannot block during this
2551 * by the cyclic handler via cyclic_reprogram().
2553 * If we cannot remove the cyclic without waiting, we spin for a time,
2557 * succeed -- even if the cyclic handler is blocked on a resource
2580 * Drop the RW lock to avoid a deadlock with the cyclic
2589 * Now that we have successfully removed the cyclic, allow the omni
2590 * cyclic to be reprogrammed on other CPUs.
2595 * The cyclic has been removed from this CPU; time to call the
2615 * associated with the cyclic. If and only if this field is NULL, the
2616 * cyc_id_t is an omnipresent cyclic. Note that cyi_omni_list may be
2617 * NULL for an omnipresent cyclic while the cyclic is being created
2643 * cyclic_add() will create an unbound cyclic with the specified handler and
2644 * interval. The cyclic will run on a CPU which both has interrupts enabled
2653 * void *cyh_arg <-- Argument to cyclic handler
2673 * is set to 0, the cyclic will start to fire when cyt_interval next
2677 * _not_ explicitly supported by the cyclic subsystem (cyclic_add() will
2682 * For an arbitrary time t in the future, the cyclic handler is guaranteed
2686 * the cyclic handler may be called a finite number of times with an
2689 * The cyclic subsystem will not enforce any lower bound on the interval;
2695 * The cyclic handler is guaranteed to be single threaded, even while the
2696 * cyclic is being juggled between CPUs (see cyclic_juggle(), below).
2697 * That is, a given cyclic handler will never be executed simultaneously
2710 * apply. A cyclic may be added even in the presence of CPUs that have
2711 * not been configured with respect to the cyclic subsystem, but only
2712 * configured CPUs will be eligible to run the new cyclic.
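
For example, a consumer might create an unbound, low-level cyclic that fires every 10 milliseconds. This is a minimal sketch: my_handler and my_arg are hypothetical, and cpu_lock is held across the call as cyclic_add() callers are required to do.

cyc_handler_t hdlr;
cyc_time_t when;
cyclic_id_t id;

hdlr.cyh_func = my_handler;             /* hypothetical handler */
hdlr.cyh_arg = my_arg;                  /* hypothetical handler argument */
hdlr.cyh_level = CY_LOW_LEVEL;

when.cyt_when = 0;                      /* fire on the next multiple of the interval */
when.cyt_interval = NANOSEC / 100;      /* 10 milliseconds */

mutex_enter(&cpu_lock);
id = cyclic_add(&hdlr, &when);
mutex_exit(&cpu_lock);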
2720 * A cyclic handler may not grab ANY locks held by the caller of any of
2722 * these functions may require blocking on cyclic handler completion.
2723 * Moreover, cyclic handlers may not make any call back into the cyclic
2745 * cyclic_add_omni() will create an omnipresent cyclic with the specified
2770 * The omni cyclic online handler is always called _before_ the omni
2771 * cyclic begins to fire on the specified CPU. As the above argument
2775 * allows the omni cyclic to have maximum flexibility; different CPUs may
2789 * by cyclic handlers. However, omni cyclic online handlers may _not_
2790 * call back into the cyclic subsystem, and should be generally careful
2800 * void * <-- CPU's cyclic argument (that is, value
2804 * The omni cyclic offline handler is always called _after_ the omni
2805 * cyclic has ceased firing on the specified CPU. Its purpose is to
2806 * allow cleanup of any resources dynamically allocated in the omni cyclic
2849 * this cyclic.
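
A sketch of the omnipresent case follows; my_online, my_offline, my_handler and my_arg are hypothetical. The online handler fills in the per-CPU handler and time, and whatever it leaves in cyh_arg is what the offline handler later receives as its third argument.

static void
my_online(void *arg, cpu_t *c, cyc_handler_t *hdlr, cyc_time_t *when)
{
        hdlr->cyh_func = my_handler;    /* hypothetical per-CPU handler */
        hdlr->cyh_arg = arg;            /* could instead point at per-CPU state allocated here */
        hdlr->cyh_level = CY_LOW_LEVEL;

        when->cyt_when = 0;
        when->cyt_interval = NANOSEC / 100;
}

static void
my_offline(void *arg, cpu_t *c, void *cpuarg)
{
        /* Release anything allocated for this CPU in my_online(). */
}

cyc_omni_handler_t omni;
cyclic_id_t id;

omni.cyo_online = my_online;
omni.cyo_offline = my_offline;
omni.cyo_arg = my_arg;

mutex_enter(&cpu_lock);
id = cyclic_add_omni(&omni);
mutex_exit(&cpu_lock);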
2862 * cyclic_remove() will remove the specified cyclic from the system.
2870 * removed cyclic handler has completed execution (this is the same
2872 * need to block, waiting for the removed cyclic to complete execution.
2874 * held across cyclic_remove() that also may be acquired by a cyclic
2885 * grabbed by any cyclic handler. See "Arguments and notes", above.
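
The corresponding teardown is brief; id is the value returned by cyclic_add() or cyclic_add_omni(), and no lock that a cyclic handler may acquire can be held here, since the call may block:

mutex_enter(&cpu_lock);
cyclic_remove(id);      /* may block until an outstanding handler invocation completes */
mutex_exit(&cpu_lock);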
2925 * of a cyclic.
2934 * cyclic. If the specified cyclic is bound to a CPU other than the one
2935 * specified, it will be unbound from its bound CPU. Unbinding the cyclic
2937 * CPU is non-NULL, the cyclic will be subsequently rebound to the specified
2945 * attempts to bind a cyclic to an offline CPU, the cyclic subsystem will
2949 * specified cyclic. If the specified cyclic is bound to a CPU partition
2951 * partition. Unbinding the cyclic from its CPU partition may cause it
2953 * non-NULL, the cyclic will be subsequently rebound to the specified CPU
2957 * partition contains a CPU. If it does not, the cyclic subsystem will
2968 * cyclic subsystem will panic.
2971 * been configured with respect to the cyclic subsystem. Generally, this
2986 * grabbed by any cyclic handler.
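
A sketch of changing a (non-omnipresent) cyclic's binding; id is from cyclic_add() and cp is a hypothetical pointer to an online, configured cpu_t. Passing NULL for the partition argument leaves the cyclic without a partition binding.

mutex_enter(&cpu_lock);
cyclic_bind(id, cp, NULL);      /* bind the cyclic to CPU cp */
mutex_exit(&cpu_lock);

/* ... later, drop the CPU binding so the cyclic may be juggled freely: */
mutex_enter(&cpu_lock);
cyclic_bind(id, NULL, NULL);
mutex_exit(&cpu_lock);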
3002 panic("attempt to change binding of omnipresent cyclic");
3055 * Prevent the cyclic from moving or disappearing while we reprogram.
3064 * For an omni cyclic, we reprogram the cyclic corresponding
3098 * Allow the cyclic to be moved or removed.
3130 * cyclic backend.
3137 * It is assumed that cyclic_mp_init() is called some time after cyclic
3175 * and there exists a P_ONLINE CPU in the partition. The cyclic subsystem
3176 * assures that a cyclic will never fire late or spuriously, even while
3189 * grabbed by any cyclic handler. While cyclic_juggle() _may_ be called
3192 * Failure to do so could result in an assertion failure in the cyclic
3206 * We'll go through each cyclic on the CPU, attempting to juggle
3229 * cyclic_offline() offlines the cyclic subsystem on the specified CPU.
3240 * and the cyclic subsystem on the CPU was successfully offlined.
3241 * cyclic_offline returns 0 if some cyclics remain, blocking the cyclic
3246 * on cyclic juggling.
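
In outline, a CPU offline path would consume that return value roughly as sketched below (an assumption-laden sketch: cyclic_offline() is taken to accept a pointer to the cpu_t being offlined, and cpu_lock is already held on this path):

if (!cyclic_offline(cp)) {
        /*
         * One or more cyclics are still bound to this CPU and could
         * not be juggled away; the offline attempt must fail.
         */
        return (EBUSY);
}

/* All cyclics were juggled away; proceed with offlining the CPU. */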
3270 * cyclic firing on this CPU.
3362 * into the cyclic subsystem, no lock may be held which is also grabbed
3363 * by any cyclic handler.
3370 cyclic_t *cyclic;
3402 cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
3404 if (cyclic->cy_flags & CYF_CPU_BOUND)
3408 * We know that this cyclic is bound to its processor set
3412 ASSERT(cyclic->cy_flags & CYF_PART_BOUND);
3435 * a partition-bound cyclic which is CPU-bound to the specified CPU,
3456 * returns failure. As with other calls into the cyclic subsystem, no lock
3457 * may be held which is also grabbed by any cyclic handler.
3464 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
3479 cyclic = &cyclics[idp->cyi_ndx];
3481 if (!(cyclic->cy_flags & CYF_PART_BOUND))
3484 dest = cyclic_pick_cpu(part, c, c, cyclic->cy_flags);
3488 * We can't juggle this cyclic; we need to return
3507 * cyclic_suspend() suspends all cyclic activity throughout the cyclic
3514 * cyclic_suspend() takes no arguments. Each CPU with an active cyclic
3520 * cyclic handlers from being called after cyclic_suspend() returns: if a
3522 * of cyclic_suspend(), cyclic handlers at its level may continue to be
3542 * The cyclic subsystem must be configured on every valid CPU;
3546 * cyclic entry points, cyclic_suspend() may be called with locks held
3547 * which are also acquired by CY_LOCK_LEVEL or CY_LOW_LEVEL cyclic
3575 * cyclic_resume() resumes all cyclic activity throughout the cyclic
3580 * cyclic_resume() takes no arguments. Each CPU with an active cyclic
3595 * The cyclic subsystem must be configured on every valid CPU;
3599 * cyclic entry points, cyclic_resume() may be called with locks held which
3600 * are also acquired by CY_LOCK_LEVEL or CY_LOW_LEVEL cyclic handlers.
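
A sketch of the pairing a suspend/resume (CPR) path uses; both routines take no arguments, and, as noted above, handlers that were already pending at a lower level may still run after cyclic_suspend() returns.

cyclic_suspend();       /* suspend cyclic firing on every CPU */

/* ... checkpoint or otherwise quiesce the machine ... */

cyclic_resume();        /* resume cyclic firing on every CPU */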