Lines Matching refs:slabs

7  * and only uses a centralized lock to manage a pool of partial slabs.
61 * The role of the slab_mutex is to protect the list of all the slabs
78 * Frozen slabs
88 * CPU partial slabs
90 * The partially empty slabs cached on the CPU partial list are used
92 * These slabs are not frozen, but are also exempt from list management,
105 * the partial slab counter. If taken then no new slabs may be added or
106 * removed from the lists nor may the number of partial slabs be modified.
107 * (Note that the total number of slabs is an atomic value that may be
112 * slabs, operations can continue without any centralized lock. F.e.
113 * allocating a long series of objects that fill up slabs does not require
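
Taken together, the comments above describe the central design point: the hot allocation path only touches per-CPU state, and the node's list_lock is needed only when the partial lists themselves change. A minimal, hypothetical userspace sketch of that split follows (toy_cpu_slab, toy_node and the toy_* functions are invented names; real SLUB uses this_cpu operations and cmpxchg-based freelists rather than a plain pointer pop):

    #include <pthread.h>
    #include <stddef.h>

    struct toy_object { struct toy_object *next; };

    struct toy_cpu_slab {               /* per-CPU state, touched without a lock */
            struct toy_object *freelist;
    };

    struct toy_node {                   /* per-node state, guarded by list_lock  */
            pthread_mutex_t list_lock;
            struct toy_object *partial; /* stand-in for objects recovered from
                                           the node's partial slabs             */
            unsigned long nr_partial;   /* the partial slab counter             */
    };

    /* Fast path: pop from the per-CPU freelist; no centralized lock is taken. */
    static void *toy_alloc_fast(struct toy_cpu_slab *c)
    {
            struct toy_object *obj = c->freelist;

            if (obj)
                    c->freelist = obj->next;
            return obj;
    }

    /* Slow path: refilling from the node is the only step that needs the lock. */
    static void *toy_alloc_slow(struct toy_cpu_slab *c, struct toy_node *n)
    {
            pthread_mutex_lock(&n->list_lock);
            c->freelist = n->partial;
            n->partial = NULL;
            n->nr_partial = 0;
            pthread_mutex_unlock(&n->list_lock);
            return toy_alloc_fast(c);
    }

The only point of the sketch is that toy_alloc_fast never takes a lock; every manipulation of the partial state funnels through list_lock in the slow path.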
152 * Allocations only occur from these slabs called cpu slabs.
155 * operations no list for full slabs is used. If an object in a full slab is
157 * We track full slabs for debugging purposes though because otherwise we
173 * One use of this flag is to mark slabs that are
269 * Minimum number of partial slabs. These will be left on the partial
275 * Maximum number of desirable partial slabs.
276 * The existence of more partial slabs makes kmem_cache_shrink
394 struct slab *partial; /* Partially allocated slabs */
600 * slabs on the per cpu partial list, in order to limit excessive
601 * growth of the list. For simplicity we assume that the slabs will
1471 * Tracking of fully allocated slabs for debugging purposes.
1615 * @slabs: return start of list of slabs, or NULL when there's no list
1621 parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)
1631 * No options but restriction on slabs. This means full
1632 * debugging for slabs matching a pattern.
1677 *slabs = ++str;
1679 *slabs = NULL;
1730 * slabs means debugging is only changed for those slabs, so the global
1766 * then only the select slabs will receive the debug option(s).
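
The fragments around parse_slub_debug_flags implement the slub_debug= kernel command line option: debug flags come first, optionally followed by a comma-separated list of slab names to restrict them to. Purely as an illustrative reminder of that syntax (see Documentation/mm/slub.rst for the authoritative form; name globbing depends on kernel version):

    slub_debug=F,dentry       # sanity checks only for the dentry cache
    slub_debug=,kmalloc-*     # no flags given: full debugging for matching slabs

The second form corresponds to the "No options but restriction on slabs" case quoted above.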
2460 * Management of partially allocated slabs.
2578 * Racy check. If we mistakenly see no partial slabs then we
2643 * instead of attempting to obtain partial slabs from other nodes.
2647 * may return off node objects because partial slabs are obtained
2653 * This means scanning over all nodes to look for partial slabs which
2785 * unfreezes the slab and puts it on the proper list.
2912 * Put all the cpu partial slabs on the node partial list.
2951 int slabs = 0;
2958 if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
2967 slabs = oldslab->slabs;
2971 slabs++;
2973 slab->slabs = slabs;
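
The code fragments just above come from the path that parks a slab on the per-CPU partial list: the head of that list caches the total number of slabs on it in its ->slabs field, so pushing a new head copies the old head's count and increments it (the list is drained first once the count reaches s->cpu_partial_slabs). A simplified, hypothetical sketch of that bookkeeping, with the drain step omitted (toy_slab and toy_push_partial are invented names):

    struct toy_slab {
            struct toy_slab *next;  /* link in the per-CPU partial list      */
            int slabs;              /* running count, only valid on the head */
    };

    /* Push a new head onto the per-CPU partial list, carrying the count forward. */
    static struct toy_slab *toy_push_partial(struct toy_slab *oldhead,
                                             struct toy_slab *newhead)
    {
            int slabs = oldhead ? oldhead->slabs : 0; /* slabs = oldslab->slabs */

            newhead->next = oldhead;
            newhead->slabs = slabs + 1;               /* slabs++; slab->slabs = slabs */
            return newhead;
    }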
3111 * Use the cpu notifier to ensure that the cpu slabs are flushed when
3262 pr_warn(" node %d: slabs: %ld, objs: %ld, free: %ld\n",
4077 * have a longer lifetime than the cpu slabs in most processing loads.
4125 * other processors updating the list of slabs.
4655 * Increasing the allocation order reduces the number of times that slabs
4662 * and slab fragmentation. A higher order reduces the number of partial slabs
4677 * be problematic to put into order 0 slabs because there may be too much
4684 * less a concern for large slabs though which are rarely used.
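
To make the fragmentation trade-off mentioned above concrete, a rough worked example (ignoring alignment, red zones and other per-object metadata, so the numbers are only illustrative): a 700-byte object fits 5 times into an order-0 (4 KiB) slab, wasting about 596 bytes (~15% of the slab), while an order-2 (16 KiB) slab holds 23 such objects and wastes only about 284 bytes (~2%). The higher order also means fewer slabs, and therefore fewer partial-list operations, for the same number of objects, at the cost of harder-to-satisfy higher-order page allocations.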
4920 * Per cpu partial lists mainly contain slabs that just have one
5115 * The larger the object size is, the more slabs we want on the partial
5168 * Attempt to free all partial slabs on a node.
5377 * kmem_cache_shrink discards empty slabs and promotes the slabs filled
5381 * The slabs with the least items are placed last. This results in them
5405 * Build lists of slabs to discard or promote.
5416 /* We do not keep full slabs on the list */
5429 * Promote the slabs filled up most to the head of the
5437 /* Release empty slabs */
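
The fragments above outline kmem_cache_shrink: partial slabs are sorted into buckets by how full they still are, empty slabs are discarded, and the fullest slabs are promoted to the head of the partial list so that nearly empty ones sit at the tail where they are most likely to drain and be freed. A rough, hypothetical sketch of that bucketing (toy_pslab, toy_shrink and TOY_OBJECTS_PER_SLAB are invented; the real code works on list_head lists and uses a small fixed number of promotion buckets):

    #define TOY_OBJECTS_PER_SLAB 16

    struct toy_pslab {
            struct toy_pslab *next;
            int inuse;                      /* objects still allocated */
    };

    static struct toy_pslab *toy_shrink(struct toy_pslab *partial)
    {
            struct toy_pslab *buckets[TOY_OBJECTS_PER_SLAB + 1] = { NULL };
            struct toy_pslab *slab, *next, *out = NULL;
            int i;

            /* Bucket each partial slab by how many objects it still holds. */
            for (slab = partial; slab; slab = next) {
                    next = slab->next;
                    if (slab->inuse == 0)
                            continue;       /* "Release empty slabs" here */
                    slab->next = buckets[slab->inuse];
                    buckets[slab->inuse] = slab;
            }

            /* Rebuild fullest-first: slabs with the least items end up last. */
            for (i = 1; i <= TOY_OBJECTS_PER_SLAB; i++) {
                    for (slab = buckets[i]; slab; slab = next) {
                            next = slab->next;
                            slab->next = out;
                            out = slab;
                    }
            }
            return out;
    }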
5571 * Basic setup of slabs
5648 /* Now we can use the kmem_cache to allocate kmalloc slabs */
5767 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
5780 pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
5980 SL_ALL, /* All slabs */
5981 SL_PARTIAL, /* Only partially allocated slabs */
5982 SL_CPU, /* Only slabs used for cpu caches */
5983 SL_OBJECTS, /* Determine allocated objects not slabs */
5984 SL_TOTAL /* Determine object capacity not slabs */
6039 x = slab->slabs;
6233 int slabs = 0;
6244 slabs += slab->slabs;
6248 /* Approximate half-full slabs, see slub_set_cpu_partial() */
6249 objects = (slabs * oo_objects(s->oo)) / 2;
6250 len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
6258 slabs = READ_ONCE(slab->slabs);
6259 objects = (slabs * oo_objects(s->oo)) / 2;
6261 cpu, objects, slabs);
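
The fragments above estimate how many objects sit on the per-CPU partial lists by assuming each cached slab is roughly half full, and print the result as "objects(slabs)" followed by a per-CPU breakdown. A purely illustrative reading of that sysfs attribute (cache name, path and values are invented; availability depends on CONFIG_SLUB_CPU_PARTIAL and the kernel version):

    $ cat /sys/kernel/slab/kmalloc-256/slabs_cpu_partial
    96(6) C0=32(2) C1=64(4)

Assuming 32 objects per slab, 6 cached slabs give (6 * 32) / 2 = 96 estimated objects in total, with CPUs 0 and 1 contributing 2 and 4 slabs respectively.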
6310 SLAB_ATTR_RO(slabs);
6699 * get here for aliasable slabs so we do not need to support