Lines matching refs:slabs in /netgear-R7000-V1.0.7.12_1.2.5/components/opensource/linux/linux-2.6.36/mm/

7  * uses a centralized lock to manage a pool of partial slabs.
48 * the partial slab counter. If taken then no new slabs may be added to or
49 * removed from the lists, nor may the number of partial slabs be modified.
50 * (Note that the total number of slabs is an atomic value that may be
55 * slabs, operations can continue without any centralized lock. E.g.
56 * allocating a long series of objects that fill up slabs does not require
61 * to use. While we do that, objects in the slabs may be freed. We can
75 * while handling per_cpu slabs, due to kernel preemption.
78 * Allocations only occur from these slabs called cpu slabs.
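
A minimal userspace sketch of the "cpu slab" idea the fragments above describe: each CPU owns one active slab and serves allocations from its freelist without taking a shared lock. The struct and function names here are illustrative, not slub.c's exact definitions.

/* Hedged sketch, not kernel code: the fast path pops an object from a
 * per-cpu freelist, so the common case needs no cross-CPU locking. */
#include <stdio.h>

#define NR_CPUS 4

struct cpu_slab {
	void **freelist;   /* next free object in this CPU's active slab */
	void  *slab_page;  /* backing slab; NULL means no cpu slab yet */
};

static struct cpu_slab cpu_slabs[NR_CPUS];

static void *alloc_fast(int cpu)
{
	struct cpu_slab *c = &cpu_slabs[cpu];

	if (!c->freelist || !*c->freelist)
		return NULL;   /* slow path: refill from the partial lists */

	void *object = *c->freelist;
	c->freelist++;     /* simplified: the real freelist is a linked chain */
	return object;
}

int main(void)
{
	static void *objects[] = { (void *)0x10, (void *)0x20, NULL };

	cpu_slabs[0].freelist  = objects;
	cpu_slabs[0].slab_page = objects;

	printf("cpu0 alloc: %p\n", alloc_fast(0));  /* first object */
	printf("cpu0 alloc: %p\n", alloc_fast(0));  /* second object */
	return 0;
}
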
81 * operations no list for full slabs is used. If an object in a full slab is
83 * We track full slabs for debugging purposes though because otherwise we
101 * One use of this flag is to mark slabs that are
137 * Minimum number of partial slabs. These will be left on the partial
143 * Maximum number of desirable partial slabs.
144 * The existence of more partial slabs makes kmem_cache_shrink
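
The two bounds quoted above are small compile-time constants; 5 and 10 are the MIN_PARTIAL/MAX_PARTIAL values in slub.c of this era, though treat them as illustrative here. A sketch of the policy they drive:

#include <stdio.h>

#define MIN_PARTIAL 5   /* always keep this many partial slabs per node */
#define MAX_PARTIAL 10  /* above this, kmem_cache_shrink has work to do */

/* Decide whether a slab that just became empty stays on the partial
 * list (cheap to reuse) or is returned to the page allocator. */
static int keep_empty_slab(long nr_partial)
{
	return nr_partial < MIN_PARTIAL;
}

int main(void)
{
	for (long n = 3; n <= 12; n += 3)
		printf("nr_partial=%ld keep_empty=%d over_max=%d\n",
		       n, keep_empty_slab(n), n > MAX_PARTIAL);
	return 0;
}
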
799 * Tracking of fully allocated slabs for debugging purposes.
822 /* Tracking of the number of slabs for debugging purposes */
968 * No options but restriction on slabs. This means full
969 * debugging for slabs matching a pattern.
1267 * Management of partially allocated slabs
1316 * Racy check. If we mistakenly see no partial slabs then we
1350 * instead of attempting to obtain partial slabs from other nodes.
1354 * may return off node objects because partial slabs are obtained
1360 * scanning over all nodes to look for partial slabs which may be
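
A sketch of the node-preference logic these fragments describe: take a partial slab from the requesting node when one exists, and only scan other nodes as a fallback, which is why off-node objects can be returned. slub.c also rate-limits the cross-node search with a remote-defrag ratio; that detail and the real list types are omitted here.

#include <stdio.h>
#include <stddef.h>

#define NR_NODES 2

struct slab { int id; };

/* Hypothetical per-node partial lists, one slab pointer per node. */
static struct slab *partial_list[NR_NODES];

static struct slab *get_partial(int local_node)
{
	/* First choice: a partial slab on the local node. */
	if (partial_list[local_node])
		return partial_list[local_node];

	/* Fallback: scan all nodes; may hand back off-node objects. */
	for (int node = 0; node < NR_NODES; node++)
		if (partial_list[node])
			return partial_list[node];

	return NULL;	/* caller must allocate a brand-new slab */
}

int main(void)
{
	struct slab remote = { 1 };

	partial_list[1] = &remote;
	printf("node 0 got %s slab\n",
	       get_partial(0) == &remote ? "an off-node" : "a local");
	return 0;
}
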
1431 * Adding an empty slab to the partial slabs in order
1433 * to come after the other slabs with objects in
1437 * kmem_cache_shrink can reclaim any empty slabs from
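
The placement rule above can be sketched with a simplified list (slub.c uses struct list_head plus a tail flag, so this shows the effect, not the mechanism): empty slabs go behind the populated ones, so allocation scans hit partially filled slabs first and kmem_cache_shrink can still reclaim the empties from the tail.

#include <stdio.h>

struct slab {
	int inuse;		/* objects currently allocated from this slab */
	struct slab *next;
};

static struct slab *partial_head, *partial_tail;

static void add_partial(struct slab *s)
{
	s->next = NULL;
	if (!partial_head) {
		partial_head = partial_tail = s;
	} else if (s->inuse == 0) {
		partial_tail->next = s;	/* tail: reclaimable by shrink */
		partial_tail = s;
	} else {
		s->next = partial_head;	/* head: gets filled up first */
		partial_head = s;
	}
}

int main(void)
{
	struct slab a = { 3, NULL }, b = { 0, NULL }, c = { 7, NULL };

	add_partial(&a);
	add_partial(&b);
	add_partial(&c);
	for (struct slab *s = partial_head; s; s = s->next)
		printf("inuse=%d\n", s->inuse);	/* 7, 3, then the empty slab */
	return 0;
}
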
1586 " node %d: slabs: %ld, objs: %ld, free: %ld\n",
1775 * have a longer lifetime than the cpu slabs in most processing loads.
1905 * Increasing the allocation order reduces the number of times that slabs
1912 * and slab fragmentation. A higher order reduces the number of partial slabs
1932 * be problematic to put into order 0 slabs because there may be too much
1939 * less a concern for large slabs though which are rarely used.
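
The trade-off described in this block can be made concrete with a toy order calculation. The acceptance threshold below is a hypothetical stand-in for slub.c's slab_order() heuristics (which also cap the order and weigh fragmentation differently); only the shape of the trade-off is the point.

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Pick the smallest order whose leftover space is at most
 * slab_size / denominator (an invented acceptance threshold). */
static int pick_order(unsigned long object_size, int max_order,
		      unsigned long denominator)
{
	for (int order = 0; order <= max_order; order++) {
		unsigned long slab_size = PAGE_SIZE << order;
		unsigned long waste = slab_size % object_size;

		if (slab_size >= object_size &&
		    waste <= slab_size / denominator)
			return order;
	}
	return max_order;
}

int main(void)
{
	/* A 700-byte object wastes 596 of 4096 bytes at order 0 (5 objects);
	 * order 1 fits 11 objects with only 492 bytes left over. */
	for (unsigned long size = 700; size <= 2800; size *= 2)
		printf("object_size=%lu -> order %d\n",
		       size, pick_order(size, 3, 8));
	return 0;
}
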
2391 * We could also check if the object is on the slab's freelist.
2445 * Attempt to free all partial slabs on a node.
2637 * adding all existing slabs to sysfs.
2664 * Conversion table for small slab sizes / 8 to the index in the
2665 * kmalloc array. This is necessary for slabs < 192 since we have non power
2666 * of two cache sizes there. The size of larger slabs can be determined using
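
This is the table the comment refers to: slub.c keeps a small array indexed by (size - 1) / 8 so that requests below 192 bytes can be routed to the non-power-of-two 96- and 192-byte kmalloc caches. The standalone sketch below mirrors the kernel's entries, but verify against the source before relying on the exact values.

#include <stdio.h>

/* index = (size - 1) / 8; value = index into the kmalloc cache array.
 * Caches 1 and 2 are the 96- and 192-byte caches; the rest are 2^n. */
static const signed char size_index[24] = {
	3, 4, 5, 5, 6, 6, 6, 6,	/*   1..64  -> 8/16/32/64-byte caches */
	1, 1, 1, 1,		/*  65..96  -> 96-byte cache */
	7, 7, 7, 7,		/*  97..128 -> 128-byte cache */
	2, 2, 2, 2, 2, 2, 2, 2,	/* 129..192 -> 192-byte cache */
};

static int kmalloc_index_small(unsigned long size)
{
	return size_index[(size - 1) / 8];	/* valid for 1..192 only */
}

int main(void)
{
	unsigned long sizes[] = { 8, 33, 72, 100, 192 };

	for (int i = 0; i < 5; i++)
		printf("size %3lu -> cache index %d\n",
		       sizes[i], kmalloc_index_small(sizes[i]));
	return 0;
}
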
2847 * kmem_cache_shrink removes empty slabs from the partial lists and sorts
2848 * the remaining slabs by the number of items in use. The slabs with the
2852 * The slabs with the least items are placed last. This results in them
2907 * Rebuild the partial list with the slabs filled up most
2908 * first and the least used slabs at the end.
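
A standalone sketch of the shrink pass these fragments describe: discard the empty slabs and reorder the rest so the fullest come first and the least used end up last. slub.c buckets slabs by in-use count instead of calling a comparison sort; the qsort() here only reproduces the resulting order.

#include <stdio.h>
#include <stdlib.h>

struct slab { int inuse; };

static int fuller_first(const void *a, const void *b)
{
	return ((const struct slab *)b)->inuse -
	       ((const struct slab *)a)->inuse;
}

int main(void)
{
	struct slab partial[] = { {2}, {0}, {9}, {0}, {5} };
	int n = 5, kept = 0;

	for (int i = 0; i < n; i++)	/* free the empty slabs */
		if (partial[i].inuse > 0)
			partial[kept++] = partial[i];

	qsort(partial, kept, sizeof(partial[0]), fuller_first);

	for (int i = 0; i < kept; i++)	/* 9, 5, 2: least used last */
		printf("inuse=%d\n", partial[i].inuse);
	return 0;
}
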
2955 * if n->nr_slabs > 0, slabs still exist on the node
3034 * Basic setup of slabs
3264 * Use the cpu notifier to ensure that the cpu slabs are flushed when
3411 printk(KERN_ERR "SLUB %s: %ld partial slabs counted but "
3422 printk(KERN_ERR "SLUB: %s %ld slabs counted but "
3666 /* Push back cpu slabs */
3739 SL_ALL, /* All slabs */
3740 SL_PARTIAL, /* Only partially allocated slabs */
3741 SL_CPU, /* Only slabs used for cpu caches */
3742 SL_OBJECTS, /* Determine allocated objects not slabs */
3743 SL_TOTAL /* Determine object capacity not slabs */
3951 SLAB_ATTR_RO(slabs);
4423 * get here for aliasable slabs so we do not need to support