Lines matching defs:from in mm/slub.c

80  *   If a slab is frozen then it is exempt from list management. It is
81 * the cpu slab which is actively allocated from by the processor that
85 * froze the slab is the only one that can retrieve the objects from the
92 * These slabs are not frozen, but are also exempt from list management,
106 * removed from the lists nor make the number of partial slabs be modified.
134 * are fully lockless when satisfied from the percpu slab (and when
152 * Allocations only occur from these slabs called cpu slabs.
164 * slab->frozen The slab is frozen and exempt from list processing.
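
The comment block excerpted above (lines 80-164) describes the "frozen" rule: a frozen slab is owned by one CPU, serves that CPU's allocations, and is skipped by partial-list management. A minimal userspace sketch of just that rule follows; the fields are simplified stand-ins, not the kernel's actual struct slab.

    /* Toy model of the "frozen" rule: a frozen slab is owned by a cpu and
     * is exempt from partial-list management.  Illustrative only. */
    #include <stdbool.h>
    #include <stdio.h>

    struct toy_slab {
        bool frozen;              /* owned by a cpu, exempt from list handling */
        int free_objects;
        struct toy_slab *next_partial;
    };

    /* Only unfrozen slabs may be placed on (or taken off) the partial list. */
    static bool list_manageable(const struct toy_slab *s)
    {
        return !s->frozen;
    }

    int main(void)
    {
        struct toy_slab cpu_slab = { .frozen = true,  .free_objects = 3 };
        struct toy_slab partial  = { .frozen = false, .free_objects = 5 };

        printf("cpu slab manageable:     %d\n", list_manageable(&cpu_slab)); /* 0 */
        printf("partial slab manageable: %d\n", list_manageable(&partial));  /* 1 */
        return 0;
    }
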
323 unsigned long addr; /* Called from address */
350 ALLOC_FASTPATH, /* Allocation from cpu slab */
357 ALLOC_FROM_PARTIAL, /* Cpu slab acquired from node partial list */
358 ALLOC_SLAB, /* Cpu slab acquired from page allocator */
359 ALLOC_REFILL, /* Refill cpu slab from slab freelist */
374 CPU_PARTIAL_NODE, /* Refill cpu partial from node partial */
392 struct slab *slab; /* The slab from which we are allocating */
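
The entries at lines 350-392 come from the allocation-statistics enum and the per-cpu allocation state. A toy pairing of the two is sketched below; the names toy_stat, toy_cpu_cache and toy_stat_inc are assumptions for illustration, not the kernel's types.

    /* Illustrative per-cpu allocation state with event counters, loosely
     * mirroring the enum entries and the kmem_cache_cpu field listed above. */
    #include <stdio.h>

    enum toy_stat {
        TOY_ALLOC_FASTPATH,     /* allocation served from the cpu slab */
        TOY_ALLOC_FROM_PARTIAL, /* cpu slab refilled from a node partial list */
        TOY_ALLOC_SLAB,         /* cpu slab obtained from the page allocator */
        TOY_NR_STATS,
    };

    struct toy_cpu_cache {
        void *slab;                         /* the slab we currently allocate from */
        unsigned long stat[TOY_NR_STATS];   /* event counters */
    };

    static void toy_stat_inc(struct toy_cpu_cache *c, enum toy_stat s)
    {
        c->stat[s]++;
    }

    int main(void)
    {
        struct toy_cpu_cache c = { 0 };

        toy_stat_inc(&c, TOY_ALLOC_FASTPATH);
        toy_stat_inc(&c, TOY_ALLOC_FASTPATH);
        toy_stat_inc(&c, TOY_ALLOC_FROM_PARTIAL);
        printf("fastpath=%lu from_partial=%lu\n",
               c.stat[TOY_ALLOC_FASTPATH], c.stat[TOY_ALLOC_FROM_PARTIAL]);
        return 0;
    }
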
1164 void *from, void *to)
1166 slab_fix(s, "Restoring %s 0x%p-0x%p=0x%x", message, from, to - 1, data);
1167 memset(from, data, to - from);
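
Lines 1164-1167 are the body of a debug helper that reports a corrupted byte range and rewrites it with the expected value. A self-contained sketch of such a helper follows, with plain printf standing in for the kernel's slab_fix() reporting (the quoted format string above is the kernel's, not this sketch's).

    #include <stdio.h>
    #include <string.h>

    /* Report the corrupted range [from, to) and overwrite it with the
     * expected poison byte. */
    static void restore_bytes(const char *message, unsigned char data,
                              void *from, void *to)
    {
        printf("Restoring %s %p-%p=0x%x\n", message, from,
               (void *)((char *)to - 1), data);
        memset(from, data, (size_t)((char *)to - (char *)from));
    }

    int main(void)
    {
        unsigned char buf[8] = { 0x6b, 0x6b, 0x00, 0x11, 0x6b, 0x6b, 0x6b, 0x6b };

        /* Pretend bytes 2..3 were found corrupted; restore the 0x6b poison. */
        restore_bytes("object poison", 0x6b, &buf[2], &buf[4]);
        printf("buf[2]=0x%x buf[3]=0x%x\n", buf[2], buf[3]);
        return 0;
    }
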
1924 * Moreover, it should not come from a DMA buffer and is not readily
2580 * slab from the n->partial list. Remove only a single object from the slab, do
2609 * Called only for kmem_cache_debug() caches to allocate from a freshly
2656 * Try to allocate a partial slab from a specific node.
2714 * Get a slab from somewhere. Search in increasing NUMA distances.
2731 * instead of attempting to obtain partial slabs from other nodes.
2736 * from other nodes and filled up.
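
Lines 2656-2736 describe trying a specific node first and only then searching other nodes in order of increasing NUMA distance. The sketch below models that search order; the node count, distance table and per-node partial counts are made-up example data, not kernel state.

    #include <stdio.h>

    #define NR_NODES 4

    static int partial_slabs[NR_NODES] = { 0, 2, 0, 5 };
    static int distance[NR_NODES][NR_NODES] = {
        { 10, 20, 30, 40 },
        { 20, 10, 20, 30 },
        { 30, 20, 10, 20 },
        { 40, 30, 20, 10 },
    };

    /* Return the node to take a partial slab from, or -1 if none has one. */
    static int get_partial_node_id(int preferred)
    {
        int best = -1, node;

        if (partial_slabs[preferred])
            return preferred;

        for (node = 0; node < NR_NODES; node++) {
            if (node == preferred || !partial_slabs[node])
                continue;
            if (best < 0 || distance[preferred][node] < distance[preferred][best])
                best = node;
        }
        return best;
    }

    int main(void)
    {
        /* Node 0 has no partial slabs; node 1 is the closest node that does. */
        printf("allocate from node %d\n", get_partial_node_id(0));
        return 0;
    }
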
2874 * Assumes the slab has been already safely taken away from kmem_cache_cpu
3132 * Called from CPU work handler with migration disabled.
3336 * Scan from both the list's head and tail for better accuracy.
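
Line 3336 hints at an approximation that samples the list from both ends rather than walking it fully. A rough, array-based sketch of that idea follows; the per-side budget and the data are invented for illustration and do not reproduce the kernel's exact accounting.

    #include <stdio.h>

    /* Average a value over a list by sampling a budget of entries from the
     * head and the same budget from the tail. */
    static double approx_avg(const int *vals, int n, int budget_per_side)
    {
        long sum = 0;
        int sampled = 0, i;

        if (n <= 2 * budget_per_side) {
            for (i = 0; i < n; i++)
                sum += vals[i];
            return n ? (double)sum / n : 0.0;
        }
        for (i = 0; i < budget_per_side; i++) {        /* from the head */
            sum += vals[i];
            sampled++;
        }
        for (i = n - budget_per_side; i < n; i++) {    /* from the tail */
            sum += vals[i];
            sampled++;
        }
        return (double)sum / sampled;
    }

    int main(void)
    {
        int free_objs[] = { 1, 1, 2, 2, 3, 5, 7, 8, 9, 9 }; /* mostly sorted */
        int n = sizeof(free_objs) / sizeof(free_objs[0]);

        printf("approx avg free objects: %.2f (budget 2 per side)\n",
               approx_avg(free_objs, n, 2));
        return 0;
    }
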
3489 * And if we were unable to get a new slab from the partial slab lists then
3570 * slab is pointing to the slab from which the objects are obtained.
3633 * 1) try to get a partial slab from target node only by having
3635 * 2) if 1) failed, try to allocate a new slab from target node with
3639 * allocating new page from other nodes
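
Lines 3633-3639 lay out a two-step preference: take a partial slab from the target node first, then try a fresh slab on that node, and fall back to other nodes only after both fail. A sketch of that ordering is below; get_partial_from_node, new_slab_on_node and get_partial_any_node are placeholder stubs, not kernel functions.

    #include <stddef.h>
    #include <stdio.h>

    static void *get_partial_from_node(int node)  { (void)node; return NULL; }
    static void *new_slab_on_node(int node)       { (void)node; return NULL; }
    static void *get_partial_any_node(void)       { return (void *)0x1; }

    static void *acquire_slab(int target_node)
    {
        void *slab;

        slab = get_partial_from_node(target_node);   /* step 1: target node's partial list */
        if (slab)
            return slab;
        slab = new_slab_on_node(target_node);        /* step 2: new slab on the target node */
        if (slab)
            return slab;
        return get_partial_any_node();               /* last resort: other nodes */
    }

    int main(void)
    {
        printf("slab = %p\n", acquire_slab(0));
        return 0;
    }
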
3775 * reading from one cpu area. That does not matter as long
3824 * against code executing on this cpu *not* from access by
3972 * Otherwise we can simply pick the next object from the lockless free list.
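
Lines 3775-3972 describe the allocation fastpath picking the next object off a lockless free list. Below is a userspace approximation using a single C11 atomic compare-and-swap; the kernel instead updates a per-cpu freelist pointer paired with a transaction id, which this sketch does not model.

    #include <stdatomic.h>
    #include <stdio.h>

    struct object { struct object *next; };

    static _Atomic(struct object *) freelist;

    /* Pop the head of the freelist without taking a lock. */
    static struct object *pop(void)
    {
        struct object *head = atomic_load(&freelist);

        while (head &&
               !atomic_compare_exchange_weak(&freelist, &head, head->next))
            ;                       /* head reloaded on failure, retry */
        return head;
    }

    int main(void)
    {
        struct object a, b;

        b.next = NULL;
        a.next = &b;
        atomic_store(&freelist, &a);

        printf("pop %p, pop %p, pop %p\n",
               (void *)pop(), (void *)pop(), (void *)pop());
        return 0;
    }
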
4030 * @s: The cache to allocate from.
4491 "%s: Wrong slab cache. %s but object is from %s\n",
4499 * @s: The cache the allocation was from.
4502 * Free an object which was previously allocated from this
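
Lines 4030-4502 are kernel-doc fragments for the cache allocation and free entry points, and line 4491 is the report printed when an object is freed to the wrong cache. A usage sketch of that API as it might appear in kernel code is below; "foo_cache", struct foo and demo() are examples, and error handling is kept minimal.

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct foo { int a, b; };

    static struct kmem_cache *my_cache;

    static int demo(void)
    {
        struct foo *p;

        my_cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0, 0, NULL);
        if (!my_cache)
            return -ENOMEM;

        p = kmem_cache_alloc(my_cache, GFP_KERNEL);   /* allocate from the cache */
        if (!p) {
            kmem_cache_destroy(my_cache);
            return -ENOMEM;
        }

        kmem_cache_free(my_cache, p);   /* must go back to the same cache */
        kmem_cache_destroy(my_cache);
        return 0;
    }
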
4599 /* Derive kmem_cache from object */
4708 * We may have removed an object from c->freelist using
4918 * smallest order from min_objects-derived/slab_min_order up to
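
Line 4918 refers to the search for a suitable page order for a cache. A simplified sketch of such a search follows: try orders from a minimum upward and accept the first whose per-slab waste is small enough. The 1/16 acceptance threshold is an arbitrary example, not the kernel's heuristic.

    #include <stdio.h>

    static int calc_order(unsigned int object_size, int min_order, int max_order)
    {
        int order;

        for (order = min_order; order <= max_order; order++) {
            unsigned int slab_size = 4096u << order;
            unsigned int waste = slab_size % object_size;

            /* accept if at most 1/16 of the slab is wasted */
            if (waste * 16 <= slab_size)
                return order;
        }
        return max_order;
    }

    int main(void)
    {
        printf("order for 700-byte objects: %d\n", calc_order(700, 0, 3));
        return 0;
    }
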
5006 pr_err("SLUB: Unable to allocate memory from node %d\n", node);
5175 * it away from the edges of the object to avoid small
5176 * sized over/underflows from neighboring allocations.
5200 * overwrites from earlier objects rather than let
5214 * SLUB stores one object immediately after another beginning from
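
Lines 5175-5214 concern object layout: objects are stored back to back starting at offset 0, with red zones and metadata placed to catch small over/underflows from neighbouring allocations. The tiny sketch below shows only the back-to-back addressing, with arbitrary example sizes.

    #include <stddef.h>
    #include <stdio.h>

    /* With objects packed back to back from offset 0, the i-th object lives
     * at base + i * object_size. */
    static void *nth_object(void *slab_base, size_t object_size, unsigned int i)
    {
        return (char *)slab_base + (size_t)i * object_size;
    }

    int main(void)
    {
        char slab[4096];
        size_t size = 256;      /* example object size, already rounded up */
        unsigned int i;

        for (i = 0; i < 3; i++)
            printf("object %u at offset %ld\n", i,
                   (long)((char *)nth_object(slab, size, i) - slab));
        return 0;
    }
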
5330 * This is called from __kmem_cache_shutdown(). We must take list_lock
5488 * to/from userspace but do not fall entirely within the containing slab
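
Line 5488 refers to the hardened-usercopy check: a copy to or from userspace must fall entirely inside the cache's declared user region of the object. A sketch of that window check follows; the struct is a toy, though the useroffset/usersize names follow the kernel's terminology for usercopy regions.

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_cache { unsigned int useroffset, usersize; };

    /* Is [offset, offset + len) within the cache's usercopy window? */
    static bool usercopy_ok(const struct toy_cache *c,
                            unsigned int offset, unsigned int len)
    {
        return offset >= c->useroffset &&
               len <= c->usersize &&
               offset - c->useroffset <= c->usersize - len;
    }

    int main(void)
    {
        struct toy_cache c = { .useroffset = 16, .usersize = 64 };

        printf("copy at 20 len 32: %d\n", usercopy_ok(&c, 20, 32)); /* allowed */
        printf("copy at  0 len  8: %d\n", usercopy_ok(&c, 0, 8));   /* rejected */
        return 0;
    }
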
5540 * fill those up and thus they can be removed from the partial lists.
5543 * being allocated from last increasing the chance that the last objects
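
Lines 5540-5543 describe shrink rebuilding the partial list so that nearly-full slabs come first (they fill up and drop off the list) while slabs with many free objects go last (allocated from last, so they have a better chance of becoming completely empty). A toy array-plus-qsort version of that ordering:

    #include <stdio.h>
    #include <stdlib.h>

    struct toy_slab { int free_objects; };

    /* Fewer free objects (fuller slab) sorts earlier. */
    static int fuller_first(const void *a, const void *b)
    {
        return ((const struct toy_slab *)a)->free_objects -
               ((const struct toy_slab *)b)->free_objects;
    }

    int main(void)
    {
        struct toy_slab partial[] = { {7}, {1}, {4}, {2}, {6} };
        int n = sizeof(partial) / sizeof(partial[0]), i;

        qsort(partial, n, sizeof(partial[0]), fuller_first);

        for (i = 0; i < n; i++)
            printf("slot %d: %d free\n", i, partial[i].free_objects);
        return 0;
    }
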
5683 * since memory is not yet available from the node that