Lines Matching refs:free

30 // and the header size is two words with eager coalescing on free.
40 // free_t memory_area -- Marked as free, with appropriate size,
41 // and pointed to by a free bucket.
46 // For a normal allocation, the free memory area is added to the
47 // appropriate free bucket and picked up later in the cmpct_alloc()
48 // logic. For a large allocation, the area skips the primary free buckets
51 // cmpctmalloc does not keep a list of OS allocations; each is meant to free
52 // itself to the OS when all of its memory areas become free.
56 // cmpct_alloc()/cmpct_memalign() calls. Can be free and live in a free
59 // Memory areas, both free and allocated, always begin with a header_t,
64 // - FREE_BIT: The area is free, and lives in a free bucket.
68 // If the area is free (is_tagged_as_free(header_t*)), the area's header
69 // includes the doubly-linked free list pointers defined by free_t (which is a
70 // header_t overlay). Those pointers are used to chain the free area off of
71 // the appropriately-sized free bucket.
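
The overlay relationship these comments describe can be pictured as two C structs. A minimal sketch, consistent with the two-word header and the four-word free struct mentioned elsewhere in these excerpts; the field names and ordering are illustrative, not the file's actual declarations:

    #include <stddef.h>

    typedef struct header_struct {
        struct header_struct* left;  // Physically-preceding area's header; the low bit doubles as FREE_BIT.
        size_t size;                 // Size of this memory area, header included.
    } header_t;

    typedef struct free_struct {
        header_t header;             // Overlays the header_t at the start of a free area.
        struct free_struct* next;    // Doubly-linked list chained off the sized free bucket.
        struct free_struct* prev;
    } free_t;

Because free_t simply extends header_t, tagging an area as free never moves or resizes its header; only the bytes past the header are reinterpreted as list links.
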
74 // An allocation of less than HEAP_LARGE_ALLOC_BYTES, which can fit in a free

99 // just-too-small memory areas on the free list. We would not find the 528
101 // free memory area, making fragmentation worse.
104 // Freed memory areas are eagerly coalesced with free left/right neighbors. If
105 // the new free area covers an entire OS allocation (i.e., its left and right
108 // Exception: to avoid OS free/alloc churn when right on the edge, the heap
109 // will try to hold onto one entirely-free, non-large OS allocation instead of
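
The disposal decision described here, bucket versus OS versus the single cached allocation, might look roughly like the following. Everything named in it (dispose_free_area, add_to_free_bucket, return_to_os, the cached_os_allocation variables, and the HEAP_LARGE_ALLOC_BYTES value) is a hypothetical stand-in, not the file's real code:

    #include <stdbool.h>
    #include <stddef.h>

    #define HEAP_LARGE_ALLOC_BYTES (64u * 1024u)   // Placeholder threshold for the sketch.

    void add_to_free_bucket(void* area, size_t size);   // Chain onto a sized bucket (not shown).
    void return_to_os(void* area, size_t size);         // Unmap the whole OS allocation (not shown).

    static void* cached_os_allocation;   // At most one entirely-free allocation is held back.
    static size_t cached_os_allocation_size;

    // Dispose of a fully coalesced free area; |covers_whole_os_allocation| is
    // true when its neighbors are the left and right sentinels of one OS allocation.
    static void dispose_free_area(void* area, size_t size, bool covers_whole_os_allocation) {
        if (!covers_whole_os_allocation) {
            add_to_free_bucket(area, size);   // Normal case: back onto a free bucket.
            return;
        }
        if (cached_os_allocation == NULL && size < HEAP_LARGE_ALLOC_BYTES) {
            // Keep one entirely-free, non-large OS allocation around to avoid
            // free/alloc churn against the OS right at the edge.
            cached_os_allocation = area;
            cached_os_allocation_size = size;
            return;
        }
        return_to_os(area, size);   // Give the whole OS allocation back.
    }
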
148 // If a header's |left| field has this bit set, it is free and lives in
149 // a free bucket.
176 // Bytes of usable free space in the heap.
221 dprintf(INFO, "\tsize %lu, remaining %lu, cached free %lu\n",
264 // byte spaces (not including the header). For 64 bit, the free list
266 // smaller than that (otherwise how to free it), but we have empty 8
311 // Returns true if this header_t is marked as free.
313 // The free bit is stashed in the lower bit of header->left.
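
Since headers are pointer-aligned, the low bit of the |left| pointer is always zero and is spare to carry the free flag. A hedged sketch of the helpers this implies; apart from is_tagged_as_free, the names are invented:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct header_struct {
        struct header_struct* left;
        size_t size;
    } header_t;

    #define FREE_BIT ((uintptr_t)1)

    // Tests the bit stashed in the lower bit of header->left.
    static bool is_tagged_as_free(const header_t* header) {
        return ((uintptr_t)header->left & FREE_BIT) != 0;
    }

    static void tag_as_free(header_t* header) {
        header->left = (header_t*)((uintptr_t)header->left | FREE_BIT);
    }

    // Strip the tag to recover the real left-neighbor pointer.
    static header_t* untagged_left(const header_t* header) {
        return (header_t*)((uintptr_t)header->left & ~FREE_BIT);
    }
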
404 // Frees |size| bytes starting at |address|, either to a free bucket or to the
406 // should point to what would be the header_t of the memory area to free, and
466 // The first 16 bytes of the region won't have free fill due to overlap
473 printf("Heap free fill check fail. Allocated region:\n");
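
The failure reported here comes from the usual freed-memory poisoning check: freed bytes are filled with a known pattern, and on allocation the heap verifies the pattern survived (a mismatch means something wrote to memory after it was freed). A rough sketch, with FREE_FILL and FREE_LIST_OVERLAP as placeholders for the heap's actual fill byte and the 16-byte free-list overlap noted above:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define FREE_FILL 0x99           // Placeholder poison byte; the real value is not shown here.
    #define FREE_LIST_OVERLAP 16     // First bytes of a free area hold the list links, not fill.

    // Verify that a region being handed out still carries the freed-memory
    // fill pattern, skipping the bytes that held the free-list links.
    static bool check_free_fill(const unsigned char* region, size_t size) {
        for (size_t i = FREE_LIST_OVERLAP; i < size; i++) {
            if (region[i] != FREE_FILL) {
                printf("Heap free fill check fail at offset %zu\n", i);
                return false;
            }
        }
        return true;
    }
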
639 // Once we get above the size of the free area struct (4 words), we
661 // Only 8-rounded sizes are freed or chopped off the end of a free
665 // precisely, we have to put the free space into a smaller
668 // don't have to traverse the free chains to find a big enough
691 // may be returned to the OS when we free the first allocation, and we
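
The lines around 661 describe the bucket granularity: everything freed or carved off is a multiple of 8 bytes, so each small bucket holds entries of exactly one size and allocation never traverses a chain looking for one that is big enough. An illustrative size-to-bucket mapping in that spirit; the real bucket layout is not shown in these excerpts:

    #include <stddef.h>

    // Round a requested size up to the 8-byte granularity the heap frees
    // and carves in.
    static size_t round_up_8(size_t size) {
        return (size + 7) & ~(size_t)7;
    }

    // With one bucket per 8-byte step, every entry in a small bucket has
    // exactly the requested size (illustrative mapping only).
    static size_t small_bucket_index(size_t rounded_size) {
        return rounded_size / 8;
    }
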
724 // This goes in a new OS allocation since the trim above removed any free
738 // No trim needed when the entire OS allocation is free.
808 // Look at free list entries that are at least as large as one page plus a
810 // them and free the page(s).
826 // The page will end with a smaller free list entry and a
837 // unlucky rounding could mean we can't actually free anything
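
The trimming pass described here scans free entries big enough to contain at least one whole, page-aligned page plus the bookkeeping that has to stay behind on each side, and returns those pages to the OS. A sketch of just the size and alignment arithmetic, with PAGE_SIZE and TRIM_RESERVE as assumed placeholders and the actual unmapping and re-linking left out:

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define TRIM_RESERVE (4 * sizeof(void*))   // Rough stand-in for the header/free_t that must remain.

    // For a free area at |start| of |size| bytes, compute the page-aligned
    // subrange that could be unmapped.  Returns 0 when unlucky rounding
    // leaves no whole page once room is kept for bookkeeping on both sides.
    static size_t trimmable_range(uintptr_t start, size_t size,
                                  uintptr_t* first_page, uintptr_t* end_page) {
        if (size < 2 * TRIM_RESERVE + PAGE_SIZE) {
            return 0;   // Cannot possibly contain a whole spare page.
        }
        uintptr_t lo = (start + TRIM_RESERVE + PAGE_SIZE - 1) & ~(uintptr_t)(PAGE_SIZE - 1);
        uintptr_t hi = (start + size - TRIM_RESERVE) & ~(uintptr_t)(PAGE_SIZE - 1);
        if (hi <= lo) {
            return 0;   // Unlucky rounding: nothing page-aligned left to free.
        }
        *first_page = lo;
        *end_page = hi;
        return hi - lo;
    }
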
845 // Right sentinel, not free, stops attempts to coalesce right.
860 // of the free area.
867 // free list buckets.
878 // Left sentinel, not free, stops attempts to coalesce left.
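
Each OS allocation is bracketed by sentinel headers that are never marked free, so coalescing naturally stops at the allocation's edges without explicit bounds checks. One way such sentinels could be laid down around a fresh OS allocation; the helper name and field values are invented for the sketch:

    #include <stddef.h>

    typedef struct header_struct {
        struct header_struct* left;
        size_t size;
    } header_t;

    // Bracket a freshly mapped OS allocation of |size| bytes with sentinel
    // headers.  Neither sentinel ever gets FREE_BIT set, so a free of the
    // area in the middle can never coalesce past either end.
    static header_t* add_sentinels(void* os_allocation, size_t size) {
        header_t* left_sentinel = (header_t*)os_allocation;
        header_t* right_sentinel = (header_t*)((char*)os_allocation + size) - 1;

        left_sentinel->left = NULL;               // Nothing lies to the left of the allocation.
        left_sentinel->size = sizeof(header_t);

        right_sentinel->size = 0;                 // Nothing usable lies to the right.
        right_sentinel->left = NULL;              // Set to the middle area once it is created.

        return left_sentinel + 1;                 // The usable area begins here.
    }
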
935 // We can't carve off the rest for a new free space if it's smaller than the
936 // free-list linked structure. We also don't carve it off if it's less than
943 void* free = (char*)head + rounded_up;
944 create_free_area(free, head, left_over);
945 FixLeftPointer(right, (header_t*)free);
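
Those three source lines are the tail-carving step: head is the free area being allocated from, rounded_up is the size actually handed out, and left_over becomes a new free area via create_free_area(), after which FixLeftPointer() repairs the right neighbor's back pointer. The guard the surrounding comments describe reduces to something like this; FREE_T_SIZE and should_carve_tail are stand-ins, and the second, truncated threshold mentioned in the excerpt is not reproduced:

    #include <stdbool.h>
    #include <stddef.h>

    #define FREE_T_SIZE (4 * sizeof(void*))   // Stand-in for sizeof(free_t), the 4-word free struct.

    // The tail is only carved off into a new free area when it can hold the
    // free-list linked structure; otherwise the slack stays inside the
    // allocation rather than becoming an unusable sliver.
    static bool should_carve_tail(size_t area_size, size_t rounded_up) {
        size_t left_over = area_size - rounded_up;
        return left_over >= FREE_T_SIZE;
    }
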
1003 DEBUG_ASSERT(!is_tagged_as_free(header)); // Double free!
1008 // Coalesce with left free object.
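
The free path these lines belong to first asserts that the header is not already tagged free (a set FREE_BIT here means a double free) and then merges with whichever neighbors are free. A compressed sketch of that flow, reusing the is_tagged_as_free idea from above; the other helper names are hypothetical:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct header_struct {
        struct header_struct* left;
        size_t size;
    } header_t;

    bool is_tagged_as_free(const header_t* header);      // Tests FREE_BIT, as sketched earlier.
    header_t* right_header(const header_t* header);      // Header of the physically following area.
    void merge_right_neighbor_into(header_t* area);      // Absorb the area to the right (not shown).

    static void free_area(header_t* header) {
        assert(!is_tagged_as_free(header));   // Double free!

        header_t* area = header;
        if (area->left != NULL && is_tagged_as_free(area->left)) {
            // Coalesce with the free left neighbor: it absorbs this area.
            area = area->left;
            merge_right_neighbor_into(area);
        }
        if (is_tagged_as_free(right_header(area))) {
            // Coalesce with the free right neighbor too.
            merge_right_neighbor_into(area);
        }
        // |area| is now the fully coalesced free area; it goes onto a bucket,
        // back to the OS, or becomes the single cached free OS allocation.
    }

The sentinels make the neighbor checks safe: the left and right ends of every OS allocation are headers that are never free, so the loop never walks off the mapping.
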
1061 // Set up the usable memory area, which will be marked free.
1072 // Create a new free-list entry of at least size bytes (including the
1075 // The new free list entry will have a header on each side (the
1124 // Initialize the free list.
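
Per these comments, heap initialization amounts to clearing every bucket head and then turning the first OS allocation into one big free area with a header on each side. A skeletal version, with NUMBER_OF_BUCKETS and heap_grow as placeholder names:

    #include <stddef.h>

    #define NUMBER_OF_BUCKETS 64            // Placeholder; the real bucket count is not shown here.

    typedef struct free_struct free_t;

    static free_t* free_buckets[NUMBER_OF_BUCKETS];

    void heap_grow(size_t size);            // Maps an OS allocation, adds sentinels, and creates
                                            // one big free area in between (not shown).

    static void heap_init(size_t initial_size) {
        // Initialize the free list: every bucket starts out empty.
        for (size_t i = 0; i < NUMBER_OF_BUCKETS; i++) {
            free_buckets[i] = NULL;
        }
        // The first OS allocation becomes a single free area, marked free,
        // with a (sentinel) header on each side.
        heap_grow(initial_size);
    }
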