History log of /haiku/src/system/kernel/slab/ObjectCache.cpp
# f40bacae 01-Sep-2021 Augustin Cavalier <waddlesplash@gmail.com>

kernel/slab: Add another missing include following previous commits.

Somehow, this only errored on riscv64. Go figure.


# a28bab47 13-Nov-2011 Michael Lotz <mmlr@mlotz.ch>

Add the object pointers to the panic messages.


# e32699b4 01-Nov-2011 Ingo Weinhold <ingo_weinhold@gmx.de>

mmlr + bonefish:
Add ObjectCache::ObjectAtIndex().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43086 a95241bf-73f2-0310-859d-f6bbb57e9c96


# e1c6140e 01-Nov-2011 Ingo Weinhold <ingo_weinhold@gmx.de>

mmlr + bonefish:
* Add optional stack trace capturing for slab memory manager tracing.
* Add allocation tracking for the slab allocator (enabled via
SLAB_ALLOCATION_TRACKING). The allocation tracking requires tracing
with stack traces to be enabled for object caches and/or the memory
manager.
- Add class AllocationTrackingInfo that associates an allocation with
its respective tracing entry. The structure is added to the end of
an allocation done by the memory manager. For the object caches
there's a separate array for each slab.
- Add code range markers to the slab code, so that the first caller
into the slab code can be retrieved from the stack traces.
- Add KDL command "allocations_per_caller" that lists all allocations
summarized by caller.
* Move debug definitions from slab_private.h to slab_debug.h.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43072 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 72156a40 31-Oct-2011 Michael Lotz <mmlr@mlotz.ch>

bonefish+mmlr:
* Introduce "paranoid" malloc/free into the slab allocator (initializing
allocated memory to 0xcc and setting freed memory to 0xdeadbeef).
* Allow for optional stack traces for slab object cache tracing.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43046 a95241bf-73f2-0310-859d-f6bbb57e9c96
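
A hedged sketch of the fill patterns described above: freshly allocated memory is filled with 0xcc and freed memory with the 32-bit pattern 0xdeadbeef, so stale or uninitialized data is easy to spot in KDL. Illustrative only, not the actual slab code:

#include <cstddef>
#include <cstdint>
#include <cstring>

void paranoid_fill_allocated(void* memory, size_t size)
{
	std::memset(memory, 0xcc, size);  // uninitialized-use marker
}

void paranoid_fill_freed(void* memory, size_t size)
{
	// Stamp 0xdeadbeef over the freed block, one 32-bit word at a time.
	const uint32_t pattern = 0xdeadbeef;
	uint8_t* out = static_cast<uint8_t*>(memory);
	size_t offset = 0;
	for (; offset + sizeof(pattern) <= size; offset += sizeof(pattern))
		std::memcpy(out + offset, &pattern, sizeof(pattern));
	if (offset < size)
		std::memset(out + offset, 0xde, size - offset);  // arbitrary tail filler
}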


# 3aea1d4f 15-Jul-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Added ObjectCache::alignment (the object alignment) and use it for
incrementing the cache color cycle. Using the fixed value (8) would
potentially misalign the object again.
* Don't use CACHE_ALIGN_ON_SIZE for object caches any longer -- we have the
alignment parameter anyway (the flag is still used for the MemoryManager,
though).
* ObjectCache::InitSlab(): Slab coloring *was* done when CACHE_ALIGN_ON_SIZE
was given, i.e. exactly the wrong way around. Also the cache_color_cycle
computation was weird -- color 0 was used twice in a row.
* The "slabs" and "slab_cache" KDL commands also print the alignment, now.



git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37534 a95241bf-73f2-0310-859d-f6bbb57e9c96
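
A hedged sketch of the coloring idea in the entry above: the per-slab starting offset ("color") now advances in steps of the cache's alignment rather than a fixed 8, so objects stay aligned while consecutive slabs begin at different offsets. Names and structure are illustrative, not the actual InitSlab() code:

#include <cstddef>

struct ToyCache {
	size_t alignment;       // object alignment, e.g. 8, 16, 64
	size_t maxColor;        // leftover bytes in a slab after the objects
	size_t colorCycle = 0;  // starting offset for the next slab

	size_t NextSlabColor()
	{
		size_t color = colorCycle;
		colorCycle += alignment;   // step by the alignment, not a fixed 8
		if (colorCycle > maxColor)
			colorCycle = 0;        // wrap, so color 0 isn't handed out twice in a row
		return color;              // offset of the first object in the new slab
	}
};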


# 737b9891 13-Jul-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Patch by Lucian Adrian Grijincu (slightly modified by myself):
ObjectCache::ReturnObjectToSlab(): Check the returned object pointer for
obvious invalidity (out of bounds or misalignment).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37508 a95241bf-73f2-0310-859d-f6bbb57e9c96
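
A hedged sketch of the kind of sanity check described above: reject a returned pointer that lies outside the slab's object range or is not object-size aligned within it. Names are illustrative, not the actual ReturnObjectToSlab() code:

#include <cstddef>
#include <cstdint>

struct ToySlab {
	uint8_t* pages;      // start of the slab's object memory
	size_t objectSize;
	size_t objectCount;

	bool IsValidObject(void* object) const
	{
		uint8_t* pointer = static_cast<uint8_t*>(object);
		if (pointer < pages || pointer >= pages + objectSize * objectCount)
			return false;  // out of bounds
		return (size_t)(pointer - pages) % objectSize == 0;  // else: misaligned
	}
};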


# 1c164de7 01-Mar-2010 Axel Dörfler <axeld@pinc-software.de>

* Quick&dirty fix of a race condition that caused an endless loop in
object_cache_alloc(): the ObjectCache::total_objects count was increased in
ObjectCache::InitSlab(), but the slab was really only added at a later point
between the cache could be unlocked.
* If a second object_cache_reserve_internal() managed to be called while the
lock was unlocked, it would see that there has to be space available, and
will then return -- however, since the other thread could not yet place the
slab into the cache, object_cache_alloc() cannot find it.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35702 a95241bf-73f2-0310-859d-f6bbb57e9c96


# ff59ce68 24-Feb-2010 Axel Dörfler <axeld@pinc-software.de>

* The low resource handler now empties the cache depot's magazines; before,
they were never freed unless the cache was destroyed (I just wondered why
my system would bury >1G in the magazines).
* Made the magazine capacity variable per cache, i.e. for larger objects it's
not a good idea to have 64*CPU buffers lying around in the worst case.
* Furthermore, create_object_cache_etc()/object_depot_init() now take
arguments for the magazine capacity as well as the maximum number of full
unused magazines.
* By default, you might want to initialize both to zero, as then some hopefully
usable defaults are computed. Otherwise (the only current example is the
vm_page_mapping cache) you can just put in the values you want there.
The page mapping cache uses larger values, as its objects are usually
allocated and deleted in larger chunks.
* Beware, though, I couldn't test these changes yet, as QEMU didn't want to
run today. I'll test them on another machine now.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35601 a95241bf-73f2-0310-859d-f6bbb57e9c96
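
A hedged usage sketch of the new arguments: passing zero for both lets the slab code compute defaults, while a cache with bursty allocation/free patterns can request bigger magazines. The exact parameter names and order of create_object_cache_etc() are an assumption here, and the numeric values are purely illustrative:

// Assumed signature (check slab/Slab.h for the authoritative one):
//   create_object_cache_etc(name, objectSize, alignment, maxByteUsage,
//       magazineCapacity, maxMagazineCount, flags, cookie,
//       constructor, destructor, reclaimer)

// 0/0 for the two magazine arguments lets the allocator compute defaults:
object_cache* cache = create_object_cache_etc("some objects",
	64 /* object size */, 0, 0, 0 /* magazineCapacity */,
	0 /* maxMagazineCount */, 0, NULL, NULL, NULL, NULL);

// A cache with bursty allocation/free patterns (the vm_page_mapping cache is
// the example above) can request larger magazines; 64/16 are made-up values:
object_cache* mappingCache = create_object_cache_etc("page mappings",
	sizeof(vm_page_mapping), 0, 0, 64, 16, 0, NULL, NULL, NULL, NULL);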


# f1bdb8b9 25-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* HashedObjectCache: Instead of entering the individual objects in the hash
table, we only enter the slab. This also saves us the link object per object.
* Removed the now useless {Prepare,Unprepare}Object() methods.
* SmallObjectCache: Unlock the cache while calling into the MemoryManager. We
need to do that to avoid an indirect violation of the CACHE_DONT_* policy.
* Simplified lower_boundary().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35285 a95241bf-73f2-0310-859d-f6bbb57e9c96


# b4e5e498 25-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

MemoryManager:
* Added support for larger raw allocations (up to one large chunk (128 pages))
in the slab areas. For an even larger allocation a dedicated area is created
(haven't seen that happen yet, though).
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Copy-and-paste bug: the meta chunks of the to-be-freed area
would be added to the free lists instead of being removed from them. This
would corrupt the lists and also lead to all kinds of misuse of meta chunks.

object caches:
* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object
caches, but the block allocator sets it on all power-of-two sized caches.
* object_cache_reserve_internal(): Detect recursion and don't wait in such a
case. The function could deadlock itself, since
HashedObjectCache::CreateSlab() does allocate memory, thus potentially
reentering.
* object_cache_low_memory():
- I missed some returns when reworking that one in r35254, so the function
might stop early and also leave the cache in maintenance mode, which would
cause it to be ignored by the object cache resizer and low memory handler from
that point on.
- Since ReturnSlab() potentially unlocks, the conditions weren't quite correct
and too many slabs could be freed.
- Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() does potentially
unlock the cache, the situation might have changed and there might not be an
empty slab available, but a partial one. The function would crash.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed debugger command "cache_info" to "slab_cache" to avoid confusion with
the VMCache commands.
* ObjectCache::usage was not maintained anymore since I introduced the
MemoryManager. object_cache_get_usage() would thus always return 0 and the
block cache would not be considered cached memory. This was only of
informational relevance, though.

slab allocator misc.:
* Disable the object depots of block allocator caches for object sizes > 2 KB.
Allocations of those sizes aren't so common that the object depots yield any
benefit.
* The slab allocator is now fully self-sufficient. It allocates its bootstrap
memory from the MemoryManager, and the hash tables for HashedObjectCaches now
use the block allocator instead of the heap.
* Added option to use the slab allocator for malloc() and friends
(USE_SLAB_ALLOCATOR_FOR_MALLOC). Currently disabled. Works in principle and
has virtually no lock contention. Handling of low-memory situations is still
missing, though.
* Improved the output of some debugger commands.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96


# c2d63cfa 23-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Reworked the object cache resizer (renamed to object cache maintainer) and
low resource handler functions. Particularly fixed the race conditions
between those and delete_object_cache().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35254 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 86c794e5 21-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks which form the basic units of allocation. We
have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But also in pure freeing functions it
might eventually become useful to have those flags, since they could end up
deleting an area, which might not be allowable in all situations. We should
introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache a to-be-freed allocation belongs
to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.

other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the
semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
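
A hedged sketch of the chunk-size-class idea in the entry above: an allocation is served from the smallest chunk class that fits, and anything larger than the biggest chunk gets a dedicated area. Only the 128-page large chunk is mentioned in the log above; the other sizes here are assumptions:

#include <cstddef>

enum chunk_class { CHUNK_SMALL, CHUNK_MEDIUM, CHUNK_LARGE, CHUNK_AREA };

static const size_t kPageSize = 4096;
static const size_t kSmallChunkSize = kPageSize;         // assumption
static const size_t kMediumChunkSize = 16 * kPageSize;   // assumption
static const size_t kLargeChunkSize = 128 * kPageSize;   // "one large chunk (128 pages)"

chunk_class ClassFor(size_t size)
{
	if (size <= kSmallChunkSize)
		return CHUNK_SMALL;
	if (size <= kMediumChunkSize)
		return CHUNK_MEDIUM;
	if (size <= kLargeChunkSize)
		return CHUNK_LARGE;
	return CHUNK_AREA;  // too big for any chunk: create a dedicated area
}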


# 08d66c12 20-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Always unlock the object cache while allocating memory. This is necessary for
the CACHE_DONT_SLEEP flag to work for real, since otherwise the thread could
block on the mutex held by a thread allocating memory. We use two condition
variables to prevent multiple threads from allocating slabs at the same time.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35206 a95241bf-73f2-0310-859d-f6bbb57e9c96
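
A minimal sketch of the pattern described above, using standard C++ primitives instead of the kernel's mutex and ConditionVariable; the real code uses two condition variables, and all names below are illustrative only:

#include <condition_variable>
#include <mutex>

struct ToyCache {
	std::mutex lock;
	std::condition_variable slabCreated;
	bool creatingSlab = false;
	int freeObjects = 0;

	int AllocateSlabFromBackend() { return 8; }  // stand-in for the memory backend
	void* TakeObject() { return this; }          // stand-in for popping a free object

	void* Allocate()
	{
		std::unique_lock<std::mutex> guard(lock);
		while (freeObjects == 0) {
			if (creatingSlab) {
				// Another thread is already creating a slab -- wait for it
				// instead of allocating a second one.
				slabCreated.wait(guard);
				continue;
			}
			creatingSlab = true;
			guard.unlock();                // allocate without holding the cache lock
			int newObjects = AllocateSlabFromBackend();
			guard.lock();
			creatingSlab = false;
			freeObjects += newObjects;
			slabCreated.notify_all();      // wake threads waiting for the new slab
		}
		freeObjects--;
		return TakeObject();
	}
};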


# bb439b87 20-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Consistently propagate the CACHE_DONT_SLEEP flag.
* block_alloc(): Create B_FULL_LOCK area.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35202 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 20ca0c5e 19-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* object_cache_return_object_wrapper(): Calling object_cache_free() is a bad
idea, since that would potentially add the object back to the object store
or lead to infinite recursion. When the object cache was destroyed, this most
likely led to infinite loops, because the object would alternately be
removed from and added back to the object store.
* delete_object_cache(): Lock after destroying the object store, so we don't
deadlock.
* Use the object store on SMP machines. It seems to work, though I only
tested with the network stack and that seems to have problems of its own.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35182 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 825566f8 19-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Split the slab allocator code into separate source files and C++-ified
things a bit.
* Some style cleanup.
* The object depot now has a cookie that will be passed to the return
hook.
* Fixed object_cache_return_object_wrapper() to use the new cookie.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35174 a95241bf-73f2-0310-859d-f6bbb57e9c96

