#
077c84eb |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: atomic_*() functions rework

* No need for the atomically changed variables to be declared as volatile.
* Drop support for atomically getting and setting unaligned data.
* Introduce atomic_get_and_set[64](), which works the same as atomic_set[64]() used to. atomic_set[64]() does not return the previous value anymore.
|
#
54848900 |
|
02-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the unused vnode management to a new file -- well, the few variables used for it, that is.
* The main cause for the heavy contention of the unused vnodes mutex was that relatively few vnodes are actually used for a longer time. Mainly those are the volume roots, mmap()ed files, and the files opened by programs. A good deal of nodes -- particularly directories -- are referenced only for a very short time, e.g. to resolve a path to a contained entry. This caused those nodes to be added to and removed from the unused vnodes list very frequently, resulting in high contention on the mutex guarding it. To address the problem I've introduced an approximation of a set of "hot" vnodes, i.e. vnodes that have recently been marked unused. They are stored in an array that, by means of an r/w lock and atomic operations, can be accessed concurrently most of the time. Whenever it gets full, it is flushed to the actual unused vnodes list.
* dec_vnode_ref_count(): No longer check the unused vnode count every time. The newly called vnode_unused() does so only from time to time and returns whether the caller is expected to free some of the unused vnodes. As a side effect this also fixes a bug I previously introduced: the unused vnode to be freed was marked busy without being locked first.

The -j8 Haiku image test build shows that the changes reduce contention on the unused vnode list mutex to virtually zero without introducing any significant contention on the new r/w lock. The VMCache lock contention also seems to be decreased somewhat, which is probably not that surprising considering that the page writer acquires/releases vnode references with the cache lock held. The "pages" lock takes over even more of the contention, now causing more than 100000 waits per second. The total build time reduction is about 4.5%; kernel time drops more than 10%.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34866 a95241bf-73f2-0310-859d-f6bbb57e9c96
|