History log of /linux-master/include/linux/xarray.h
# e7716c74 21-Aug-2023 Philipp Stanner <pstanner@redhat.com>

xarray: Document necessary flag in alloc functions

Add a new line to the docstrings of the functions wrapping __xa_alloc() and
__xa_alloc_cyclic(), noting that the flag XA_FLAGS_ALLOC must have been set
beforehand.

The documentation so far says that functions wrapping __xa_alloc() and
__xa_alloc_cyclic() are supposed to return either -ENOMEM or -EBUSY in
case of an error. If the xarray has been initialized without the flag
XA_FLAGS_ALLOC, however, they fail with a different, undocumented error
code.

As hinted at in Documentation/core-api/xarray.rst, wrappers around these
functions should only be invoked when the flag has been set. The
functions' documentation should reflect that as well.
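
For illustration, a minimal sketch of the requirement being documented (the
'things' array and track_thing() are made-up names, not part of the patch):

    static DEFINE_XARRAY_FLAGS(things, XA_FLAGS_ALLOC);

    int track_thing(void *thing, u32 *id)
    {
        /* With XA_FLAGS_ALLOC set, failure is only -ENOMEM or -EBUSY. */
        return xa_alloc(&things, id, thing, xa_limit_32b, GFP_KERNEL);
    }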

Signed-off-by: Philipp Stanner <pstanner@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 5649d113 18-Jan-2023 Yang Yang <yang.yang29@zte.com.cn>

swap_state: update shadow_nodes for anonymous page

shadow_nodes is used by workingset handling to reclaim shadow nodes. It has
only been updated on page cache adds and deletes, because for a long time
workingset only supported the page cache. When workingset gained anonymous
page detection, we missed updating shadow_nodes for it. As a result, shadow
nodes of anonymous pages are never reclaimed by scan_shadow_nodes(), even
when they use a lot of memory and system memory is tight.

So update shadow_nodes when entries are added to or deleted from the swap
cache, by calling xas_set_update(.., workingset_update_node).
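
Sketched against a swap cache store path (mapping, index and folio are
illustrative and error handling is omitted), the change amounts to:

    static void swap_cache_store(struct address_space *mapping, pgoff_t index,
                                 struct folio *folio)
    {
        XA_STATE(xas, &mapping->i_pages, index);

        /* node insertions/removals now update shadow node accounting */
        xas_set_update(&xas, workingset_update_node);

        do {
            xas_lock_irq(&xas);
            xas_store(&xas, folio);
            xas_unlock_irq(&xas);
        } while (xas_nomem(&xas, GFP_KERNEL));
    }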

Link: https://lkml.kernel.org/r/202301182013032211005@zte.com.cn
Fixes: aae466b0052e ("mm/swap: implement workingset detection for anonymous LRU")
Signed-off-by: Yang Yang <yang.yang29@zte.com.cn>
Reviewed-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 1dd685c4 06-Jul-2022 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Add calls to might_alloc()

Catch bogus GFP flags deterministically, instead of occasionally
when we actually have to allocate memory.

Reported-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 69a37a8b 08-Jun-2022 Matthew Wilcox (Oracle) <willy@infradead.org>

mm/huge_memory: Fix xarray node memory leak

If xas_split_alloc() fails to allocate the necessary nodes to complete the
xarray entry split, it sets the xa_state to -ENOMEM, which xas_nomem()
then interprets as "Please allocate more memory", not as "Please free
any unnecessary memory" (which was the intended outcome). It's confusing
to use xas_nomem() to free memory in this context, so call xas_destroy()
instead.
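
The error path then looks roughly like this (split_entry() and its
parameters are illustrative, not the code from the patch):

    static int split_entry(struct address_space *mapping, struct folio *folio,
                           unsigned int order, gfp_t gfp)
    {
        XA_STATE(xas, &mapping->i_pages, folio->index);
        int ret = 0;

        xas_split_alloc(&xas, folio, order, gfp);
        if (xas_error(&xas)) {
            ret = xas_error(&xas);      /* typically -ENOMEM */
            goto out;
        }
        /* ... take xas_lock_irq() and perform xas_split() ... */
    out:
        xas_destroy(&xas);              /* free any nodes we did not use */
        return ret;
    }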

Reported-by: syzbot+9e27a75a8c24f3fe75c1@syzkaller.appspotmail.com
Fixes: 6b24ca4a1a8d ("mm: Use multi-index entries in the page cache")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 22f56b8e 03-Feb-2022 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Include bitmap.h from xarray.h

xas_find_chunk() calls find_next_bit(), which is defined in find.h,
included from bitmap.h. Inside the kernel, this isn't a problem because
bitmap.h is included from cpumask.h which is dragged in (eventually)
by gfp.h. When building the test-suite, that doesn't happen, so we need
to include bitmap.h explicitly.

Fixes: 4ade0818cf04 ("tools: sync tools/bitmap with mother linux")
Reported-by: Liam Howlett <liam.howlett@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 9bbdc0f3 22-Mar-2022 Muchun Song <songmuchun@bytedance.com>

xarray: use kmem_cache_alloc_lru to allocate xa_node

The workingset code adds the xa_node to the shadow_nodes list, so xa_node
allocation should be done with kmem_cache_alloc_lru(). Use xas_set_lru()
to pass the list_lru into which we want the xa_node inserted, so that the
xa_node reclaim context is set up correctly.
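
The setup then looks roughly like this (mapping and index are illustrative;
shadow_nodes stands for the workingset's list_lru):

    XA_STATE(xas, &mapping->i_pages, index);

    xas_set_update(&xas, workingset_update_node);
    xas_set_lru(&xas, &shadow_nodes);
    /*
     * xa_node allocations made on behalf of this operation now go
     * through kmem_cache_alloc_lru() and are tracked by that list_lru.
     */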

Link: https://lkml.kernel.org/r/20220228122126.37293-9-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Alex Shi <alexs@kernel.org>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Cc: Chao Yu <chao@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Fam Zheng <fam.zheng@bytedance.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Kari Argillander <kari.argillander@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 25a8de7f 27-Aug-2021 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Add xas_advance()

Add a new helper function to help iterate over multi-index entries.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>


# df59d0a4 30-Mar-2021 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Add xa_limit_16b

A 16-bit limit is a more common limit than I had realised. Make it
generally available.
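
Usage mirrors the existing limits, e.g. (xa and entry are illustrative):

    u32 id;
    int err = xa_alloc(&xa, &id, entry, xa_limit_16b, GFP_KERNEL);
    /* on success, id is in the range 0..65535 */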

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 8fc75643 15-Oct-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: add xas_split

In order to use multi-index entries for huge pages in the page cache, we
need to be able to split a multi-index entry (eg if a file is truncated in
the middle of a huge page entry). This version does not support splitting
more than one level of the tree at a time. This is an acceptable
limitation for the page cache as we do not expect to support order-12
pages in the near future.
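
A rough sketch of how a split is driven (old_order, new_order and the
xarray are illustrative):

    /* Replace one order-old_order entry covering 'index' with
     * order-new_order copies of the same entry. */
    XA_STATE_ORDER(xas, &xa, index, new_order);
    void *entry = xa_load(&xa, index);

    xas_split_alloc(&xas, entry, old_order, GFP_KERNEL); /* outside the lock */
    xas_lock(&xas);
    if (!xas_error(&xas))
        xas_split(&xas, entry, old_order);               /* one level at a time */
    xas_unlock(&xas);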

[akpm@linux-foundation.org: export xas_split_alloc() to modules]
[willy@infradead.org: fix xarray split]
Link: https://lkml.kernel.org/r/20200910175450.GV6583@casper.infradead.org
[willy@infradead.org: fix xarray]
Link: https://lkml.kernel.org/r/20201001233943.GW20115@casper.infradead.org

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Qian Cai <cai@lca.pw>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lkml.kernel.org/r/20200903183029.14930-3-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 57417ceb 15-Oct-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: add xa_get_order

Patch series "Fix read-only THP for non-tmpfs filesystems".

As described more verbosely in the [3/3] changelog, we can inadvertently
put an order-0 page in the page cache which occupies 512 consecutive
entries. Users are running into this if they enable the
READ_ONLY_THP_FOR_FS config option; see
https://bugzilla.kernel.org/show_bug.cgi?id=206569 and Qian Cai has also
reported it here:
https://lore.kernel.org/lkml/20200616013309.GB815@lca.pw/

This is a rather intrusive way of fixing the problem, but has the
advantage that I've actually been testing it with the THP patches, which
means that it sees far more use than it does upstream -- indeed, Song has
been entirely unable to reproduce it. It also has the advantage that it
removes a few patches from my gargantuan backlog of THP patches.

This patch (of 3):

This function returns the order of the entry at the index. We need this
because there isn't space in the shadow entry to encode its order.
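
For example (mapping and index are illustrative):

    /* 0 for a normal entry, >0 for a multi-index entry */
    int order = xa_get_order(&mapping->i_pages, index);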

[akpm@linux-foundation.org: export xa_get_order to modules]

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Qian Cai <cai@lca.pw>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lkml.kernel.org/r/20200903183029.14930-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20200903183029.14930-2-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# ca7b639e 02-Aug-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Fix xas_reload for multi-index entries

xas_reload() was only checking that the head entry was still at the
head index. If the entry has been split, that's not enough as there
may be a different entry at the specified index now.

Solve this by checking the slot for the requested index instead of the
head index.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# f82cd2f0 18-Aug-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Add private interface for workingset node deletion

Move the tricky bits of dealing with the XArray from the workingset
code to the XArray. Make it clear in the documentation that this is a
private interface, and only export it for the benefit of the test suite.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 8446466c 06-Aug-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Fix xas_for_each_conflict documentation

At one point, xas_for_each_conflict() was going to work this way,
and I forgot to update the documentation when I changed my mind.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 01f39c1c 07-Jun-2020 Manfred Spraul <manfred@colorfullife.com>

xarray.h: correct return code documentation for xa_store_{bh,irq}()

__xa_store() and xa_store() document that the functions can fail, and
that the return code can be an xa_err() encoded error code.

xa_store_bh() and xa_store_irq() do not document that the functions can
fail and that they can also return xa_err() encoded error codes.

Thus: Update the documentation.
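
With the corrected documentation, callers are expected to check the return
value along these lines (sketch):

    void *curr = xa_store_bh(&xa, index, entry, GFP_KERNEL);
    int err = xa_err(curr);     /* 0, or a negative errno such as -ENOMEM */

    if (err)
        return err;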

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Link: http://lkml.kernel.org/r/20200430111424.16634-1-manfred@colorfullife.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 7e934cf5 12-Mar-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

xarray: Fix early termination of xas_for_each_marked

xas_for_each_marked() is using entry == NULL as a termination condition
of the iteration. When xas_for_each_marked() is used protected only by
RCU, this can however race with xas_store(xas, NULL) in the following
way:

TASK1                                   TASK2
page_cache_delete()                     find_get_pages_range_tag()
                                          xas_for_each_marked()
                                            xas_find_marked()
                                              off = xas_find_chunk()

  xas_store(&xas, NULL)
    xas_init_marks(&xas);
    ...
    rcu_assign_pointer(*slot, NULL);
                                              entry = xa_entry(off);

Thus xas_for_each_marked() can terminate prematurely, possibly leading to
missed entries in the iteration (translating to missing writeback of some
pages or a similar problem).

If we find a NULL entry that has been marked, skip it (unless we're trying
to allocate an entry).

Reported-by: Jan Kara <jack@suse.cz>
CC: stable@vger.kernel.org
Fixes: ef8e5717db01 ("page cache: Convert delete_batch to XArray")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 24a448b1 06-Dec-2019 Chengguang Xu <cgxu519@mykernel.net>

XArray: Fix incorrect comment in header file

The value 256 denotes the Retry entry and 257 the Zero entry,
so fix the comment to match.

Signed-off-by: Chengguang Xu <cgxu519@mykernel.net>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 00ed452c 12-Jan-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Add xa_for_each_range

This function supports iterating over a range of an array. Also add
documentation links for xa_for_each_start().
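
For example (process() is a placeholder):

    unsigned long index;
    void *entry;

    /* visit only the entries at indices 16..31 inclusive */
    xa_for_each_range(&xa, index, entry, 16, 31)
        process(index, entry);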

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 82a95849 16-Jan-2020 Matthew Wilcox (Oracle) <willy@infradead.org>

XArray: Add wrappers for nested spinlocks

Some users need to take an xarray lock while holding another xarray lock.
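
The wrappers let the inner lock be annotated for lockdep, e.g. (sketch):

    xa_lock(&outer_xa);
    xa_lock_nested(&inner_xa, SINGLE_DEPTH_NESTING);
    /* ... work on both arrays ... */
    xa_unlock(&inner_xa);
    xa_unlock(&outer_xa);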

Reported-by: Doug Gilbert <dgilbert@interlog.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 13bea898 14-Oct-2019 Randy Dunlap <rdunlap@infradead.org>

xarray.h: fix kernel-doc warning

Fix (Sphinx) kernel-doc warning in <linux/xarray.h>:

include/linux/xarray.h:232: WARNING: Unexpected indentation.

Link: http://lkml.kernel.org/r/89ba2134-ce23-7c10-5ee1-ef83b35aa984@infradead.org
Fixes: a3e4d3f97ec8 ("XArray: Redesign xa_alloc API")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 7b785645 24-May-2019 Johannes Weiner <hannes@cmpxchg.org>

mm: fix page cache convergence regression

Since a28334862993 ("page cache: Finish XArray conversion"), on most
major Linux distributions, the page cache doesn't correctly transition
when the hot data set is changing, and leaves the new pages thrashing
indefinitely instead of kicking out the cold ones.

On a freshly booted, freshly ssh'd-into virtual machine with 1G RAM
running stock Arch Linux:

[root@ham ~]# ./reclaimtest.sh
+ dd of=workingset-a bs=1M count=0 seek=600
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ ./mincore workingset-a
153600/153600 workingset-a
+ dd of=workingset-b bs=1M count=0 seek=600
+ cat workingset-b
+ cat workingset-b
+ cat workingset-b
+ cat workingset-b
+ ./mincore workingset-a workingset-b
104029/153600 workingset-a
120086/153600 workingset-b
+ cat workingset-b
+ cat workingset-b
+ cat workingset-b
+ cat workingset-b
+ ./mincore workingset-a workingset-b
104029/153600 workingset-a
120268/153600 workingset-b

workingset-b is a 600M file on a 1G host that is otherwise entirely
idle. No matter how often it's being accessed, it won't get cached.

While investigating, I noticed that the non-resident information gets
aggressively reclaimed - /proc/vmstat::workingset_nodereclaim. This is
a problem because a workingset transition like this relies on the
non-resident information tracked in the page cache tree of evicted
file ranges: when the cache faults are refaults of recently evicted
cache, we challenge the existing active set, and that allows a new
workingset to establish itself.

Tracing the shrinker that maintains this memory revealed that all page
cache tree nodes were allocated to the root cgroup. This is a problem,
because 1) the shrinker sizes the amount of non-resident information
it keeps to the size of the cgroup's other memory and 2) on most major
Linux distributions, only kernel threads live in the root cgroup and
everything else gets put into services or session groups:

[root@ham ~]# cat /proc/self/cgroup
0::/user.slice/user-0.slice/session-c1.scope

As a result, we basically maintain no non-resident information for the
workloads running on the system, thus breaking the caching algorithm.

Looking through the code, I found the culprit in the above-mentioned
patch: when switching from the radix tree to xarray, it dropped the
__GFP_ACCOUNT flag from the tree node allocations - the flag that
makes sure the allocated memory gets charged to and tracked by the
cgroup of the calling process - in this case, the one doing the fault.

To fix this, allow xarray users to specify a per-tree flag that makes the
xarray allocate nodes using __GFP_ACCOUNT. Then restore the page cache
tree annotation to request such cgroup tracking for the cache nodes.
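
In XArray terms the page cache initialisation then looks roughly like this
(a sketch using the XA_FLAGS_ACCOUNT name added here):

    xa_init_flags(&mapping->i_pages, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT);
    /* internal xa_node allocations now pass __GFP_ACCOUNT and are
     * charged to the cgroup of the task doing the store */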

With this patch applied, the page cache correctly converges on new
workingsets again after just a few iterations:

[root@ham ~]# ./reclaimtest.sh
+ dd of=workingset-a bs=1M count=0 seek=600
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ cat workingset-a
+ ./mincore workingset-a
153600/153600 workingset-a
+ dd of=workingset-b bs=1M count=0 seek=600
+ cat workingset-b
+ ./mincore workingset-a workingset-b
124607/153600 workingset-a
87876/153600 workingset-b
+ cat workingset-b
+ ./mincore workingset-a workingset-b
81313/153600 workingset-a
133321/153600 workingset-b
+ cat workingset-b
+ ./mincore workingset-a workingset-b
63036/153600 workingset-a
153600/153600 workingset-b

Cc: stable@vger.kernel.org # 4.20+
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


# 962033d5 20-Feb-2019 Matthew Wilcox <willy@infradead.org>

XArray: Use xa_cmpxchg to implement xa_reserve

Jason feels this is clearer, and it saves a function and an exported
symbol.

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Matthew Wilcox <willy@infradead.org>


# b38f6c50 20-Feb-2019 Matthew Wilcox <willy@infradead.org>

XArray: Fix xa_release in allocating arrays

xa_cmpxchg() was a little too magic in turning ZERO entries into NULL,
and would leave the entry set to the ZERO entry instead of releasing
it for future use. After careful review of existing users of
xa_cmpxchg(), change the semantics so that it does not translate either
incoming argument from NULL into ZERO entries.

Add several tests to the test-suite to make sure this problem doesn't
come back.

Reported-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Matthew Wilcox <willy@infradead.org>


# f818b82b 08-Feb-2019 Matthew Wilcox <willy@infradead.org>

XArray: Mark xa_insert and xa_reserve as must_check

If the user doesn't care about the return value from xa_insert(), then
they should be using xa_store() instead. The point of xa_reserve() is
to get the return value early before taking another lock, so this should
also be __must_check.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 2fa044e5 06-Nov-2018 Matthew Wilcox <willy@infradead.org>

XArray: Add cyclic allocation

This differs slightly from the IDR equivalent in five ways.

1. It can allocate up to UINT_MAX instead of being limited to INT_MAX,
like xa_alloc(). Also like xa_alloc(), it will write to the 'id'
pointer before placing the entry in the XArray.
2. The 'next' cursor is allocated separately from the XArray instead
of being part of the IDR. This saves memory for all the users which
do not use the cyclic allocation API and suits some users better.
3. It returns -EBUSY instead of -ENOSPC.
4. It will attempt to wrap back to the minimum value on memory allocation
failure as well as on an -EBUSY error, assuming that a user would
rather allocate a small ID than suffer an ID allocation failure.
5. It reports whether it has wrapped, which is important to some users.
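
A usage sketch against the interface as it appears in the current header
(the 'sessions' array and session_register() are made-up):

    static DEFINE_XARRAY_ALLOC(sessions);
    static u32 next_session;

    int session_register(void *session, u32 *id)
    {
        int err = xa_alloc_cyclic(&sessions, id, session, xa_limit_32b,
                                  &next_session, GFP_KERNEL);

        return err < 0 ? err : 0;   /* err == 1 only means the ID wrapped */
    }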

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# a3e4d3f9 31-Dec-2018 Matthew Wilcox <willy@infradead.org>

XArray: Redesign xa_alloc API

It was too easy to forget to initialise the start index. Add an
xa_limit data structure which can be used to pass min & max, and
define a couple of special values for common cases. Also add some
more tests cribbed from the IDR test suite. Change the return value
from -ENOSPC to -EBUSY to match xa_insert().

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 3ccaf57a 26-Oct-2018 Matthew Wilcox <willy@infradead.org>

XArray: Add support for 1s-based allocation

A lot of places want to allocate IDs starting at 1 instead of 0.
While the xa_alloc() API supports this, it's not very efficient if lots
of IDs are allocated, due to having to walk down to the bottom of the
tree to see if ID 1 is available, then all the way over to the next
non-allocated ID. This method marks ID 0 as being occupied, which wastes
one slot in the XArray but keeps xa_empty() working.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# fd9dc93e 06-Feb-2019 Matthew Wilcox <willy@infradead.org>

XArray: Change xa_insert to return -EBUSY

Userspace translates EEXIST to "File exists" which isn't a very good
error message for the problem. "Device or resource busy" is a better
indication of what went wrong.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 809ab937 25-Jan-2019 Matthew Wilcox <willy@infradead.org>

XArray: Update xa_erase family descriptions

xa_erase does not allocate memory and doesn't have a gfp parameter.
Update the descriptions of all four variants to be more useful.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# edcddd4c 17-Jan-2019 Dan Carpenter <dan.carpenter@oracle.com>

XArray: Fix an arithmetic error in xa_is_err

There is a math problem here which leads to a lot of static checker
warnings for me:

net/sunrpc/clnt.c:451 rpc_new_client() error: (-4096) too low for ERR_PTR

Error values are from -1 to -4095 or from 0xffffffff to 0xfffff001 in
hexadecimal. (I am assuming a 32 bit system for simplicity). We are
using the lowest two bits to hold some internal XArray data so the
error is shifted two spaces to the left. 0xfffff001 << 2 is 0xffffc004.
And finally we want to check that BIT(1) is set so we add 2 which gives
us 0xffffc006.

In other words, we should be checking that "entry >= 0xffffc006", but
the check is actually testing if "entry >= 0xffffc002".
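
Expressed with xa_mk_internal(), as in the applied fix, the check reads
roughly:

    static inline bool xa_is_err(const void *entry)
    {
        return unlikely(xa_is_internal(entry) &&
                        entry >= xa_mk_internal(-4095));
    }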

Fixes: 76b4e5299565 ("XArray: Permit storing 2-byte-aligned pointers")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
[Use xa_mk_internal() instead of changing the bracketing]
Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 19ba9ecf 14-Jan-2019 Cyrill Gorcunov <gorcunov@gmail.com>

XArray: Fix typo in comment

This seems to be a copy-and-paste typo; not a big deal, but better to fix
for consistency's sake.

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Matthew Wilcox <willy@infradead.org>


# b0606fed 02-Jan-2019 Matthew Wilcox <willy@infradead.org>

XArray: Honour reserved entries in xa_insert

xa_insert() should treat reserved entries as occupied, not as available.
Also, it should treat requests to insert a NULL pointer as a request
to reserve the slot. Add xa_insert_bh() and xa_insert_irq() for
completeness.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 76b4e529 28-Dec-2018 Matthew Wilcox <willy@infradead.org>

XArray: Permit storing 2-byte-aligned pointers

On m68k, statically allocated pointers may only be two-byte aligned.
This clashes with the XArray's method for tagging internal pointers.
Permit storing these pointers in single slots (ie not in multislots).

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 4a31896c 17-Dec-2018 Matthew Wilcox <willy@infradead.org>

XArray: Change xa_for_each iterator

There were three problems with this API:
1. It took too many arguments; almost all users wanted to iterate over
every element in the array rather than a subset.
2. It required that 'index' be initialised before use, and there's no
realistic way to make GCC catch that.
3. 'index' and 'entry' were the opposite way round from every other
member of the XArray APIs.

So split it into three different APIs:

xa_for_each(xa, index, entry)
xa_for_each_start(xa, index, entry, start)
xa_for_each_marked(xa, index, entry, filter)
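
The common case now reads simply (the loop body is illustrative):

    unsigned long index;
    void *entry;

    xa_for_each(&xa, index, entry)
        pr_info("index %lu: %p\n", index, entry);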

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 02669b17 05-Dec-2018 Matthew Wilcox <willy@infradead.org>

XArray: Turn xa_init_flags into a static inline

As a regular out-of-line function, xa_init_flags() put all
dynamically-initialised XArrays into the same locking class. That leads to
one XArray lock while holding another is a deadlock. It's possible to
work around some of these situations with separate locking classes for
irq/bh/regular XArrays, and SINGLE_DEPTH_NESTING, but that's ugly, and
it doesn't work for all situations (where we have completely unrelated
XArrays).

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 55f3f7ea 26-Nov-2018 Matthew Wilcox <willy@infradead.org>

XArray: Add xa_cmpxchg_irq and xa_cmpxchg_bh

These convenience wrappers match the other _irq and _bh wrappers we
already have. It turns out I'd already open-coded xa_cmpxchg_irq()
in the shmem code, so convert that.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 804dfaf0 05-Nov-2018 Matthew Wilcox <willy@infradead.org>

XArray: Fix Documentation

Minor fixes.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 84e5acb7 26-Oct-2018 Matthew Wilcox <willy@infradead.org>

XArray: Add xa_store_bh() and xa_store_irq()

These convenience wrappers disable interrupts while taking the spinlock.
A number of drivers would otherwise have to open-code these functions.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 9c16bb88 05-Nov-2018 Matthew Wilcox <willy@infradead.org>

XArray: Turn xa_erase into an exported function

Make xa_erase() take the spinlock and then call __xa_erase(), but make
it out of line since it's such a common function.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# c5beb07e 31-Oct-2018 Matthew Wilcox <willy@infradead.org>

XArray: Unify xa_cmpxchg and __xa_cmpxchg

xa_cmpxchg() was one of the largest functions in the xarray
implementation. By turning it into a wrapper and having the callers
take the lock (like several other functions), we save 160 bytes on a
tinyconfig build and reduce the duplication in xarray.c.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 4c0608f4 30-Oct-2018 Matthew Wilcox <willy@infradead.org>

XArray: Regularise xa_reserve

The xa_reserve() function was a little unusual in that it attempted to
be callable for all kinds of locking scenarios. Make it look like the
other APIs with __xa_reserve, xa_reserve_bh and xa_reserve_irq variants.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 0e9446c3 15-Aug-2018 Matthew Wilcox <willy@infradead.org>

xarray: Add range store functionality

This version of xa_store_range() really only supports load and store.
Our only user only needs basic load and store functionality, so there's
no need to do the extra work to support marking and overlapping stores
correctly yet.
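
For example (xa and entry are illustrative):

    /* make indices 64..127 all return the same entry */
    int err = xa_err(xa_store_range(&xa, 64, 127, entry, GFP_KERNEL));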

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 371c752d 04-Jul-2018 Matthew Wilcox <willy@infradead.org>

xarray: Track free entries in an XArray

Add the optional ability to track which entries in an XArray are free
and provide xa_alloc() to replace most of the functionality of the IDR.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 9f14d4f1 01-Oct-2018 Matthew Wilcox <willy@infradead.org>

xarray: Add xa_reserve and xa_release

This function reserves a slot in the XArray for users which need
to acquire multiple locks before storing their entry in the tree and
so cannot use a plain xa_store().
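
The intended pattern, roughly (the locking and the 'proceed' decision are
illustrative):

    int err = xa_reserve(&xa, index, GFP_KERNEL);   /* may allocate */

    if (err)
        return err;

    /* ... take the other locks, decide whether to go ahead ... */

    if (proceed)
        xa_store(&xa, index, entry, GFP_NOWAIT);    /* slot exists, no allocation */
    else
        xa_release(&xa, index);                     /* hand the slot back */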

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 2264f513 03-Dec-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add xas_create_range

This hopefully temporary function is useful for users who have not yet
been converted to multi-index entries.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 4e99d4e9 01-Jun-2018 Matthew Wilcox <willy@infradead.org>

xarray: Add xas_for_each_conflict

This iterator iterates over each entry that is stored at the index or
indices specified by the xa_state. It is intended for use in a
conditional store of a multi-index entry, or to allow entries which are
about to be removed from the xarray to be disposed of properly.
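
The intended pattern looks roughly like this (dispose_of(), new_entry and
order are illustrative):

    XA_STATE_ORDER(xas, &xa, index, order);
    void *old;

    do {
        xas_lock(&xas);
        xas_for_each_conflict(&xas, old)
            dispose_of(old);            /* entries being replaced */
        xas_store(&xas, new_entry);
        xas_unlock(&xas);
    } while (xas_nomem(&xas, GFP_KERNEL));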

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 64d3e9a9 30-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Step through an XArray

The xas_next and xas_prev functions move the xas index by one position,
and adjust the rest of the iterator state to match it. This is more
efficient than calling xas_set() as it keeps the iterator at the leaves
of the tree instead of walking the iterator from the root each time.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 687149fc 17-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Destroy an XArray

This function frees all the internal memory allocated to the xarray
and reinitialises it to be empty.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 80a0a1a9 14-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Extract entries from an XArray

The xa_extract function combines the functionality of
radix_tree_gang_lookup() and radix_tree_gang_lookup_tagged().
It extracts entries matching the specified filter into a normal array.
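
For example (illustrative):

    void *batch[16];
    unsigned int found;

    /* copy up to 16 present entries, starting from index 0 */
    found = xa_extract(&xa, batch, 0, ULONG_MAX, ARRAY_SIZE(batch), XA_PRESENT);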

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# b803b428 14-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add XArray iterators

The xa_for_each iterator allows the user to efficiently walk a range
of the array, executing the loop body once for each entry in that
range that matches the filter. This commit also includes xa_find()
and xa_find_after() which are helper functions for xa_for_each() but
may also be useful in their own right.

In the xas family of functions, we have xas_for_each(), xas_find(),
xas_next_entry(), xas_for_each_tagged(), xas_find_tagged(),
xas_next_tagged() and xas_pause().

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 41aec91f 10-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add XArray conditional store operations

Like cmpxchg(), xa_cmpxchg will only store to the index if the current
entry matches the old entry. It returns the current entry, which is
usually more useful than the errno returned by radix_tree_insert().
For the users who really only want the errno, the xa_insert() wrapper
provides a more convenient calling convention.
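
For example, inserting only if the slot is still empty (sketch):

    void *curr = xa_cmpxchg(&xa, index, NULL, entry, GFP_KERNEL);

    if (xa_is_err(curr))
        return xa_err(curr);    /* e.g. -ENOMEM */
    if (curr)
        return -EBUSY;          /* somebody else stored first */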

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 58d6ea30 10-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add XArray unconditional store operations

xa_store() differs from radix_tree_insert() in that it will overwrite an
existing element in the array rather than returning an error. This is
the behaviour which most users want, and those that want more complex
behaviour generally want to use the xas family of routines anyway.

For memory allocation, xa_store() will first attempt to request memory
from the slab allocator; if memory is not immediately available, it will
drop the xa_lock and allocate memory, keeping a pointer in the xa_state.
It does not use the per-CPU cache, although those will continue to exist
until all radix tree users are converted to the xarray.

This patch also includes xa_erase() and __xa_erase() for a streamlined
way to store NULL. Since there is no need to allocate memory in order
to store a NULL in the XArray, we do not need to trouble the user with
deciding what memory allocation flags to use.
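
Basic usage (sketch):

    int err = xa_err(xa_store(&xa, index, entry, GFP_KERNEL));

    if (err)
        return err;
    /* ... later ... */
    xa_erase(&xa, index);       /* stores NULL; never allocates, so no gfp */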

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 9b89a035 10-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add XArray marks

XArray marks are like the radix tree tags, only slightly more strongly
typed. They are renamed in order to distinguish them from tagged
pointers. This commit adds the basic get/set/clear operations.
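
Basic usage of the get/set/clear operations (sketch):

    xa_store(&xa, index, entry, GFP_KERNEL);
    xa_set_mark(&xa, index, XA_MARK_0);

    if (xa_get_mark(&xa, index, XA_MARK_0))
        xa_clear_mark(&xa, index, XA_MARK_0);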

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# ad3d6c72 07-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add XArray load operation

The xa_load function brings with it a lot of infrastructure; xa_empty(),
xa_is_err(), and large chunks of the XArray advanced API that are used
to implement xa_load.

As the test-suite demonstrates, it is possible to use the XArray functions
on a radix tree. The radix tree functions depend on the GFP flags being
stored in the root of the tree, so it's not possible to use the radix
tree functions on an XArray.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# 01959dfe 09-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Define struct xa_node

This is a direct replacement for struct radix_tree_node. A couple of
struct members have changed name, so convert those. Use a #define so
that radix tree users continue to work without change.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>


# f8d5d0cc 07-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Add definition of struct xarray

This is a direct replacement for struct radix_tree_root. Some of the
struct members have changed name; convert those, and use a #define so
that radix_tree users continue to work without change.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>


# 02c02bf1 03-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Change definition of sibling entries

Instead of storing a pointer to the slot containing the canonical entry,
store the offset of the slot. Produces slightly more efficient code
(~300 bytes) and simplifies the implementation.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>


# 3159f943 03-Nov-2017 Matthew Wilcox <willy@infradead.org>

xarray: Replace exceptional entries

Introduce xarray value entries and tagged pointers to replace radix
tree exceptional entries. This is a slight change in encoding to allow
the use of an extra bit (we can now store BITS_PER_LONG - 1 bits in a
value entry). It is also a change in emphasis; exceptional entries are
intimidating and different. As the comment explains, you can choose
to store values or pointers in the xarray and they are both first-class
citizens.
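
For example, storing a small integer instead of a pointer (sketch):

    void *entry;
    unsigned long v;

    xa_store(&xa, index, xa_mk_value(42), GFP_KERNEL);
    entry = xa_load(&xa, index);
    if (xa_is_value(entry))
        v = xa_to_value(entry);     /* 42 */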

Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>


# 3d0186bb 16-Jun-2018 Matthew Wilcox <willy@infradead.org>

Update email address

Redirect some older email addresses that are in the git logs.

Signed-off-by: Matthew Wilcox <willy@infradead.org>


# f6bb2a2c 10-Apr-2018 Matthew Wilcox <willy@infradead.org>

xarray: add the xa_lock to the radix_tree_root

This results in no change in structure size on 64-bit machines as it
fits in the padding between the gfp_t and the void *. 32-bit machines
will grow the structure from 8 to 12 bytes. Almost all radix trees are
protected with (at least) a spinlock, so as they are converted from
radix trees to xarrays, the data structures will shrink again.

Initialising the spinlock requires a name for the benefit of lockdep, so
RADIX_TREE_INIT() now needs to know the name of the radix tree it's
initialising, and so do IDR_INIT() and IDA_INIT().

Also add the xa_lock() and xa_unlock() family of wrappers to make it
easier to use the lock. If we could rely on -fplan9-extensions in the
compiler, we could avoid all of this syntactic sugar, but that wasn't
added until gcc 4.6.

Link: http://lkml.kernel.org/r/20180313132639.17387-8-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>