#
ad29cf99 |
|
13-Apr-2024 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: set_btree_iter_dontneed also clears should_be_locked This is part of a larger series cleaning up the semantics of should_be_locked and adding assertions around it; if we don't need an iterator/path anymore, it clearly doesn't need to be locked. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
be42e4a6 |
|
04-Apr-2024 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Bump limit in btree_trans_too_many_iters() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
46bf2e9c |
|
15-Jan-2024 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix excess transaction restarts in __bchfs_fallocate() drop_locks_do() should not be used in a fastpath without first trying the do in nonblocking mode - the unlock and relock will cause excessive transaction restarts and potentially livelocking with other threads that are contending for the same locks. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0c99e17d |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: growable btree_paths XXX: we're allocating memory with btree locks held - bad We need to plumb through an error path so we can do allocate_dropping_locks() - but we're merging this now because it fixes a transaction path overflow caused by indirect extent fragmentation, and the resize path is rare. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
2c3b0fc3 |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: trans->nr_paths Start to plumb through dynamically growable btree_paths; this patch replaces most BTREE_ITER_MAX references with trans->nr_paths. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
fea153a8 |
|
12-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: rcu protect trans->paths Upcoming patches are going to be changing trans->paths to a reallocatable buffer. We need to guard against use after free when it's used by other threads; this introduces RCU protection to those paths and changes them to check for trans->paths == NULL Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
398c9834 |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: kill btree_path.idx Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
b0b67378 |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: trans_for_each_path_with_node() no longer uses path->idx Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ccb7b08f |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: trans_for_each_path() no longer uses path->idx path->idx is now a code smell: we should be using path_idx_t, since it's stable across btree path reallocation. This is also a bit faster, using the same loop counter vs. fetching path->idx from each path we iterate over. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
4c5289e6 |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: kill trans_for_each_path_from() dead code Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
311e446a |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_btree_path_to_text() -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1f75ba4e |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: struct trans_for_each_path_inorder_iter reducing our usage of path->idx Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
07f383c7 |
|
03-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: btree_iter -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
96ed47d1 |
|
08-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_btree_path_traverse() -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f6363aca |
|
08-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_btree_path_make_mut() -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
4617d946 |
|
08-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_btree_path_set_pos() -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
74e600c1 |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_path_put() -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
255ebbbf |
|
08-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_path_get() -> btree_path_idx_t Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
5ce8b92d |
|
07-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: minor bch2_btree_path_set_pos() optimization bpos_eq() is cheaper than bpos_cmp() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0bc64d7e |
|
17-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: kill __bch2_btree_iter_peek_upto_and_restart() dead code Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
80eab7a7 |
|
16-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: for_each_btree_key() now declares loop iter Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c47e8bfb |
|
16-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: kill for_each_btree_key_norestart() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
44ddd8ad |
|
16-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: kill for_each_btree_key_old_upto() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
3a860b5a |
|
16-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: for_each_btree_key_upto() -> for_each_btree_key_old_upto() And for_each_btree_key2_upto -> for_each_btree_key_upto Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
79904fa2 |
|
11-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_trans_srcu_lock() should be static Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
67997234 |
|
11-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: kill btree_trans->wb_updates the btree write buffer path now creates a journal entry directly Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f3360005 |
|
10-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_trans_node_add no longer uses trans_for_each_path() In the future we'll be making trans->paths resizable and potentially having _many_ more paths (for fsck); we need to start fixing algorithms that walk each path in a transaction where possible. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f8fd5871 |
|
07-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: reserve path idx 0 for sentinel Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
5028b907 |
|
07-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Rename for_each_btree_key2() -> for_each_btree_key() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
27b2df98 |
|
07-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Kill for_each_btree_key() for_each_btree_key() handles transaction restarts, like for_each_btree_key2(), but only calls bch2_trans_begin() after a transaction restart - for_each_btree_key2() wraps every loop iteration in a transaction. The for_each_btree_key() behaviour is problematic when it leads to holding the SRCU lock that prevents key cache reclaim for an unbounded amount of time - there's no real need to keep it around. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
8c066ede |
|
07-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: continue now works in for_each_btree_key2() continue now works as in any other loop Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
6e92d155 |
|
03-Dec-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Refactor trans->paths_allocated to be standard bitmap Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e153a0d7 |
|
26-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Improve trace_trans_restart_too_many_iters() We now include the list of paths in use. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ad9c7992 |
|
17-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Kill btree_iter->journal_pos For BTREE_ITER_WITH_JOURNAL, we memoize lookups in the journal keys, to avoid the binary search overhead. Previously we stashed the pos of the last key returned from the journal, in order to force the lookup to be redone when rewinding. Now bch2_journal_keys_peek_upto() handles rewinding itself when necessary - so we can slim down btree_iter. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1ae8a090 |
|
16-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Kill memset() in bch2_btree_iter_init() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e56978c8 |
|
12-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Kill BTREE_ITER_ALL_LEVELS As discussed in the previous patch, BTREE_ITER_ALL_LEVELS appears to be racy with concurrent interior node updates - and perhaps it is fixable, but it's tricky and unnecessary. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
50a8a732 |
|
14-Dec-2023 |
Thomas Bertschinger <tahbertschinger@gmail.com> |
bcachefs: fix invalid memory access in bch2_fs_alloc() error path When bch2_fs_alloc() gets an error before calling bch2_fs_btree_iter_init(), bch2_fs_btree_iter_exit() makes an invalid memory access because btree_trans_list is uninitialized. Signed-off-by: Thomas Bertschinger <tahbertschinger@gmail.com> Fixes: 6bd68ec266ad ("bcachefs: Heap allocate btree_trans") Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
9fcdd23b |
|
04-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Add a comment for BTREE_INSERT_NOJOURNAL usage BTREE_INSERT_NOJOURNAL is primarily used for a performance optimization related to inode updates and fsync - document it. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
d3c7727b |
|
03-Nov-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: rebalance_work btree is not a snapshots btree rebalance_work entries may refer to entries in the extents btree, which is a snapshots btree, or they may also refer to entries in the reflink btree, which is not. Hence rebalance_work keys may use the snapshot field but it's not required to be nonzero - add a new btree flag to reflect this. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c4accde4 |
|
29-Oct-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Ensure srcu lock is not held too long The SRCU read lock that btree_trans takes exists to make it safe for bch2_trans_relock() to deref pointers to btree nodes/key cache items we don't have locked, but as a side effect it blocks reclaim from freeing those items. Thus, it's important to not hold it for too long: we need to differentiate between bch2_trans_unlock() calls that will be only for a short duration, and ones that will be for an unbounded duration. This introduces bch2_trans_unlock_long(), to be used mainly by the data move paths. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
50a38ca1 |
|
19-Oct-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix btree_node_type enum More forwards compatibility fixups: having BKEY_TYPE_btree at the end of the enum conflicts with unknown btree IDs, this shifts BKEY_TYPE_btree to slot 0 and fixes things up accordingly. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
6bd68ec2 |
|
12-Sep-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Heap allocate btree_trans We're using more stack than we'd like in a number of functions, and btree_trans is the biggest object that we stack allocate. But we have to do a heap allocation to initialize it anyway, so there's no real downside to heap allocating the entire thing. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
96dea3d5 |
|
12-Sep-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix W=12 build errors Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c872afa2 |
|
10-Sep-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix bch2_propagate_key_to_snapshot_leaves() When we handle a transaction restart in a nested context, we need to return -BCH_ERR_transaction_restart_nested because we invalidated the outer context's iterators and locks. bch2_propagate_key_to_snapshot_leaves() wasn't doing this, this patch fixes it to use trans_was_restarted(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
8079aab0 |
|
04-Aug-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Split up btree_update_leaf.c We now have btree_trans_commit.c btree_update.c Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f26c67f4 |
|
25-Jun-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Snapshot depth, skiplist fields This extends KEY_TYPE_snapshot to include some new fields: - depth, to indicate depth of this particular node from the root - skip[3], skiplist entries for quickly walking back up to the root These are to improve bch2_snapshot_is_ancestor(), making it O(ln(n)) instead of O(n) in the snapshot tree depth. Skiplist nodes are picked at random from the set of ancestor nodes, not some fixed fraction. This introduces bcachefs_metadata_version 1.1, snapshot_skiplists. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
73bd774d |
|
06-Jul-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Assorted sparse fixes - endianness fixes - mark some things static - fix a few __percpu annotations - fix silent enum conversions Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
9473cff9 |
|
21-Jun-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix more lockdep splats in debug.c Similar to previous fixes, we can't incur page faults while holding btree locks. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
d95dd378 |
|
28-May-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: allocate_dropping_locks() Add two new helpers for allocating memory with btree locks held: The idea is to first try the allocation with GFP_NOWAIT|__GFP_NOWARN, then if that fails - unlock, retry with GFP_KERNEL, and then call trans_relock(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
b5fd7566 |
|
28-May-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: drop_locks_do() Add a new helper for the common pattern of: - trans_unlock() - do something - trans_relock() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e47a390a |
|
27-May-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Convert -ENOENT to private error codes As with previous conversions, replace -ENOENT uses with more informative private error codes. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f154c3eb |
|
27-May-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: trans_for_each_path_safe() bch2_btree_trans_to_text() is used on btree_trans objects that are owned by different threads - when printing out deadlock cycles - so we need a safe version of trans_for_each_path(), else we race with seeing a btree_path that was just allocated and not fully initialized. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
32913f49 |
|
16-Jun-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
six locks: Seq now only incremented on unlock Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1fb4fe63 |
|
20-May-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
six locks: Kill six_lock_state union As suggested by Linus, this drops the six_lock_state union in favor of raw bitmasks. On the one hand, bitfields give more type-level structure to the code. However, a significant amount of the code was working with six_lock_state as a u64/atomic64_t, and the conversions from the bitfields to the u64 were deemed a bit too out-there. More significantly, because bitfield order is poorly defined (#ifdef __LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the sequence number would overflow into the rest of the bitfield if the compiler didn't put the sequence number at the high end of the word. The new code is a bit saner when we're on an architecture without real atomic64_t support - all accesses to lock->state now go through atomic64_*() operations. On architectures with real atomic64_t support, we additionally use atomic bit ops for setting/clearing individual bits. Text size: 7467 bytes -> 4649 bytes - compilers still suck at bitfields. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
d67a16df |
|
30-Apr-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Move bch2_bkey_make_mut() to btree_update.h It's for doing updates - this is where it belongs, and the next patches will be changing these helpers to use items from btree_update.h. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
bcb79a51 |
|
29-Apr-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_bkey_get_iter() helpers Introduce new helpers for a common pattern: bch2_trans_iter_init(); bch2_btree_iter_peek_slot(); - bch2_bkey_get_iter_type() returns -ENOENT if it doesn't find a key of the correct type - bch2_bkey_get_val_typed() copies the val out of the btree to a (typically stack allocated) variable; it handles the case where the value in the btree is smaller than the current version of the type, zeroing out the remainder. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ab158fce |
|
30-Apr-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Converting to typed bkeys is now allowed for err, null ptrs Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
511b629a |
|
06-Mar-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_btree_iter_peek_node_and_restart() Minor refactoring for the Rust interface. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0f2ea655 |
|
27-Feb-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_btree_iter_peek_and_restart_outlined() Needed for interfacing with Rust - bindgen can't handle inline functions, alas. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
5e2d8be8 |
|
18-Feb-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Split trans->last_begin_ip and trans->last_restarted_ip These are two different things - this improves our debug assert messages. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
73d86dfd |
|
17-Feb-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix erasure coding locking This adds a new helper, bch2_trans_mutex_lock(), for locking a mutex - dropping and retaking btree locks as needed. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c72f687a |
|
11-Oct-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Use for_each_btree_key_upto() more consistently It's important that in BTREE_ITER_FILTER_SNAPSHOTS mode we always use peek_upto() and provide an end for the interval we're searching for - otherwise, when we hit the end of the inode the next inode may be in a different subvolume and not have any keys in the current snapshot, and we'd iterate over arbitrarily many keys before returning one. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
12344c7c |
|
01-Feb-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_trans_in_restart_error() This replaces various BUG_ON() assertions with panics that tell us where the restart was done and the restart type. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
4e3d1899 |
|
04-Feb-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Inline bch2_btree_path_traverse() fastpath Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
31381636 |
|
23-Jan-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_trans_relock_notrace() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ee2c6ea7 |
|
08-Jan-2023 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: btree_iter->ip_allocated In debug mode, we now track where btree iterators and paths are initialized/allocated - helpful in tracking down btree path overflows. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
994ba475 |
|
23-Nov-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: New btree helpers This introduces some new conveniences, to help cut down on boilerplate: - bch2_trans_kmalloc_nomemzero() - performance optimization - bch2_bkey_make_mut() - bch2_bkey_get_mut() - bch2_bkey_get_mut_typed() - bch2_bkey_alloc() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e88a75eb |
|
24-Nov-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: New bpos_cmp(), bkey_cmp() replacements This patch introduces - bpos_eq() - bpos_lt() - bpos_le() - bpos_gt() - bpos_ge() and equivalent replacements for bkey_cmp(). Looking at the generated assembly these could probably be improved further, but we already see a significant code size improvement with this patch. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c96f108b |
|
24-Nov-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Optimize bch2_trans_iter_init() When flags & btree_id are constants, we can constant fold the entire calculation of the actual iterator flags - and the whole thing becomes small enough to inline. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
3bce1383 |
|
15-Nov-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix for_each_btree_key2() Previously, when we exited from the loop body with a break statement _ret wouldn't have been assigned to yet, and we could spuriously return a transaction restart error. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0f35e086 |
|
15-Nov-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fix return code from btree_path_traverse_one() trans->restarted is a positive error code, not the usual negative Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
b2d1d56b |
|
13-Nov-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Fixes for building in userspace - Marking a non-static function as inline doesn't actually work and is now causing problems - drop that - Introduce BCACHEFS_LOG_PREFIX for when we want to prefix log messages with bcachefs (filesystem name) - Userspace doesn't have real percpu variables (maybe we can get this fixed someday), put an #ifdef around bch2_disk_reservation_add() fastpath Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
307e3c13 |
|
17-Oct-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Optimize bch2_trans_init() Now we store the transaction's fn idx in a local variable, instead of redoing the lookup every time we call bch2_trans_init(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
13bc41a7 |
|
03-Oct-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: bch2_trans_locked() Useful debugging function. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
14d8f26a |
|
26-Sep-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Inline bch2_trans_kmalloc() fast path Small performance optimization. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
33bd5d06 |
|
22-Aug-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Deadlock cycle detector We've outgrown our own deadlock avoidance strategy. The btree iterator API provides an interface where the user doesn't need to concern themselves with lock ordering - different btree iterators can be traversed in any order. Without special care, this will lead to deadlocks. Our previous strategy was to define a lock ordering internally, and whenever we attempt to take a lock and trylock() fails, we'd check if the current btree transaction is holding any locks that cause a lock ordering violation. If so, we'd issue a transaction restart, and then bch2_trans_begin() would re-traverse all previously used iterators, but in the correct order. That approach had some issues, though. - Sometimes we'd issue transaction restarts unnecessarily, when no deadlock would have actually occurred. Lock ordering restarts have become our primary cause of transaction restarts, on some workloads totalling 20% of actual transaction commits. - To avoid deadlock or livelock, we'd often have to take intent locks when we only wanted a read lock: with the lock ordering approach, it is actually illegal to hold _any_ read lock while blocking on an intent lock, and this has been causing us unnecessary lock contention. - It was getting fragile - the various lock ordering rules are not trivial, and we'd been seeing occasional livelock issues related to this machinery. So, since bcachefs is already a relational database masquerading as a filesystem, we're stealing the next traditional database technique and switching to a cycle detector for avoiding deadlocks. When we block taking a btree lock, after adding ourselves to the waitlist but before sleeping, we do a DFS of btree transactions waiting on other btree transactions, starting with the current transaction and walking our held locks, and transactions blocking on our held locks. If we find a cycle, we emit a transaction restart. Occasionally (e.g. the btree split path) we cannot allow the lock() operation to fail, so if necessary we'll tell another transaction that it has to fail. Result: trans_restart_would_deadlock events are reduced by a factor of 10 to 100, and we'll be able to delete a whole bunch of grotty, fragile code. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
e4215d0f |
|
16-Sep-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: All held locks must be in a btree path With the new deadlock cycle detector, it's critical that all held locks be marked in a btree_path, because that's what the cycle detector traverses - any locks that aren't correctly marked will cause deadlocks. This changes the btree update path to allocate btree_paths for the new nodes, since until the final update is done we otherwise don't have a path referencing them. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
674cfc26 |
|
26-Aug-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Add persistent counters for all tracepoints Also, do some reorganizing/renaming, convert atomic counters in bch_fs to persistent counters, and add a few missing counters. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
b1cdc398 |
|
27-Aug-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Make more btree_paths available - Don't decrease BTREE_ITER_MAX when building with CONFIG_LOCKDEP anymore. The lockdep table sizes are configurable now, we don't need this anymore. - btree_trans_too_many_iters() is less conservative now. Previously it was causing a transaction restart if we had used more than BTREE_ITER_MAX / 2 paths, change this to BTREE_ITER_MAX - 8. This helps with excessive transaction restarts/livelocks in the bucket allocator path. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
616928c3 |
|
22-Aug-2022 |
Kent Overstreet <kent.overstreet@linux.dev> |
bcachefs: Track maximum transaction memory This patch - tracks maximum bch2_trans_kmalloc() memory used in btree_transaction_stats - makes it available in debugfs - switches bch2_trans_init() to using that for the amount of memory to preallocate, instead of the parameter passed in This drastically reduces transaction restarts, and means we no longer need to track this in the source code. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
cd5afabe |
|
19-Aug-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_locking.c Start to centralize some of the locking code in a new file; more locking code will be moving here in the future. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
f0d2e9f2 |
|
06-Jul-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Add assertions for unexpected transaction restarts Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
c497df8b |
|
11-Aug-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Increment restart count in bch2_trans_begin() Instead of counting transaction restarts, count when the transaction is restarted: if bch2_trans_begin() was called when the transaction wasn't restarted we need to ensure restart_count is still incremented. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
5c0bb66a |
|
11-Aug-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Track the maximum btree_paths ever allocated by each transaction We need a way to check if the machinery for handling btree_paths within a transaction is behaving reasonably, as it often has not been - we've had bugs with transaction path overflows caused by duplicate paths and plenty of other things. This patch tracks, per transaction fn, the most btree paths ever allocated by that transaction and makes it available in debugfs. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
9f96568c |
|
09-Aug-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Tracepoint improvements Our types are exported to the tracepoint code, so it's not necessary to break things out individually when passing them to tracepoints - we can also call other functions from TP_fast_assign(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
17047fbc |
|
05-Aug-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix incorrectly freeing btree_path in alloc path Clearing path->preserve means the path will be dropped in bch2_trans_begin() - but on transaction restart, we're likely to need that path again. This fixes a livelock in the allocation path. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
4f84b7e3 |
|
20-Jul-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: for_each_btree_key_reverse() This adds a new macro, like for_each_btree_key2(), but for iterating in reverse order. Also, change for_each_btree_key2() to properly check the return value of bch2_btree_iter_advance(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
549d173c |
|
17-Jul-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: EINTR -> BCH_ERR_transaction_restart Now that we have error codes, with subtypes, we can switch to our own error code for transaction restarts - and even better, a distinct error code for each transaction restart reason: clearer code and better debugging. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
0990efae |
|
05-Jul-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_trans_too_many_iters() is now a transaction restart All transaction restarts need a tracepoint - this is essential for debugging Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e941ae7d |
|
17-Jul-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Add a counter for btree_trans restarts This will help us improve nested transactions - we need to add assertions that whenever an inner transaction handles a restart, it still returns -EINTR to the outer transaction. This also adds nested_lockrestart_do() and nested_commit_do() which use the new counters to correctly return -EINTR when the transaction was restarted. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
8bfe14e8 |
|
14-Jul-2022 |
Daniel Hill <daniel@gluo.nz> |
bcachefs: lock time stats prep work. We need the caller name and a place to store our results, btree_trans provides this. Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
a1783320 |
|
15-Jul-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: for_each_btree_key2() This introduces two new macros for iterating through the btree, with transaction restart handling - for_each_btree_key2() - for_each_btree_key_commit() Every iteration is now in an implicit transaction, and - as with lockrestart_do() and commit_do() - returning -EINTR will cause the transaction to be restarted, at the same key. This patch converts a bunch of code that was open coding this to these new macros, saving a substantial amount of code. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
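The for_each_btree_key2() entry above says every iteration runs in an implicit transaction, and a restart error causes a retry at the same key. A toy sketch of that control flow, with a fake transaction and a body that requests one restart (all names invented for illustration, not the bcachefs API):

```c
#include <assert.h>

#define TX_RESTART (-1000)	/* stand-in for a restart error code */

struct trans {
	int restarts;	/* how many restarts we've handled */
	int pos;	/* iteration position; survives a restart */
};

/* The loop body: returns TX_RESTART to request a retry at the same key.
 * Here we pretend the first visit of key 2 hits lock contention. */
static int visit_key(struct trans *t, int key, int *sum)
{
	if (key == 2 && t->restarts == 0) {
		t->restarts++;
		return TX_RESTART;
	}
	*sum += key;
	return 0;
}

/* What a macro like for_each_btree_key2() expands to, in spirit:
 * on restart, retry at the same position; real errors break out. */
static int iterate(struct trans *t, int nkeys, int *sum)
{
	for (t->pos = 0; t->pos < nkeys;) {
		int ret = visit_key(t, t->pos, sum);

		if (ret == TX_RESTART)
			continue;	/* restart: same key again */
		if (ret)
			return ret;	/* real error: bail out */
		t->pos++;
	}
	return 0;
}
```

The point of folding this into a macro, per the commit, is that many call sites were open coding exactly this retry loop.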
#
50b13bee |
|
17-Jun-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Improve an error message When inserting a key type that's not valid for a given btree, we should print out which btree we were inserting into. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
30525f68 |
|
21-May-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix journal_keys_search() overhead Previously, on every btree_iter_peek() operation we were searching the journal keys, doing a full binary search - which was slow. This patch fixes that by saving our position in the journal keys, so that we only do a full binary search when moving our position backwards or a large jump forwards. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
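The journal_keys_search() fix above replaces an unconditional binary search with a saved position. A simplified sketch of the idea - walk forward from the cached index when the target is ahead of it, and fall back to a full binary search when the position moved backwards - is below. This is a loose illustration (the real code also bounds how far it will walk forward before giving up); all names are assumptions:

```c
#include <assert.h>
#include <stddef.h>

struct journal_keys {
	const int *keys;	/* sorted ascending */
	size_t     nr;
	size_t     cached_idx;	/* position from the previous search */
};

/* Index of the first key >= search, classic lower-bound binary search. */
static size_t bsearch_ge(const int *keys, size_t nr, int search)
{
	size_t l = 0, r = nr;

	while (l < r) {
		size_t m = l + (r - l) / 2;

		if (keys[m] < search)
			l = m + 1;
		else
			r = m;
	}
	return l;
}

static size_t journal_keys_search(struct journal_keys *jk, int search)
{
	size_t i = jk->cached_idx;

	if (i <= jk->nr && (i == 0 || jk->keys[i - 1] < search)) {
		/* cached position is at or before the target: walk forward */
		while (i < jk->nr && jk->keys[i] < search)
			i++;
	} else {
		/* moved backwards (or cache invalid): full binary search */
		i = bsearch_ge(jk->keys, jk->nr, search);
	}
	jk->cached_idx = i;
	return i;
}
```

For the common case - btree_iter_peek() advancing monotonically - the cached index makes each lookup a short forward scan instead of an O(log n) search from scratch.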
#
b0babf2a |
|
12-Apr-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_btree_iter_peek_all_levels() This adds bch2_btree_iter_peek_all_levels(), which returns keys from every level of the btree - interior nodes included - in monotonically increasing order, soon to be used by the backpointers check & repair code. - BTREE_ITER_ALL_LEVELS can now be passed to for_each_btree_key() to iterate thusly, much like BTREE_ITER_SLOTS - The existing algorithm in bch2_btree_iter_advance() doesn't work with peek_all_levels(): we have to defer the actual advancing until the next time we call peek, where we have the btree path traversed and uptodate. So, we add an advanced bit to btree_iter; when BTREE_ITER_ALL_LEVELS is set bch2_btree_iter_advance() just marks the iterator as advanced. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
d8648425 |
|
30-Mar-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_path_make_mut() clears should_be_locked This fixes a bug where __bch2_btree_node_update_key() wasn't clearing should_be_locked, leading to bch2_btree_path_traverse() always failing - all callers of btree_path_make_mut() want should_be_locked cleared, so do it there. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
8570d775 |
|
11-Mar-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_trans_updates_to_text() This turns bch2_dump_trans_updates() into a to_text() method - this way it can be used by debug tracing. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
2158fe46 |
|
02-Mar-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_trans_inconsistent() Add a new error macro that also dumps transaction updates in addition to doing an emergency shutdown - when a transaction update discovers or is causing a fs inconsistency, it's helpful to see what updates it was doing. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
85d8cf16 |
|
10-Mar-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_btree_iter_peek_upto() In BTREE_ITER_FILTER_SNAPSHOTS mode, we skip over keys in unrelated snapshots. When we hit the end of an inode, if the next inode(s) are in a different subvolume, we could potentially have to skip past many keys before finding a key we can return to the caller, so they can terminate the iteration. This adds a peek_upto() variant to solve this problem, to be used when we know the range we're searching within. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
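The peek_upto() commit above is about bounding a scan: when filtering by snapshot, the iterator might otherwise skip past many non-matching keys before the caller notices it has left the range it cares about. A toy illustration of the early-stop behaviour, using a flat array of (pos, snapshot) pairs rather than a real btree (all names and types invented for the sketch):

```c
#include <assert.h>
#include <stddef.h>

struct key { int pos, snapshot; };

/* Return the first key in [start, end] matching the wanted snapshot,
 * or NULL. Unlike an unbounded peek, we stop as soon as pos exceeds
 * the end bound instead of scanning to the end of the keyspace.
 * *scanned counts keys examined inside the range, to show the bound
 * limiting the work done. */
static const struct key *peek_upto(const struct key *k, size_t nr,
				   int start, int end, int snapshot,
				   size_t *scanned)
{
	for (size_t i = 0; i < nr; i++) {
		if (k[i].pos < start)
			continue;
		if (k[i].pos > end)
			return NULL;	/* past the range: stop early */
		(*scanned)++;
		if (k[i].snapshot == snapshot)
			return &k[i];
	}
	return NULL;
}
```

With an unbounded peek, a caller searching [2, 5] in a snapshot that has no keys there would examine every later key before terminating; with the upper bound, the scan stops at the first key past pos 5.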
#
f7b6ca23 |
|
06-Feb-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: BTREE_ITER_WITH_KEY_CACHE This is the start of cache coherency with the btree key cache - this adds a btree iterator flag that causes lookups to also check the key cache when we're iterating over the btree (not iterating over the key cache). Note that we could still race with another thread creating an item in the key cache and updating it, since we aren't holding the key cache locked if it wasn't found. The next patch for the update path will address this by causing the transaction to restart if the key cache is found to be dirty. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ce91abd6 |
|
06-Feb-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_btree_path_set_pos() bch2_btree_path_set_pos() is now available outside of btree_iter.c Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1f2d9192 |
|
08-Jan-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: iter->update_path With BTREE_ITER_FILTER_SNAPSHOTS, we have to distinguish between the path where the key was found, and the path for inserting into the current snapshot. This adds a new field to struct btree_iter for saving a path for the current snapshot, and plumbs it through bch2_trans_update(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
a1e82d35 |
|
08-Jan-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Refactor bch2_btree_iter() This splits bch2_btree_iter() up into two functions: an inner function that handles BTREE_ITER_WITH_JOURNAL, BTREE_ITER_WITH_UPDATES, and iterating across leaf nodes, and an outer one that implements BTREE_ITER_FILTER_SNAPSHOTS. This is prep work for remembering a btree_path at our update position in BTREE_ITER_FILTER_SNAPSHOTS mode. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
669f87a5 |
|
03-Jan-2022 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Switch to __func__ for recording where btree_trans was initialized Symbol decoding, via %ps, isn't supported in userspace - this will also be faster when we're using trans->fn in the fast path, as with the new BCH_JSET_ENTRY_log journal messages. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
f3e1f444 |
|
21-Dec-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: BTREE_ITER_NOPRESERVE This adds a flag to not mark the initial btree_path as preserve, for paths that we expect to be cheap to reconstitute if necessary - this solves a btree_path overflow caused by need_whiteout_for_snapshot(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
084d42bb |
|
23-Nov-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Apply workaround for too many btree iters to read path Reading from cached data, which calls bch2_bucket_io_time_reset(), is leading to transaction iterator overflows - this standardizes the workaround. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
32b26e8c |
|
05-Nov-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_assert_pos_locked() This adds a new assertion to be used by bch2_inode_update_after_write(), which updates the VFS inode based on the update to the btree inode we just did - we require that the btree inode still be locked when we do that update. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
9a74f63c |
|
07-Nov-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: path->should_be_locked fixes - We should only be clearing should_be_locked in btree_path_set_pos() - it's the responsibility of the btree_path code, not the btree_iter code. - bch2_path_put() needs to pay attention to path->should_be_locked, to ensure we don't drop locks we're supposed to be keeping. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
f527afea |
|
02-Nov-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix upgrade_readers() The bch2_btree_path_upgrade() call was failing and tripping an assert - path->level + 1 is in this case not necessarily exactly what we want, fix it by upgrading exactly the locks we want. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
d1211725 |
|
25-Oct-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: More general fix for transaction paths overflow for_each_btree_key() now calls bch2_trans_begin() as needed; that means we can also call it when we're in danger of overflowing transaction paths. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
b0d1b70a |
|
24-Oct-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Must check for errors from bch2_trans_cond_resched() But we don't need to call it from outside the btree iterator code anymore, since it's called by bch2_trans_begin() and bch2_btree_path_traverse(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
e5fa91d7 |
|
20-Oct-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix restart handling in for_each_btree_key() Code that uses for_each_btree_key often wants transaction restarts to be handled locally and not returned. Originally, we wouldn't return transaction restarts if there was a single iterator in the transaction - the reasoning being if there weren't other iterators being invalidated, and the current iterator was being advanced/retraversed, there weren't any locks or iterators we were required to preserve. But with the btree_path conversion that approach doesn't work anymore - even when we're using for_each_btree_key() with a single iterator there will still be two paths in the transaction, since we now always preserve the path at the pos the iterator was initialized at - the reason being that on restart we often restart from the same place. And it turns out there's now a lot of for_each_btree_key() uses that _do not_ want transaction restarts handled locally, and should be returning them. This patch splits out for_each_btree_key_norestart() and for_each_btree_key_continue_norestart(), and converts existing users as appropriate. for_each_btree_key(), for_each_btree_key_continue(), and for_each_btree_node() now handle transaction restarts themselves by calling bch2_trans_begin() when necessary - and the old hack to not return transaction restarts when there's a single path in the transaction has been deleted. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
9a796fdb |
|
19-Oct-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_trans_exit() no longer returns errors Now that peek_node()/next_node() are converted to return errors directly, we don't need bch2_trans_exit() to return errors - it's cleaner this way and wasn't used much anymore. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
d355c6f4 |
|
19-Oct-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: for_each_btree_node() now returns errors directly This changes for_each_btree_node() to work like for_each_btree_key(), and to that end bch2_btree_iter_peek_node() and next_node() also return error ptrs. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
c075ff70 |
|
04-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: BTREE_ITER_FILTER_SNAPSHOTS For snapshots, we need to implement btree lookups that return the first key that's an ancestor of the snapshot ID the lookup is being done in - and filter out keys in unrelated snapshots. This patch adds the btree iterator flag BTREE_ITER_FILTER_SNAPSHOTS which does that filtering. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
db92f2ea |
|
07-Sep-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Optimize btree lookups in write path This patch significantly reduces the number of btree lookups required in the extent update path. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
1d3ecd7e |
|
04-Sep-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Tighten up btree locking invariants New rule is: if a btree path holds any locks it should be holding precisely the locks wanted (according to path->level and path->locks_want). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
068bcaa5 |
|
03-Sep-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Add more assertions for locking btree iterators out of order btree_path_traverse_all() traverses btree iterators in sorted order, and thus shouldn't see transaction restarts due to potential deadlocks - but sometimes we do. This patch adds some more assertions and tracks some more state to help track this down. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
67e0dd8f |
|
30-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_path This splits btree_iter into two components: btree_iter is now the externally visible component, and it points to a btree_path which is now reference counted. This means we no longer have to clone iterators up front if they might be mutated - btree_path can be shared by multiple iterators, and cloned if an iterator would mutate a shared btree_path. This will help us use iterators more efficiently, as well as slimming down the main long lived state in btree_trans, and significantly cleans up the logic for iterator lifetimes. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
deb0e573 |
|
30-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill BTREE_ITER_NEED_PEEK This was used for an optimization that hasn't existed in quite a while - iter->uptodate will probably be going away as well. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
a0a56879 |
|
30-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: More renaming Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
f7a966a3 |
|
30-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Clean up/rename bch2_trans_node_* fns These utility functions are for managing btree node state within a btree_trans - rename them for consistency, and drop some unneeded arguments. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
78cf784e |
|
30-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Further reduce iter->trans usage This is prep work for splitting btree_path out from btree_iter - btree_path will not have a pointer to btree_trans. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
9f6bd307 |
|
24-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Reduce iter->trans usage Disfavoured, and should go away. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
84841b0d |
|
24-Aug-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_dump_trans_iters_updates() This factors out bch2_dump_trans_iters_updates() from the iter alloc overflow path, and makes some small improvements to what it prints. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
0423fb71 |
|
12-Jun-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Keep a sorted list of btree iterators This will be used to make other operations on btree iterators within a transaction more efficient, and enable some other improvements to how we manage btree iterators. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
955af634 |
|
24-Jul-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: __bch2_trans_commit() no longer calls bch2_trans_reset() It's now the caller's responsibility to call bch2_trans_begin. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
e5af273f |
|
25-Jul-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: trans->restarted Start tracking when btree transactions have been restarted - and assert that we're always calling bch2_trans_begin() immediately after transaction restart. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
a88171c9 |
|
24-Jul-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Clean up interior update paths Btree node merging now happens prior to transaction commit, not after, so we don't need to pay attention to BTREE_INSERT_NOUNLOCK. Also, foreground_maybe_merge shouldn't be calling bch2_btree_iter_traverse_all() - this is becoming private to the btree iterator code and should only be called by bch2_trans_begin(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
6e075b54 |
|
24-Jul-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_btree_iter_relock_intent() This adds a new helper for btree_cache.c that does what we want when the iterator is still being traversed - and also eliminates some unnecessary transaction restarts. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
9f1833ca |
|
10-Jul-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Update btree ptrs after every write This closes a significant hole (and last known hole) in our ability to verify metadata. Previously, since btree nodes are log structured, we couldn't detect lost btree writes that weren't the first write to a given node. Additionally, this seems to have led to some significant metadata corruption on multi device filesystems with metadata replication: since a write may have made it to one device and not another, if we read that btree node back from the replica that did have that write and started appending after that point, the other replica would have a gap in the bset entries and reading from that replica wouldn't find the rest of the bsets. But, since updates to interior btree nodes are now journalled, we can close this hole by updating pointers to btree nodes after every write with the currently written number of sectors, without negatively affecting performance. This means we will always detect lost or corrupt metadata - it also means that our btree is now a curious hybrid of COW and non COW btrees, with all the benefits of both (excluding complexity). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
5aab6635 |
|
14-Jul-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Tighten up btree_iter locking assertions We weren't correctly verifying that we had interior node intent locks - this patch also fixes bugs uncovered by the new assertions. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
d38494c4 |
|
07-Jul-2021 |
Dan Robertson <dan@dlrobertson.com> |
bcachefs: docs: add docs for bch2_trans_reset Add basic kernel docs for bch2_trans_reset and bch2_trans_begin. Signed-off-by: Dan Robertson <dan@dlrobertson.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
8c3f6da9 |
|
14-Jun-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Improve iter->should_be_locked Adding iter->should_be_locked introduced a regression where it ended up not being set on the iterator passed to bch2_btree_update_start(), which is definitely not what we want. This patch requires it to be set when calling bch2_trans_update(), and adds various fixups to make that happen. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
953ee28a |
|
10-Jun-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill bch2_btree_iter_peek_cached() It's now been rolled into bch2_btree_iter_peek_slot() Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
5288e66a |
|
03-Jun-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: BTREE_ITER_WITH_UPDATES This drops bch2_btree_iter_peek_with_updates() and replaces it with a new flag, BTREE_ITER_WITH_UPDATES, and also reworks bch2_btree_iter_peek_slot() to respect it too. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
509d3e0a |
|
20-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Child btree iterators This adds the ability for btree iterators to own child iterators - to be used by an upcoming rework of bch2_btree_iter_peek_slot(), so we can scan forwards while maintaining our current position. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
66a0a497 |
|
04-Jun-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_iter->should_be_locked Add a field to struct btree_iter for tracking whether it should be locked - this fixes spurious transaction restarts in bch2_trans_relock(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
531a0095 |
|
04-Jun-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Improve btree iterator tracepoints This patch adds some new tracepoints to the btree iterator code, and adds new fields to the existing tracepoints - primarily for the iterator position. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
|
#
2527dd91 |
|
14-Apr-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Improve bch2_btree_iter_traverse_all() By changing it to upgrade iterators to intent locks to avoid lock restarts we can simplify __bch2_btree_node_lock() quite a bit - this fixes a probable bug where it could potentially drop a lock on an unrelated error but still succeed instead of causing a transaction restart. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
08e33761 |
|
03-Apr-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Drop some memset() calls gcc is emitting rep stos here, which is silly (and slow) for an 8 byte memset. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
2d587674 |
|
02-Apr-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Increase commonality between BTREE_ITER_NODES and BTREE_ITER_KEYS Eventually BTREE_ITER_NODES should be going away. This patch is to fix a transaction iterator overflow in the btree node merge path because BTREE_ITER_NODES iterators couldn't be reused. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
5c1d808a |
|
31-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Drop trans->nounlock Since we're no longer doing btree node merging post commit, we can now delete a bunch of code. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e751c01a |
|
24-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Start using bpos.snapshot field This patch starts treating the bpos.snapshot field like part of the key in the btree code: * bpos_successor() and bpos_predecessor() now include the snapshot field * Keys in btrees that will be using snapshots (extents, inodes, dirents and xattrs) now always have their snapshot field set to U32_MAX The btree iterator code gets a new flag, BTREE_ITER_ALL_SNAPSHOTS, that determines whether we're iterating over keys in all snapshots or not - internally, this controls whether bkey_(successor|predecessor) increment/decrement the snapshot field, or only the higher bits of the key. We add a new member to struct btree_iter, iter->snapshot: when BTREE_ITER_ALL_SNAPSHOTS is not set, iter->pos.snapshot should always equal iter->snapshot, which will be 0 for btrees that don't use snapshots, and always U32_MAX for btrees that will use snapshots (until we enable snapshot creation). This patch also introduces a new metadata version number, and compat code for reading from/writing to older versions - this isn't a forced upgrade (yet). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
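The bpos.snapshot commit above distinguishes two successor operations: one that treats the snapshot field as the lowest-precedence part of the key, and one that increments only the higher bits. A small sketch of the carry behaviour, using an illustrative struct rather than the kernel's packed bpos:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative key position; precedence (high to low): inode, offset,
 * snapshot. This is a sketch, not the kernel's struct bpos layout. */
struct bpos {
	uint64_t inode;
	uint64_t offset;
	uint32_t snapshot;
};

/* Successor treating snapshot as part of the key: increment the
 * lowest-precedence field first, carrying upward on overflow. */
static struct bpos bpos_successor(struct bpos p)
{
	if (++p.snapshot)
		return p;
	if (++p.offset)
		return p;
	++p.inode;
	return p;
}

/* Successor over only the higher bits, for iteration that stays
 * within a single snapshot: the snapshot field is untouched. */
static struct bpos bkey_successor(struct bpos p)
{
	if (++p.offset)
		return p;
	++p.inode;
	return p;
}
```

Which of the two gets used is, per the commit, what BTREE_ITER_ALL_SNAPSHOTS selects internally.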
#
f793fd85 |
|
27-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix for bch2_trans_commit() unlocking when it's not supposed to When we pass BTREE_INSERT_NOUNLOCK bch2_trans_commit isn't supposed to unlock after a successful commit, but it was calling bch2_trans_cond_resched() - oops. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
08070cba |
|
23-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Split btree_iter_traverse and bch2_btree_iter_traverse() External (to the btree iterator code) users of bch2_btree_iter_traverse expect that on success the iterator will be pointed at iter->pos and have that position locked - but since we split iter->pos and iter->real_pos, that means it has to update iter->real_pos if necessary. Internal users don't expect it to modify iter->real_pos, so we need two separate functions. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
bcad5622 |
|
21-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Update iter->real_pos lazily peek() has to update iter->real_pos - there's no need for bch2_btree_iter_set_pos() to update it as well. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e0ba3b64 |
|
21-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Replace bch2_btree_iter_next() calls with bch2_btree_iter_advance The way btree iterators work internally has been changing, particularly with the iter->real_pos changes, and bch2_btree_iter_next() is no longer hyper optimized - it's just advance followed by peek, so it's more efficient to just call advance where we're not using the return value of bch2_btree_iter_next(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
8d956c2f |
|
19-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_iter_set_dontneed() This is a bit clearer than using bch2_btree_iter_free(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
abcecb49 |
|
19-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fsck code refactoring Change fsck code to always put btree iterators - also, make some flow control improvements to deal with lock restarts better, and refactor check_extents() to not walk extents twice for counting/checking i_sectors. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c7bb769c |
|
19-Feb-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: __bch2_trans_get_iter() refactoring, BTREE_ITER_NOT_EXTENTS Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
27ace9cc |
|
04-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Simplify for_each_btree_key() Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
18fc6ae5 |
|
02-Mar-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_iter_prev_slot() Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1f7fdc0a |
|
20-Feb-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_iter_live() New helper to clean things up a bit - also, improve iter->flags handling. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
792e2c4c |
|
07-Feb-2021 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill bch2_btree_iter_set_pos_same_leaf() The only reason we were keeping this around was for BTREE_INSERT_NOUNLOCK semantics - if bch2_btree_iter_set_pos() advances to the next leaf node, it'll drop the lock on the node that we just inserted to. But we don't rely on BTREE_INSERT_NOUNLOCK semantics for the extents btree, just the inodes btree, and if we do need it for the extents btree in the future we can do it more cleanly by cloning the iterator - this lets us delete some special cases in the btree iterator code, which is complicated enough as it is. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
cc578a36 |
|
09-Dec-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix __btree_iter_next() when all iters are in use Also, print out more information on btree transaction iterator overflow. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
3eb26d01 |
|
01-Dec-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_trans_get_iter() no longer returns errors Since we now always preallocate the maximum number of iterators when we initialize a btree transaction, getting an iterator never fails - we can delete a fair amount of error path code. This patch also simplifies the iterator allocation code a bit. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
7e7ae6ca |
|
05-Nov-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix spurious transaction restarts The checks for lock ordering violations weren't quite right. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
645d72aa |
|
26-Oct-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix btree updates when mixing cached and non cached iterators There was a bug where bch2_trans_update() would incorrectly delete a pending update where the new update did not actually overwrite the existing update, because we were incorrectly using BTREE_ITER_TYPE when sorting pending btree updates. This affects the pending patch to use cached iterators for inode updates. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
2ca88e5a |
|
07-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Btree key cache This introduces a new kind of btree iterator, cached iterators, which point to keys cached in a hash table. The cache also acts as a write cache - in the update path, we journal the update but defer updating the btree until the cached entry is flushed by journal reclaim. Cache coherency is for now up to the users to handle, which isn't ideal but should be good enough for now. These new iterators will be used for updating inodes and alloc info (the alloc and stripes btrees). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
72545b5e |
|
08-Jun-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_trans_downgrade() bch2_btree_iter_downgrade() was looping over all iterators in a transaction; bch2_trans_downgrade() should be doing that. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f96c0df4 |
|
02-Jun-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Fix a deadlock in bch2_btree_node_get_sibling() There was a bad interaction with bch2_btree_iter_set_pos_same_leaf(), which can leave a btree node locked that is just outside iter->pos, breaking the lock ordering checks in __bch2_btree_node_lock(). Ideally we should get rid of this corner case, but for now fix it locally with verbose comments. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
495aabed |
|
02-Jun-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Add debug code to print btree transactions Intended to help debug deadlocks, since we can't use lockdep to check btree node lock ordering. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0329b150 |
|
01-Apr-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Trace where btree iterators are allocated This will help with iterator overflow bugs. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
39fb2983 |
|
07-Jan-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill bkey_type_successor Previously, BTREE_ID_INODES was special - inodes were indexed by the inode field, which meant the offset field of struct bpos wasn't used, which led to special cases in e.g. the btree iterator code. Now, inodes in the inodes btree are indexed by the offset field. Also: previously min_key was special for extents btrees, min_key for extents would equal max_key for the previous node. Now, min_key = bkey_successor() of the previous node, same as non extent btrees. This means we can completely get rid of btree_type_successor/predecessor. Also make some improvements to the metadata IO validate/compat code. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
57b0b3db |
|
05-Mar-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_iter_peek_with_updates() Introduce a new iterator method that provides a consistent view of the btree plus uncommitted updates. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
2dac0eae |
|
18-Feb-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Iterator debug code improvements More aggressively checking iterator invariants, and fixing the resulting bugs. Also greatly simplifying iter_next() and iter_next_slot() - they were hyper optimized before, but the optimizations were getting too brittle. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
163e885a |
|
26-Feb-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill TRANS_RESET_MEM|TRANS_RESET_ITERS All iterators should be released now with bch2_trans_iter_put(), so TRANS_RESET_ITERS shouldn't be needed anymore, and TRANS_RESET_MEM is always used. Also convert more code to __bch2_trans_do(). Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
6a9ec828 |
|
31-Jan-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: __bch2_btree_iter_set_pos() This one takes an additional argument for whether we're searching for >= or > the search key. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
f21539a5 |
|
29-Dec-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Use bch2_trans_reset in bch2_trans_commit() Clean up a bit of duplicated code. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
a8abd3a7 |
|
20-Dec-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_trans_reset() calls should be at the tops of loops It needs to be called when we get -EINTR due to e.g. lock restart - this fixes a transaction iterators overflow bug. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
887c2a4e |
|
02-Oct-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_btree_iter_fix_key_modified() This is considerably cheaper than bch2_btree_node_iter_fix(), for cases where the key was only modified and key ordering isn't changing. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
2a9101a9 |
|
19-Oct-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Refactor bch2_trans_commit() path Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
64bc0011 |
|
26-Sep-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Rework btree iterator lifetimes The btree_trans struct needs to memoize/cache btree iterators, so that on transaction restart we don't have to completely redo btree lookups, and so that we can do them all at once in the correct order when the transaction had to restart to avoid a deadlock. This switches the btree iterator lookups to work based on iterator position, instead of trying to match them up based on the stack trace. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ef9f95ba |
|
25-Sep-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Improve error handling for for_each_btree_key_continue() Change it to match for_each_btree_key() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ccf5a109 |
|
07-Sep-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: bch2_btree_iter_peek_prev() Last of the basic operations for iterating forwards and backwards over the btree: we now have - peek(), returns key >= iter->pos - next(), returns key > iter->pos - peek_prev(), returns key <= iter->pos - prev(), returns key < iter->pos Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
9b02d1c4 |
|
08-Sep-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Optimize calls to bch2_btree_iter_traverse() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
36e9d698 |
|
07-Sep-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Do updates in order they were queued up in Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c8b18c37 |
|
09-Aug-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: fix for_each_btree_key() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
61011ea2 |
|
14-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Rip out old hacky transaction restart tracing Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
20bceecb |
|
15-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: More work to avoid transaction restarts Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
7d825866 |
|
15-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Avoid spurious transaction restarts Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0e6dd8fb |
|
15-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Ensure bch2_btree_iter_next() always advances Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
58fbf808 |
|
15-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Delete duplicate code Also rename for consistency Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ed8413fd |
|
14-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: improved btree locking tracepoints Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
60755344 |
|
10-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: kill BTREE_ITER_NOUNLOCK Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
b03b81df |
|
10-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Don't pass around may_drop_locks Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
b7607ce9 |
|
10-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill remaining bch2_btree_iter_unlock() uses Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
c43a6ef9 |
|
05-Jun-2020 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: btree_bkey_cached_common This is prep work for the btree key cache: btree iterators will point to either struct btree, or a new struct bkey_cached. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ab5c63f5 |
|
11-May-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Don't hardcode BTREE_ID_EXTENTS Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
94f651e2 |
|
17-Apr-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Return errors from for_each_btree_key() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
bf7b87a4 |
|
27-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: traverse all iterators on transaction restart Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e1120a4c |
|
27-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Add iter->idx Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
ecc892e4 |
|
27-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Kill btree_iter->next Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
0f238367 |
|
27-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: trans_for_each_iter() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
7c26ecae |
|
25-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Better bch2_trans_copy_iter() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
9e5e5b9e |
|
25-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Btree iterators now always have a btree_trans Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
424eb881 |
|
25-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Only get btree iters from btree transactions Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
5df4be3f |
|
25-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Btree iter improvements Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
446c562c |
|
04-Mar-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Remove direct use of bch2_btree_iter_link() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
5154704b |
|
20-Jul-2018 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Use deferred btree updates for inode updates Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
d0cc3def |
|
13-Jan-2019 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: More allocator startup improvements Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
216c9fac |
|
11-Aug-2018 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Pass around bset_tree less Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
08af47df |
|
08-Aug-2018 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: convert bchfs_write_index_update() to bch2_extent_update() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
581edb63 |
|
08-Aug-2018 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: mempoolify btree_trans Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
e4ccb251 |
|
05-Aug-2018 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: make struct btree_iter a bit smaller Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1c7a0adf |
|
12-Jul-2018 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: trace transaction restarts Exceptionally crappy "tracing", but it's a start at documenting the places restarts can be triggered Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
1c6fdbd8 |
|
17-Mar-2017 |
Kent Overstreet <kent.overstreet@gmail.com> |
bcachefs: Initial commit Initially forked from drivers/md/bcache, bcachefs is a new copy-on-write filesystem with every feature you could possibly want. Website: https://bcachefs.org Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|