Lines Matching defs:checkpoint

29 * A storage pool checkpoint can be thought of as a pool-wide snapshot or
35 * zpool on-disk features. If a pool has a checkpoint that is no longer
41 * flag is set to active when we create the checkpoint and remains active
42 * until the checkpoint is fully discarded. The entry in the MOS config
44 * references the state of the pool when we take the checkpoint. The entry
45 * remains populated until we start discarding the checkpoint or we rewind
48 * - Each vdev contains a vdev-wide space map while the pool has a checkpoint,
49 * which persists until the checkpoint is fully discarded. The space map
51 * but we want to keep around in case we decide to rewind to the checkpoint.
55 * checkpoint, with the only exception being the scenario when we free
56 * blocks that belong to the checkpoint. In this case, these blocks remain
58 * vdev's checkpoint space map.
65 * - To create a checkpoint, we first wait for the current TXG to be synced,
68 * uberblock in MOS config, increment the feature flag for the checkpoint
75 * - When a checkpoint exists, we need to ensure that the blocks that
76 * belong to the checkpoint are freed but never reused. This means that
78 * trees of a metaslab. Therefore, whenever there is a checkpoint the new
82 * checkpoint (we find out by comparing its birth to spa_checkpoint_txg),
92 * when we discard the checkpoint, we can find the entries that have
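The birth-TXG comparison described in the fragments above can be illustrated with a small, self-contained sketch. This is not the actual ZFS code: the struct and function names below are invented stand-ins for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative stand-ins for ZFS structures; not the real definitions. */
typedef struct blkptr_sketch {
	uint64_t bp_birth_txg;		/* TXG in which the block was born */
} blkptr_sketch_t;

typedef struct spa_sketch {
	uint64_t spa_checkpoint_txg;	/* 0 means no checkpoint exists */
} spa_sketch_t;

/*
 * Decide where a freed block belongs: blocks born at or before the
 * checkpoint TXG are part of the checkpoint, so on free they are
 * recorded in the vdev's checkpoint space map rather than being made
 * available for reuse.
 */
static bool
free_goes_to_checkpoint(const spa_sketch_t *spa, const blkptr_sketch_t *bp)
{
	return (spa->spa_checkpoint_txg != 0 &&
	    bp->bp_birth_txg <= spa->spa_checkpoint_txg);
}
```

The key point the sketch captures is that the decision needs only two numbers: the block's birth TXG and the pool-wide `spa_checkpoint_txg` recorded when the checkpoint was taken.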
96 * - To discard the checkpoint we use an early synctask to delete the
100 * new data end up in the checkpoint's data structures.
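The discard path (and the "incrementally destroying the checkpoint" debug message further down in this listing) spreads the deletion of checkpoint space-map entries across many sync passes instead of doing it all in one TXG. A minimal sketch of that bounded-work-per-pass pattern, with invented names and no relation to the real ZFS data structures:

```c
#include <stdint.h>

/*
 * One simulated sync pass: destroy at most 'limit' of the remaining
 * checkpoint space-map entries, returning how many are left. The names
 * here are illustrative only.
 */
static uint64_t
discard_some(uint64_t entries_left, uint64_t limit)
{
	uint64_t todo = entries_left < limit ? entries_left : limit;
	return (entries_left - todo);
}

/* How many bounded passes a full incremental discard would take. */
static unsigned
sync_passes_needed(uint64_t entries, uint64_t limit)
{
	unsigned passes = 0;
	while (entries > 0) {
		entries = discard_some(entries, limit);
		passes++;
	}
	return (passes);
}
```

Bounding the work per pass keeps each TXG sync short; the trade-off is that a large checkpoint takes many TXGs to disappear, which is why the real code logs its incremental progress per vdev.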
112 * - To rewind to the checkpoint, we first use the current uberblock and
119 * An important note on rewinding to the checkpoint has to do with how we
121 * blocks that have not been claimed by the time we took the checkpoint
127 * - In the hypothetical event that we take a checkpoint, remove a vdev,
134 * - As most of the checkpoint logic is implemented in the SPA and doesn't
136 * checkpoint can potentially break the boundaries set by dataset
153 * prefetching of the checkpoint space map done on each vdev while
154 * discarding the checkpoint.
156 * The reason it exists is because top-level vdevs with long checkpoint
159 * the pool had a checkpoint.
195 spa_history_log_internal(spa, "spa discard checkpoint", tx,
222 * the checkpoint's space map entries should not cross
337 zfs_dbgmsg("discarding checkpoint: txg %llu, vdev id %d, "
345 "while incrementally destroying the checkpoint "
428 "while prefetching checkpoint space map "
481 uberblock_t checkpoint = spa->spa_ubsync;
484 * At this point, there should not be a checkpoint in the MOS.
497 ASSERT3U(checkpoint.ub_txg, ==, spa->spa_syncing_txg - 1);
500 * Once the checkpoint is in place, we need to ensure that none of
502 * When there is a checkpoint and a block is freed, we compare its
504 * block is part of the checkpoint or not. Therefore, we have to set
509 spa->spa_checkpoint_txg = checkpoint.ub_txg;
510 spa->spa_checkpoint_info.sci_timestamp = checkpoint.ub_timestamp;
512 checkpoint.ub_checkpoint_txg = checkpoint.ub_txg;
516 &checkpoint, tx));
526 spa_history_log_internal(spa, "spa checkpoint", tx,
527 "checkpointed uberblock txg=%llu", checkpoint.ub_txg);
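The creation fragments above (copying `spa->spa_ubsync`, setting `ub_checkpoint_txg` from `ub_txg`, then writing the copy into the MOS config) can be condensed into a simplified sketch. The struct below is an illustrative stand-in, not the real `uberblock_t`:

```c
#include <stdint.h>

/* Minimal stand-in for the uberblock fields used here; illustrative only. */
typedef struct uberblock_sketch {
	uint64_t ub_txg;		/* TXG this uberblock was synced in */
	uint64_t ub_checkpoint_txg;	/* nonzero marks a checkpointed uberblock */
} uberblock_sketch_t;

/*
 * Take a copy of the last-synced uberblock and tag it as the checkpoint
 * by recording its own TXG in ub_checkpoint_txg, as the synctask does
 * before storing the copy in the MOS config.
 */
static uberblock_sketch_t
make_checkpoint_uberblock(const uberblock_sketch_t *ubsync)
{
	uberblock_sketch_t checkpoint = *ubsync;
	checkpoint.ub_checkpoint_txg = checkpoint.ub_txg;
	return (checkpoint);
}
```

Working on a copy is what lets the live `spa_ubsync` keep advancing with every TXG while the stored copy pins the pool state as of the checkpoint.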
531 * Create a checkpoint for the pool.
548 * to see if we were to revert later to the checkpoint. In other
551 * the checkpoint command.
603 spa_history_log_internal(spa, "spa discard checkpoint", tx,
608 * Discard the checkpoint from a pool.
616 * won't end up in the checkpoint's data structures (e.g.