Lines Matching defs:checkpoint

 * A storage pool checkpoint can be thought of as a pool-wide snapshot or
 * a stable rewind point: it remembers the entire state of the pool from
 * the moment it was taken, and the user can rewind back to that state
 * even after destructive operations on datasets or after enabling new
 * zpool on-disk features. If a pool has a checkpoint that is no longer
 * needed, the user can discard it.
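For orientation, the user-facing entry points are the zpool checkpoint and zpool checkpoint -d (discard) commands, plus zpool import --rewind-to-checkpoint for rewinding. A minimal userland sketch of the same two operations through libzfs_core, with "tank" as a placeholder pool name:

/*
 * Minimal sketch: take and later discard a pool checkpoint through
 * libzfs_core. The pool name "tank" is a placeholder.
 */
#include <stdio.h>
#include <libzfs_core.h>

int
main(void)
{
	int error;

	if (libzfs_core_init() != 0) {
		fprintf(stderr, "libzfs_core_init failed\n");
		return (1);
	}

	error = lzc_pool_checkpoint("tank");
	if (error != 0)
		fprintf(stderr, "checkpoint failed: %d\n", error);

	/* ... later, once the checkpoint is no longer needed ... */
	error = lzc_pool_checkpoint_discard("tank");
	if (error != 0)
		fprintf(stderr, "discard failed: %d\n", error);

	libzfs_core_fini();
	return (0);
}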
 * - The pool has a feature flag and an entry in the MOS config. The feature
 *   flag is set to active when we create the checkpoint and remains active
 *   until the checkpoint is fully discarded. The entry in the MOS config
 *   (DMU_POOL_ZPOOL_CHECKPOINT) holds the uberblock that
 *   references the state of the pool when we take the checkpoint. The entry
 *   remains populated until we start discarding the checkpoint or we rewind
 *   back to it.
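As a sketch of what that entry looks like in practice, this is roughly how the checkpointed uberblock is read back out of the MOS directory object at import time; surrounding context is trimmed and the call site should be treated as illustrative:

	uberblock_t checkpoint;
	int error;

	/*
	 * The checkpointed uberblock lives under the
	 * DMU_POOL_ZPOOL_CHECKPOINT key of the MOS directory object,
	 * serialized as an array of uint64_t words.
	 */
	error = zap_lookup(spa->spa_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
	    DMU_POOL_ZPOOL_CHECKPOINT, sizeof (uint64_t),
	    sizeof (uberblock_t) / sizeof (uint64_t), &checkpoint);
	if (error == ENOENT) {
		/* The pool has no checkpoint. */
	}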
 * - Each vdev contains a vdev-wide space map while the pool has a checkpoint,
 *   which persists until the checkpoint is fully discarded. The space map
 *   contains entries that have been freed in the current state of the pool
 *   but we want to keep around in case we decide to rewind to the checkpoint.
 * - Each metaslab's space map behaves the same as it does without a
 *   checkpoint, with the only exception being the scenario when we free
 *   blocks that belong to the checkpoint. In this case, these blocks remain
 *   ALLOCATED in the metaslab's space map and are instead recorded as FREE
 *   in the vdev's checkpoint space map (see the classification sketch
 *   further below).
 * - To create a checkpoint, we first wait for the current TXG to be synced,
 *   so that the most recently synced uberblock (spa_ubsync) can serve as
 *   the checkpointed uberblock. Then we use an early synctask to place that
 *   uberblock in the MOS config, increment the feature flag for the checkpoint
 *   (marking it active), and set spa_checkpoint_txg to the TXG of the
 *   checkpointed uberblock (a sketch of this entry point appears near the
 *   spa_checkpoint() comment below).
 * - When a checkpoint exists, we need to ensure that the blocks that
 *   belong to the checkpoint are freed but never reused. This means that
 *   these blocks must never end up in the ms_allocatable or ms_freeing
 *   trees of a metaslab. Therefore, whenever there is a checkpoint the new
 *   ms_checkpointing tree is used in addition to those. Whenever a block is
 *   freed and we find out that it belongs to the checkpoint (we find out by
 *   comparing its birth TXG to spa_checkpoint_txg), we place it in the
 *   ms_checkpointing tree instead of ms_freeing. Its extents are later
 *   persisted to the vdev's checkpoint space map, so that
 *   when we discard the checkpoint, we can find the entries that have
 *   actually been freed and return that space to the pool (see the sketch
 *   right after this item).
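A condensed sketch of that classification, using the birth-TXG comparison described above; locking is simplified relative to the real metaslab free path, and the variables (bp, spa, msp, offset, asize) are assumed from the surrounding code:

	/*
	 * A block belongs to the checkpoint if it was born at or before
	 * the checkpointed TXG. Such blocks are routed to the
	 * ms_checkpointing tree (kept ALLOCATED in the metaslab's space
	 * map, persisted as FREE in the vdev's checkpoint space map);
	 * everything else follows the normal ms_freeing path.
	 */
	boolean_t checkpoint = B_FALSE;
	if (bp->blk_birth <= spa->spa_checkpoint_txg &&
	    spa_syncing_txg(spa) > spa->spa_checkpoint_txg)
		checkpoint = B_TRUE;

	mutex_enter(&msp->ms_lock);
	if (checkpoint)
		range_tree_add(msp->ms_checkpointing, offset, asize);
	else
		range_tree_add(msp->ms_freeing, offset, asize);
	mutex_exit(&msp->ms_lock);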
 * - To discard the checkpoint we use an early synctask to delete the
 *   checkpointed uberblock from the MOS config, reset spa_checkpoint_txg,
 *   and wake up the zthr that incrementally destroys the per-vdev checkpoint
 *   space maps in open context. Running this as an early synctask ensures no
 *   new data end up in the checkpoint's data structures.
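A sketch of the synctask body under those rules; the zap_remove() call mirrors the zap_lookup() shown earlier, and the zthr field name follows the OpenZFS tree:

	/*
	 * Drop the checkpointed uberblock from the MOS directory, clear
	 * spa_checkpoint_txg so new frees stop being classified as
	 * checkpointed, and wake the zthr that destroys the per-vdev
	 * checkpoint space maps a little at a time.
	 */
	VERIFY0(zap_remove(spa->spa_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
	    DMU_POOL_ZPOOL_CHECKPOINT, tx));
	spa->spa_checkpoint_txg = 0;
	zthr_wakeup(spa->spa_checkpoint_discard_zthr);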
 * - To rewind to the checkpoint, we first use the current uberblock and
 *   open the MOS with it so we can read the checkpointed uberblock from the
 *   MOS config; the pool is then reopened from that uberblock instead (this
 *   is what zpool import --rewind-to-checkpoint does).
 * An important note on rewinding to the checkpoint has to do with how we
 * handle ZIL blocks that have not been claimed by the time we took the
 * checkpoint [...]
 * - In the hypothetical event that we take a checkpoint, remove a vdev,
 *   and then rewind to the checkpoint, the rewound state would reference
 *   data on the removed device. For this reason (among others), operations
 *   that reshape the vdev configuration, such as device removal, are not
 *   allowed while the pool has a checkpoint.
 * - As most of the checkpoint logic is implemented in the SPA and doesn't
 *   distinguish between datasets when accounting space, a
 *   checkpoint can potentially break the boundaries set by dataset
 *   reservations.
/*
 * The following parameter limits the amount of memory used for the
 * prefetching of the checkpoint space map done on each vdev while
 * discarding the checkpoint.
 *
 * The reason it exists is because top-level vdevs with long checkpoint
 * space maps can consume a lot of memory, depending on how much
 * checkpointed data had been freed within them while
 * the pool had a checkpoint.
 */
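The declaration looks roughly like this in the source (16 MiB default; the exact integer type has varied across OpenZFS releases):

static uint64_t zfs_spa_discard_memory_limit = 16 * 1024 * 1024;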
	spa_history_log_internal(spa, "spa discard checkpoint", tx, [...]);
 * [...] the checkpoint's space map entries should not cross [...]
	zfs_dbgmsg("discarding checkpoint: txg %llu, vdev id %lld, " [...]);

	[...] "while incrementally destroying the checkpoint " [...]

	[...] "while prefetching checkpoint space map " [...]
	uberblock_t checkpoint = spa->spa_ubsync;

	/*
	 * At this point, there should not be a checkpoint in the MOS.
	 */
	[...]
	ASSERT3U(checkpoint.ub_txg, ==, spa->spa_syncing_txg - 1);

	/*
	 * Once the checkpoint is in place, we need to ensure that none of
	 * its blocks are reused after they have been freed.
	 * When there is a checkpoint and a block is freed, we compare its
	 * birth TXG to spa_checkpoint_txg to decide whether the
	 * block is part of the checkpoint or not. Therefore, we have to set
	 * spa_checkpoint_txg before any frees happen in this TXG, which is
	 * why this runs as an early synctask.
	 */
	spa->spa_checkpoint_txg = checkpoint.ub_txg;
	spa->spa_checkpoint_info.sci_timestamp = checkpoint.ub_timestamp;

	checkpoint.ub_checkpoint_txg = checkpoint.ub_txg;
	VERIFY0(zap_add(spa->spa_dsl_pool->dp_meta_objset,
	    DMU_POOL_DIRECTORY_OBJECT, DMU_POOL_ZPOOL_CHECKPOINT,
	    sizeof (uint64_t), sizeof (uberblock_t) / sizeof (uint64_t),
	    &checkpoint, tx));
	[...]
	spa_history_log_internal(spa, "spa checkpoint", tx,
	    "checkpointed uberblock txg=%llu", (u_longlong_t)checkpoint.ub_txg);
 * Create a checkpoint for the pool.
 */
	[...]
	/*
	 * We want spa_ubsync, the uberblock that the synctask above stashes
	 * in the MOS, to contain every change a user would expect
	 * to see if we were to revert later to the checkpoint. In other
	 * words, the checkpointed uberblock should be the most recently
	 * synced one, hence the txg_wait_synced() and the early synctask used
	 * for the checkpoint command.
	 */
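A condensed sketch of this entry point, with locking and error paths trimmed; the space-check constant and the exact call shape should be treated as illustrative:

int
spa_checkpoint(const char *pool)
{
	spa_t *spa;
	int error;

	error = spa_open(pool, &spa, FTAG);
	if (error != 0)
		return (error);

	/*
	 * Let the currently syncing TXG finish so that spa_ubsync holds
	 * everything synced so far, then register the early synctask
	 * (spa_checkpoint_sync above) that stashes it in the MOS.
	 */
	txg_wait_synced(spa_get_dsl(spa), 0);
	error = dsl_early_sync_task(pool, spa_checkpoint_check,
	    spa_checkpoint_sync, NULL, 0, ZFS_SPACE_CHECK_EXTRA_RESERVED);

	spa_close(spa, FTAG);
	return (error);
}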
	spa_history_log_internal(spa, "spa discard checkpoint", tx, [...]);
 * Discard the checkpoint from a pool.
 */
	[...]
	/*
	 * As with checkpoint creation, this runs as an early synctask,
	 * so that frees from the current TXG
	 * won't end up in the checkpoint's data structures (e.g. the vdev
	 * checkpoint space maps).
	 */
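A matching sketch for this entry point, assuming the discard check/sync callbacks referenced elsewhere in this listing and a dedicated space check for discards:

int
spa_checkpoint_discard(const char *pool)
{
	/*
	 * Early synctask: the checkpoint state must be torn down before
	 * any of this TXG's frees can land in the structures being
	 * destroyed.
	 */
	return (dsl_early_sync_task(pool, spa_checkpoint_discard_check,
	    spa_checkpoint_discard_sync, NULL, 0,
	    ZFS_SPACE_CHECK_DISCARD_CHECKPOINT));
}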
638 "Limit for memory used in prefetching the checkpoint space map done "
639 "on each vdev while discarding the checkpoint");