
Lines Matching defs:vdev in /freebsd-13-stable/sys/contrib/openzfs/module/zfs/

64 * One metaslab from each (normal-class) vdev is used by the ZIL.  These are
67 * in each vdev is selected for this purpose when the pool is opened (or a
68 * vdev is added). See vdev_metaslab_init().
77 * than this number of metaslabs in the vdev. This ensures that we don't set
84 /* default target for number of metaslabs per top-level vdev */
87 /* minimum number of metaslabs per top-level vdev */
90 /* practical upper limit of total metaslabs per top-level vdev */
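
The three tunables above bound the metaslab count for every top-level vdev. A minimal sketch of how the floor and ceiling interact, assuming the stock defaults of a 16-metaslab floor and a 131072-metaslab practical limit (values quoted from memory, not verified against this revision; the 200-metaslab target is handled by the sizing heuristic further down):

#include <stdint.h>

/* Illustrative defaults mirroring the module parameters at the bottom. */
static const uint64_t min_ms_count = 16;		/* floor per vdev */
static const uint64_t ms_count_limit = 131072;		/* practical ceiling */

/* Clamp a proposed per-vdev metaslab count into the allowed range. */
static uint64_t
clamp_ms_count(uint64_t count)
{
	if (count < min_ms_count)
		return (min_ms_count);
	if (count > ms_count_limit)
		return (ms_count_limit);
	return (count);
}
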
102 * Since the DTL space map of a vdev is not expected to have a lot of
124 * vdev-wide space maps that have lots of entries written to them at
152 zfs_dbgmsg("%s vdev '%s': %s", vd->vdev_ops->vdev_op_type,
155 zfs_dbgmsg("%s-%llu vdev (guid %llu): %s",
234 * Given a vdev type, return the appropriate ops vector.
249 * Given a vdev and a metaslab class, find which metaslab group we're
275 * String origin is either the per-vdev zap or zpool(8).
318 * the vdev's asize rounded to the nearest metaslab. This allows us to
335 * The top-level vdev just returns the allocatable size rounded
354 * Get the minimal allocation size for the top-level vdev.
368 * Get the parity level for a top-level vdev.
382 * Get the number of data disks for a top-level vdev.
396 vdev_lookup_top(spa_t *spa, uint64_t vdev)
402 if (vdev < rvd->vdev_children) {
403 ASSERT(rvd->vdev_child[vdev] != NULL);
404 return (rvd->vdev_child[vdev]);
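
The fragments at lines 396-404 are the heart of vdev_lookup_top(). A self-contained sketch of how they plausibly fit together; the simplified spa/vdev structs and the NULL fall-through are assumptions, and the real function additionally asserts that the SCL_ALL config locks are held:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Grossly simplified stand-ins for the kernel's spa_t and vdev_t. */
typedef struct vdev {
	uint64_t vdev_children;
	struct vdev **vdev_child;
} vdev_t;

typedef struct spa {
	vdev_t *spa_root_vdev;
} spa_t;

/*
 * Map a top-level vdev id to its vdev_t by indexing the root vdev's
 * child array; out-of-range ids fall through to NULL.
 */
static vdev_t *
vdev_lookup_top(spa_t *spa, uint64_t vdev)
{
	vdev_t *rvd = spa->spa_root_vdev;

	if (vdev < rvd->vdev_children) {
		assert(rvd->vdev_child[vdev] != NULL);
		return (rvd->vdev_child[vdev]);
	}
	return (NULL);
}
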
595 * The root vdev's guid will also be the pool guid,
601 * Any other vdev's guid must be unique within the pool.
668 offsetof(struct vdev, vdev_dtl_node));
677 * Allocate a new vdev. The 'alloctype' is used to control whether we are
678 * creating a new vdev or loading an existing one - the behavior is slightly
704 * If this is a load, get the vdev guid from the nvlist.
728 * The first allocated vdev must be of type 'root'.
734 * Determine whether we're a log vdev.
748 * If creating a top-level vdev, check for allocation
772 * Initialize the vdev specific data. This is done before calling
796 * fault on a vdev and want it to persist across imports (like with
852 * Retrieve the vdev creation time.
858 * If we're a top-level vdev, try to load the allocation parameters.
893 * If we're a leaf vdev, try to load the DTL object and other state.
930 * exception is if we forced a vdev to a persistently faulted
982 * queue exists here, that implies the vdev is being removed while
993 * vdev_free() implies closing the vdev first. This is simpler than
1032 * Remove this vdev from its parent's child list.
1040 * Clean up vdev structure.
1122 * Transfer top-level vdev state from svd to tvd.
1177 * State which may be set on a top-level vdev that's in the
1247 * Add a mirror/replacing vdev above an existing vdev. There is no need to
1284 * Remove a 1-way mirror/replacing vdev from the tree.
1305 * If cvd will replace mvd as a top-level vdev, preserve mvd's guid.
1308 * instead of a different version of the same top-level vdev.
1409 * This vdev is not being allocated from yet or is a hole.
1432 * metaslabs for an indirect vdev for zdb's leak detection.
1457 * Find the emptiest metaslab on the vdev and mark it for use for
1501 * If the vdev is being removed we don't activate
1515 * Regardless of whether this vdev was just added or it is being
1637 * vdev label but the first, which we leave alone in case it contains
1658 * this vdev will become parents of the probe io.
1696 * We can't change the vdev state in this context, so we
1861 * we ensure that the top-level vdev's ashift is not smaller
1900 * If this vdev is not removed, check its fault status. If it's
1931 * the vdev on error.
1956 * the vdev is accessible. If we're faulted, bail.
2015 * If the vdev was expanded, record this so that we can re-create the
2094 * LUN growth or vdev replacement, and automatic expansion is enabled;
2098 * vdev replace with a smaller device. This ensures that calculations
2113 * vdev open for business.
2133 * If this is a leaf vdev, assess whether a resilver is needed.
2160 * /etc/zfs/zpool.cache was readonly at the time. Otherwise, the vdev state
2217 * be updating the vdev's label before updating spa_last_synced_txg.
2234 * Determine if this vdev has been split off into another
2242 vdev_dbgmsg(vd, "vdev_validate: vdev split into other pool");
2258 * guid might have been written to all of the vdev labels, but not the
2266 vdev_dbgmsg(vd, "vdev_validate: vdev label pool_guid doesn't "
2297 * If this vdev just became a top-level vdev because its sibling was
2298 * detached, it will have adopted the parent's vdev guid -- but the
2301 * vdev, we can safely compare to that instead.
2303 * after the detach, a top-level vdev will appear as a non-top-level
2304 * vdev in the config. Also relax the constraints if we perform an
2307 * If we split this vdev off instead, then we also check the
2308 * original pool's guid. We don't want to consider the vdev
2363 * If we were able to open and validate a vdev that was
2378 zfs_dbgmsg("vdev_copy_path: vdev %llu: path changed "
2386 zfs_dbgmsg("vdev_copy_path: vdev %llu: path set to '%s'",
2392 * Recursively copy vdev paths from one vdev to another. Source and destination
2393 * vdev trees must have the same geometry; otherwise return an error. Intended to copy
2405 vdev_dbgmsg(svd, "vdev_copy_path: vdev type mismatch: %s != %s",
2451 * The idea here is that while a vdev can shift positions within
2452 * a top vdev (when replacing, attaching mirror, etc.) it cannot
2466 * Recursively copy vdev paths from one root vdev to another. Source and
2467 * destination vdev trees may differ in geometry. For each destination leaf
2468 * vdev, search for a vdev with the same guid and top vdev id in the source.
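
Lines 2392-2468 describe two path-copying walks: a strict one that requires identical tree geometry, and a root-level one that matches destination leaves to source leaves by guid and top-level vdev id. A self-contained sketch of the strict recursion with simplified types; the real vdev_copy_path code also emits the zfs_dbgmsg lines matched above and handles paths that are absent on either side:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in carrying only what the sketch needs. */
typedef struct vdev {
	uint64_t vdev_guid;
	char *vdev_path;
	uint64_t vdev_children;
	struct vdev **vdev_child;
	const char *vdev_type;	/* stand-in for vdev_ops->vdev_op_type */
} vdev_t;

/*
 * Copy device paths from svd's tree into dvd's tree. The trees must
 * have the same geometry (matching type and child count at every
 * node); a mismatch aborts the walk with EINVAL.
 */
static int
copy_path_strict(vdev_t *svd, vdev_t *dvd)
{
	if (strcmp(svd->vdev_type, dvd->vdev_type) != 0 ||
	    svd->vdev_children != dvd->vdev_children)
		return (EINVAL);

	for (uint64_t i = 0; i < svd->vdev_children; i++) {
		int err = copy_path_strict(svd->vdev_child[i],
		    dvd->vdev_child[i]);
		if (err != 0)
			return (err);
	}

	/* Leaves carry the device path; adopt the source's copy. */
	if (svd->vdev_children == 0 && svd->vdev_path != NULL) {
		free(dvd->vdev_path);
		dvd->vdev_path = strdup(svd->vdev_path);
	}
	return (0);
}
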
2562 /* set the reopening flag unless we're taking the vdev offline */
2577 * In case the vdev is present we should evict all ARC
2594 * Reassess parent vdev's health.
2638 * the size of the metaslab and the count of metaslabs per vdev.
2653 * On the lower end of vdev sizes, we aim for metaslabs sizes of
2658 * On the upper end of vdev sizes, we aim for a maximum metaslab
2665 * vdev size | metaslab count
2675 * number of metaslabs. Expanding a top-level vdev will result
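
The comments at 2638-2675 describe the sizing heuristic: aim for roughly the target metaslab count, but never drop below the minimum on small vdevs, and never let individual metaslabs grow without bound on large ones. A simplified, self-contained model of that trade-off, assuming 512MB default and 16GB maximum metaslabs; the real vdev_metaslab_set_size() also enforces a lower bound tied to the maximum block size and a backstop against the total-count limit:

#include <stdint.h>

/* Illustrative bounds: 512MB default and 16GB maximum metaslabs. */
#define	DEFAULT_MS_SHIFT	29
#define	MAX_MS_SHIFT		34
#define	DEFAULT_MS_COUNT	200	/* target metaslabs per vdev */
#define	MIN_MS_COUNT		16	/* floor */

/* 1-based index of the highest bit set, as in the kernel's highbit64(). */
static int
highbit64(uint64_t v)
{
	int h = 0;

	while (v != 0) {
		h++;
		v >>= 1;
	}
	return (h);
}

/*
 * Pick a metaslab shift for a vdev of 'asize' bytes: aim for about
 * DEFAULT_MS_COUNT metaslabs, never fewer than MIN_MS_COUNT on small
 * vdevs, and never let one metaslab exceed 1 << MAX_MS_SHIFT bytes.
 */
static uint64_t
pick_ms_shift(uint64_t asize)
{
	uint64_t ms_count = asize >> DEFAULT_MS_SHIFT;
	uint64_t ms_shift;

	if (ms_count < MIN_MS_COUNT)
		ms_shift = highbit64(asize / MIN_MS_COUNT);
	else if (ms_count > DEFAULT_MS_COUNT)
		ms_shift = highbit64(asize / DEFAULT_MS_COUNT);
	else
		ms_shift = DEFAULT_MS_SHIFT;

	if (ms_shift > MAX_MS_SHIFT)
		ms_shift = MAX_MS_SHIFT;
	return (ms_shift);
}
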
2731 * A vdev's DTL (dirty time log) is the set of transaction groups for which
2732 * the vdev has less than perfect replication. There are four kinds of DTL:
2734 * DTL_MISSING: txgs for which the vdev has no valid copies of the data
2752 * A vdev's DTL_PARTIAL is the union of its children's DTL_PARTIALs, because
2754 * A vdev's DTL_MISSING is a modified union of its children's DTL_MISSINGs,
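
Per the comments at 2731-2754, there are four DTL kinds, and a parent's DTLs are derived from its children's: DTL_PARTIAL is a plain union (any dirty child makes the parent partially dirty), while DTL_MISSING only contains txgs missing from enough children to defeat the redundancy (every child of a mirror; parity plus one children of a RAID-Z). A small model of that rule; the enum mirrors vdev_dtl_type_t from memory and the callback type is hypothetical:

#include <stdbool.h>
#include <stdint.h>

/* The four DTL kinds described above (names per vdev_impl.h). */
typedef enum dtl_type {
	DTL_MISSING,	/* txgs with no readable copy on this vdev */
	DTL_PARTIAL,	/* txgs with less-than-perfect replication */
	DTL_SCRUB,	/* txgs a scrub/resilver could not repair */
	DTL_OUTAGE,	/* txgs during which the device was absent */
	DTL_TYPES
} dtl_type_t;

/* Hypothetical predicate: is txg in child c's DTL of the given kind? */
typedef bool (*child_dtl_fn)(uint64_t c, dtl_type_t t, uint64_t txg);

/*
 * Derive a parent's DTL membership from its children. 'minref' is the
 * number of missing children that makes data unreadable: all children
 * for a mirror, nparity + 1 for RAID-Z.
 */
static bool
parent_dtl_contains(child_dtl_fn in_child_dtl, uint64_t nchildren,
    uint64_t minref, dtl_type_t t, uint64_t txg)
{
	uint64_t hits = 0;

	for (uint64_t c = 0; c < nchildren; c++) {
		if (in_child_dtl(c, t, txg))
			hits++;
	}

	if (t == DTL_PARTIAL)
		return (hits > 0);		/* plain union */
	if (t == DTL_MISSING)
		return (hits >= minref);	/* modified union */
	return (false);
}
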
2835 * Returns B_TRUE if the vdev determines the DVA needs to be resilvered.
2878 * Determine if a resilvering vdev should remove any DTL entries from
2879 * its range. If the vdev was resilvering for the entire duration of the
2881 * vdev is considered partially resilvered and should leave its DTL
2998 * then determine if this vdev should remove any DTLs. We
3062 * If the vdev was resilvering or rebuilding and no longer
3297 * Determine whether the specified vdev can be offlined/detached/removed
3315 * whether this results in any DTL outages in the top-level vdev.
3373 * Gets the checkpoint space map object from the vdev's ZAP. On success sm_obj
3405 * It's only worthwhile to use the taskq for the root vdev, because the
3465 * Load any rebuild state from the top-level vdev zap.
3479 * If this is a top-level vdev, initialize its metaslabs.
3533 "checkpoint space map object from vdev ZAP "
3540 * If this is a leaf vdev, load its DTL.
3568 "space map object from vdev ZAP [error=%d]", error);
3576 * The special vdev case is used for hot spares and l2cache devices. Its
3577 * sole purpose is to set the vdev state for the associated vdev. To do this,
3638 * Free the objects used to store this vdev's spacemaps, and the array
3723 * If the vdev is indirect, it can't have dirty
3772 * Mark the given vdev faulted. A faulted vdev behaves as if the device could
3801 * We tell if a vdev is persistently faulted by looking at the
3833 * back off and simply mark the vdev as degraded instead.
3853 * Mark the given vdev degraded. A degraded vdev is purely an indication to the
3854 * user that something is wrong. The vdev continues to operate as normal as far
3871 * If the vdev is already faulted, then don't do anything.
3885 * Online the given vdev.
4020 * then proceed. We check that the vdev's metaslab group
4022 * added this vdev but not yet initialized its metaslabs.
4062 * Offline this device and reopen its top-level vdev.
4063 * If the top-level vdev is a log device then just offline
4065 * vdev becoming unusable, undo it and fail the request.
4104 * Clear the error counts associated with this vdev. Unlike vdev_online() and
4127 * It makes no sense to "clear" an indirect vdev.
4135 * also mark the vdev config dirty, so that the new faulted state is
4219 * the proper locks. Note that we have to get the vdev state
4319 * Get statistics for the given vdev.
4326 * If we're getting stats on the root vdev, aggregate the I/O counts
4433 * The vdev fragmentation rating doesn't take into
4502 * (Holes never create vdev children, so all the counters
4508 * one top-level vdev does not imply a root-level error.
4539 * spare vdev is excluded from the processed bytes.
4698 * Update the in-core space usage stats for this vdev, its metaslab class,
4699 * and the root vdev.
4713 * factor. We must calculate this here and not at the root vdev
4714 * because the root vdev's psize-to-asize is simply the max of its
4743 * Mark a top-level vdev's config as dirty, placing it on the dirty list
4744 * so that it will be written out next time the vdev configuration is synced.
4745 * If the root vdev is specified (vdev_top == NULL), dirty all top-level vdevs.
4757 * If this is an aux vdev (as with l2cache and spare devices), then we
4758 * update the vdev config manually and set the sync flag.
4835 * Mark a top-level vdev's state as dirty, so that the next pass of
4877 * Propagate vdev state up from children to parent.
4903 * device, treat the root vdev as if it were
4921 * Root special: if there is a top-level vdev that cannot be
4923 * vdev's aux state as 'corrupt' rather than 'insufficient
4937 * Set a vdev's state. If this is during an open, we don't update the parent
4972 * If we are setting the vdev state to anything but an open state, then
5003 * If we fail to open a vdev during an import or recovery, we
5070 * Notify ZED of any significant state-change on a leaf vdev.
5103 * Check the vdev configuration to ensure that it's capable of supporting
5136 * Determine if a log device has valid content. If the vdev was
5155 * Expand a vdev if possible.
5175 * Split a vdev.
5211 zfs_dbgmsg("slow vdev: %s has %d active IOs",
5279 * specified vdev_t. This function is initially called with a leaf vdev and
5280 * will walk each parent vdev until it reaches a top-level vdev. Once the
5282 * function begins to unwind. As it unwinds it calls the parent's vdev
5290 * Walk up the vdev tree
5297 * We've reached the top-level vdev, initialize the physical
5317 * the vdev specific translate function.
5340 * does not live on this leaf vdev. Only when there is a non-
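
Lines 5279-5340 describe a two-phase translation: recurse from the leaf up to the top-level vdev, then apply each parent's vdev-specific translate function while unwinding. A self-contained sketch of that shape with hypothetical types; the real code operates on vdev_t, range_seg64_t, and the vdev_op_xlate entry of the ops vector:

#include <stdint.h>

/* Simplified stand-ins for range_seg64_t and vdev_t. */
typedef struct rseg {
	uint64_t rs_start;
	uint64_t rs_end;
} rseg_t;

typedef struct xvdev {
	struct xvdev *parent;	/* NULL at the top-level vdev */
	/* Per-vdev translation, e.g. a raidz column mapping. */
	void (*xlate)(struct xvdev *cvd, const rseg_t *in, rseg_t *out);
} xvdev_t;

/*
 * Walk up to the top-level vdev, then translate on the way back down:
 * the recursion ascends the tree, and each unwind step applies the
 * vdev-specific translate function to narrow the physical range.
 */
static void
xlate_walk(xvdev_t *vd, const rseg_t *logical, rseg_t *physical)
{
	if (vd->parent == NULL) {
		/* Top-level vdev: physical starts out equal to logical. */
		*physical = *logical;
		return;
	}
	xlate_walk(vd->parent, logical, physical);
	/* Unwinding: translate this child's slice of the range. */
	vd->xlate(vd, physical, physical);
}
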
5351 * Look at the vdev tree and determine whether any devices are currently being
5355 vdev_replace_in_progress(vdev_t *vdev)
5357 ASSERT(spa_config_held(vdev->vdev_spa, SCL_ALL, RW_READER) != 0);
5359 if (vdev->vdev_ops == &vdev_replacing_ops)
5363 * A 'spare' vdev indicates that we have a replace in progress, unless
5367 if (vdev->vdev_ops == &vdev_spare_ops && (vdev->vdev_children > 2 ||
5368 !vdev_dtl_empty(vdev->vdev_child[1], DTL_MISSING)))
5371 for (int i = 0; i < vdev->vdev_children; i++) {
5372 if (vdev_replace_in_progress(vdev->vdev_child[i]))
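
The fragments at 5355-5372 are nearly the whole of vdev_replace_in_progress(); a reconstruction of the full shape, with the returns and closing brace filled in from the control flow the fragments imply (comment wording paraphrased, and the exact text may differ in this revision):

static boolean_t
vdev_replace_in_progress(vdev_t *vdev)
{
	ASSERT(spa_config_held(vdev->vdev_spa, SCL_ALL, RW_READER) != 0);

	/* A 'replacing' vdev is, by definition, a replace in progress. */
	if (vdev->vdev_ops == &vdev_replacing_ops)
		return (B_TRUE);

	/*
	 * A 'spare' vdev indicates that we have a replace in progress,
	 * unless it has exactly two children and the second (the hot
	 * spare) has finished resilvering, i.e. its DTL_MISSING is empty.
	 */
	if (vdev->vdev_ops == &vdev_spare_ops && (vdev->vdev_children > 2 ||
	    !vdev_dtl_empty(vdev->vdev_child[1], DTL_MISSING)))
		return (B_TRUE);

	for (int i = 0; i < vdev->vdev_children; i++) {
		if (vdev_replace_in_progress(vdev->vdev_child[i]))
			return (B_TRUE);
	}

	return (B_FALSE);
}
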
5387 "Target number of metaslabs per top-level vdev");
5393 "Minimum number of metaslabs per top-level vdev");
5396 "Practical upper limit of total metaslabs per top-level vdev");