Lines Matching refs:vdev

61 /* maximum scrub/resilver I/O queue per leaf vdev */
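In the source this comment sits on a tunable declaration; a minimal sketch, assuming the historical default (the value 10 is an assumption from memory, not confirmed by the match above):

    /* maximum scrub/resilver I/O queue per leaf vdev */
    int zfs_scrub_limit = 10;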
65 * Given a vdev type, return the appropriate ops vector.
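The ops-vector lookup is a linear scan of a static table; a minimal sketch, assuming a NULL-terminated vdev_ops_table of vdev_ops_t pointers, each carrying its type string in vdev_op_type:

    static vdev_ops_t *
    vdev_getops(const char *type)
    {
            vdev_ops_t *ops, **opspp;

            /* walk the table until the type string matches or we hit NULL */
            for (opspp = vdev_ops_table; (ops = *opspp) != NULL; opspp++)
                    if (strcmp(ops->vdev_op_type, type) == 0)
                            break;

            return (ops);
    }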
99 * the vdev's asize rounded to the nearest metaslab. This allows us to
116 * The top-level vdev just returns the allocatable size rounded
123 * The allocatable space for a raidz vdev is N * sizeof(smallest child),
142 vdev_lookup_top(spa_t *spa, uint64_t vdev)
148 if (vdev < rvd->vdev_children) {
149 ASSERT(rvd->vdev_child[vdev] != NULL);
150 return (rvd->vdev_child[vdev]);
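Pieced together, the fragments at 142-150 describe a bounds-checked lookup of a top-level vdev by index; a minimal sketch, assuming the root vdev hangs off spa->spa_root_vdev and top-level vdevs are its direct children:

    vdev_t *
    vdev_lookup_top(spa_t *spa, uint64_t vdev)
    {
            vdev_t *rvd = spa->spa_root_vdev;

            /* a top-level vdev is simply a child of the root vdev */
            if (vdev < rvd->vdev_children) {
                    ASSERT(rvd->vdev_child[vdev] != NULL);
                    return (rvd->vdev_child[vdev]);
            }

            return (NULL);
    }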
294 * The root vdev's guid will also be the pool guid,
300 * Any other vdev's guid must be unique within the pool.
325 offsetof(struct vdev, vdev_dtl_node));
334 * Allocate a new vdev. The 'alloctype' is used to control whether we are
335 * creating a new vdev or loading an existing one - the behavior is slightly
356 * If this is a load, get the vdev guid from the nvlist.
380 * The first allocated vdev must be of type 'root'.
386 * Determine whether we're a log vdev.
468 * Retrieve the vdev creation time.
474 * If we're a top-level vdev, try to load the allocation parameters.
498 * If we're a leaf vdev, try to load the DTL object and other state.
567 * vdev_free() implies closing the vdev first. This is simpler than
597 * Remove this vdev from its parent's child list.
604 * Clean up vdev structure.
644 * Transfer top-level vdev state from svd to tvd.
720 * Add a mirror/replacing vdev above an existing vdev.
752 * Remove a 1-way mirror/replacing vdev from the tree.
772 * If cvd will replace mvd as a top-level vdev, preserve mvd's guid.
775 * instead of a different version of the same top-level vdev.
808 * This vdev is not being allocated from yet or is a hole.
864 * If the vdev is being removed we don't activate
954 * to several known locations: the pad regions of each vdev label
975 * this vdev will become parents of the probe io.
1013 * We can't change the vdev state in this context, so we
1121 * If this vdev is not removed, check its fault status. If it's
1141 * the vdev on error.
1161 * the vdev is accessible. If we're faulted, bail.
1257 * vdev open for business.
1267 * If a leaf vdev has a DTL, and seems healthy, then kick off a
1285 * /etc/zfs/zpool.cache was readonly at the time. Otherwise, the vdev state
1316 * Determine if this vdev has been split off into another
1341 * If this vdev just became a top-level vdev because its
1343 * vdev guid -- but the label may or may not be on disk yet.
1345 * same top guid, so if we're a top-level vdev, we can
1348 * If we split this vdev off instead, then we also check the
1349 * original pool's guid. We don't want to consider the vdev
1384 * If we were able to open and validate a vdev that was
1473 /* set the reopening flag unless we're taking the vdev offline */
1494 * Reassess parent vdev's health.
1532 * Aim for roughly 200 metaslabs per vdev.
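The 200-metaslab target becomes a power-of-two shift clamped from below; a minimal sketch, assuming highbit() returns the position of the highest set bit (as in illumos) and SPA_MAXBLOCKSHIFT serves as the floor:

    void
    vdev_metaslab_set_size(vdev_t *vd)
    {
            /* aim for roughly 200 metaslabs per vdev */
            vd->vdev_ms_shift = highbit(vd->vdev_asize / 200);
            vd->vdev_ms_shift = MAX(vd->vdev_ms_shift, SPA_MAXBLOCKSHIFT);
    }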
1558 * A vdev's DTL (dirty time log) is the set of transaction groups for which
1559 * the vdev has less than perfect replication. There are four kinds of DTL:
1561 * DTL_MISSING: txgs for which the vdev has no valid copies of the data
1579 * A vdev's DTL_PARTIAL is the union of its children's DTL_PARTIALs, because
1581 * A vdev's DTL_MISSING is a modified union of its children's DTL_MISSINGs,
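The block comment at 1558-1581 names four DTL kinds, of which only DTL_MISSING and DTL_PARTIAL appear in the matches above; a minimal sketch of the corresponding enum, with the two unmatched kinds (DTL_SCRUB, DTL_OUTAGE) filled in as assumptions:

    typedef enum vdev_dtl_type {
            DTL_MISSING,    /* txgs with no valid copies of the data */
            DTL_PARTIAL,    /* txgs with data present but not fully replicated */
            DTL_SCRUB,      /* assumed: txgs the last scrub/resilver could not repair */
            DTL_OUTAGE,     /* assumed: txgs that currently cannot be read */
            DTL_TYPES
    } vdev_dtl_type_t;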
1830 * Determine whether the specified vdev can be offlined/detached/removed
1848 * whether this results in any DTL outages in the top-level vdev.
1916 * If this is a top-level vdev, initialize its metaslabs.
1925 * If this is a leaf vdev, load its DTL.
1933 * The special vdev case is used for hot spares and l2cache devices. Its
1934 * sole purpose is to set the vdev state for the associated vdev. To do this,
2046 * Remove the metadata associated with this vdev once it's empty.
2069 * Mark the given vdev faulted. A faulted vdev behaves as if the device could
2104 * back off and simply mark the vdev as degraded instead.
2124 * Mark the given vdev degraded. A degraded vdev is purely an indication to the
2125 * user that something is wrong. The vdev continues to operate as normal as far
2142 * If the vdev is already faulted, then don't do anything.
2156 * Online the given vdev. If 'unspare' is set, it implies two things. First,
2248 * then proceed. We check that the vdev's metaslab group
2250 * added this vdev but not yet initialized its metaslabs.
2278 * Offline this device and reopen its top-level vdev.
2279 * If the top-level vdev is a log device then just offline
2281 * vdev becoming unusable, undo it and fail the request.
2319 * Clear the error counts associated with this vdev. Unlike vdev_online() and
2343 * also mark the vdev config dirty, so that the new faulted state is
2419 * the proper locks. Note that we have to get the vdev state
2445 * Get statistics for the given vdev.
2462 * If we're getting stats on the root vdev, aggregate the I/O counts
2529 * (Holes never create vdev children, so all the counters
2535 * one top-level vdev does not imply a root-level error.
2645 * Update the in-core space usage stats for this vdev, its metaslab class,
2646 * and the root vdev.
2662 * factor. We must calculate this here and not at the root vdev
2663 * because the root vdev's psize-to-asize is simply the max of its
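The expansion-factor correction at 2662-2663 is applied per top-level vdev because the root's psize-to-asize ratio is only the max of its children's; a minimal sketch, assuming a per-vdev vdev_deflate_ratio that expresses deflated space per SPA_MINBLOCKSHIFT-sized unit:

    /* convert a raw asize delta into deflated (accounting) space */
    dspace_delta = (space_delta >> SPA_MINBLOCKSHIFT) *
        vd->vdev_deflate_ratio;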
2695 * Mark a top-level vdev's config as dirty, placing it on the dirty list
2696 * so that it will be written out next time the vdev configuration is synced.
2697 * If the root vdev is specified (vdev_top == NULL), dirty all top-level vdevs.
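Line 2697 implies a simple recursion: dirtying the root dirties every top-level child, while a top-level vdev is linked onto the pool's dirty list once. A minimal sketch, assuming an spa_config_dirty_list plus list_link_active()/list_insert_head() from the illumos list API, and omitting the aux-vdev path noted at 2709-2710:

    void
    vdev_config_dirty(vdev_t *vd)
    {
            spa_t *spa = vd->vdev_spa;
            vdev_t *rvd = spa->spa_root_vdev;

            if (vd == rvd) {
                    /* root vdev specified: dirty all top-level vdevs */
                    for (int c = 0; c < rvd->vdev_children; c++)
                            vdev_config_dirty(rvd->vdev_child[c]);
            } else {
                    ASSERT(vd == vd->vdev_top);
                    if (!list_link_active(&vd->vdev_config_dirty_node))
                            list_insert_head(&spa->spa_config_dirty_list, vd);
            }
    }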
2709 * If this is an aux vdev (as with l2cache and spare devices), then we
2710 * update the vdev config manually and set the sync flag.
2786 * Mark a top-level vdev's state as dirty, so that the next pass of
2827 * Propagate vdev state up from children to parent.
2852 * device, treat the root vdev as if it were
2870 * Root special: if there is a top-level vdev that cannot be
2872 * vdev's aux state as 'corrupt' rather than 'insufficient
2886 * Set a vdev's state. If this is during an open, we don't update the parent
2910 * If we are setting the vdev state to anything but an open state, then
2924 * If we have brought this vdev back into service, we need
2930 * double-check the state of the vdev before repairing it.
2954 * If we fail to open a vdev during an import or recovery, we
3021 * Check the vdev configuration to ensure that it's capable of supporting
3023 * In addition, only a single top-level vdev is allowed and none of the leaves
3051 * Load the state from the original vdev tree (ovd) which
3053 * vdev was offline or faulted then we transfer that state to the
3054 * device in the current vdev tree (nvd).
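Lines 3051-3054 describe carrying persistent offline/faulted state from the original tree (ovd) to the matching device in the current tree (nvd); a hypothetical sketch (the name vdev_copy_state and the body are illustrative, not the verbatim function), assuming both trees have identical shape with matching guids:

    static void
    vdev_copy_state(vdev_t *nvd, vdev_t *ovd)
    {
            ASSERT(nvd->vdev_guid == ovd->vdev_guid);

            for (int c = 0; c < nvd->vdev_children; c++)
                    vdev_copy_state(nvd->vdev_child[c], ovd->vdev_child[c]);

            if (nvd->vdev_ops->vdev_op_leaf) {
                    /* restore the persistent vdev state */
                    nvd->vdev_offline = ovd->vdev_offline;
                    nvd->vdev_faulted = ovd->vdev_faulted;
                    nvd->vdev_degraded = ovd->vdev_degraded;
            }
    }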
3070 * Restore the persistent vdev state
3080 * Determine if a log device has valid content. If the vdev was
3099 * Expand a vdev if possible.
3114 * Split a vdev.