/freebsd-12-stable/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/

Lines Matching defs:vdev

58 SYSCTL_NODE(_vfs_zfs, OID_AUTO, vdev, CTLFLAG_RW, 0, "ZFS VDEV");
65 * The limit for ZFS to automatically increase a top-level vdev's ashift
74 * On pool creation or the addition of a new top-level vdev, ZFS will
75 * increase the ashift of the top-level vdev to 2048 as limited by
84 * On pool creation or the addition of a new top-level vdev, ZFS will
85 * increase the ashift of the top-level vdev to 4096 to match the
94 * On pool creation or the addition of a new top-level vdev, ZFS will
95 * increase the ashift of the top-level vdev to 4096 to match the
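
The fragments at 65-95 describe the zfs_min_auto_ashift/zfs_max_auto_ashift pair: on pool creation, or when a top-level vdev is added, ZFS raises the vdev's ashift from the logical sector shift toward the physical one, clamped by the two tunables. A minimal userland sketch of that clamping rule (the helper name and harness are illustrative, not the kernel code):

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Pick the ashift for a new top-level vdev: start from the logical
     * ashift, raise it toward the physical ashift, then clamp between
     * the min/max auto-ashift tunables.
     */
    static uint64_t
    choose_ashift(uint64_t logical, uint64_t physical,
        uint64_t min_auto, uint64_t max_auto)
    {
        uint64_t ashift = logical;

        if (physical > ashift)
            ashift = physical;   /* prefer the physical sector size */
        if (ashift > max_auto)
            ashift = max_auto;   /* limited by zfs_max_auto_ashift */
        if (ashift < min_auto)
            ashift = min_auto;   /* raised to zfs_min_auto_ashift */
        return (ashift);
    }

    int
    main(void)
    {
        /* 512B-emulated 4KB disk: logical 2^9, physical 2^12, max 2^11. */
        printf("ashift = %ju\n",
            (uintmax_t)choose_ashift(9, 12, 9, 11));   /* 11 -> 2048 bytes */
        return (0);
    }
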
167 /* default target for number of metaslabs per top-level vdev */
171 "Target number of metaslabs per top-level vdev");
173 /* minimum number of metaslabs per top-level vdev */
177 "Minimum number of metaslabs per top-level vdev");
179 /* practical upper limit of total metaslabs per top-level vdev */
183 "Maximum number of metaslabs per top-level vdev");
189 "Default shift between vdev size and number of metaslabs");
195 "Maximum shift between vdev size and number of metaslabs");
200 "Bypass vdev validation");
203 * Since the DTL space map of a vdev is not expected to have a lot of
212 * vdev-wide space maps that have lots of entries written to them at
242 zfs_dbgmsg("%s vdev '%s': %s", vd->vdev_ops->vdev_op_type,
245 zfs_dbgmsg("%s-%llu vdev (guid %llu): %s",
304 * Given a vdev type, return the appropriate ops vector.
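
vdev_getops() (line 304) is a linear scan over a static table of the known vdev type names. A schematic, compilable version, with the ops struct abridged to just its type string (the real vdev_ops_t also carries the open/close/asize/io callbacks):

    #include <stdio.h>
    #include <string.h>

    /* Abridged stand-in for vdev_ops_t. */
    typedef struct vdev_ops {
        const char *vdev_op_type;
    } vdev_ops_t;

    static vdev_ops_t vdev_root_ops = { "root" };
    static vdev_ops_t vdev_mirror_ops = { "mirror" };
    static vdev_ops_t vdev_raidz_ops = { "raidz" };
    static vdev_ops_t vdev_disk_ops = { "disk" };
    static vdev_ops_t vdev_file_ops = { "file" };

    /* NULL-terminated table of every known vdev type. */
    static vdev_ops_t *vdev_ops_table[] = {
        &vdev_root_ops, &vdev_mirror_ops, &vdev_raidz_ops,
        &vdev_disk_ops, &vdev_file_ops, NULL
    };

    /* Return the ops vector whose type name matches, or NULL. */
    static vdev_ops_t *
    vdev_getops(const char *type)
    {
        vdev_ops_t **opspp;

        for (opspp = vdev_ops_table; *opspp != NULL; opspp++)
            if (strcmp((*opspp)->vdev_op_type, type) == 0)
                return (*opspp);
        return (NULL);
    }

    int
    main(void)
    {
        vdev_ops_t *ops = vdev_getops("mirror");

        printf("%s\n", ops != NULL ? ops->vdev_op_type : "unknown");
        return (0);
    }
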
320 * String origin is either the per-vdev zap or zpool(1M).
365 * the vdev's asize rounded to the nearest metaslab. This allows us to
382 * The top-level vdev just returns the allocatable size rounded
389 * The allocatable space for a raidz vdev is N * sizeof(smallest child),
409 vdev_lookup_top(spa_t *spa, uint64_t vdev)
415 if (vdev < rvd->vdev_children) {
416 ASSERT(rvd->vdev_child[vdev] != NULL);
417 return (rvd->vdev_child[vdev]);
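
Fragments 409-417 already give most of vdev_lookup_top(); filled out with minimal stand-in types it looks roughly like this (the kernel version also asserts that the spa config lock is held):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Minimal stand-ins for the kernel types. */
    typedef struct vdev {
        uint64_t vdev_children;
        struct vdev **vdev_child;
    } vdev_t;

    typedef struct spa {
        vdev_t *spa_root_vdev;
    } spa_t;

    /*
     * A top-level vdev index selects a child of the root vdev;
     * out-of-range indices return NULL.
     */
    static vdev_t *
    vdev_lookup_top(spa_t *spa, uint64_t vdev)
    {
        vdev_t *rvd = spa->spa_root_vdev;

        if (vdev < rvd->vdev_children) {
            assert(rvd->vdev_child[vdev] != NULL);
            return (rvd->vdev_child[vdev]);
        }
        return (NULL);
    }

    int
    main(void)
    {
        vdev_t child = { 0, NULL };
        vdev_t *children[] = { &child };
        vdev_t root = { 1, children };
        spa_t spa = { &root };

        printf("%s\n", vdev_lookup_top(&spa, 0) ? "found" : "missing");
        printf("%s\n", vdev_lookup_top(&spa, 5) ? "found" : "missing");
        return (0);
    }
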
603 * The root vdev's guid will also be the pool guid,
609 * Any other vdev's guid must be unique within the pool.
645 offsetof(struct vdev, vdev_dtl_node));
654 * Allocate a new vdev. The 'alloctype' is used to control whether we are
655 * creating a new vdev or loading an existing one - the behavior is slightly
679 * If this is a load, get the vdev guid from the nvlist.
703 * The first allocated vdev must be of type 'root'.
709 * Determine whether we're a log vdev.
756 * If creating a top-level vdev, check for allocation classes input
824 * Retrieve the vdev creation time.
830 * If we're a top-level vdev, try to load the allocation parameters.
865 * If we're a leaf vdev, try to load the DTL object and other state.
937 * queue exists here, that implies the vdev is being removed while
948 * vdev_free() implies closing the vdev first. This is simpler than
979 * Remove this vdev from its parent's child list.
987 * Clean up vdev structure.
1050 * Transfer top-level vdev state from svd to tvd.
1098 * State which may be set on a top-level vdev that's in the
1164 * Add a mirror/replacing vdev above an existing vdev.
1200 * Remove a 1-way mirror/replacing vdev from the tree.
1222 * If cvd will replace mvd as a top-level vdev, preserve mvd's guid.
1225 * instead of a different version of the same top-level vdev.
1280 * general vdev classes. Class destination is late
1308 * This vdev is not being allocated from yet or is a hole.
1331 * metaslabs for an indirect vdev for zdb's leak detection.
1369 * If the vdev is being removed we don't activate
1487 * vdev label but the first, which we leave alone in case it contains
1508 * this vdev will become parents of the probe io.
1546 * We can't change the vdev state in this context, so we
1679 * If this vdev is not removed, check its fault status. If it's
1700 * the vdev on error.
1725 * the vdev is accessible. If we're faulted, bail.
1831 * LUN growth or vdev replacement, and automatic expansion is enabled;
1835 * vdev replace with a smaller device. This ensures that calculations
1850 * vdev open for business.
1875 * If a leaf vdev has a DTL, and seems healthy, then kick off a
1893 * /etc/zfs/zpool.cache was readonly at the time. Otherwise, the vdev state
1925 * be updating the vdev's label before updating spa_last_synced_txg.
1942 * Determine if this vdev has been split off into another
1950 vdev_dbgmsg(vd, "vdev_validate: vdev split into other pool");
1966 * guid might have been written to all of the vdev labels, but not the
1974 vdev_dbgmsg(vd, "vdev_validate: vdev label pool_guid doesn't "
2005 * If this vdev just became a top-level vdev because its sibling was
2006 * detached, it will have adopted the parent's vdev guid -- but the
2009 * vdev, we can safely compare to that instead.
2011 * after the detach, a top-level vdev will appear as a non top-level
2012 * vdev in the config. Also relax the constraints if we perform an
2015 * If we split this vdev off instead, then we also check the
2016 * original pool's guid. We don't want to consider the vdev
2071 * If we were able to open and validate a vdev that was
2086 zfs_dbgmsg("vdev_copy_path: vdev %llu: path changed "
2094 zfs_dbgmsg("vdev_copy_path: vdev %llu: path set to '%s'",
2100 * Recursively copy vdev paths from one vdev to another. Source and destination
2101 * vdev trees must have the same geometry; otherwise an error is returned. Intended to copy
2113 vdev_dbgmsg(svd, "vdev_copy_path: vdev type mismatch: %s != %s",
2159 * The idea here is that while a vdev can shift positions within
2160 * a top vdev (when replacing, attaching mirror, etc.) it cannot
2174 * Recursively copy vdev paths from one root vdev to another. Source and
2175 * destination vdev trees may differ in geometry. For each destination leaf
2176 * vdev, search for a vdev with the same guid and top vdev id in the source.
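
Of the two copy variants above, the strict one (vdev_copy_path, lines 2100-2113) walks both trees in lockstep and bails on any type or geometry mismatch before copying leaf paths. A compilable toy version of that walk, with the struct reduced to what the walk needs and strdup()/free() standing in for the kernel string handling:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    typedef struct vdev {
        const char *vdev_op_type;
        char *vdev_path;
        uint64_t vdev_children;
        struct vdev **vdev_child;
    } vdev_t;

    /*
     * Copy leaf paths from svd to dvd.  Both trees must have the same
     * shape: any type or child-count mismatch aborts with -1.
     */
    static int
    vdev_copy_path(vdev_t *svd, vdev_t *dvd)
    {
        if (strcmp(svd->vdev_op_type, dvd->vdev_op_type) != 0)
            return (-1);   /* the "vdev type mismatch" case above */
        if (svd->vdev_children != dvd->vdev_children)
            return (-1);   /* trees differ in geometry */

        for (uint64_t i = 0; i < svd->vdev_children; i++)
            if (vdev_copy_path(svd->vdev_child[i], dvd->vdev_child[i]) != 0)
                return (-1);

        if (svd->vdev_children == 0 && svd->vdev_path != NULL) {
            free(dvd->vdev_path);
            dvd->vdev_path = strdup(svd->vdev_path);
        }
        return (0);
    }

    int
    main(void)
    {
        vdev_t sleaf = { "disk", strdup("/dev/ada0"), 0, NULL };
        vdev_t dleaf = { "disk", NULL, 0, NULL };
        vdev_t *sc[] = { &sleaf }, *dc[] = { &dleaf };
        vdev_t sroot = { "root", NULL, 1, sc };
        vdev_t droot = { "root", NULL, 1, dc };

        if (vdev_copy_path(&sroot, &droot) == 0)
            printf("dest path: %s\n", dleaf.vdev_path);
        return (0);
    }

The looser variant (lines 2174-2176) drops the geometry requirement by matching each destination leaf against the source by (top vdev id, guid) instead of by position.
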
2273 /* set the reopening flag unless we're taking the vdev offline */
2294 * Reassess parent vdev's health.
2338 * the size of the metaslab and the count of metaslabs per vdev.
2353 * On the lower end of vdev sizes, we aim for metaslabs sizes of
2358 * On the upper end of vdev sizes, we aim for a maximum metaslab
2365 * vdev size metaslab count
2375 * number of metaslabs. Expanding a top-level vdev will result
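
The heuristic these fragments (and the tunables at 167-195) describe can be sketched briefly: use the default metaslab size unless that would produce far more metaslabs than the target, in which case grow the metaslab, capped at a maximum shift. The constants are illustrative defaults; the real vdev_metaslab_set_size() also enforces a minimum count for small vdevs and an absolute count limit:

    #include <stdio.h>
    #include <stdint.h>

    #define MS_COUNT_TARGET  200   /* aim for about this many metaslabs */
    #define MS_SHIFT_DEFAULT 29    /* 512MB default metaslab size */
    #define MS_SHIFT_MAX     34    /* 16GB metaslab size cap */

    /* Position of the highest set bit: floor(log2(x)) for x > 0. */
    static int
    highbit64(uint64_t x)
    {
        int h = -1;

        while (x != 0) {
            x >>= 1;
            h++;
        }
        return (h);
    }

    static int
    metaslab_shift(uint64_t asize)
    {
        int shift = MS_SHIFT_DEFAULT;

        /* Too many default-sized metaslabs?  Grow them instead. */
        if ((asize >> shift) > MS_COUNT_TARGET)
            shift = highbit64(asize / MS_COUNT_TARGET);
        if (shift > MS_SHIFT_MAX)
            shift = MS_SHIFT_MAX;
        return (shift);
    }

    int
    main(void)
    {
        uint64_t sizes[] = { 1ULL << 30, 1ULL << 40, 1ULL << 50 };

        for (int i = 0; i < 3; i++)
            printf("asize 2^%d -> ms_shift %d, ~%ju metaslabs\n",
                highbit64(sizes[i]), metaslab_shift(sizes[i]),
                (uintmax_t)(sizes[i] >> metaslab_shift(sizes[i])));
        return (0);
    }

This is also why expanding a top-level vdev changes the metaslab count rather than the metaslab size: the shift is fixed at creation, so added capacity shows up as additional metaslabs.
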
2459 * A vdev's DTL (dirty time log) is the set of transaction groups for which
2460 * the vdev has less than perfect replication. There are four kinds of DTL:
2462 * DTL_MISSING: txgs for which the vdev has no valid copies of the data
2480 * A vdev's DTL_PARTIAL is the union of its children's DTL_PARTIALs, because
2482 * A vdev's DTL_MISSING is a modified union of its children's DTL_MISSINGs,
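
The union/intersection distinction is easiest to see in a toy model where each child's DTL is a bitmask of txgs. For a mirror, DTL_PARTIAL is the bitwise OR of the children (some copy is missing), while DTL_MISSING collapses to the bitwise AND (every copy is missing); raidz tolerates up to parity-many failures, which this toy ignores:

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        /* One bit per txg; a set bit means that child lacks the data. */
        uint64_t child_missing[3] = { 0x0F, 0x3C, 0x30 };
        uint64_t partial = 0, missing = ~0ULL;

        for (int i = 0; i < 3; i++) {
            partial |= child_missing[i];   /* missing from any child */
            missing &= child_missing[i];   /* missing from all children */
        }
        printf("DTL_PARTIAL = 0x%jx\n", (uintmax_t)partial);   /* 0x3f */
        printf("DTL_MISSING = 0x%jx\n", (uintmax_t)missing);   /* 0 */
        return (0);
    }
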
2551 * Returns B_TRUE if vdev determines offset needs to be resilvered.
2598 * Determine if a resilvering vdev should remove any DTL entries from
2599 * its range. If the vdev was resilvering for the entire duration of the
2601 * vdev is considered partially resilvered and should leave its DTL
2663 * if this vdev should remove any DTLs. We only want to
2711 * If the vdev was resilvering and no longer has any
2933 * Determine whether the specified vdev can be offlined/detached/removed
2951 * whether this results in any DTL outages in the top-level vdev.
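
That question can be pictured with the same bitmask model: pretend the candidate child misses every txg, recompute the top-level DTL_MISSING, and refuse if txgs that were covered before become missing. This is a simplification of the real check, which operates on range trees and accounts for the vdev's replication level:

    #include <stdio.h>
    #include <stdint.h>

    /* Mirror DTL_MISSING: a txg is missing only if all children miss it. */
    static uint64_t
    mirror_missing(const uint64_t *m, int n, int skip)
    {
        uint64_t missing = ~0ULL;

        for (int i = 0; i < n; i++)
            missing &= (i == skip) ? ~0ULL : m[i];
        return (missing);
    }

    int
    main(void)
    {
        /* Child 0 is fully healthy; child 1 still misses txgs 0-1. */
        uint64_t child_missing[2] = { 0x00, 0x03 };
        uint64_t before = mirror_missing(child_missing, 2, -1);

        for (int i = 0; i < 2; i++) {
            /* Offlining child i is modeled as it missing every txg. */
            uint64_t after = mirror_missing(child_missing, 2, i);

            printf("offline child %d: %s\n", i,
                (after & ~before) == 0 ? "safe" : "would cause a DTL outage");
        }
        return (0);
    }
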
3007 * Gets the checkpoint space map object from the vdev's ZAP.
3060 * If this is a top-level vdev, initialize its metaslabs.
3115 * If this is a leaf vdev, load its DTL.
3146 * The special vdev case is used for hot spares and l2cache devices. Its
3147 * sole purpose is to set the vdev state for the associated vdev. To do this,
3188 * Free the objects used to store this vdev's spacemaps, and the array
3269 * If the vdev is indirect, it can't have dirty
3318 * Mark the given vdev faulted. A faulted vdev behaves as if the device could
3353 * back off and simply mark the vdev as degraded instead.
3373 * Mark the given vdev degraded. A degraded vdev is purely an indication to the
3374 * user that something is wrong. The vdev continues to operate as normal as far
3391 * If the vdev is already faulted, then don't do anything.
3405 * Online the given vdev.
3519 * then proceed. We check that the vdev's metaslab group
3521 * added this vdev but not yet initialized its metaslabs.
3558 * Offline this device and reopen its top-level vdev.
3559 * If the top-level vdev is a log device then just offline
3561 * vdev becoming unusable, undo it and fail the request.
3599 * Clear the error counts associated with this vdev. Unlike vdev_online() and
3629 * It makes no sense to "clear" an indirect vdev.
3637 * also mark the vdev config dirty, so that the new faulted state is
3715 * the proper locks. Note that we have to get the vdev state
3763 * Get statistics for the given vdev.
3811 * If we're getting stats on the root vdev, aggregate the I/O counts
3877 * (Holes never create vdev children, so all the counters
3883 * one top-level vdev does not imply a root-level error.
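
The aggregation at 3811-3883 is a plain sum over the top-level vdevs: holes need no special case because their counters never move, and error counters are deliberately not rolled up, since a fault in one top-level vdev does not fault the root. A compilable sketch with an abridged stat struct (field names modeled on vdev_stat_t; the I/O type count is illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define VDEV_IO_TYPES 6   /* illustrative number of I/O types */

    typedef struct vdev_stat {
        uint64_t vs_ops[VDEV_IO_TYPES];
        uint64_t vs_bytes[VDEV_IO_TYPES];
    } vdev_stat_t;

    /* Sum per-type I/O counters of the top-level vdevs into 'out'. */
    static void
    root_get_stats(const vdev_stat_t *top, int children, vdev_stat_t *out)
    {
        memset(out, 0, sizeof (*out));
        for (int c = 0; c < children; c++)
            for (int t = 0; t < VDEV_IO_TYPES; t++) {
                out->vs_ops[t] += top[c].vs_ops[t];
                out->vs_bytes[t] += top[c].vs_bytes[t];
            }
    }

    int
    main(void)
    {
        vdev_stat_t tops[2] = {
            { .vs_ops = { 10, 5 }, .vs_bytes = { 40960, 20480 } },
            { { 0 } },   /* a hole: counters stay zero */
        };
        vdev_stat_t root;

        root_get_stats(tops, 2, &root);
        printf("type0=%ju type1=%ju\n",
            (uintmax_t)root.vs_ops[0], (uintmax_t)root.vs_ops[1]);
        return (0);
    }
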
4003 * Update the in-core space usage stats for this vdev and the root vdev.
4017 * factor. We must calculate this here and not at the root vdev
4018 * because the root vdev's psize-to-asize is simply the max of its
4041 * Mark a top-level vdev's config as dirty, placing it on the dirty list
4042 * so that it will be written out next time the vdev configuration is synced.
4043 * If the root vdev is specified (vdev_top == NULL), dirty all top-level vdevs.
4055 * If this is an aux vdev (as with l2cache and spare devices), then we
4056 * update the vdev config manually and set the sync flag.
4133 * Mark a top-level vdev's state as dirty, so that the next pass of
4175 * Propagate vdev state up from children to parent.
4201 * device, treat the root vdev as if it were
4219 * Root special: if there is a top-level vdev that cannot be
4221 * vdev's aux state as 'corrupt' rather than 'insufficient
4235 * Set a vdev's state. If this is during an open, we don't update the parent
4259 * If we are setting the vdev state to anything but an open state, then
4290 * If we fail to open a vdev during an import or recovery, we
4357 * if it wants to when it sees that a leaf vdev had a state change.
4380 * Check the vdev configuration to ensure that it's capable of supporting
4382 * In addition, only a single top-level vdev is allowed.
4423 * Determine if a log device has valid content. If the vdev was
4442 * Expand a vdev if possible.
4462 * Split a vdev.
4511 "hung on vdev guid %llu at '%s'.",