Lines Matching defs:vdev

179  * to get the vdev stats associated with the imported devices.
501 * Make sure the vdev config is bootable
786 * the root vdev's guid, our own pool guid, and then mark all of our
1146 offsetof(struct vdev, vdev_txg_node));
1234 * Verify a pool configuration, and construct the vdev tree appropriately. This
1235 * will create all the necessary vdevs in the appropriate layout, with each vdev
1237 * All vdev validation is done by the vdev_alloc() routine.
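
The comment block at 1234-1237 describes how a pool configuration is verified and turned into a vdev tree: a vdev is created for each config element, its children are attached beneath it, and per-element validation is left to vdev_alloc(). Below is only a rough sketch of that recursive shape; the cfg_node_t/sk_vdev_t types and sk_config_parse() are invented for illustration and stand in for the real nvlist-driven vdev structures.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins; the real code uses nvlists and vdev_t. */
typedef struct cfg_node {
    const char *cn_type;            /* "root", "mirror", "disk", ... */
    struct cfg_node *cn_children;
    int cn_nchildren;
} cfg_node_t;

typedef struct sk_vdev {
    const char *v_type;
    struct sk_vdev **v_child;
    int v_children;
} sk_vdev_t;

/*
 * Allocate a vdev for this config element, then recurse over its children
 * and attach each one; the real path validates every element before
 * descending.
 */
static sk_vdev_t *
sk_config_parse(const cfg_node_t *cfg)
{
    sk_vdev_t *vd = calloc(1, sizeof (*vd));

    vd->v_type = cfg->cn_type;
    vd->v_children = cfg->cn_nchildren;
    if (cfg->cn_nchildren > 0)
        vd->v_child = calloc(cfg->cn_nchildren, sizeof (sk_vdev_t *));
    for (int c = 0; c < cfg->cn_nchildren; c++)
        vd->v_child[c] = sk_config_parse(&cfg->cn_children[c]);
    return (vd);
}

int
main(void)
{
    cfg_node_t disks[2] = { { "disk", NULL, 0 }, { "disk", NULL, 0 } };
    cfg_node_t mirror = { "mirror", disks, 2 };
    cfg_node_t root = { "root", &mirror, 1 };
    sk_vdev_t *rvd = sk_config_parse(&root);

    printf("root has %d child(ren); the first is a %s vdev\n",
        rvd->v_children, rvd->v_child[0]->v_type);
    return (0);
}
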
1433 * for basic validation purposes) and one in the active vdev
1435 * validate each vdev on the spare list. If the vdev also exists in the
1436 * active configuration, then we also mark this vdev as an active spare.
1454 * able to load the vdev. Otherwise, importing a pool
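
The cluster at 1433-1454 concerns hot spares: the spare list is kept both on its own (for basic validation) and in the active vdev configuration, and a spare whose guid also appears in the active configuration is marked as an active spare. A toy membership check follows, using flat guid arrays where the real code walks label nvlists and the vdev tree; all names here are hypothetical.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical flat guid lists standing in for the label and active configs. */
static uint64_t spare_guid[] = { 10, 20, 30 };
static uint64_t active_guid[] = { 20, 99 };

static bool
guid_is_active(uint64_t guid)
{
    for (size_t i = 0; i < sizeof (active_guid) / sizeof (active_guid[0]); i++)
        if (active_guid[i] == guid)
            return (true);
    return (false);
}

int
main(void)
{
    /*
     * Validate each entry on the spare list; a guid that also shows up
     * in the active configuration is marked as an active spare.
     */
    for (size_t i = 0; i < sizeof (spare_guid) / sizeof (spare_guid[0]); i++)
        printf("spare %llu: %s\n", (unsigned long long)spare_guid[i],
            guid_is_active(spare_guid[i]) ? "ACTIVE" : "AVAILABLE");
    return (0);
}
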
1544 * Retain previous vdev for add/remove ops.
1554 * Create new vdev
1562 * Commit this vdev as an l2cache device,
1656 * Checks to see if the given vdev could not be opened, in which case we post a
1726 * Compare the root vdev tree with the information we have
1727 * from the MOS config (mrvd). Check each top-level vdev
1737 * about the top-level vdev then use that vdev instead.
1761 * Swap the missing vdev with the data we were
2076 spa_vdev_err(vdev_t *vdev, vdev_aux_t aux, int err)
2078 vdev_set_state(vdev, B_TRUE, VDEV_STATE_CANT_OPEN, aux);
2091 * we do a reopen() call. If the vdev label for every disk that was
2121 if (glist[i] == 0) /* vdev is hole */
2283 * Parse the configuration into a vdev tree. We explicitly set the
2312 * We need to validate the vdev labels against the configuration that
2314 * mosconfig is true then we're validating the vdev labels based on
2318 * the vdev config.
2413 * If the vdev guid sum doesn't match the uberblock, we have an
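
Lines 2283-2413 cover parsing the configuration into a vdev tree, validating the vdev labels against it, and checking the vdev guid sum against the uberblock; a mismatch indicates a missing or unexpected device. A minimal sketch of the sum-and-compare idea, assuming a simplified g_vdev_t tree in place of the real vdev_t and uberblock:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified vdev node; the real tree is built from vdev_t. */
typedef struct g_vdev {
    uint64_t gv_guid;
    struct g_vdev *gv_child[8];
    int gv_children;
} g_vdev_t;

/*
 * Sum the guids of every vdev in the tree. The uberblock records the
 * same sum, so a mismatch at load time means a device is missing,
 * foreign, or otherwise not the one the config expects.
 */
static uint64_t
guid_sum(const g_vdev_t *vd)
{
    uint64_t sum = vd->gv_guid;

    for (int c = 0; c < vd->gv_children; c++)
        sum += guid_sum(vd->gv_child[c]);
    return (sum);
}

int
main(void)
{
    g_vdev_t leaf0 = { .gv_guid = 111 };
    g_vdev_t leaf1 = { .gv_guid = 222 };
    g_vdev_t root = { .gv_guid = 333,
        .gv_child = { &leaf0, &leaf1 }, .gv_children = 2 };
    uint64_t expected = 666;    /* the sum the uberblock would carry */

    if (guid_sum(&root) != expected)
        printf("guid sum mismatch: pool configuration is suspect\n");
    else
        printf("guid sum matches the uberblock\n");
    return (0);
}
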
2748 * Load the vdev state for all top-level vdevs.
2790 * root vdev. If it can't be opened, it indicates one or
3029 * The stats information (gen/count/ustats) is used to gather vdev statistics at
3100 * information: the state of each vdev after the
3243 * Add l2cache device information to the nvlist, including vdev stats.
3413 * array of nvlists, each of which describes a valid leaf vdev. If this is an
3673 * Create the root vdev.
3871 * Add this top-level vdev to the child array.
3880 * Put this pool's top-level vdevs into a root vdev.
3891 * Replace the existing vdev_tree with the new root vdev in
3900 * Walk the vdev tree and see if we can find a device with "better"
3936 * the vdev (e.g. "id1,sd@SSEAGATE..." or "/pci@1f,0/ide@d/disk@0,0:a").
3937 * The GRUB "findroot" command will return the vdev we should boot.
3991 * Build up a vdev tree based on the boot device's label config.
4008 * Get the boot vdev.
4011 cmn_err(CE_NOTE, "Can not find the boot vdev for guid %llu",
4030 * If the boot device is part of a spare vdev then ensure that
4090 * Multi-vdev root pool configuration discovery is not supported yet.
4134 * Create pool config based on the best vdev config.
4139 * Put this pool's top-level vdevs into a root vdev.
4152 * Replace the existing vdev_tree with the new root vdev in
4158 * Drop vdev config elements that should not be present at pool level.
4235 * Build up a vdev tree based on the boot device's label config.
4741 * Transfer each new top-level vdev from vd to rvd.
4746 * Set the vdev id to the first hole, if one exists.
4801 * a device that is not mirrored, we automatically insert the mirror vdev.
4805 * mirror using the 'replacing' vdev, which is functionally identical to
4806 * the mirror vdev (it actually reuses all the same ops) but has a few
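
The attach comments at 4801-4806 explain that attaching to an unmirrored device implicitly inserts a mirror vdev, and that a replacement is expressed as a 'replacing' vdev that behaves like a mirror. A compact sketch of that splice, with a hypothetical a_vdev_t node type and insert_replacing() helper standing in for the real tree surgery:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical vdev node for this sketch; not the real vdev_t. */
typedef struct a_vdev {
    char av_type[16];            /* "root", "replacing", "disk" */
    struct a_vdev *av_parent;
    struct a_vdev *av_child[4];
    int av_children;
} a_vdev_t;

static a_vdev_t *
new_vdev(const char *type)
{
    a_vdev_t *vd = calloc(1, sizeof (*vd));

    snprintf(vd->av_type, sizeof (vd->av_type), "%s", type);
    return (vd);
}

/*
 * Splice an interior "replacing" vdev into oldvd's slot in its parent,
 * then hang oldvd and newvd beneath it; the real attach path performs
 * the equivalent surgery on the in-core vdev tree.
 */
static void
insert_replacing(a_vdev_t *oldvd, a_vdev_t *newvd)
{
    a_vdev_t *pvd = oldvd->av_parent;
    a_vdev_t *rvd = new_vdev("replacing");

    for (int c = 0; c < pvd->av_children; c++)
        if (pvd->av_child[c] == oldvd)
            pvd->av_child[c] = rvd;     /* take oldvd's slot */
    rvd->av_parent = pvd;
    rvd->av_child[0] = oldvd;           /* original device stays first */
    rvd->av_child[1] = newvd;           /* replacement comes second */
    rvd->av_children = 2;
    oldvd->av_parent = rvd;
    newvd->av_parent = rvd;
}

int
main(void)
{
    a_vdev_t *root = new_vdev("root");
    a_vdev_t *oldvd = new_vdev("disk");
    a_vdev_t *newvd = new_vdev("disk");

    root->av_child[0] = oldvd;
    root->av_children = 1;
    oldvd->av_parent = root;

    insert_replacing(oldvd, newvd);
    printf("root's child is now a %s vdev with %d children\n",
        root->av_child[0]->av_type, root->av_child[0]->av_children);
    return (0);
}
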
4860 * vdev.
4880 * want to create a replacing vdev. The user is not allowed to
4881 * attach to a spared vdev child unless the 'isspare' state is
4907 * than the top-level vdev.
4933 * mirror/replacing/spare vdev above oldvd.
4997 spa_history_log_internal(spa, "vdev attach", NULL,
4998 "%s vdev=%s %s vdev=%s",
5010 * Detach a device from a mirror or replacing vdev.
5013 * is a replacing vdev.
5043 * vdev that's replacing B with C. The user's intent in replacing
5051 * that C's parent is still the replacing vdev R.
5084 * If we are detaching the second disk from a replacing vdev, then
5085 * check to see if we changed the original vdev's path to have "/old"
5140 * do it now, marking the vdev as no longer a spare in the process.
5156 * If the parent mirror/replacing vdev only has one child,
5168 * may have been the previous top-level vdev.
5174 * Reevaluate the parent vdev state.
5182 * add metaslabs (i.e. grow the pool). We need to reopen the vdev
5212 "vdev=%s", vdpath);
5216 * If this was the removal of the original device in a hot spare vdev,
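
The detach comments around 5084-5085 note that when the second disk of a replacing vdev is detached, the original device's path may have been renamed with a "/old" suffix during attach and should be restored. A small string-level sketch of that cleanup; restore_old_path() and the sample paths are hypothetical:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * If the surviving child's path is the detached child's path with "/old"
 * appended, return the original (suffix-free) path; otherwise keep the
 * surviving path as-is. The caller frees the result.
 */
static char *
restore_old_path(const char *surviving, const char *detached)
{
    size_t dlen = strlen(detached);

    if (strncmp(surviving, detached, dlen) == 0 &&
        strcmp(surviving + dlen, "/old") == 0)
        return (strdup(detached));    /* drop the "/old" suffix */
    return (strdup(surviving));
}

int
main(void)
{
    char *fixed = restore_old_path("/dev/dsk/c0t0d0s0/old",
        "/dev/dsk/c0t0d0s0");

    printf("restored path: %s\n", fixed);
    free(fixed);
    return (0);
}
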
5263 vdev_t *rvd, **vml = NULL; /* vdev modify list */
5320 /* then, loop over each vdev and validate it */
5488 "vdev=%s", vml[c]->vdev_path);
5609 * associated with this vdev, and wait for these changes to sync.
5659 * Reassess the health of our root vdev.
5667 * Removing a device from the vdev namespace requires several steps
5734 * Stop allocating from this vdev.
5746 * Attempt to evacuate the vdev.
5753 * If we couldn't evacuate the vdev, unwind.
5761 * Clean up the vdev namespace.
5773 * There is no vdev of any kind with the specified guid.
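
Lines 5734-5761 outline top-level device removal as a sequence: stop allocating from the vdev, attempt to evacuate it, unwind if evacuation fails, and only then clean up the vdev namespace. A sketch of that try-then-unwind ordering with hypothetical helpers (the real code manipulates metaslab groups and syncs the namespace change to disk):

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical state flag; a stand-in for disabling the metaslab group. */
static bool allocations_stopped;

static void
stop_allocations(void)
{
    allocations_stopped = true;
}

static void
resume_allocations(void)
{
    allocations_stopped = false;
}

static int
evacuate_vdev(bool will_succeed)
{
    return (will_succeed ? 0 : -1);    /* pretend the copy-off failed */
}

static int
remove_vdev(bool evacuation_succeeds)
{
    stop_allocations();                    /* 1: stop allocating from it */
    if (evacuate_vdev(evacuation_succeeds) != 0) {
        resume_allocations();              /* 2 failed: unwind */
        return (-1);
    }
    /* 3: clean up the vdev namespace (elided in this sketch). */
    return (0);
}

int
main(void)
{
    printf("removal %s\n", remove_vdev(false) == 0 ?
        "succeeded" : "failed; device left in place");
    printf("removal %s\n", remove_vdev(true) == 0 ?
        "succeeded" : "failed; device left in place");
    return (0);
}
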
5788 * Find any device that's done replacing, or a vdev marked 'unspare' that's
5804 * vdev in the list to be the oldest vdev, and the last one to be
5806 * the case where the newest vdev is faulted, we will not automatically
5905 * Update the stored path or FRU for this vdev.
6017 /* Tell userspace that the vdev is gone. */
6098 spa_history_log_internal(spa, "vdev online", NULL,
6501 * to do this for pool creation since the vdev's
6684 * If there are any pending vdev state changes, convert them
6693 * eliminate the aux vdev wart by integrating all vdevs
6694 * into the root vdev tree.
6742 * Set the top-level vdev's max queue depth. Evaluate each
6840 * Rewrite the vdev configuration (which includes the uberblock)
6850 * We hold SCL_STATE to prevent vdev open/close/etc.
6851 * while we're attempting to write the vdev labels.
7170 * filled in from the spa and (optionally) the vdev. This doesn't do anything