Directory: /freebsd-12-stable/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/

Lines Matching defs:vdev

197 * to get the vdev stats associated with the imported devices.
202 * For debugging purposes: print out vdev tree during pool import.
213 * With 1 missing vdev we should be able to import the pool and mount all
257 "print out vdev tree during pool import");
621 * Make sure the vdev config is bootable
906 * the root vdev's guid, our own pool guid, and then mark all of our
1318 offsetof(struct vdev, vdev_txg_node));
1425 * Verify a pool configuration, and construct the vdev tree appropriately. This
1426 * will create all the necessary vdevs in the appropriate layout, with each vdev
1428 * All vdev validation is done by the vdev_alloc() routine.
1678 * for basic validation purposes) and one in the active vdev
1680 * validate each vdev on the spare list. If the vdev also exists in the
1681 * active configuration, then we also mark this vdev as an active spare.
1699 * able to load the vdev. Otherwise, importing a pool
1802 * Retain previous vdev for add/remove ops.
1812 * Create new vdev
1820 * Commit this vdev as an l2cache device,
1935 * Checks to see if the given vdev could not be opened, in which case we post a
2324 spa_vdev_err(vdev_t *vdev, vdev_aux_t aux, int err)
2326 vdev_set_state(vdev, B_TRUE, VDEV_STATE_CANT_OPEN, aux);
2354 * we do a reopen() call. If the vdev label for every disk that was
2384 if (glist[i] == 0) /* vdev is hole */
2460 * Count the number of per-vdev ZAPs associated with all of the vdevs in the
2461 * vdev tree rooted in the given vd, and ensure that each ZAP is present in the
2462 * spa's per-vdev ZAP list.
2870 * Parse the configuration into a vdev tree. We explicitly set the
2898 * Recursively open all vdevs in the vdev tree. This function is called twice:
2908 * missing/unopenable for the root vdev to be still considered openable.
2928 spa_load_note(spa, "vdev tree has %lld missing top-level "
2950 spa_load_failed(spa, "unable to open vdev tree [error=%d]",
2960 * We need to validate the vdev labels against the configuration that
2981 spa_load_failed(spa, "cannot open vdev tree after invalidating "
3014 * checkpointed uberblock to the vdev labels, so searching
3235 * Build a new vdev tree from the trusted config
3241 * obtained by scanning /dev/dsk, then it will have the right vdev
3244 * succeeds only when both configs have exactly the same vdev tree.
3250 spa_load_note(spa, "provided vdev tree:");
3252 spa_load_note(spa, "MOS vdev tree:");
3287 * of the vdev tree. spa_trust_config must be set to true before opening
3293 * Open and validate the new vdev tree
3304 spa_load_note(spa, "final vdev tree:");
3324 spa_load_note(spa, "vdev tree:");
3372 * Retrieve information needed to condense indirect vdev mappings.
3584 * Load the per-vdev ZAP map. If we have an older pool, this will not
3610 * we have orphaned per-vdev ZAPs in the MOS. Defer their
3758 * Load the vdev metadata such as metaslabs, DTLs, spacemap object, etc.
3767 * Propagate the leaf DTLs we just loaded all the way up the vdev tree.
3951 * This means don't trust blkptrs and the vdev tree in general. This
3960 * Parse the config provided to create a vdev tree.
3967 * Now that we have the vdev tree, try to open each vdev. This involves
3969 * probing the vdev with a dummy I/O. The state of each vdev will be set
3978 * Read the label of each vdev and make sure that the GUIDs stored
3991 * Read all vdev labels to find the best uberblock (i.e. latest,
3994 * the vdev label with the best uberblock and verify that our version
4043 * reopen the pool right after we've written it in the vdev labels.
4078 /* Stop when revisiting the first vdev */
4097 "uberblock to the vdev labels [error=%d]", error);
4122 * a new, exact version of the vdev tree, then reopen all vdevs.
4153 * partial configs present in each vdev's label and an entire copy of the
4350 * next sync, we would update the config stored in vdev labels
4529 * The stats information (gen/count/ustats) is used to gather vdev statistics at
4602 * information: the state of each vdev after the
4745 * Add l2cache device information to the nvlist, including vdev stats.
4961 * array of nvlists, each which describes a valid leaf vdev. If this is an
5214 * Create the root vdev.
5422 * Add this top-level vdev to the child array.
5431 * Put this pool's top-level vdevs into a root vdev.
5442 * Replace the existing vdev_tree with the new root vdev in
5451 * Walk the vdev tree and see if we can find a device with "better"
5487 * the vdev (e.g. "id1,sd@SSEAGATE..." or "/pci@1f,0/ide@d/disk@0,0:a").
5488 * The GRUB "findroot" command will return the vdev we should boot.
5545 * Build up a vdev tree based on the boot device's label config.
5562 * Get the boot vdev.
5565 cmn_err(CE_NOTE, "Can not find the boot vdev for guid %llu",
5584 * If the boot device is part of a spare vdev then ensure that
5685 * Create pool config based on the best vdev config.
5690 * Put this pool's top-level vdevs into a root vdev.
5703 * Replace the existing vdev_tree with the new root vdev in
5709 * Drop vdev config elements that should not be present at pool level.
5786 * Build up a vdev tree based on the boot device's label config.
6341 /* Fail if top level vdev is raidz */
6365 * Set the vdev id to the first hole, if one exists.
6420 * a device that is not mirrored, we automatically insert the mirror vdev.
6424 * mirror using the 'replacing' vdev, which is functionally identical to
6425 * the mirror vdev (it actually reuses all the same ops) but has a few
6489 * vdev.
6509 * want to create a replacing vdev. The user is not allowed to
6510 * attach to a spared vdev child unless the 'isspare' state is
6536 * than the top-level vdev.
6562 * mirror/replacing/spare vdev above oldvd.
6626 spa_history_log_internal(spa, "vdev attach", NULL,
6627 "%s vdev=%s %s vdev=%s",
6639 * Detach a device from a mirror or replacing vdev.
6642 * is a replacing vdev.
6667 * happen as we never empty the DTLs of a vdev during the scrub
6693 * vdev that's replacing B with C. The user's intent in replacing
6701 * that C's parent is still the replacing vdev R.
6734 * If we are detaching the second disk from a replacing vdev, then
6735 * check to see if we changed the original vdev's path to have "/old"
6790 * do it now, marking the vdev as no longer a spare in the process.
6806 * If the parent mirror/replacing vdev only has one child,
6818 * may have been the previous top-level vdev.
6824 * Reevaluate the parent vdev state.
6832 * add metaslabs (i.e. grow the pool). We need to reopen the vdev
6862 "vdev=%s", vdpath);
6866 * If this was the removal of the original device in a hot spare vdev,
6906 * we can properly assess the vdev state before we commit to
6912 /* Look up vdev and ensure it's a leaf. */
6993 vdev_t *rvd, **vml = NULL; /* vdev modify list */
7057 /* then, loop over each vdev and validate it */
7117 /* transfer per-vdev ZAPs */
7252 "vdev=%s", vml[c]->vdev_path);
7308 * Find any device that's done replacing, or a vdev marked 'unspare' that's
7324 * vdev in the list to be the oldest vdev, and the last one to be
7326 * the case where the newest vdev is faulted, we will not automatically
7428 * Update the stored path or FRU for this vdev.
7550 /* Tell userspace that the vdev is gone. */
7637 spa_history_log_internal(spa, "vdev online", NULL,
7951 * Rebuild spa's all-vdev ZAP from the vdev ZAPs indicated in each vdev_t.
7952 * The all-vdev ZAP must be empty.
7977 * If the pool is being imported from a pre-per-vdev-ZAP version of ZFS,
7978 * its config may not be dirty but we still need to build per-vdev ZAPs.
8170 * to do this for pool creation since the vdev's
8350 * Since frees / remaps to an indirect vdev can only
8402 * If there are any pending vdev state changes, convert them
8411 * eliminate the aux vdev wart by integrating all vdevs
8412 * into the root vdev tree.
8460 * Set the top-level vdev's max queue depth. Evaluate each
8592 * the number of ZAPs in the per-vdev ZAP list. This only gets
8609 * Rewrite the vdev configuration (which includes the uberblock)
8619 * We hold SCL_STATE to prevent vdev open/close/etc.
8620 * while we're attempting to write the vdev labels.
8633 /* Stop when revisiting the first vdev */
8964 * filled in from the spa and (optionally) the vdev and history nvl. This