Lines Matching refs:vdev

32  * marking the vdev FAULTED (for I/O errors) or DEGRADED (for checksum errors).
91 * Find a vdev within a tree with a matching GUID.
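A GUID match over the config's vdev tree is normally a recursive nvlist walk. The sketch below is illustrative only, not the agent's code; the helper name find_vdev_by_guid() and its exact traversal are assumptions, and a fuller version would also descend into cache and spare children.

/* Illustrative sketch: recursive GUID match over a vdev nvlist tree. */
#include <libzfs.h>

static nvlist_t *
find_vdev_by_guid(nvlist_t *nv, uint64_t search_guid)
{
	uint64_t guid;
	nvlist_t **child;
	uint_t c, children;

	/* Does this node match? */
	if (nvlist_lookup_uint64(nv, ZPOOL_CONFIG_GUID, &guid) == 0 &&
	    guid == search_guid)
		return (nv);

	/* Otherwise recurse into the children, if any. */
	if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
	    &child, &children) != 0)
		return (NULL);

	for (c = 0; c < children; c++) {
		nvlist_t *ret = find_vdev_by_guid(child[c], search_guid);
		if (ret != NULL)
			return (ret);
	}

	return (NULL);
}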
137 * Given a (pool, vdev) GUID pair, find the matching pool and vdev.
148 * Find the corresponding pool and make sure the vdev still exists.
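Resolving the pool half of the pair is usually a zpool_iter() callback keyed on the pool GUID, with the vdev tree then searched as above. A minimal sketch, assuming the hypothetical find_vdev_by_guid() helper from the previous example; the callback and struct names are assumptions, not the agent's own:

/* Illustrative sketch: iterate pools and stop on a matching pool GUID. */
typedef struct find_cbdata {
	uint64_t	cb_guid;	/* pool GUID we are looking for */
	zpool_handle_t	*cb_zhp;	/* matching pool handle, if found */
} find_cbdata_t;

static int
find_pool(zpool_handle_t *zhp, void *data)
{
	find_cbdata_t *cbp = data;

	if (cbp->cb_guid ==
	    zpool_get_prop_int(zhp, ZPOOL_PROP_GUID, NULL)) {
		cbp->cb_zhp = zhp;
		return (1);	/* stop iteration, keep the handle open */
	}

	zpool_close(zhp);
	return (0);
}

/* Caller: zpool_iter(zhdl, find_pool, &cb); then search cb.cb_zhp's config. */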
198 * Given a FRU FMRI, find the matching pool and vdev.
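FRU lookup can be sketched the same way, walking the tree and comparing the FRU string recorded in each vdev's config. Treat this as an assumption-heavy illustration: the real code may normalize FRU FMRIs (for example via libzfs_fru_compare() on illumos) rather than using a plain strcmp().

/* Illustrative sketch: match a vdev by its recorded FRU string. */
#include <string.h>

static nvlist_t *
match_fru(nvlist_t *nv, const char *fru)
{
	char *vdev_fru;
	nvlist_t **child;
	uint_t c, children;

	if (nvlist_lookup_string(nv, ZPOOL_CONFIG_FRU, &vdev_fru) == 0 &&
	    strcmp(vdev_fru, fru) == 0)
		return (nv);

	if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
	    &child, &children) == 0) {
		for (c = 0; c < children; c++) {
			nvlist_t *ret = match_fru(child[c], fru);
			if (ret != NULL)
				return (ret);
		}
	}

	return (NULL);
}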
215 * Given a vdev, attempt to replace it with every known spare until one
219 replace_with_spare(fmd_hdl_t *hdl, zpool_handle_t *zhp, nvlist_t *vdev)
243 dev_name = zpool_vdev_name(NULL, zhp, vdev, B_FALSE);
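The spare-replacement path generally pulls the spare list out of the pool config, builds a one-child replacement root, and attempts the equivalent of 'zpool replace' with each candidate until one succeeds. The sketch below follows that shape but is not the agent's exact code; the zpool_vdev_attach() argument list differs between libzfs versions, and the error handling here is deliberately minimal.

/* Illustrative sketch: try each configured hot spare until one attaches. */
#include <stdlib.h>
#include <libzfs.h>
#include <fm/fmd_api.h>

static void
replace_with_spare(fmd_hdl_t *hdl, zpool_handle_t *zhp, nvlist_t *vdev)
{
	nvlist_t *config, *nvroot, *replacement;
	nvlist_t **spares;
	uint_t s, nspares;
	char *dev_name;

	config = zpool_get_config(zhp, NULL);
	if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_VDEV_TREE,
	    &nvroot) != 0 ||
	    nvlist_lookup_nvlist_array(nvroot, ZPOOL_CONFIG_SPARES,
	    &spares, &nspares) != 0)
		return;	/* no hot spares configured for this pool */

	/* Build a root nvlist holding the single spare we want to attach. */
	if (nvlist_alloc(&replacement, NV_UNIQUE_NAME, 0) != 0)
		return;
	(void) nvlist_add_string(replacement, ZPOOL_CONFIG_TYPE,
	    VDEV_TYPE_ROOT);

	dev_name = zpool_vdev_name(NULL, zhp, vdev, B_FALSE);

	for (s = 0; s < nspares; s++) {
		char *spare_name;

		if (nvlist_lookup_string(spares[s], ZPOOL_CONFIG_PATH,
		    &spare_name) != 0)
			continue;

		(void) nvlist_add_nvlist_array(replacement,
		    ZPOOL_CONFIG_CHILDREN, &spares[s], 1);

		fmd_hdl_debug(hdl, "replacing %s with spare %s",
		    dev_name, spare_name);

		/* Equivalent of 'zpool replace <pool> <dev> <spare>'. */
		if (zpool_vdev_attach(zhp, dev_name, spare_name,
		    replacement, B_TRUE) == 0)
			break;
	}

	free(dev_name);
	nvlist_free(replacement);
}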
269 * Repair this vdev if we had diagnosed a 'fault.fs.zfs.device' and
314 * vdev, it's possible to see the 'statechange' event, only to be
315 * followed by a vdev failure later. If we don't check the current
316 * state of the vdev (or pool) before marking it repaired, then we risk
322 * DEGRADED leaf vdev (due to checksum errors), this is not the case.
326 * checking the vdev state, where we could correctly account for
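The state check this comment argues for can be as simple as reading the vdev's stats out of the config and refusing to mark anything repaired unless the device has actually returned to a healthy state. A minimal sketch; the helper name is hypothetical:

/* Illustrative sketch: consult the current vdev state before declaring
 * a fault repaired on a 'statechange' event. */
static boolean_t
vdev_is_healthy(nvlist_t *vdev)
{
	vdev_stat_t *vs;
	uint_t c;

	if (nvlist_lookup_uint64_array(vdev, ZPOOL_CONFIG_VDEV_STATS,
	    (uint64_t **)&vs, &c) != 0)
		return (B_FALSE);

	return (vs->vs_state == VDEV_STATE_HEALTHY);
}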
365 nvlist_t *vdev;
386 &vdev)) == NULL)
390 replace_with_spare(hdl, zhp, vdev);
432 * for faults targeting a specific vdev (open failure or SERD
436 if (fmd_nvl_class_match(hdl, fault, "fault.fs.zfs.vdev.io")) {
439 "fault.fs.zfs.vdev.checksum")) {
454 * an FMRI string, and attempt to find a matching vdev.
471 zhp = find_by_fru(zhdl, fmri, &vdev);
478 (void) nvlist_lookup_uint64(vdev,
484 * attempt to find the matching vdev.
508 &vdev)) == NULL)
524 * If this is a repair event, then mark the vdev as repaired and
545 replace_with_spare(hdl, zhp, vdev);
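On the repair path, clearing the persisted fault state typically comes down to asking the pool to clear the vdev by GUID, while the non-repair path falls through to sparing as above. A minimal sketch, assuming the vdev GUID is present in the config and using a placeholder is_repair flag in place of the agent's actual event classification:

/* Illustrative sketch: mark the vdev repaired, or fall through to sparing. */
uint64_t vdev_guid;

if (nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_GUID, &vdev_guid) == 0 &&
    is_repair) {
	(void) zpool_vdev_clear(zhp, vdev_guid);	/* clear FAULTED/DEGRADED */
} else {
	replace_with_spare(hdl, zhp, vdev);		/* bring in a hot spare */
}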