Lines Matching defs:blocks

54 * How do we handle breaking sharing of data blocks?
62 * same data blocks.
105 * blocks will typically be shared by many different devices, so we're
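
The design comment excerpted above (lines 54-105) describes the copy-on-write
scheme: after a snapshot, the origin and the snapshot map to the same data
blocks, so sharing has to be broken the first time either of them writes. The
following is a minimal userspace sketch of that break-sharing step, using a
hypothetical refcounted block array rather than the driver's real metadata
structures; names such as break_sharing() and alloc_block() are illustrative
only.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define NBLOCKS    8
    #define BLOCK_SIZE 16

    static uint8_t  blk_data[NBLOCKS][BLOCK_SIZE];  /* data blocks */
    static unsigned refcount[NBLOCKS];              /* devices mapping each block */

    /* Hypothetical allocator: hand out the first unreferenced block. */
    static int alloc_block(void)
    {
            for (int b = 0; b < NBLOCKS; b++)
                    if (refcount[b] == 0) {
                            refcount[b] = 1;
                            return b;
                    }
            return -1;                      /* out of data space */
    }

    /*
     * Break sharing before a write: if the mapped block is referenced by
     * more than one device, copy it to a fresh block and remap the writer.
     */
    static int break_sharing(unsigned *mapped_block)
    {
            unsigned old = *mapped_block;

            if (refcount[old] <= 1)
                    return 0;               /* not shared: write in place */

            int fresh = alloc_block();
            if (fresh < 0)
                    return -1;

            memcpy(blk_data[fresh], blk_data[old], BLOCK_SIZE); /* copy-on-write */
            refcount[old]--;
            *mapped_block = (unsigned)fresh;
            return 0;
    }

    int main(void)
    {
            unsigned origin = 0, snap = 0;  /* both map to data block 0 */
            refcount[0] = 2;

            if (break_sharing(&origin) == 0)
                    printf("origin now maps to block %u, snapshot still maps to %u\n",
                           origin, snap);
            return 0;
    }
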
686 * Returns the _complete_ blocks that this bio covers.
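
Line 686 documents a helper that returns only the blocks a bio covers
completely: the range is rounded inward so that partially covered blocks at
either end are excluded (complete-block semantics matter for discards in
particular). A small sketch of that rounding arithmetic, assuming sector
units and a sectors_per_block parameter; this shows the calculation only,
not the driver's function.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Round the sector range [start, end) inward to whole blocks: the first
     * block beginning at or after 'start' and the last block ending at or
     * before 'end'.  The result is the half-open block range [*begin, *end_excl).
     */
    static void complete_block_range(uint64_t start, uint64_t end,
                                     uint64_t sectors_per_block,
                                     uint64_t *begin, uint64_t *end_excl)
    {
            uint64_t b = (start + sectors_per_block - 1) / sectors_per_block; /* round up   */
            uint64_t e = end / sectors_per_block;                             /* round down */

            if (e < b)
                    e = b;                  /* bio fits inside one block: empty range */

            *begin = b;
            *end_excl = e;
    }

    int main(void)
    {
            uint64_t b, e;

            /* 128-sector blocks; a bio covering sectors [100, 900) */
            complete_block_range(100, 900, 128, &b, &e);
            printf("complete blocks: [%llu, %llu)\n",
                   (unsigned long long)b, (unsigned long long)e);  /* [1, 7) */
            return 0;
    }
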
1081 * We've already unmapped this range of blocks, but before we
1082 * passdown we have to check that these blocks are now unused.
1156 * Only this thread allocates blocks, so we can be sure that the
1157 * newly unmapped blocks will not be allocated before the end of
1170 * Increment the unmapped blocks. This prevents a race between the
1171 * passdown io and reallocation of freed blocks.
1204 * unmapped blocks.
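
Lines 1081-1204 describe a race around discard passdown: a range of blocks
has already been unmapped in the metadata, but the discard still has to be
passed down to the data device, and in the meantime those blocks must not be
reallocated and overwritten. Lines 1170-1171 note that the unmapped blocks
are incremented to prevent exactly that. Below is a toy model of the idea
using a plain per-block reference count; it illustrates the technique, not
the pool's metadata API.

    #include <stdio.h>

    #define NBLOCKS 8

    static unsigned ref[NBLOCKS];           /* reference count per data block */

    static int alloc_block(void)
    {
            for (int b = 0; b < NBLOCKS; b++)
                    if (ref[b] == 0) {      /* only unreferenced blocks are handed out */
                            ref[b] = 1;
                            return b;
                    }
            return -1;
    }

    static void inc_range(unsigned begin, unsigned end)
    {
            for (unsigned b = begin; b < end; b++)
                    ref[b]++;
    }

    static void dec_range(unsigned begin, unsigned end)
    {
            for (unsigned b = begin; b < end; b++)
                    ref[b]--;
    }

    int main(void)
    {
            /* Blocks 0 and 1 are in use; blocks 2..4 were just unmapped. */
            ref[0] = ref[1] = 1;

            /* Pin the freed range before issuing the discard passdown... */
            inc_range(2, 5);

            /* ...so a concurrent allocation skips it while the discard is in flight. */
            printf("allocated block %d while discard in flight\n", alloc_block());

            /* Passdown completed: drop the references; the range is reusable. */
            dec_range(2, 5);
            printf("allocated block %d after completion\n", alloc_block());
            return 0;
    }
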
1453 ooms_reason = "Could not get free metadata blocks";
1455 ooms_reason = "No free metadata blocks";
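
Lines 1453 and 1455 distinguish two out-of-metadata-space situations: the
free-block query itself failed, or it succeeded but reported zero free
blocks. A compact sketch of that check pattern, with a stand-in query
function in place of the real metadata call:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the metadata query; returns nonzero on error. */
    static int get_free_metadata_blocks(uint64_t *nr_free)
    {
            *nr_free = 0;                   /* pretend the metadata device is full */
            return 0;
    }

    /*
     * Distinguish "the query failed" from "the query succeeded but no
     * blocks are left", mirroring the two reason strings above.
     */
    static const char *metadata_space_check(void)
    {
            uint64_t nr_free;

            if (get_free_metadata_blocks(&nr_free))
                    return "Could not get free metadata blocks";
            if (!nr_free)
                    return "No free metadata blocks";
            return NULL;
    }

    int main(void)
    {
            const char *reason = metadata_space_check();

            if (reason)
                    printf("out of metadata space: %s\n", reason);
            return 0;
    }
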
1653 * We don't need to lock the data blocks, since there's no
1654 * passdown. We only lock data blocks for allocation and breaking sharing.
2109 * metadata blocks?
3206 * This ensures that the data blocks of any newly inserted mappings are
3211 * external snapshots and in the case of newly provisioned blocks, when block
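
Lines 3206 and 3211 are truncated here, but they read like part of the
rationale for flushing the data device before committing the metadata: newly
inserted mappings should not become durable before the data blocks they point
at. Assuming that reading, a minimal ordering sketch with stand-in functions:

    #include <stdio.h>

    /* Stand-ins for the two persistence steps; the ordering is the point. */
    static int flush_data_device(void) { puts("FLUSH data device"); return 0; }
    static int commit_metadata(void)   { puts("COMMIT metadata");   return 0; }

    /*
     * Make the data behind any newly inserted mappings durable *before* the
     * mappings themselves are committed.  Committing first and crashing
     * before the flush could leave mappings pointing at not-yet-written data.
     */
    static int commit(void)
    {
            int r = flush_data_device();
            if (r)
                    return r;
            return commit_metadata();
    }

    int main(void)
    {
            return commit();
    }
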
3275 * <low water mark (blocks)>
3279 * skip_block_zeroing: skips the zeroing of newly-provisioned blocks.
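
Lines 3275-3279 come from the pool target's constructor documentation: the
table line takes a low water mark expressed in blocks, and skip_block_zeroing
is an optional feature argument. When free space in the data device drops to
the low water mark, the pool raises a dm event so a userspace daemon can
extend the data device. A minimal sketch of that threshold check; the struct
fields and the re-arming behaviour are illustrative, not the driver's exact
logic.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pool_state {
            uint64_t low_water_blocks;      /* <low water mark (blocks)> from the table line */
            bool low_water_triggered;
    };

    /* Raise the event once when free data blocks fall to the mark. */
    static void check_low_water_mark(struct pool_state *p, uint64_t free_blocks)
    {
            if (free_blocks <= p->low_water_blocks && !p->low_water_triggered) {
                    p->low_water_triggered = true;
                    printf("reached low water mark (%llu free blocks): sending event\n",
                           (unsigned long long)free_blocks);
            } else if (free_blocks > p->low_water_blocks) {
                    p->low_water_triggered = false; /* re-arm once space recovers (simplification) */
            }
    }

    int main(void)
    {
            struct pool_state p = { .low_water_blocks = 1024, .low_water_triggered = false };

            check_low_water_mark(&p, 4096); /* plenty of space: nothing happens */
            check_low_water_mark(&p, 1000); /* below the mark: event fires once */
            check_low_water_mark(&p, 900);  /* still below: no duplicate event  */
            return 0;
    }
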
3475 DMERR("%s: pool target (%llu blocks) too small: expected %llu",
3488 DMINFO("%s: growing the data device from %llu to %llu blocks",
3522 DMERR("%s: metadata device (%llu blocks) too small: expected %llu",
3535 DMINFO("%s: growing the metadata device from %llu to %llu blocks",
3555 * Retrieves the number of blocks of the data device from
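
Lines 3475-3555 pair a "too small" error with a "growing ..." message for
both the data and metadata devices: the block count recorded in the pool's
metadata (the "expected" value) is compared with the device's current size,
shrinking below the expected size is rejected, and growth resizes the
metadata's view of the device. A sketch of that comparison with illustrative
names and return values:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Compare the current device size (in blocks) with the size recorded in
     * the pool metadata: shrinking is an error, growing triggers a resize.
     */
    static int maybe_resize(const char *name, uint64_t actual_blocks,
                            uint64_t recorded_blocks)
    {
            if (actual_blocks < recorded_blocks) {
                    fprintf(stderr, "%s (%llu blocks) too small: expected %llu\n",
                            name,
                            (unsigned long long)actual_blocks,
                            (unsigned long long)recorded_blocks);
                    return -1;
            }

            if (actual_blocks > recorded_blocks)
                    printf("growing %s from %llu to %llu blocks\n", name,
                           (unsigned long long)recorded_blocks,
                           (unsigned long long)actual_blocks);

            return 0;
    }

    int main(void)
    {
            maybe_resize("data device", 2048, 1024);    /* device grew: resize  */
            maybe_resize("metadata device", 512, 1024); /* device shrank: error */
            return 0;
    }
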
4468 sector_t blocks;
4473 * We can't call dm_pool_get_data_dev_size() since that blocks. So
4479 blocks = pool->ti->len;
4480 (void) sector_div(blocks, pool->sectors_per_block);
4481 if (blocks)
4482 return fn(ti, tc->pool_dev, 0, pool->sectors_per_block * blocks, data);
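
The final excerpt (lines 4468-4482) avoids calling dm_pool_get_data_dev_size(),
which can block, and instead derives a block count from the pool target's
length: sector_div() divides blocks in place by sectors_per_block, and only
whole blocks' worth of sectors are reported to the callback. The same
arithmetic in standalone form, with plain division standing in for
sector_div():

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t sectors_per_block = 128;   /* e.g. 64KiB blocks, 512-byte sectors */
            uint64_t target_len = 1000000;      /* target length in sectors            */

            /* Round down to a whole number of blocks before reporting the extent. */
            uint64_t blocks = target_len / sectors_per_block;

            if (blocks)
                    printf("report extent: start 0, length %llu sectors (%llu whole blocks)\n",
                           (unsigned long long)(sectors_per_block * blocks),
                           (unsigned long long)blocks);
            else
                    printf("target shorter than one block: nothing to report\n");

            return 0;
    }
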