Lines Matching defs:pages
80 * If there's no chance of allocating enough pages for the whole
90 * Get the list of pages out of our struct file. They'll be pinned
138 * kswapd to reclaim our pages (direct reclaim
144 * dirty pages -- unless you try over and over
251 "Failed to DMA remap %zu pages\n",
294 * backing pages, *now*.
298 obj->mm.pages = ERR_PTR(-EFAULT);
316 * leaving only CPU mmaps around) and add those pages to the LRU
368 struct sg_table *pages,
381 drm_clflush_sg(pages);
389 * pages are swapped-in, and since execbuf binds the object before doing
396 void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj, struct sg_table *pages)
398 __i915_gem_object_release_shmem(obj, pages, true);
400 i915_gem_gtt_finish_pages(obj, pages);
403 i915_gem_object_save_bit_17_swizzle(obj, pages);
405 shmem_sg_free_table(pages, file_inode(obj->base.filp)->i_mapping,
407 kfree(pages);
412 shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
415 i915_gem_object_put_pages_shmem(obj, pages);
417 i915_gem_object_put_pages_phys(obj, pages);
440 * pages, important if the user is just writing to a few and never
452 * Before the pages are instantiated the object is treated as being
453 * in the CPU domain. The pages will be clflushed as required before
454 * use, and we can freely write into the pages directly. If userspace