Lines Matching refs:page

16 #include <asm/page.h>
169 * By default we need to be able to allocate page tables below PGD firstly for
216 * enable and PPro Global page enable), so that any CPUs that boot
342 * big page size instead of a small one if nearby ranges are RAM too.
384 * 32-bit without PAE has a 4M large page size.
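
As a quick illustration of the figures above: without PAE, a large page covers the 10 page-table index bits plus the 12 offset bits (2^22 = 4M), while PAE and 64-bit paging use 9-bit tables (2^21 = 2M). A minimal userspace sketch of that arithmetic; the constants are illustrative, not kernel identifiers:

    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define PTE_BITS_NOPAE  10   /* 1024 entries per table (4-byte entries) */
    #define PTE_BITS_PAE     9   /*  512 entries per table (8-byte entries) */

    int main(void)
    {
            unsigned long big_nopae = 1UL << (PAGE_SHIFT + PTE_BITS_NOPAE); /* 4 MiB */
            unsigned long big_pae   = 1UL << (PAGE_SHIFT + PTE_BITS_PAE);   /* 2 MiB */

            printf("large page without PAE: %lu MiB\n", big_nopae >> 20);
            printf("large page with PAE:    %lu MiB\n", big_pae >> 20);
            return 0;
    }
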
409 /* head, if start is not big page aligned */
413 * Don't use a large page for the first 2/4MB of memory
432 /* big page (2M) range */
449 /* big page (1G) range */
459 /* tail is not big page (1G) aligned */
469 /* tail is not big page (2M) aligned */
477 /* try to merge contiguous ranges with the same page size */
492 pr_debug(" [mem %#010lx-%#010lx] page %s\n",
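
The entries above come from the range-splitting logic: a region is broken into a 4K head, 2M and 1G middle pieces, and a 4K tail, and neighbouring pieces that end up with the same page size are merged back together before being printed by the pr_debug() line. A rough userspace sketch of that merge step, assuming a simplified struct mem_range rather than the kernel's own structures:

    struct mem_range {
            unsigned long start, end;     /* physical range, end exclusive */
            unsigned int  page_size_mask; /* which page sizes may map this range */
    };

    /* Collapse neighbouring entries that are contiguous and share a mask. */
    int merge_ranges(struct mem_range *mr, int nr)
    {
            int i, out = 0;

            if (nr == 0)
                    return 0;

            for (i = 1; i < nr; i++) {
                    if (mr[i].start == mr[out].end &&
                        mr[i].page_size_mask == mr[out].page_size_mask)
                            mr[out].end = mr[i].end;   /* merge into previous */
                    else
                            mr[++out] = mr[i];         /* keep as a new entry */
            }
            return out + 1;                            /* new entry count */
    }
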
562 * in using smaller size (i.e. 4K instead of 2M or 1G) page tables.
604 * difference of page table level shifts.
621 * [map_start, map_end) in top-down order. That said, the page tables
656 * for page tables.
686 * bottom-up allocation above the kernel, the page tables will
705 * for page tables.
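
The top-down and bottom-up lines above describe the same basic trick: map [map_start, map_end) in chunks, so that the page tables needed for each new chunk can be allocated from memory an earlier chunk has already mapped. A simplified sketch of the top-down variant; init_range(), STEP_SIZE and the addresses in main() are placeholders, not kernel symbols:

    #include <stdio.h>

    #define STEP_SIZE (64UL << 20)  /* 64 MiB per chunk; purely illustrative */

    /* Placeholder for the real call that builds the direct mapping. */
    static void init_range(unsigned long start, unsigned long end)
    {
            printf("map [%#lx, %#lx)\n", start, end);
    }

    /* Walk the region from the top, so the page tables for each lower
     * chunk can come out of memory mapped by an earlier iteration. */
    static void map_top_down(unsigned long map_start, unsigned long map_end)
    {
            unsigned long start, last_start = map_end;

            while (last_start > map_start) {
                    if (last_start > map_start + STEP_SIZE)
                            start = last_start - STEP_SIZE;
                    else
                            start = map_start;

                    init_range(start, last_start);
                    last_start = start;
            }
    }

    int main(void)
    {
            map_top_down(16UL << 20, 256UL << 20);  /* 16 MiB .. 256 MiB */
            return 0;
    }
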
740 * The code below will alias kernel page-tables in the user-range of the
742 * be created when using the trampoline page-table.
780 * allocate page tables above the kernel. So we first map
782 * as soon as possible. And then use page tables allocated above
824 * Randomize the poking address, but make sure that the following page
837 * We need to trigger the allocation of the page-tables that will be
848 * is valid. The argument is a physical page number.
865 * request that the page be shown as all zeros.
892 /* Make sure boundaries are page aligned */
905 * If debugging page accesses then do not free this memory but
907 * create a kernel page fault:
980 * We already reserved the end partial page earlier, in
984 * So here we can safely do PAGE_ALIGN() to get the partial page freed
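
The last few entries deal with aligning the boundaries of a region before freeing it, e.g. rounding the start up to a page boundary and the end down so that only whole pages are released. A self-contained sketch of that arithmetic, with the macros redefined here for illustration and an arbitrary example range:

    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define PAGE_SIZE     (1UL << PAGE_SHIFT)
    #define PAGE_MASK     (~(PAGE_SIZE - 1))
    #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

    int main(void)
    {
            unsigned long begin = 0x100234UL, end = 0x105800UL;

            unsigned long begin_aligned = PAGE_ALIGN(begin); /* round start up */
            unsigned long end_aligned   = end & PAGE_MASK;   /* round end down */

            if (begin_aligned < end_aligned)
                    printf("free whole pages in [%#lx, %#lx)\n",
                           begin_aligned, end_aligned);
            return 0;
    }
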