Lines Matching refs:swap

17 #include <linux/swap.h>
51 #include "swap.h"
61 * Some modules use swappable objects and may try to swap them out under
63 * check to see if any swap space is available.
74 static const char Bad_file[] = "Bad swap file entry ";
75 static const char Unused_file[] = "Unused swap file entry ";
76 static const char Bad_offset[] = "Bad swap offset entry ";
77 static const char Unused_offset[] = "Unused swap offset entry ";
123 /* Reclaim the swap entry anyway if possible */
126 * Reclaim the swap entry if there are no more mappings of the
130 /* Reclaim the swap entry if swap is getting full */
133 /* returns 1 if swap entry is freed */
175 * swapon tells the device that all the old swap contents can be discarded,
176 * to allow the swap device to optimize its wear-levelling.
185 /* Do not discard the swap header page! */
233 struct swap_info_struct *sis = swp_swap_info(folio->swap);
238 offset = swp_offset(folio->swap);
245 * swap allocation tells the device that a cluster of swap can now be discarded,
246 * to allow the swap device to optimize its wear-levelling.
466 * taken by scan_swap_map_slots(), mark the swap entries bad (occupied).
546 * If the swap is discardable, prepare to discard the cluster
624 * Try to get a swap entry from current cpu's swap entry pool (a cluster). This
773 * When crossing a swap-address-space-size-aligned chunk, choose
774 * another chunk randomly to avoid lock contention on swap
779 /* No free swap slots available */
818 * We try to cluster swap pages by allocating them sequentially
819 * in swap. Once we've allocated SWAPFILE_CLUSTER pages this
821 * a new cluster. This prevents us from scattering swap pages
822 * all over the entire swap partition, so that we reduce
823 * overall disk seek times between swap pages. -- sct
825 * And we let swap pages go all over an SSD partition. Hugh
831 * cluster and swap cache. For HDD, sequential access is more
854 * start of partition, to minimize the span of allocated swap.
902 /* reuse swap entry of cache-only swap if not busy. */
1016 * page swap is disabled. Warn and fail the allocation.
1228 * When we get a swap entry, if there aren't some other ways to
1229 * prevent swapoff, such as the folio in swap cache is locked, page
1230 * table lock is held, etc., the swap entry may become invalid because
1231 * of swapoff. Then, we need to enclose all swap related functions
1232 * with get_swap_device() and put_swap_device(), unless the swap
1236 * after freeing a swap entry. Therefore, immediately after
1237 * __swap_entry_free(), the swap info might become stale and should not
1240 * Check whether swap entry is valid in the swap device. If so,
1241 * return pointer to swap_info_struct, and keep the swap entry valid
1242 * by preventing the swap device from being swapped off, until
1261 * changing partly because the specified swap entry may be for another
1262 * swap device which has been swapped off. And in do_swap_page(), after
1263 * the page is read from the swap device, the PTE is verified not
1264 * changed with the page table locked to check whether the swap device
1336 * Caller has made sure that the swap device corresponding to entry
1349 * Called after dropping swapcache to decrease refcnt to swap entries.
1434 * Sort swap entries by swap device, so each lock is only taken once.
1460 * This does not give an exact answer when swap count is continued,
1550 swp_entry_t entry = folio->swap;
1563 * folio_free_swap() - Free the swap space used for this folio.
1566 * If swap is getting full, or if there are no more mappings of this folio,
1567 * then call folio_free_swap to free its swap space.
1569 * Return: true if we were able to release the swap space.
1586 * hibernation is allocating its own swap pages for the image,
1588 * the swap from a folio which has already been recorded in the
1589 * image as a clean swapcache folio, and then reuse its swap for
1592 * later read back in from swap, now with the wrong data.
1606 * Free the swap entry like above, but also try to
1644 /* This is called for allocating swap entry, not cache */
1654 * Find the swap type that corresponds to given device (if any).
1657 * from 0, in which the swap header is expected to be located.
1708 * corresponding to given index in swap_info (swap type).
1722 * Return either the total number of swap pages of given type, or the number
1754 * No need to decide whether this PTE shares the swap entry with others,
1806 * when reading from swap. This metadata may be indexed by swap entry
2119 * swap cache just before we acquired the page lock. The folio
2120 * might even be back in swap cache on another swap area. But
2131 * Let's check again to see if there are still swap entries in the map.
2133 * Under global memory pressure, swap entries can be reinserted back
2137 * above fails, that mm is likely to be freeing swap from
2140 * folio_alloc_swap(), temporarily hiding that swap. It's easy
2159 * After a successful try_to_unuse, if no swap is now in use, we know
2249 * A `swap extent' is a simple thing which maps a contiguous range of pages
2250 * onto a contiguous range of disk blocks. A rbtree of swap extents is
2256 * swap files identically.
2258 * Whether the swapdev is an S_ISREG file or an S_ISBLK blockdev, the swap
2268 * For all swap devices we set S_SWAPFILE across the life of the swapon. This
2269 * prevents users from writing to the swap device, which will corrupt memory.
2271 * The amount of disk space which a single swap extent represents varies.
2328 * low-to-high, while swap ordering is high-to-low
2359 * which allocates swap pages from the highest available priority
2364 /* add to available list iff swap device is not full */
2379 * Finished initializing swap device, now it's safe to reference it.
2488 /* re-insert swap space back into swap_list */
2497 * Wait for swap operations protected by get/put_swap_device()
2501 * the swap cache data structure.
2550 /* Destroy swap account information */
2602 static void *swap_start(struct seq_file *swap, loff_t *pos)
2623 static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
2643 static void swap_stop(struct seq_file *swap, void *v)
2648 static int swap_show(struct seq_file *swap, void *v)
2656 seq_puts(swap, "Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority\n");
2664 len = seq_file_path(swap, file, " \t\n\\");
2665 seq_printf(swap, "%*s%s\t%lu\t%s%lu\t%s%d\n",
2817 * Find out how many pages are allowed for a single swap device. There
2819 * 1) the number of bits for the swap offset in the swp_entry_t type, and
2820 * 2) the number of bits in the swap pte, as defined by the different
2823 * In order to find the largest possible bit mask, a swap entry with
2824 * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
2825 * decoded to a swp_entry_t again, and finally the swap offset is
2830 * of a swap pte.
2854 pr_err("Unable to find swap-space signature\n");
2858 /* swap partition endianness hack... */
2868 /* Check the swap header's sub-version */
2870 pr_warn("Unable to handle swap header version %d\n",
2882 pr_warn("Empty swap-file\n");
2886 pr_warn("Truncating oversized swap area, only using %luk out of %luk\n",
2972 pr_warn("Empty swap-file\n");
3067 * Read the swap header.
3086 /* OK, set up the swap map and apply the bad block list */
3158 * When discard is enabled for swap with no particular
3159 * policy flagged, we set all swap discard flags here in
3169 * perform discards for released swap page-clusters.
3196 * swap device.
3212 pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s\n",
3287 * Verify that a swap entry is valid and increment its swap map count.
3293 * - swap-cache reference is requested but there is already one. -> EEXIST
3294 * - swap-cache reference is requested but the entry is not used. -> ENOENT
3295 * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
3314 * swapin_readahead() doesn't check if a swap entry is valid, so the
3315 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
3347 err = -ENOENT; /* unused swap entry */
3358 * Help swapoff by noting that swap entry belongs to shmem/tmpfs
3367 * Increase reference count of swap entry by 1.
3383 * @entry: swap entry for which we allocate swap cache.
3385 * Called when allocating swap cache for existing swap entry,
3387 * -EEXIST means there is a swap cache.
3418 return swp_swap_info(folio->swap)->swap_file->f_mapping;
3424 swp_entry_t swap = page_swap_entry(page);
3425 return swp_offset(swap);
3430 * add_swap_count_continuation - called when a swap count is duplicated
3433 * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
3465 * __swap_duplicate(): the swap device may be swapped off
3479 * The higher the swap count, the more likely it is that tasks
3480 * will race to add swap count continuation: we need to avoid
3548 * Called while __swap_duplicate() or swap_entry_free() holds swap or cluster
3666 * We've already scheduled a throttle, avoid taking the global swap
3691 pr_emerg("Not enough memory for swap heads, swap is disabled\n");