Searched hist:22192 (Results 1 - 2 of 2) sorted by relevance

/linux-master/net/ieee802154/nl802154.c
diff 04052a31 Fri Nov 06 01:10:37 MST 2020 Alex Shi <alex.shi@linux.alibaba.com>
net/ieee802154: remove unused macros to tame gcc

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Alexander Aring <alex.aring@gmail.com>
Cc: Stefan Schmidt <stefan@datenfreihafen.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: linux-wpan@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1604650237-22192-1-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
/linux-master/mm/rmap.c
diff b854f711 Tue Jan 21 16:49:43 MST 2014 Joonsoo Kim <iamjoonsoo.kim@lge.com>
mm/rmap: recompute pgoff for huge page

Rmap traversal is used in five different cases: try_to_unmap(),
try_to_munlock(), page_referenced(), page_mkclean() and
remove_migration_ptes(). Each of them implements its own traversal
functions for the anon, file and ksm cases. This causes a lot of
duplication and maintenance overhead, and it makes the code hard to
understand and error-prone. One example is hugepage handling:
try_to_unmap_file() computes the hugepage offset correctly, but
rmap_walk_file() does not. The two are used pairwise in the migration
context, yet we missed keeping them in sync.

To overcome these drawbacks, unify the traversal in a single function.
I chose rmap_walk() as the main function since it carries nothing
unnecessary. To control the behavior of rmap_walk(), I introduce
struct rmap_walk_control, which holds a set of function pointers; this
lets each caller adapt rmap_walk() to its specific needs, as sketched
below.
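
As a rough, illustrative sketch of that idea (the member names arg,
rmap_one, done and invalid_vma are given here for illustration and may
not match the exact definition in the patchset):

    #include <stdbool.h>

    struct page;               /* kernel types, opaque in this sketch */
    struct vm_area_struct;

    /* Illustrative only: a control block that parameterizes the rmap walk. */
    struct rmap_walk_control {
            void *arg;         /* caller-private data handed to each callback */

            /* invoked for every vma found to map the page */
            int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
                            unsigned long addr, void *arg);

            /* lets the caller stop the walk early, e.g. once the page is unmapped */
            int (*done)(struct page *page);

            /* lets the caller skip vmas it is not interested in */
            bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
    };

Each user (try_to_unmap(), page_referenced(), and so on) would then fill
in only the callbacks it needs and pass the structure to rmap_walk().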

This patchset removes a lot of duplicated code, as its shortstat shows,
and the kernel text size also decreases slightly:

   text  data   bss    dec   hex  filename
  10640     1    16  10657  29a1  mm/rmap.o  (before)
  10047     1    16  10064  2750  mm/rmap.o  (after)

  13823   705  8288  22816  5920  mm/ksm.o   (before)
  13199   705  8288  22192  56b0  mm/ksm.o   (after)

This patch (of 9):

We have to recompute pgoff if the given page is huge, since a result
based on HPAGE_SIZE is not appropriate for scanning the vma interval
tree, as shown by commit 36e4f20af833 ("hugetlb: do not use
vma_hugecache_offset() for vma_prio_tree_foreach") and commit 369a713e
("rmap: recompute pgoff for unmapping huge page").

To handle both cases, the normal page-cache page and the hugetlb page,
in the same way, we can use compound_order(). It returns 0 for a
non-compound page and the proper order for a compound page.
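
A minimal stand-alone model of that computation (plain C, not kernel
code; the struct and field names are invented for illustration, and only
the shift by the compound order reflects the actual technique):

    #include <stdio.h>

    /* Toy model: for a hugetlb page, the index counts huge-page-sized units,
     * so it must be scaled back to base-page units before scanning the vma
     * interval tree, which is keyed in base pages. */
    struct page_model {
            unsigned long index;          /* like page->index */
            unsigned int  compound_order; /* 0 for an ordinary page */
    };

    static unsigned long page_pgoff(const struct page_model *page)
    {
            /* Shifting by 0 leaves an ordinary page's index untouched, so the
             * same expression covers the page-cache and hugetlb cases. */
            return page->index << page->compound_order;
    }

    int main(void)
    {
            struct page_model normal = { .index = 42, .compound_order = 0 };
            struct page_model huge   = { .index = 3,  .compound_order = 9 }; /* 2 MB pages, 4 KB base */

            printf("normal page pgoff: %lu\n", page_pgoff(&normal)); /* 42 */
            printf("huge page pgoff:   %lu\n", page_pgoff(&huge));   /* 3 << 9 = 1536 */
            return 0;
    }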

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
