History log of /linux-master/arch/arm/mm/init.c
# 154c56d8 13-Jan-2024 Randy Dunlap <rdunlap@infradead.org>

ARM: 9334/1: mm: init: remove misuse of kernel-doc comment

Change the "/**" beginning of comment to the common "/*" comment
since the comment is not in kernel-doc format. This prevents a
kernel-doc warning:

arch/arm/mm/init.c:422: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
* update_sections_early intended to be called only through stop_machine

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: patches@armlinux.org.uk
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# a90f0a02 30-Jan-2024 Christophe Leroy <christophe.leroy@csgroup.eu>

arm: ptdump: rename CONFIG_DEBUG_WX to CONFIG_ARM_DEBUG_WX

Patch series "mm: ptdump: Refactor CONFIG_DEBUG_WX and check_wx_pages
debugfs attribute", v2.

This series refactors CONFIG_DEBUG_WX for the 5 architectures implementing
CONFIG_GENERIC_PTDUMP

First rename stuff in ARM which uses similar names while not implementing
CONFIG_GENERIC_PTDUMP.

Then define a generic version of debug_checkwx() that calls
ptdump_check_wx() when CONFIG_DEBUG_WX is set. Call it immediately after
calling mark_rodata_ro() instead of calling it at the end of every
mark_rodata_ro().

Then implement a debugfs attribute that can be used to trigger a W^X test
at anytime and regardless of CONFIG_DEBUG_WX


This patch (of 5):

CONFIG_DEBUG_WX is a core option defined in mm/Kconfig.debug

To avoid any future conflict, rename ARM version into CONFIG_ARM_DEBUG_WX.

Link: https://lore.kernel.org/lkml/20200422152656.GF676@willie-the-truck/T/#m802eaf33efd6f8d575939d157301b35ac0d4a64f
Link: https://github.com/KSPP/linux/issues/35
Link: https://lkml.kernel.org/r/cover.1706610398.git.christophe.leroy@csgroup.eu
Link: https://lkml.kernel.org/r/fa297aa90caeb61eee2b70c6c5897a2ab58a9562.1706610398.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Greg KH <greg@kroah.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Phong Tran <tranmanphong@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Steven Price <steven.price@arm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# a9ff6961 02-Jun-2022 Linus Walleij <linus.walleij@linaro.org>

ARM: mm: Make virt_to_pfn() a static inline

Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit and exposes any misuse of the
macro virt_to_pfn() acting polymorphically and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.

Doing this is a bit intrusive: virt_to_pfn() requires
PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and this is defined in
<asm/page.h>, so this must be included *before* <asm/memory.h>.

The use of macros was obscuring the unclear inclusion order here,
as the macros would eventually be resolved, but a static inline
like this cannot be compiled with unresolved macros.

The naive solution to include <asm/page.h> at the top of
<asm/memory.h> does not work, because <asm/memory.h> sometimes
includes <asm/page.h> at the end of itself, which would create a
confusing inclusion loop. So instead, take the approach of always
unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places,
however it turns out that if we just unconditionally include
<asm/memory.h> into <asm/page.h> and switch all inclusions of
<asm/memory.h> to <asm/page.h> instead, we enforce the right
order and <asm/memory.h> will always have access to the
definitions.

Put an inclusion guard in place making it impossible to include
<asm/memory.h> explicitly.
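
A minimal sketch of the conversion, with the old macro shown for contrast
(the bodies here are illustrative, not the verbatim patch):

/* Before: a macro that silently accepts void *, unsigned long, uintptr_t, ... */
#define virt_to_pfn(kaddr) \
	((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
	 PHYS_PFN_OFFSET)

/* After: a strongly typed static inline; non-pointer arguments now warn */
static inline unsigned long virt_to_pfn(const void *p)
{
	unsigned long kaddr = (unsigned long)p;

	return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
}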

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>


# c6af2aa9 29-Mar-2022 Christoph Hellwig <hch@lst.de>

swiotlb: make the swiotlb_init interface more useful

Pass a boolean flag to indicate if swiotlb needs to be enabled based on
the addressing needs, and replace the verbose argument with a set of
flags, including one to force enable bounce buffering.

Note that this patch removes the possibility to force xen-swiotlb use
with the swiotlb=force parameter on the command line on x86 (arm and
arm64 never supported that), but this interface will be restored shortly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


# e46e45f0 22-Dec-2021 Wang Kefeng <wangkefeng.wang@huawei.com>

ARM: 9175/1: Convert to reserve_initrd_mem()

Convert to the generic reserve_initrd_mem() function.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 3ecc6834 05-Nov-2021 Mike Rapoport <rppt@kernel.org>

memblock: rename memblock_free to memblock_phys_free

Since memblock_free() operates on a physical range, make its name
reflect it and rename it to memblock_phys_free(), so it will be a
logical counterpart to memblock_phys_alloc().

The callers are updated with the below semantic patch:

@@
expression addr;
expression size;
@@
- memblock_free(addr, size);
+ memblock_phys_free(addr, size);

Link: https://lkml.kernel.org/r/20210930185031.18648-6-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Juergen Gross <jgross@suse.com>
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# a4d5613c 17-May-2021 Mike Rapoport <rppt@kernel.org>

arm: extend pfn_valid to take into account freed memory map alignment

When unused memory map is freed the preserved part of the memory map is
extended to match pageblock boundaries because lots of core mm
functionality relies on homogeneity of the memory map within pageblock
boundaries.

Since pfn_valid() is used to check whether there is a valid memory map
entry for a PFN, make it return true also for PFNs that have memory map
entries even if there is no actual memory populated there.
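
A sketch of the extended check, assuming the generic memblock helpers
memblock_overlaps_region() and ALIGN_DOWN() (illustrative, not the verbatim
patch):

int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = __pfn_to_phys(pfn);
	unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

	if (__phys_to_pfn(addr) != pfn)
		return 0;

	/*
	 * The freed memory map is rounded to pageblock boundaries, so a
	 * PFN within pageblock_size of a present region still has a valid
	 * memory map entry even if no memory is populated there.
	 */
	return memblock_overlaps_region(&memblock.memory,
					ALIGN_DOWN(addr, pageblock_size),
					pageblock_size);
}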

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Tony Lindgren <tony@atomide.com>


# aefdd438 12-Apr-2021 Jisheng Zhang (syna) <Jisheng.Zhang@synaptics.com>

ARM: 9072/1: mm: remove set_kernel_text_r[ow]()

After commit 5a735583b764 ("arm/ftrace: Use __patch_text()"), the last
and only user of these functions is gone, so remove them.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# fcf04489 18-Mar-2021 Florian Fainelli <f.fainelli@gmail.com>

ARM: Qualify enabling of swiotlb_init()

We do not need a SWIOTLB unless we have DRAM that is addressable beyond
the arm_dma_limit. Compare max_pfn with arm_dma_pfn_limit to determine
whether we do need a SWIOTLB to be initialized.
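
A sketch of the resulting guard in mem_init(), assuming the pre-existing
swiotlb_force/SWIOTLB_FORCE command-line override (illustrative):

#ifdef CONFIG_ARM_LPAE
	/* Only pay for a bounce buffer when RAM exists above the DMA limit */
	if (swiotlb_force == SWIOTLB_FORCE ||
	    max_pfn > arm_dma_pfn_limit)
		swiotlb_init(1);
#endif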

Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


# 1f9d03c5 30-Apr-2021 Kefeng Wang <wangkefeng.wang@huawei.com>

mm: move mem_init_print_info() into mm_init()

mem_init_print_info() is called in mem_init() on each architecture and is
passed a NULL argument, so make it take no argument and move the call into mm_init().

Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86]
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc]
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64]
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra" <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 4f5b0c17 14-Dec-2020 Mike Rapoport <rppt@kernel.org>

arm, arm64: move free_unused_memmap() to generic mm

ARM and ARM64 free unused parts of the memory map just before the
initialization of the page allocator. To allow holes in the memory map both
architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.

Allowing holes in the memory map for FLATMEM may be useful for small
machines, such as ARC and m68k and will enable those architectures to cease
using DISCONTIGMEM and still support more than one memory bank.

Move the functions that free unused memory map to generic mm and enable
them in case HAVE_ARCH_PFN_VALID=y.

Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Meelis Roos <mroos@linux.ee>
Cc: Michael Schmitz <schmitzmic@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# b9bc3670 31-Oct-2020 Ard Biesheuvel <ardb@kernel.org>

ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations

free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system.

Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of
high memory pages") it rounded beginning of each region upwards and end of
each region downwards.

However, after that commit free_highpages() rounds the beginning and end of
each region downwards, and we may end up freeing a page that is
memblock_reserve()d, resulting in memory corruption.

Restore the original rounding of the region boundaries to avoid freeing
reserved pages.
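
A sketch of free_highpages() with the restored rounding, assuming the
memblock iterator introduced by the earlier simplification (illustrative,
not the verbatim fix):

static void __init free_highpages(void)
{
#ifdef CONFIG_HIGHMEM
	unsigned long max_low = max_low_pfn;
	phys_addr_t range_start, range_end;
	u64 i;

	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PFN_UP(range_start);  /* round start up  */
		unsigned long end = PFN_DOWN(range_end);    /* round end down  */

		/* Ignore complete lowmem entries */
		if (end <= max_low)
			continue;

		/* Truncate partial highmem entries */
		if (start < max_low)
			start = max_low;

		for (; start < end; start++)
			free_highmem_page(pfn_to_page(start));
	}
#endif
}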

Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Link: https://lore.kernel.org/r/20201029110334.4118-1-ardb@kernel.org/
Link: https://lore.kernel.org/r/20201031094345.6984-1-rppt@kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>


# 7a1be318 11-Oct-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: 9012/1: move device tree mapping out of linear region

On ARM, setting up the linear region is tricky, given the constraints
around placement and alignment of the memblocks, and how the kernel
itself as well as the DT are placed in physical memory.

Let's simplify matters a bit, by moving the device tree mapping to the
top of the address space, right between the end of the vmalloc region
and the start of the fixmap region, and create a read-only mapping
for it that is independent of the size of the linear region, and how it
is organized.

Since this region was formerly used as a guard region, which will now be
populated fully on LPAE builds by this read-only mapping (which will
still be able to function as a guard region for stray writes), bump the
start of the [underutilized] fixmap region by 512 KB as well, to ensure
that there is always a proper guard region here. Doing so still leaves
ample room for the fixmap space, even with NR_CPUS set to its maximum
value of 32.

Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# c9118e6c 13-Oct-2020 Mike Rapoport <rppt@kernel.org>

arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()

There are several occurrences of the following pattern:

for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);

/* do something with start_pfn and end_pfn */
}

Rather than iterate over all memblock.memory regions and each time query
for their start and end PFNs, use for_each_mem_pfn_range() iterator to get
simpler and clearer code.
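
The same pattern expressed with the replacement iterator (names from the
generic memblock API):

	unsigned long start_pfn, end_pfn;
	int i;

	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {

		/* do something with start_pfn and end_pfn */
	}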

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# cddb5ddf 13-Oct-2020 Mike Rapoport <rppt@kernel.org>

arm, xtensa: simplify initialization of high memory pages

free_highpages() in both arm and xtensa essentially open-code
for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.

Replace open-coded implementation of for_each_free_mem_range() with usage
of memblock API to simplify the code.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa]
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-4-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 0b1abd1f 11-Sep-2020 Christoph Hellwig <hch@lst.de>

dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>

Merge dma-contiguous.h into dma-map-ops.h, after moving the comment
describing the contiguous allocator into kernel/dma/contiguous.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>


# c89ab04f 07-Aug-2020 Mike Rapoport <rppt@kernel.org>

mm/sparse: cleanup the code surrounding memory_present()

After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent
functions that call memory_present() for each region in memblock.memory:
sparse_memory_present_with_active_regions() and memblocks_present().

Moreover, all architectures have a call to either of these functions
preceding the call to sparse_init() and in most cases they are called
one after the other.

Mark the regions from memblock.memory as present during sparse_init() by
making sparse_init() call memblocks_present(), make memblocks_present()
and memory_present() functions static and remove redundant
sparse_memory_present_with_active_regions() function.

Also remove no longer required HAVE_MEMORY_PRESENT configuration option.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 84e6ffb2 04-Jun-2020 Mike Rapoport <rppt@kernel.org>

arm: add support for folded p4d page tables

Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate, and remove __ARCH_USE_5LEVEL_HACK.

[rppt@linux.ibm.com: fix kexec]
Link: http://lkml.kernel.org/r/20200508174232.GA759899@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: James Morse <james.morse@arm.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200414153455.21744-3-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# a32c1c61 03-Jun-2020 Mike Rapoport <rppt@kernel.org>

arm: simplify detection of memory zone boundaries

free_area_init() only requires the definition of the maximal PFN for each of
the supported zones rather than calculation of actual zone sizes and the
sizes of the holes between the zones.

After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is
available to all architectures.

Using this function instead of free_area_init_node() simplifies the zone
detection.
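
A sketch of the simplified ARM zone setup, assuming free_area_init() now
takes an array of maximal zone PFNs (illustrative, not the verbatim patch):

static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
				   unsigned long max_high)
{
	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

#ifdef CONFIG_ZONE_DMA
	max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
#endif
	max_zone_pfn[ZONE_NORMAL] = max_low;
#ifdef CONFIG_HIGHMEM
	max_zone_pfn[ZONE_HIGHMEM] = max_high;
#endif
	free_area_init(max_zone_pfn);
}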

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64]
Cc: Baoquan He <bhe@redhat.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-8-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 31f3010e 17-Dec-2019 Olof Johansson <olof@lixom.net>

ARM: 8949/1: mm: mark free_memmap as __init

As of commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
forcibly"), free_memmap() might not always be inlined, and thus is
triggering a section warning:

WARNING: vmlinux.o(.text.unlikely+0x904): Section mismatch in reference from the function free_memmap() to the function .meminit.text:memblock_free()

Mark it as __init, since the caller (free_unused_memmap) already is.

Fixes: ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly")
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 9110f3e7 11-Oct-2019 Ben Dooks (Codethink) <ben.dooks@codethink.co.uk>

ARM: 8917/1: mm: include <asm/set_memory.h>

The definitions of set_kernel_text_rw() and
set_kernel_text_ro() are in <asm/set_memory.h>
but this header is not included in init.c, which defines
these functions. Silence the following warnings by including
the <asm/set_memory.h> header.

arch/arm/mm/init.c:669:6: warning: symbol 'set_kernel_text_rw' was not declared. Should it be static?
arch/arm/mm/init.c:678:6: warning: symbol 'set_kernel_text_ro' was not declared. Should it be static?

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# ea5379be 11-Oct-2019 Ben Dooks (Codethink) <ben.dooks@codethink.co.uk>

ARM: 8916/1: mm: make set_section_perms() static

The set_section_perms() is not defined outside of the
init.c file, so make it static to avoid the following
warning:

arch/arm/mm/init.c:596:6: warning: symbol 'set_section_perms' was not declared. Should it be static?

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 032be728 22-Sep-2019 Clemens Gruber <clemens.gruber@pqgruber.com>

ARM: 8907/1: arch: reuse addr variable in pfn_valid

Avoid calling __pfn_to_phys twice.

Signed-off-by: Clemens Gruber <clemens.gruber@pqgruber.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 5b3efa4f 25-Aug-2019 zhaoyang <huangzhaoyang@gmail.com>

ARM: 8901/1: add a criteria for pfn_valid of arm

pfn_valid can be wrong when parsing an invalid pfn whose phys address
exceeds BITS_PER_LONG, as the MSB will be trimmed when shifted.

The issue originally arose from the call stack below, which corresponds to
an access of /proc/kpageflags from userspace with an invalid pfn parameter
and leads to a kernel panic.

[46886.723249] c7 [<c031ff98>] (stable_page_flags) from [<c03203f8>]
[46886.723264] c7 [<c0320368>] (kpageflags_read) from [<c0312030>]
[46886.723280] c7 [<c0311fb0>] (proc_reg_read) from [<c02a6e6c>]
[46886.723290] c7 [<c02a6e24>] (__vfs_read) from [<c02a7018>]
[46886.723301] c7 [<c02a6f74>] (vfs_read) from [<c02a778c>]
[46886.723315] c7 [<c02a770c>] (SyS_pread64) from [<c0108620>]
(ret_fast_syscall+0x0/0x28)
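
A sketch of the added criterion (illustrative; the remainder of the function
is assumed from the surrounding code): reject a pfn whose physical address no
longer maps back to the same pfn after the shift, i.e. the MSB was lost.

int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = __pfn_to_phys(pfn);

	/* the shift in __pfn_to_phys() truncated the MSB: invalid pfn */
	if (__phys_to_pfn(addr) != pfn)
		return 0;

	return memblock_is_map_memory(addr);
}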

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# c51bc12d 01-Jul-2019 Doug Berger <opendmb@gmail.com>

ARM: 8874/1: mm: only adjust sections of valid mm structures

A timing hazard exists when an early fork/exec thread begins
exiting and sets its mm pointer to NULL while a separate core
tries to update the section information.

This commit ensures that the mm pointer is not NULL before
setting its section parameters. The arguments provided by
commit 11ce4b33aedc ("ARM: 8672/1: mm: remove tasklist locking
from update_sections_early()") are equally valid for not
requiring grabbing the task_lock around this check.
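
A sketch of the guarded loop in update_sections_early(), with the
set_section_perms() helper signature assumed (illustrative):

	for_each_process(t) {
		if (t->flags & PF_KTHREAD)
			continue;
		for_each_thread(t, s)
			if (s->mm)	/* may be NULL for an exiting task */
				set_section_perms(perms, n, set, s->mm);
	}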

Fixes: 08925c2f124f ("ARM: 8464/1: Update all mm structures with section adjustments")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Rob Herring <robh@kernel.org>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# ad3c7b18 23-Jul-2019 Christoph Hellwig <hch@lst.de>

arm: use swiotlb for bounce buffering on LPAE configs

The DMA API requires that 32-bit DMA masks are always supported, but on
arm LPAE configs they do not currently work when memory is present
above 4GB. Wire up the swiotlb code like for all other architectures
to provide the bounce buffering in that case.

Fixes: 21e07dba9fb11 ("scsi: reduce use of block bounce buffers").
Reported-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Vignesh Raghavendra <vigneshr@ti.com>


# e6c4375f 04-Jun-2019 YueHaibing <yuehaibing@huawei.com>

ARM: 8865/1: mm: remove unused variables

Fix gcc warnings:

arch/arm/mm/init.c: In function 'mem_init':
arch/arm/mm/init.c:456:13: warning: unused variable 'itcm_end' [-Wunused-variable]
extern u32 itcm_end;
^
arch/arm/mm/init.c:455:13: warning: unused variable 'dtcm_end' [-Wunused-variable]
extern u32 dtcm_end;
^

They are not used any more since
commit 1c31d4e96b8c ("ARM: 8820/1: mm: Stop printing the virtual memory layout")

Link: https://lkml.org/lkml/2019/5/12/82

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 5f41f919 28-May-2019 Marek Szyprowski <m.szyprowski@samsung.com>

ARM: 8864/1: Add workaround for I-Cache line size mismatch between CPU cores

Some big.LITTLE systems have an I-Cache line size mismatch between
LITTLE and big cores. This patch adds a workaround for proper I-Cache
support on such systems. Without it, some class of the userspace code
(typically self-modifying) might suffer from random SIGILL failures.

A similar workaround already exists for the ARM64 architecture. It has been
added by commit 116c81f427ff ("arm64: Work around systems with mismatched
cache line sizes").

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# d8ae8a37 13-May-2019 Christoph Hellwig <hch@lst.de>

initramfs: move the legacy keepinitrd parameter to core code

No need to handle the freeing disable in arch code when we already have a
core hook (and a different name for the option) for it.

Link: http://lkml.kernel.org/r/20190213174621.29297-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Cc: Steven Price <steven.price@arm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 14b5f54b 19-Mar-2019 Peng Fan <peng.fan@nxp.com>

ARM: 8850/1: use memblocks_present

arm_memory_present is doing the same thing as memblocks_present, so
let's use the common memblocks_present code instead of the platform-
specific arm_memory_present.

Patchwork: https://patchwork.kernel.org/patch/10805693/

Signed-off-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# ecc3e771 12-Mar-2019 Mike Rapoport <rppt@kernel.org>

memblock: memblock_phys_alloc(): don't panic

Make the memblock_phys_alloc() function an inline wrapper for
memblock_phys_alloc_range() and update the memblock_phys_alloc() callers
to check the returned value and panic in case of error.

Link: http://lkml.kernel.org/r/1548057848-15136-8-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Guo Ren <ren_guo@c-sky.com> [c-sky]
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Juergen Gross <jgross@suse.com> [Xen]
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# f240ec09 12-Mar-2019 Mike Rapoport <rppt@kernel.org>

memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc

The calls to memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE)
and memblock_phys_alloc(size, align) are equivalent as both try to
allocate 'size' bytes with 'align' alignment anywhere in the memory and
panic if the allocation fails.

The conversion is done using the following semantic patch:

@@
expression size, align;
@@
- memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE)
+ memblock_phys_alloc(size, align)

Link: http://lkml.kernel.org/r/1548057848-15136-4-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Guo Ren <ren_guo@c-sky.com> [c-sky]
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Juergen Gross <jgross@suse.com> [Xen]
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 071d184a 22-Jan-2019 Doug Berger <opendmb@gmail.com>

ARM: 8826/1: mm: initialize pfn limits with find_limits()

The max_low_pfn value must be set before sparse_init() is called to
keep the early memblock allocations and frees balanced for kmemleak
initialization when sparsemem is enabled.

This commit accomplishes that by replacing the local variables min,
max_low, and max_high with the global limit variables min_low_pfn,
max_low_pfn, and max_pfn respectively in bootmem_init(). The global
variables are initialized directly by find_limits() and used in the
remainder of the function.
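
A sketch of the reordered bootmem_init(), with the surrounding calls
abbreviated (illustrative, not the verbatim patch):

void __init bootmem_init(void)
{
	memblock_allow_resize();

	/* populate the global limits before sparse_init() needs them */
	find_limits(&min_low_pfn, &max_low_pfn, &max_pfn);

	early_memtest((phys_addr_t)min_low_pfn << PAGE_SHIFT,
		      (phys_addr_t)max_low_pfn << PAGE_SHIFT);

	/* sparse_init() and the rest of the function now use the globals */
	sparse_init();
}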

Fixes: 9099daed9c69 ("mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Doug Berger <opendmb@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 1c31d4e9 08-Jan-2019 Geert Uytterhoeven <geert@linux-m68k.org>

ARM: 8820/1: mm: Stop printing the virtual memory layout

Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with
%p"), the virtual memory layout printed during boot up contains "ptrval"
instead of actual addresses:

Memory: 501296K/524288K available (6144K kernel code, 528K rwdata, 1944K rodata, 1024K init, 7584K bss, 22992K reserved, 0K cma-reserved)
Virtual kernel memory layout:
vector : 0xffff0000 - 0xffff1000 ( 4 kB)
fixmap : 0xffc00000 - 0xfff00000 (3072 kB)
vmalloc : 0xe0800000 - 0xff800000 ( 496 MB)
lowmem : 0xc0000000 - 0xe0000000 ( 512 MB)
modules : 0xbf000000 - 0xc0000000 ( 16 MB)
.text : 0x(ptrval) - 0x(ptrval) (7136 kB)
.init : 0x(ptrval) - 0x(ptrval) (1024 kB)
.data : 0x(ptrval) - 0x(ptrval) ( 529 kB)
.bss : 0x(ptrval) - 0x(ptrval) (7585 kB)

Instead of changing the printing to "%px", and leaking virtual memory
layout information again, just remove the printing completely, cfr. e.g.
commits 071929dbdd865f77 ("arm64: Stop printing the virtual memory
layout") and 31833332f7987636 ("m68k/mm: Stop printing the virtual
memory layout").

All interesting information (actual section sizes) is already printed by
mem_init_print_info() just above anyway.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 229c55cc 05-Nov-2018 Florian Fainelli <f.fainelli@gmail.com>

arch: Move initrd= parsing into do_mounts_initrd.c

ARC, ARM, ARM64 and Unicore32 are all capable of parsing the "initrd="
command line parameter to allow specifying the physical address and size
of an initrd. Move that parsing into init/do_mounts_initrd.c such that
we no longer duplicate that logic.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Rob Herring <robh@kernel.org>


# fe7db757 05-Nov-2018 Florian Fainelli <f.fainelli@gmail.com>

of/fdt: Populate phys_initrd_start/phys_initrd_size from FDT

Now that we have central and global variables holding the physical
address and size of the initrd, we can have
early_init_dt_check_for_initrd() populate
phys_initrd_start/phys_initrd_size for us.

This allows us to remove a chunk of code from arch/arm/mm/init.c
introduced with commit 65939301acdb ("arm: set initrd_start/initrd_end
for fdt scan").

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Rob Herring <robh@kernel.org>


# b1ab95c6 05-Nov-2018 Florian Fainelli <f.fainelli@gmail.com>

arch: Make phys_initrd_start and phys_initrd_size global variables

Make phys_initrd_start and phys_initrd_size global variables declared in
init/do_mounts_initrd.c such that we can later have generic code in
drivers/of/fdt.c populate those variables for us.

This requires both the ARM and unicore32 implementations to be properly
guarded against CONFIG_BLK_DEV_INITRD, and also initialize the variables
to the expected default values (unicore32).

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Rob Herring <robh@kernel.org>


# 57c8a661 30-Oct-2018 Mike Rapoport <rppt@linux.vnet.ibm.com>

mm: remove include/linux/bootmem.h

Move remaining definitions and declarations from include/linux/bootmem.h
into include/linux/memblock.h and remove the redundant header.

The includes were replaced with the semantic patch below and then
semi-automated removal of duplicated '#include <linux/memblock.h>'.

@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>

[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# c6ffc5ca 30-Oct-2018 Mike Rapoport <rppt@linux.vnet.ibm.com>

memblock: rename free_all_bootmem to memblock_free_all

The conversion is done using

sed -i 's@free_all_bootmem@memblock_free_all@' \
$(git grep -l free_all_bootmem)

Link: http://lkml.kernel.org/r/1536927045-23536-26-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# b4c7e2bd 10-Jul-2018 Steven Rostedt (VMware) <rostedt@goodmis.org>

ARM: 8780/1: ftrace: Only set kernel memory back to read-only after boot

Dynamic ftrace requires modifying the code segments that are usually
set to read-only. To do this, a per arch function is called both before
and after the ftrace modifications are performed. The "before" function
will set kernel code text to read-write to allow for ftrace to make the
modifications, and the "after" function will set the kernel code text
back to "read-only" to keep the kernel code text protected.

The issue happens when dynamic ftrace is tested at boot up. The test is
done before the kernel code text has been set to read-only. But the
"before" and "after" calls are still performed. The "after" call will
change the kernel code text to read-only prematurely, and other boot
code that expects this code to be read-write will fail.

The solution is to add a variable that is set when the kernel code text
is expected to be converted to read-only, and make the ftrace "before"
and "after" calls do nothing if that variable is not yet set. This is
similar to the x86 solution from commit 162396309745 ("ftrace, x86:
make kernel text writable only for conversions").
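
A minimal sketch of the guard, mirroring the x86 approach referenced above
(variable name and helpers assumed):

static int kernel_set_to_readonly __read_mostly;

void set_kernel_text_rw(void)
{
	if (!kernel_set_to_readonly)
		return;
	/* ... make the kernel text writable for patching ... */
}

void set_kernel_text_ro(void)
{
	if (!kernel_set_to_readonly)
		return;
	/* ... restore the read-only kernel text permissions ... */
}

void mark_rodata_ro(void)
{
	kernel_set_to_readonly = 1;
	/* ... apply the read-only section permissions ... */
}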

Link: http://lkml.kernel.org/r/20180620212906.24b7b66e@vmware.local.home

Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# b54290e5 08-Mar-2018 Nicolas Pitre <nico@fluxnic.net>

ARM: simplify and fix linker script for TCM

Let's put the TCM stuff in the __init section directly. No need for
a separately freed memory area.

Remove redundant linker sections, as well as comments that were more
confusing than no comments at all. Finally make it XIP compatible by
using LOAD_OFFSET in the section LMA specification.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Chris Brandt <Chris.Brandt@renesas.com>


# a8e53c15 11-Dec-2017 Jinbum Park <jinb.park7@gmail.com>

ARM: 8737/1: mm: dump: add checking for writable and executable

Page mappings with full RWX permissions are a security risk.
x86, arm64 has an option to walk the page tables
and dump any bad pages.

(1404d6f13e47
("arm64: dump: Add checking for writable and exectuable pages"))
Add a similar implementation for arm.

Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Jinbum Park <jinb.park7@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 400eeffa 13-Nov-2017 Philip Derrin <philip@cog.systems>

ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE

Currently, for ARM kernels with CONFIG_ARM_LPAE and
CONFIG_STRICT_KERNEL_RWX enabled, the 2MiB pages mapping the
kernel code and rodata are writable. They are marked read-only in
a software bit (L_PMD_SECT_RDONLY) but the hardware read-only bit
is not set (PMD_SECT_AP2).

For user mappings, the logic that propagates the software bit
to the hardware bit is in set_pmd_at(); but for the kernel,
section_update() writes the PMDs directly, skipping this logic.

The fix is to set PMD_SECT_AP2 for read-only sections in
section_update(), at the same time as L_PMD_SECT_RDONLY.

Fixes: 1e3479225acb ("ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error")
Signed-off-by: Philip Derrin <philip@cog.systems>
Reported-by: Neil Dick <neil@cog.systems>
Tested-by: Neil Dick <neil@cog.systems>
Tested-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 0d9ac162 04-Sep-2017 Vladimir Murzin <vladimir.murzin@arm.com>

ARM: 8696/1: mm: Remove dead code in mem_init()

The code in question checks memory constraints to set the default policy for
overcommit; however, we only support a page size of 4K, thus the condition
always evaluates to false. Remove that dead code.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 11ce4b33 25-Apr-2017 Grygorii Strashko <grygorii.strashko@linaro.org>

ARM: 8672/1: mm: remove tasklist locking from update_sections_early()

The below backtrace can be observed on -rt kernel with
CONFIG_DEBUG_MODULE_RONX (4.9 kernel CONFIG_DEBUG_RODATA) option enabled:

BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:993
in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0
1 lock held by migration/0/14:
#0: (tasklist_lock){+.+...}, at: [<c01183e8>] update_sections_early+0x24/0xdc
irq event stamp: 38
hardirqs last enabled at (37): [<c08f6f7c>] _raw_spin_unlock_irq+0x24/0x68
hardirqs last disabled at (38): [<c01fdfe8>] multi_cpu_stop+0xd8/0x138
softirqs last enabled at (0): [<c01303ec>] copy_process.part.5+0x238/0x1b64
softirqs last disabled at (0): [< (null)>] (null)
Preemption disabled at: [<c01fe244>] cpu_stopper_thread+0x80/0x10c
CPU: 0 PID: 14 Comm: migration/0 Not tainted 4.9.21-rt16-02220-g49e319c #15
Hardware name: Generic DRA74X (Flattened Device Tree)
[<c0112014>] (unwind_backtrace) from [<c010d370>] (show_stack+0x10/0x14)
[<c010d370>] (show_stack) from [<c049beb8>] (dump_stack+0xa8/0xd4)
[<c049beb8>] (dump_stack) from [<c01631a0>] (___might_sleep+0x1bc/0x2ac)
[<c01631a0>] (___might_sleep) from [<c08f7244>] (__rt_spin_lock+0x1c/0x30)
[<c08f7244>] (__rt_spin_lock) from [<c08f77a4>] (rt_read_lock+0x54/0x68)
[<c08f77a4>] (rt_read_lock) from [<c01183e8>] (update_sections_early+0x24/0xdc)
[<c01183e8>] (update_sections_early) from [<c01184b0>] (__fix_kernmem_perms+0x10/0x1c)
[<c01184b0>] (__fix_kernmem_perms) from [<c01fe010>] (multi_cpu_stop+0x100/0x138)
[<c01fe010>] (multi_cpu_stop) from [<c01fe24c>] (cpu_stopper_thread+0x88/0x10c)
[<c01fe24c>] (cpu_stopper_thread) from [<c015edc4>] (smpboot_thread_fn+0x174/0x31c)
[<c015edc4>] (smpboot_thread_fn) from [<c015a988>] (kthread+0xf0/0x108)
[<c015a988>] (kthread) from [<c0108818>] (ret_from_fork+0x14/0x3c)
Freeing unused kernel memory: 1024K (c0d00000 - c0e00000)

The stop_machine() is called with cpus = NULL from fix_kernmem_perms() and
mark_rodata_ro() which means only one CPU will execute
update_sections_early() while all other CPUs will spin and wait. Hence,
it's safe to remove tasklist locking from update_sections_early(). As part
of this change also mark functions which are local to this module as
static.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 29930025 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h>

We are going to split <linux/sched/task.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 3f07c014 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/signal.h>

We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/signal.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# cdcc5fa0 16-Jan-2017 Russell King <rmk+kernel@armlinux.org.uk>

ARM: mm: round the initrd reservation to page boundaries

Round the initrd memblock reservation to page boundaries to prevent
other data sharing the initrd pages. This prevents an allocation
possibly overlapping with the initrd, which would later get trampled
on in free_initrd_mem().
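
A sketch of the page-rounded reservation (illustrative, assuming the
phys_initrd_start/phys_initrd_size pair used elsewhere in this file):

	/*
	 * Round the reservation outwards so no other allocation can share
	 * a page with the initrd.
	 */
	memblock_reserve(round_down(phys_initrd_start, PAGE_SIZE),
			 round_up(phys_initrd_start + phys_initrd_size,
				  PAGE_SIZE) -
			 round_down(phys_initrd_start, PAGE_SIZE));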

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 68b32f36 16-Jan-2017 Russell King <rmk+kernel@armlinux.org.uk>

ARM: mm: clean up initrd initialisation

Rather than repeatedly testing phys_initrd_size to see if the initrd
is still enabled, return from the new function to avoid executing the
remaining initialisation.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 39286248 16-Jan-2017 Russell King <rmk+kernel@armlinux.org.uk>

ARM: mm: move initrd init code out of arm_memblock_init()

Move the ARM initrd initialisation code out of arm_memblock_init() into
its own function, so it can be cleaned up.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# d2ca5f24 29-Jan-2017 Afzal Mohammed <afzal.mohd.ma@gmail.com>

ARM: 8646/1: mmu: decouple VECTORS_BASE from Kconfig

For MMU configurations, VECTORS_BASE is always 0xffff0000, a macro
definition will suffice.

For no-MMU, exception base address is dynamically determined in
subsequent patches. To preserve bisectability, now make the
macro applicable for the no-MMU scenario too.

Thanks to the 0-DAY kernel test infrastructure that found the
bisectability issue. This macro will be restricted to the MMU case once
the exception base address is determined dynamically for no-MMU.

Once exception address is handled dynamically for no-MMU,
VECTORS_BASE can be removed from Kconfig.

Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a09975bf 14-Jan-2017 Florian Fainelli <f.fainelli@gmail.com>

ARM: 8639/1: Define KERNEL_START and KERNEL_END

In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of
common constants: KERNEL_START and KERNEL_END which abstract
CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where
relevant.
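
A sketch of the constants as described (exact placement and linker symbols
assumed):

#ifdef CONFIG_XIP_KERNEL
#define KERNEL_START	_sdata
#else
#define KERNEL_START	_stext
#endif
#define KERNEL_END	_end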

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0f5bf6d0 06-Feb-2017 Laura Abbott <labbott@redhat.com>

arch: Rename CONFIG_DEBUG_RODATA and CONFIG_DEBUG_MODULE_RONX

Both of these options are poorly named. The features they provide are
necessary for system security and should not be considered debug only.
Change the names to CONFIG_STRICT_KERNEL_RWX and
CONFIG_STRICT_MODULE_RWX to better describe what these options do.

Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>


# 64ac2e74 25-Jan-2016 Kees Cook <keescook@chromium.org>

ARM: 8502/1: mm: mark section-aligned portion of rodata NX

When rodata is large enough that it crosses a section boundary after the
kernel text, mark the rest NX. This is as close to full NX of rodata as
we can get without splitting page tables or doing section alignment via
CONFIG_DEBUG_ALIGN_RODATA.

When the config is:

CONFIG_DEBUG_RODATA=y
# CONFIG_DEBUG_ALIGN_RODATA is not set

Before:

---[ Kernel Mapping ]---
0x80000000-0x80100000 1M RW NX SHD
0x80100000-0x80a00000 9M ro x SHD
0x80a00000-0xa0000000 502M RW NX SHD

After:

---[ Kernel Mapping ]---
0x80000000-0x80100000 1M RW NX SHD
0x80100000-0x80700000 6M ro x SHD
0x80700000-0x80a00000 3M ro NX SHD
0x80a00000-0xa0000000 502M RW NX SHD

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 25362dc4 25-Jan-2016 Kees Cook <keescook@chromium.org>

ARM: 8501/1: mm: flip priority of CONFIG_DEBUG_RODATA

The use of CONFIG_DEBUG_RODATA is generally seen as an essential part of
kernel self-protection:
http://www.openwall.com/lists/kernel-hardening/2015/11/30/13
Additionally, its name has grown to mean things beyond just rodata. To
get ARM closer to this, we ought to rearrange the names of the configs
that control how the kernel protects its memory. What was called
CONFIG_ARM_KERNMEM_PERMS is really doing the work that other architectures
call CONFIG_DEBUG_RODATA.

This redefines CONFIG_DEBUG_RODATA to actually do the bulk of the
ROing (and NXing). In the place of the old CONFIG_DEBUG_RODATA, use
CONFIG_DEBUG_ALIGN_RODATA, since that's what the option does: adds
section alignment for making rodata explicitly NX, as arm does not split
the page tables like arm64 does without _ALIGN_RODATA.

Also adds human readable names to the sections so I could more easily
debug my typos, and makes CONFIG_DEBUG_RODATA default "y" for CPU_V7.

Results in /sys/kernel/debug/kernel_page_tables for each config state:

# CONFIG_DEBUG_RODATA is not set
# CONFIG_DEBUG_ALIGN_RODATA is not set

---[ Kernel Mapping ]---
0x80000000-0x80900000 9M RW x SHD
0x80900000-0xa0000000 503M RW NX SHD

CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_ALIGN_RODATA=y

---[ Kernel Mapping ]---
0x80000000-0x80100000 1M RW NX SHD
0x80100000-0x80700000 6M ro x SHD
0x80700000-0x80a00000 3M ro NX SHD
0x80a00000-0xa0000000 502M RW NX SHD

CONFIG_DEBUG_RODATA=y
# CONFIG_DEBUG_ALIGN_RODATA is not set

---[ Kernel Mapping ]---
0x80000000-0x80100000 1M RW NX SHD
0x80100000-0x80a00000 9M ro x SHD
0x80a00000-0xa0000000 502M RW NX SHD

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 09414d00 01-Oct-2015 Ard Biesheuvel <ardb@kernel.org>

ARM: only consider memblocks with NOMAP cleared for linear mapping

Take the new memblock attribute MEMBLOCK_NOMAP into account when
deciding whether a certain region is or should be covered by the
kernel direct mapping.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>


# 08925c2f 30-Nov-2015 Laura Abbott <labbott@redhat.com>

ARM: 8464/1: Update all mm structures with section adjustments

Currently, when updating section permissions to mark areas RO
or NX, the only mm updated is current->mm. This is working off
the assumption that there are no additional mm structures at
the time. This may not always hold true. (Example: calling
modprobe early will trigger a fork/exec). Ensure all mm structures
get updated with the new section information.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
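
As a rough, hedged sketch of the idea (names are illustrative, locking and
the exact traversal are omitted, and the in-tree code differs in detail),
the permission update has to visit every mm rather than only current->mm:

static void update_all_mms(void (*update)(struct mm_struct *mm))
{
	struct task_struct *p;

	/* Kernel mappings live in init_mm. */
	update(&init_mm);

	/* Every user process forked so far (e.g. by an early modprobe)
	 * has its own mm that also needs the new section permissions. */
	for_each_process(p) {
		if (p->mm)
			update(p->mm);
	}
}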


# 24bbd929 01-Jun-2015 Ard Biesheuvel <ardb@kernel.org>

of/fdt: split off FDT self reservation from memreserve processing

This splits off the reservation of the memory occupied by the FDT
binary itself from the processing of the memory reservations it
contains. This is necessary because the physical address of the FDT,
which is needed to perform the reservation, may not be known to the
FDT driver core, i.e., it may be mapped outside the linear direct
mapping, in which case __pa() returns a bogus value.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# d30eae47 14-Apr-2015 Vladimir Murzin <vladimir.murzin@arm.com>

arm: add support for memtest

Add support for memtest command line option.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
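
For reference, a hedged sketch of what the hook amounts to, assuming it is
placed in ARM's bootmem_init() once the lowmem pfn limits (min and max_low
here) are known; the test is then requested at boot with a command line
option such as memtest=4:

	/* Run the generic early memory test over lowmem. */
	early_memtest(PFN_PHYS(min), PFN_PHYS(max_low));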


# 37463be8 26-Mar-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: switch to use the generic show_mem() implementation

Switch ARM to use the generic show_mem() implementation, which displays
the statistics from the mm zone rather than walking the page arrays.

Acked-by: Mel Gorman <mgorman@suse.de>
Tested-by: Gregory Fong <gregory.0xf0@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 99a468d7 16-Jan-2015 George G. Davis <ggdavisiv@gmail.com>

ARM: 8286/1: mm: Fix dma_contiguous_reserve comment

DMA contiguous allocations have been able to use highmem since commit
"95b0e65 ARM: mm: don't limit default CMA region only to low memory"
but a comment still notes the earlier "low memory" limitation. Update
the comment to remove the low memory limitation and fix the
s/contigouos/contiguous/ typo while we're at it.

Signed-off-by: George G. Davis <george_davis@mentor.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1e347922 09-Jan-2015 Victor Kamensky <victor.kamensky@linaro.org>

ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error

In v3.19-rc3 tree when CONFIG_ARM_LPAE and CONFIG_DEBUG_RODATA are enabled
image failed to compile with the following error:

arch/arm/mm/init.c:661:14: error: ‘PMD_SECT_RDONLY’ undeclared here (not in a function)

It seems that '80d6b0c ARM: mm: allow text and rodata sections to be read-only'
and 'ded9477 ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE'
commits crossed. 80d6b0c uses PMD_SECT_RDONLY macro but ded9477 renames it
and uses software bits L_PMD_SECT_RDONLY instead.

The fix is to use L_PMD_SECT_RDONLY instead of PMD_SECT_RDONLY, as ded9477
does in other places.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4ed89f22 28-Oct-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: convert printk(KERN_* to pr_*

Convert many (but not all) printk(KERN_* to pr_* to simplify the code.
We take the opportunity to join some printk lines together so we don't
split the message across several lines, and we also add a few levels
to some messages which were previously missing them.

Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 178c3dfe 19-Oct-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix some printk formats

GCC 4.9 complains if we take the difference of two pointers, and it's
printed with "%d". Fix this by using the proper flag - "t" for
ptrdiff_t.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 80d6b0c2 03-Apr-2014 Kees Cook <keescook@chromium.org>

ARM: mm: allow text and rodata sections to be read-only

This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata
read-only. Additionally, this splits rodata from text so that rodata can
also be NX, which may lead to wasted memory when aligning to SECTION_SIZE.
The read-only areas are made writable during ftrace updates and kexec.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>


# 1e6b4811 03-Apr-2014 Kees Cook <keescook@chromium.org>

ARM: mm: allow non-text sections to be non-executable

Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions
into section-sized areas that can have different permissions. Performs
the NX permission changes during free_initmem, so that init memory can be
reclaimed.

This uses section size instead of PMD size to reduce memory lost to
padding on non-LPAE systems.

Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>


# b615bbbf 13-Aug-2014 Mark Salter <msalter@redhat.com>

arm: use generic fixmap.h

ARM is different from other architectures in that fixmap pages are indexed
with a positive offset from FIXADDR_START. Other architectures index with
a negative offset from FIXADDR_TOP. In order to use the generic fixmap.h
definitions, this patch redefines FIXADDR_TOP to be inclusive of the
usable range. That is, FIXADDR_TOP is the virtual address of the topmost
fixed page. The newly defined FIXADDR_END is the first virtual address
past the fixed mappings.

Signed-off-by: Mark Salter <msalter@redhat.com>
Reviewed-by: Doug Anderson <dianders@chromium.org>
[kees: update for a05e54c103b0 ("ARM: 8031/2: change fixmap ...")]
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Rob Herring <robh@kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>


# 95b0e655 09-Oct-2014 Marek Szyprowski <m.szyprowski@samsung.com>

ARM: mm: don't limit default CMA region only to low memory

DMA-mapping supports CMA regions placed either in low or high memory, so
there is no longer any need to limit the default CMA region to low memory.
The real limit is still defined by the architecture-specific DMA limit.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Daniel Drake <drake@endlessm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 421520ba 25-Sep-2014 Yalin Wang <Yalin.Wang@sonymobile.com>

ARM: 8167/1: extend the reserved memory for initrd to be page aligned

This patch extends the start and end addresses of the initrd to be page
aligned, so that we can free all of its memory, including partially used
head or tail pages. If the start or end address of the initrd is not page
aligned, that page cannot be freed by the free_initrd_mem() function.

Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
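
A minimal sketch of the adjustment described above (variable names assumed;
free_reserved_area() per its post-3.10 signature): widen the initrd range
outward to page boundaries before freeing it, so partially used head and
tail pages are released too:

	start = round_down(start, PAGE_SIZE);
	end   = round_up(end, PAGE_SIZE);
	free_reserved_area((void *)start, (void *)end, -1, "initrd");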


# 0aeb3408 13-Apr-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: remove global cr_no_alignment

cr_no_alignment is really only used by the alignment code. Since we no
longer change the setting of cr_alignment after boot, we can localise
this to alignment.c

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b4b20ad8 13-Apr-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: provide common method to clear bits in CPU control register

Several places open-code this manipulation, let's consolidate this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1c2f87c2 13-Apr-2014 Laura Abbott <lauraa@codeaurora.org>

ARM: 8025/1: Get rid of meminfo

memblock is now fully integrated into the kernel and is the preferred
method for tracking memory. Rather than reinvent the wheel with
meminfo, migrate to using memblock directly instead of meminfo as
an intermediate.

Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d1552ce4 01-Apr-2014 Rob Herring <robh@kernel.org>

of/fdt: move memreserve and dtb memory reservations into core

Move the /memreserve/ processing and dtb memory reservations into
early_init_fdt_scan_reserved_mem. This converts arm, arm64, and powerpc
as they are the only users of early_init_fdt_scan_reserved_mem.

memblock_reserve is safe to call on the same region twice, so the
reservation check for the dtb in powerpc 32-bit reservations is safe to
remove.

Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>


# bcedb5f9 28-Feb-2014 Marek Szyprowski <m.szyprowski@samsung.com>

arm: add support for reserved memory defined by device tree

Enable reserved memory initialization from device tree.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>


# 4c235cb9 13-Jan-2014 Ben Peddell <klightspeed@killerwolves.net>

ARM: 7941/2: Fix incorrect FDT initrd parameter override

Commit 65939301acdb (arm: set initrd_start/initrd_end for fdt scan)
caused the FDT initrd_start and initrd_end to override the
phys_initrd_start and phys_initrd_size set by the initrd= kernel
parameter. With this patch initrd_start and initrd_end will be
overridden if phys_initrd_start and phys_initrd_size are set by the
kernel initrd= parameter.

Fixes: 65939301acdb (arm: set initrd_start/initrd_end for fdt scan)

Signed-off-by: Ben Peddell <klightspeed@killerwolves.net>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# cfb66586 21-Jan-2014 Santosh Shilimkar <santosh.shilimkar@ti.com>

arch/arm/mm/init.c: use memblock apis for early memory allocations

Switch the early memory allocator to memblock interfaces instead of the
bootmem allocator. There is no functional change in behavior from the
bootmem users' point of view.

Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For the
archs which still use bootmem, these new APIs simply fall back to the
existing bootmem APIs.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Yinghai Lu <yinghai@kernel.org>

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# aec6a888 21-Jan-2014 Mel Gorman <mgorman@suse.de>

mm, show_mem: remove SHOW_MEM_FILTER_PAGE_COUNT

Commit 4b59e6c47309 ("mm, show_mem: suppress page counts in
non-blockable contexts") introduced SHOW_MEM_FILTER_PAGE_COUNT to
suppress PFN walks on large memory machines. Commit c78e93630d15 ("mm:
do not walk all of system memory during show_mem") avoided a PFN walk in
the generic show_mem helper which removes the requirement for
SHOW_MEM_FILTER_PAGE_COUNT in that case.

This patch removes PFN walkers from the arch-specific implementations
that report on a per-node or per-zone granularity. ARM and unicore32
still do a PFN walk, as they report memory usage on each bank, which is a
much finer granularity at which the debugging information may still be of
use. As the remaining arches doing PFN walks have relatively small
amounts of memory, this patch simply removes SHOW_MEM_FILTER_PAGE_COUNT.

[akpm@linux-foundation.org: fix parisc]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 6bcac805 07-Jan-2014 Russell King <rmk+kernel@arm.linux.org.uk>

Revert "ARM: 7908/1: mm: Fix the arm_dma_limit calculation"

This reverts commit 787b0d5c1ca7ff24feb6f92e4c7f4410ee7d81a8 since
it is no longer required after 7909/1 was applied, and it causes
build regressions when ARM_PATCH_PHYS_VIRT is disabled and DMA_ZONE
is enabled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 787b0d5c 02-Dec-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: 7908/1: mm: Fix the arm_dma_limit calculation

The current code uses PHYS_OFFSET to calculate arm_dma_limit, which
leads to wrong calculations in cases where PHYS_OFFSET is updated at
runtime.

So fix the code by using __pv_phys_offset instead of PHYS_OFFSET.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 84f452b1 29-Jun-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: mm: Remove bootmem code and switch to NO_BOOTMEM

Now that the dma_mask series is merged and max*pfn has a consistent meaning
on ARM, as on the rest of the architectures, thanks to RMK's mega series,
let's switch the ARM code to NO_BOOTMEM. With the NO_BOOTMEM change we use
the memblock allocator to reserve space for the crash kernel, giving one
less dependency on the nobootmem allocator wrapper.

Tested with both flat memory and sparse (faked) memory models with highmem
enabled.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>


# 8e58caef 23-Nov-2013 Grygorii Strashko <grygorii.strashko@ti.com>

ARM: mm: Don't allow resizing of memblock data until "low" memory is mapped

If allowed by a call to memblock_allow_resize(), the memblock core will
try to allocate additional memory and rearrange its internal data once
more than INIT_MEMBLOCK_REGIONS (128) memory regions of any type have
been allocated. If this happens before low memory is mapped (which is now
done by map_lowmem()), the system will hang, because the memblock core
will try to operate on virtual addresses which aren't mapped yet.

In the ARM code, memblock resizing is allowed (memblock_allow_resize())
from arm_memblock_init(), which is called before map_lowmem(), so this
may lead to the error described above.

Hence, allow Memblock resizing later during init, from bootmem_init()
when all appropriate mappings are ready.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
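
In outline (a hedged sketch, not the full diff), the call simply moves to a
point where lowmem is already mapped:

void __init bootmem_init(void)
{
	/* map_lowmem() has already run by the time bootmem_init() is
	 * called, so memblock may now safely reallocate and grow its
	 * region arrays through mapped (virtual) addresses. */
	memblock_allow_resize();

	/* ... rest of bootmem_init() unchanged ... */
}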


# b3ba41f2 23-Nov-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: mm: Fix max_mapnr with recent max*pfn updates

With commit 26ba47b1 {ARM: 7805/1: mm: change max*pfn to include
the physical offset of memory}, max_pfn already contains
PHYS_PFN_OFFSET, so it shouldn't be taken into account again.

While at it, use the set_max_mapnr() helper.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>


# 26ba47b1 31-Jul-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: 7805/1: mm: change max*pfn to include the physical offset of memory

Most of the kernel code assumes that max*pfn is the maximum pfn because
the physical start of memory is expected to be PFN 0. Since this
assumption is not true on ARM architectures, the meaning of max*pfn
there is the number of memory pages. This was done to keep drivers happy
which make use of these variables to calculate the DMA bounce limit
using dma_mask.

Now that we have an architecture override possibility for the maximum
DMAable pfn, let's make max*pfn mean the maximum pfn on ARM as well.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4dcfa600 08-Jul-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: DMA-API: better handling of DMA masks for coherent allocations

We need to start treating DMA masks as something which is specific to
the bus that the device resides on, otherwise we're going to hit all
sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit
systems, where memory is offset from PFN 0.

In order to start doing this, we convert the DMA mask to a PFN using
the device specific dma_to_pfn() macro. This is the reverse of the
pfn_to_dma() macro which is used to get the DMA address for the device.

This gives us a PFN mask, which we can then check against the PFN
limit of the DMA zone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
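
A hedged sketch of the check this enables (dma_to_pfn() is the bus-specific
reverse of pfn_to_dma(); arm_dma_pfn_limit is assumed here to hold the PFN
limit of the DMA zone):

	/* A mask is only usable for coherent allocations if, converted to
	 * a PFN for this device's bus, it reaches the DMA zone limit. */
	if (dma_to_pfn(dev, mask) < arm_dma_pfn_limit)
		return 0;
	return 1;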


# cebf3e40 11-Oct-2013 Marek Szyprowski <m.szyprowski@samsung.com>

Revert "ARM: init: add support for reserved memory defined by device tree"

This reverts commit 10bcdfb8ba24760f715f0a700c3812747eddddf5. There is
no consensus on the bindings for the reserved memory, so the code for
handing it will be reverted.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>


# 29eb45a9 30-Aug-2013 Rob Herring <rob.herring@calxeda.com>

of: remove early_init_dt_setup_initrd_arch

All arches do essentially the same thing now for
early_init_dt_setup_initrd_arch, so it can now be removed.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Acked-by: Grant Likely <grant.likely@linaro.org>


# 65939301 30-Aug-2013 Rob Herring <rob.herring@calxeda.com>

arm: set initrd_start/initrd_end for fdt scan

In order to unify the initrd scanning for DT across architectures, make
arm set initrd_start and initrd_end instead of the physical addresses.
This is aligned with all other architectures.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Grant Likely <grant.likely@linaro.org>


# 10bcdfb8 09-Aug-2013 Marek Szyprowski <m.szyprowski@samsung.com>

ARM: init: add support for reserved memory defined by device tree

Enable reserved memory initialization from device tree.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Tomasz Figa <t.figa@samsung.com>


# 364230b9 01-Aug-2013 Rob Herring <rob.herring@calxeda.com>

ARM: use phys_addr_t for DMA zone sizes

In order to specify a DMA zone size of 4GB on LPAE systems, the sizes need
to be 64-bit. So make machine_desc.dma_zone_size and arm_dma_zone_size be
phys_addr_t instead of unsigned long.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>


# ff69a4c8 26-Jul-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: constify machine_desc structure uses

struct machine_desc records are defined everywhere as a 'const'
structure, but unfortunately it loses its const-ness through the use of
linker magic - the symbols which surround the section are not declared
const so it becomes possible not to use 'const' for pointers to these
const structures.

Let's fix this oversight - all pointers to these structures should be
marked const too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 374d5c99 01-Jul-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

of: Specify initrd location using 64-bit

On some PAE architectures, the entire range of physical memory could reside
outside the 32-bit limit. These systems need the ability to specify the
initrd location using 64-bit numbers.

This patch globally modifies the early_init_dt_setup_initrd_arch() function to
use 64-bit numbers instead of the current unsigned long.

There has been quite a bit of debate about whether to use u64 or phys_addr_t.
It was concluded to stick to u64 to be consistent with rest of the device
tree code. As summarized by Geert, "The address to load the initrd is decided
by the bootloader/user and set at that point later in time. The dtb should not
be tied to the kernel you are booting"

More details on the discussion can be found here:
https://lkml.org/lkml/2013/6/20/690
https://lkml.org/lkml/2012/9/13/544

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>


# 319e0b4f 09-Jul-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: mm: fix boot on SA1110 Assabet

Commit 83db0384 (mm/ARM: use common help functions to free reserved
pages) broke booting on the Assabet by trying to convert a PFN to
a virtual address using the __va() macro. This macro takes the
physical address, not a PFN. Fix this.

Cc: <stable@vger.kernel.org> # 3.10
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2450c973 03-Jul-2013 Jiang Liu <liuj97@gmail.com>

mm/ARM: prepare for removing num_physpages and simplify mem_init()

Prepare for removing num_physpages and simplify mem_init().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 0c988534 03-Jul-2013 Jiang Liu <liuj97@gmail.com>

mm: concentrate modification of totalram_pages into the mm core

Concentrate the code that modifies totalram_pages into the mm core, so the
arch memory initialization code doesn't need to take care of it. With these
changes applied, only the following functions from the mm core modify the
global variable totalram_pages: free_bootmem_late(), free_all_bootmem(),
free_all_bootmem_node(), adjust_managed_page_count().

With this patch applied, it will be much easier for us to keep
totalram_pages and zone->managed_pages consistent.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# dbe67df4 03-Jul-2013 Jiang Liu <liuj97@gmail.com>

mm: enhance free_reserved_area() to support poisoning memory with zero

Address more review comments from last round of code review.
1) Enhance free_reserved_area() to support poisoning freed memory with
pattern '0'. This could be used to get rid of poison_init_mem()
on ARM64.
2) A previous patch has disabled memory poison for initmem on s390
by mistake, so restore to the original behavior.
3) Remove redundant PAGE_ALIGN() when calling free_reserved_area().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 11199692 03-Jul-2013 Jiang Liu <liuj97@gmail.com>

mm: change signature of free_reserved_area() to fix building warnings

Change signature of free_reserved_area() according to Russell King's
suggestion to fix following build warnings:

arch/arm/mm/init.c: In function 'mem_init':
arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
^
In file included from include/linux/mman.h:4:0,
from arch/arm/mm/init.c:15:
include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
extern unsigned long free_reserved_area(unsigned long start, unsigned long end,

mm/page_alloc.c: In function 'free_reserved_area':
>> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
In file included from arch/mips/include/asm/page.h:49:0,
from include/linux/mmzone.h:20,
from include/linux/gfp.h:4,
from include/linux/mm.h:8,
from mm/page_alloc.c:18:
arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'
mm/page_alloc.c: In function 'free_area_init_nodes':
mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]

Also address some minor code review comments.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# de22cc6e 22-Jun-2012 Vitaly Andrianov <vitalya@ti.com>

ARM: LPAE: use phys_addr_t for initrd location

This patch fixes the initrd setup code to use phys_addr_t instead of assuming
32-bit addressing. Without this we cannot boot on systems where initrd is
located above the 4G physical address limit.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 56bc6286 21-Jun-2012 Vitaly Andrianov <vitalya@ti.com>

ARM: LPAE: use phys_addr_t in free_memmap()

free_memmap() was mistakenly using the unsigned long type to represent
physical addresses. This breaks on PAE systems where memory could be placed
above the 32-bit addressable limit.

This patch fixes this function to properly use phys_addr_t instead.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
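
A hedged sketch of the type fix (start_pg/end_pg are assumed to be the
struct page pointers delimiting the memmap range being freed): the
intermediate physical addresses must be phys_addr_t, not unsigned long,
or they truncate above 4GB on LPAE:

	phys_addr_t pg    = PAGE_ALIGN(__pa(start_pg));
	phys_addr_t pgend = __pa(end_pg) & PAGE_MASK;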


# dd6911ef 29-Apr-2013 Jiang Liu <liuj97@gmail.com>

mm/ARM: use free_highmem_page() to free highmem pages into buddy system

Use helper function free_highmem_page() to free highmem pages into
the buddy system.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 83db0384 29-Apr-2013 Jiang Liu <liuj97@gmail.com>

mm/ARM: use common help functions to free reserved pages

Use common help functions to free reserved pages.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 4b59e6c4 29-Apr-2013 David Rientjes <rientjes@google.com>

mm, show_mem: suppress page counts in non-blockable contexts

On large systems with a lot of memory, walking all RAM to determine page
types may take a half second or even more.

In non-blockable contexts, the page allocator will emit a page allocation
failure warning unless __GFP_NOWARN is specified. In such contexts, irqs
are typically disabled and such a lengthy delay may even result in NMI
watchdog timeouts.

To fix this, suppress the page walk in such contexts when printing the
page allocation failure warning.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 7ac68a4c 12-Aug-2012 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Allow arm_memblock_steal() to remove memory from any RAM region

Allow arm_memblock_steal() to remove memory from any RAM region,
including highmem areas. This allows memory to be stolen from the
very top of declared memory, including highmem areas, rather than
our precious lowmem.

Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 158e8bfe 23-Jun-2012 Alessandro Rubini <rubini@gnudd.com>

ARM: 7432/1: use the new linux/sizes.h

Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
Acked-by: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4986e5c7 05-Jun-2012 Marek Szyprowski <m.szyprowski@samsung.com>

ARM: mm: fix type of the arm_dma_limit global variable

arm_dma_limit stores physical address of maximal address accessible by DMA,
so the phys_addr_t type makes much more sense for it instead of u32. This
patch fixes the following build warning:

arch/arm/mm/init.c:380: warning: comparison of distinct pointer types lacks a cast

Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>


# c7909509 29-Dec-2011 Marek Szyprowski <m.szyprowski@samsung.com>

ARM: integrate CMA with DMA-mapping subsystem

This patch adds support for CMA to dma-mapping subsystem for ARM
architecture. By default a global CMA area is used, but specific devices
are allowed to have their private memory areas if required (they can be
created with dma_declare_contiguous() function during board
initialisation).

Contiguous memory areas reserved for DMA are remapped with 2-level page
tables on boot. Once a buffer is requested, a low memory kernel mapping
is updated to match the requested memory access type.

GFP_ATOMIC allocations are performed from a special pool which is created
early during boot. This way, remapping page attributes is not needed at
allocation time.

CMA has been enabled unconditionally for ARMv6+ systems.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>


# 14904927 26-Apr-2012 Stephen Boyd <sboyd@codeaurora.org>

ARM: 7401/1: mm: Fix section mismatches

WARNING: vmlinux.o(.text+0x111b8): Section mismatch in reference
from the function arm_memory_present() to the function
.init.text:memory_present()
The function arm_memory_present() references
the function __init memory_present().
This is often because arm_memory_present lacks a __init
annotation or the annotation of memory_present is wrong.

WARNING: arch/arm/mm/built-in.o(.text+0x1edc): Section mismatch
in reference from the function alloc_init_pud() to the function
.init.text:alloc_init_section()
The function alloc_init_pud() references
the function __init alloc_init_section().
This is often because alloc_init_pud lacks a __init
annotation or the annotation of alloc_init_section is wrong.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d9277d51 01-Feb-2012 Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

ARM: 7312/1: only show modules in the memory layout for MODULES=y

This line is irritating and wrong when modules are not supported, so
don't show it then.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9e45dd68 04-Feb-2012 Jesper Juhl <jj@chaosbits.net>

ARM: Remove duplicate asm/memblock.h include from arch/arm/mm/init.c

There's no need to include the header twice, so get rid of the
duplicate.

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>


# bc2827d0 19-Jan-2012 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix a section mismatch warning with our use of memblock

Commit 716a3dc2008 (ARM: Add arm_memblock_steal() to allocate memory
away from the kernel) added a function which calls memblock_alloc().
This causes a section conflict:

WARNING: vmlinux.o(.text+0xc614): Section mismatch in reference from the function arm_memblock_steal() to the function .init.text:memblock_alloc()
The function arm_memblock_steal() references
the function __init memblock_alloc().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 716a3dc2 13-Jan-2012 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Add arm_memblock_steal() to allocate memory away from the kernel

Several platforms are now using the memblock_alloc+memblock_free+
memblock_remove trick to obtain memory which won't be mapped in the
kernel's page tables. Most platforms do this (correctly) in the
->reserve callback. However, OMAP has started to call these functions
outside of this callback, and this is extremely unsafe - memory will
not be unmapped, and could well be given out after memblock is no
longer responsible for its management.

So, provide arm_memblock_steal() to perform this function, and ensure
that it panic()s if it is used inappropriately. Convert everyone
over, including OMAP.

As a result, OMAP with OMAP4_ERRATA_I688 enabled will panic on boot
with this change. Mark this option as BROKEN and make it depend on
BROKEN. OMAP needs to be fixed, or 137d105d50 (ARM: OMAP4: Fix
errata i688 with MPU interconnect barriers.) reverted until such
time it can be fixed correctly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
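
A hedged sketch of the trick named above, using the memblock interfaces as
they looked at the time (signatures and the panic-on-misuse check are
simplified away):

phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
{
	phys_addr_t phys;

	/* Grab memory from memblock, then drop it from both the reserved
	 * and memory lists so it is never mapped or handed out again. */
	phys = memblock_alloc(size, align);
	memblock_free(phys, size);
	memblock_remove(phys, size);

	return phys;
}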


# 1aadc056 08-Dec-2011 Tejun Heo <tj@kernel.org>

memblock: s/memblock_analyze()/memblock_allow_resize()/ and update users

The only function of memblock_analyze() is now allowing resize of
memblock region arrays. Rename it to memblock_allow_resize() and
update its users.

* The following users remain the same other than renaming.

arm/mm/init.c::arm_memblock_init()
microblaze/kernel/prom.c::early_init_devtree()
powerpc/kernel/prom.c::early_init_devtree()
openrisc/kernel/prom.c::early_init_devtree()
sh/mm/init.c::paging_init()
sparc/mm/init_64.c::paging_init()
unicore32/mm/init.c::uc32_memblock_init()

* In the following users, analyze was used to update total size which
is no longer necessary.

powerpc/kernel/machine_kexec.c::reserve_crashkernel()
powerpc/kernel/prom.c::early_init_devtree()
powerpc/mm/init_32.c::MMU_init()
powerpc/mm/tlb_nohash.c::__early_init_mmu()
powerpc/platforms/ps3/mm.c::ps3_mm_add_memory()
powerpc/platforms/embedded6xx/wii.c::wii_memory_fixups()
sh/kernel/machine_kexec.c::reserve_crashkernel()

* x86/kernel/e820.c::memblock_x86_fill() was directly setting
memblock_can_resize before populating memblock and calling analyze
afterwards. Call memblock_allow_resize() before start populating.

memblock_can_resize is now static inside memblock.c.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "H. Peter Anvin" <hpa@zytor.com>


# fe091c20 08-Dec-2011 Tejun Heo <tj@kernel.org>

memblock: Kill memblock_init()

memblock_init() initializes arrays for regions and memblock itself;
however, all these can be done with struct initializers and
memblock_init() can be removed. This patch kills memblock_init() and
initializes memblock with struct initializer.

The only difference is that the first dummy entries don't have .nid
set to MAX_NUMNODES initially. This doesn't cause any behavior
difference.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "H. Peter Anvin" <hpa@zytor.com>


# 1c16d242 08-Dec-2011 Tejun Heo <tj@kernel.org>

memblock: Fix include breakages caused by 24aa07882b

24aa07882b (memblock, x86: Replace memblock_x86_reserve/free_range()
with generic ones) removed arch/x86/include/asm/memblock.h and dropped
its inclusion from include/linux/memblock.h which breaks other
architectures which depended on the generic memblock.h pulling in the
arch specific one.

However, the proper fix isn't adding back the asm inclusion. memblock
doesn't have any arch-dependent part and doesn't need an arch-specific
header file, and the asm/memblock.h files are either practically empty or
contain mostly unrelated arch-specific stuff.

* In microblaze, sh, powerpc, sparc and openrisc, asm/memblock.h is
either empty or just contains unused MEMBLOCK_DBG() macro. Remove
them.

* In arm and unicore32, asm/memblock.h contains arch specific stuff.
Include it directly from its users. It might be a good idea to
rename the header file to avoid confusion.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "H. Peter Anvin" <hpa@zytor.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>


# 55a8173c 18-Sep-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: move initialization of the high_memory variable earlier

Some upcoming changes must know the VMALLOC_START value, which is based
on high_memory, before bootmem_init() is called.

The best location to set it is in sanity_check_meminfo() where the needed
computation is already done, and in the non MMU case it is trivial to do
now that the meminfo array is already sorted at that point.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 27a3f0e9 25-Aug-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: sort the meminfo array earlier

The meminfo array has to be sorted before sanity_check_meminfo() in
arch/arm/mm/mmu.c is called for it to work properly. This also allows
for a simpler find_limits() in arch/arm/mm/init.c.

The sort is moved to arch/arm/kernel/setup.c because that's where the
meminfo array is populated. Eventually this should be improved upon
to make the memory bank parser a bit more robust against problems
such as overlapping memory ranges.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# dc28094b 31-Jul-2011 Paul Gortmaker <paul.gortmaker@windriver.com>

arm: Add export.h to ARM specific files as required.

These files all make use of one of the EXPORT_SYMBOL variants
or the THIS_MODULE macro. So they will need <linux/export.h>

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>


# 002ea9eef 29-Sep-2011 Linus Walleij <linus.walleij@linaro.org>

ARM: 7113/1: mm: Align bank start to MAX_ORDER_NR_PAGES

The VM subsystem assumes that there are valid memmap entries from
the bank start aligned to MAX_ORDER_NR_PAGES.

On the Ux500 we have a lot of mem=N arguments on the commandline
triggering this bug several times over and causing kernel
oops messages.

Cc: stable@kernel.org
Cc: Michael Bohan <mbohan@codeaurora.org>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Johan Palsson <johan.palsson@stericsson.com>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# fb492c91 30-Aug-2011 Mark Rutland <mark.rutland@arm.com>

ARM: 7067/1: mm: keep significant bits in pfn_valid

When ARCH_HAS_HOLES_MEMORYMODEL is selected, pfn_valid calls
memblock_is_memory to test validity of a pfn:

> memblock_is_memory(pfn << PAGE_SHIFT);

On LPAE systems this cuts off the top bits, as the shift occurs before
the value is promoted to a phys_addr_t.

This patch replaces the shift with a call to __pfn_to_phys (which casts
pfn to phys_addr_t before shifting), preventing the loss of significant
bits.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
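
In code form, the fix described above is simply:

	/* Promote the pfn to phys_addr_t before shifting, preserving the
	 * high bits of the physical address on LPAE. */
	memblock_is_memory(__pfn_to_phys(pfn));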


# 99d1717d 02-Aug-2011 Jon Medhurst <tixy@yxit.co.uk>

ARM: Add init_consistent_dma_size()

This function can be called during boot to increase the size of the consistent
DMA region above its default value of 2MB. It must be called before the memory
allocator is initialised, i.e. before any core_initcall.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# bf912d99 04-Aug-2011 Jamie Iles <jamie@jamieiles.com>

ARM: 7010/1: mm: fix invalid loop for poison_init_mem

poison_init_mem() used a loop of:

while ((count = count - 4))

which has 2 problems - an off-by-one error so that we do one less word
than we should, and the other is that if count == 0 then we loop forever
and poison too much. On a platform with HAVE_TCM=y but nothing in the
TCM's, this caused corruption and the platform failed to boot.

Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
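
An illustrative, userspace restatement of the fix (the kernel version is
equivalent in shape): the old loop "while ((count = count - 4))" decrements
before testing, so it poisons one word too few and, when count == 0, wraps
around and never terminates:

#include <stddef.h>
#include <stdint.h>

static void poison_init_mem(void *s, size_t count)
{
	uint32_t *p = s;

	/* Test first, then consume: exactly count bytes (assumed to be a
	 * multiple of 4) are written, and count == 0 is a no-op. */
	for (; count != 0; count -= 4)
		*p++ = 0xe7fddef0;
}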


# fb89fcfb 18-Jul-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: ARM_DMA_ZONE_SIZE is no more

One less dependency on mach/memory.h.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 65032018 18-Jul-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: change ARM_DMA_ZONE_SIZE into a variable

Having this value defined at compile time prevents multiple machines with
conflicting definitions from coexisting. Move it to a variable in preparation
for having a per machine value selected at run time. This is relevant
only when CONFIG_ZONE_DMA is selected.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 022ae537 08-Jul-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: dma: replace ISA_DMA_THRESHOLD with a variable

ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get
rid of it from ARM by replacing it with arm_dma_zone_mask. Move
dma_supported() and dma_set_mask() out of line, and have
dma_supported() check this new variable instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
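
A hedged sketch of the out-of-line check described above:

int dma_supported(struct device *dev, u64 mask)
{
	/* A mask is usable only if it covers at least the platform's
	 * DMA-able region recorded in arm_dma_zone_mask. */
	if (mask < (u64)arm_dma_zone_mask)
		return 0;
	return 1;
}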


# 54d52573 07-Jul-2011 Stephen Boyd <sboyd@codeaurora.org>

ARM: 6996/1: mm: Poison freed init memory

Poisoning __init marked memory can be useful when tracking down
obscure memory corruption bugs. Therefore, poison init memory
with 0xe7fddef0 to catch bugs earlier. The poison value is an
undefined instruction in ARM mode and branch to an undefined
instruction in Thumb mode.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3835d69a 06-Jul-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: vmlinux.lds: move init sections between text and data sections

Place the init sections between the text and data sections. This
means all code is grouped together at the beginning of the kernel
image, and all data is at the end of the image. This avoids problems
with the 24-bit branch instruction relocations becoming invalid with
large initramfs images.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 8f4b8c76 10-Jun-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: initrd: disable initrds outside of memory

We can't cope with initrds outside of memory, so check that the
initrd is within memory declared to the kernel before using
it. Otherwise we're likely to OOPS during boot.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 45f6d7e0 02-Jun-2011 Rabin Vincent <rabin@rab.in>

ARM: 6951/1: include .bss in memory layout information

The "Virtual memory kernel layout" message at startup already prints
.text and .data. Print .bss too.

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7b7bf499 19-May-2011 Will Deacon <will@kernel.org>

ARM: 6913/1: sparsemem: allow pfn_valid to be overridden when using SPARSEMEM

In commit eb33575c ("[ARM] Double check memmap is actually valid with a
memmap has unexpected holes V2"), a new function, memmap_valid_within,
was introduced to mmzone.h so that holes in the memmap which pass
pfn_valid in SPARSEMEM configurations can be detected and avoided.

The fix to this problem checks that the pfn <-> page linkages are
correct by calculating the page for the pfn and then checking that
page_to_pfn on that page returns the original pfn. Unfortunately, in
SPARSEMEM configurations, this results in reading from the page flags to
determine the correct section. Since the memmap here has been freed,
junk is read from memory and the check is no longer robust.

In the best case, reading from /proc/pagetypeinfo will give you the
wrong answer. In the worst case, you get SEGVs, Kernel OOPses and hung
CPUs. Furthermore, ioremap implementations that use pfn_valid to
disallow the remapping of normal memory will break.

This patch allows architectures to provide their own pfn_valid function
instead of using the default implementation used by sparsemem. The
architecture-specific version is aware of the memmap state and will
return false when passed a pfn for a freed page within a valid section.

Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7bf02ea2 24-May-2011 David Rientjes <rientjes@google.com>

arch, mm: filter disallowed nodes from arch specific show_mem functions

Architectures that implement their own show_mem() function did not pass
the filter argument to show_free_areas() to appropriately avoid emitting
the state of nodes that are disallowed in the current context. This patch
now passes the filter argument to show_free_areas() so those nodes are now
avoided.

This patch also removes the show_free_areas() wrapper around
__show_free_areas() and converts existing callers to pass an empty filter.

ia64 emits additional information for each node, so skip_free_areas_zone()
must be made global to filter disallowed nodes and it is converted to use
a nid argument rather than a zone for this use case.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: James Bottomley <jejb@parisc-linux.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 93c02ab4 28-Apr-2011 Grant Likely <grant.likely@secretlab.ca>

arm/dt: probe for platforms via the device tree

If a dtb is passed to the kernel then the kernel needs to iterate
through compiled-in mdescs looking for one that matches and move the
dtb data to a safe location before it gets accidentally overwritten by
the kernel.

This patch creates a new function, setup_machine_fdt() which is
analogous to the setup_machine_atags() created in the previous patch.
It does all the early setup needed to use a device tree machine
description.

v5: - Print warning when neither dtb nor atags are passed to the kernel
- Fix bug in setting of __machine_arch_type to the selected machine,
not just the last machine in the list.
Reported-by: Tixy <tixy@yxit.co.uk>
- Copy command line directly into boot_command_line instead of cmd_line
v4: - Dump some output when a matching machine_desc cannot be found
v3: - Added processing of reserved list.
- Backed out the v2 change that copied instead of reserved the
dtb. dtb is reserved again and the real problem was fixed by
using alloc_bootmem_align() for early allocation of RAM for
unflattening the tree.
- Moved cmd_line and initrd changes to earlier patch to make series
bisectable.
v2: Changed to save the dtb by copying into an allocated buffer.
- Since the dtb will very likely be passed in the first 16k of ram
where the interrupt vectors live, memblock_reserve() is
insufficient to protect the dtb data.

[based on work originally written by Jeremy Kerr <jeremy.kerr@canonical.com>]
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>


# 9af386c8 28-Apr-2011 Will Deacon <will@kernel.org>

ARM: 6890/1: memmap: only free allocated memmap entries when using SPARSEMEM

The SPARSEMEM code allocates memmap entries only for sections which are
present (i.e. those which contain some valid memory). The membank checks
in free_unused_memmap do not take this into account and can incorrectly
attempt to free memory which is not allocated, resulting in a BUG() in
the bootmem code.

However, if memory is configured as follows:

|<----section---->|<----hole---->|<----section---->|
+--------+--------+--------------+--------+--------+
| bank 0 | unused | | bank 1 | unused |
+--------+--------+--------------+--------+--------+

where a bank only occupies part of a section, the memmap allocated for
the remainder of the section *can* be freed.

This patch modifies the checks in free_unused_memmap so that only valid
memmap entries are considered for removal.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# be20902b 11-May-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: use ARM_DMA_ZONE_SIZE to adjust the zone sizes

Rather than each platform providing its own function to adjust the
zone sizes, use the new ARM_DMA_ZONE_SIZE definition to perform this
adjustment. This ensures that the actual DMA zone size and the
ISA_DMA_THRESHOLD/MAX_DMA_ADDRESS definitions are consistent with
each other, and moves this complexity out of the platform code.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9eb8f674 28-Apr-2011 Grant Likely <grant.likely@secretlab.ca>

arm/dt: Allow CONFIG_OF on ARM

Add some basic empty infrastructure for DT support on ARM.

v5: - Fix off-by-one error in size calculation of initrd
- Stop mucking with cmd_line, and load command line from dt into
boot_command_line instead which matches the behaviour of ATAGS booting
v3: - moved cmd_line export and initrd setup to this patch to make the
series bisectable.
- switched to alloc_bootmem_align() for allocation when
unflattening the device tree. memblock_alloc() was not the
right interface.

Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>


# b2b755b5 24-Mar-2011 David Rientjes <rientjes@google.com>

lib, arch: add filter argument to show_mem and fix private implementations

Commit ddd588b5dd55 ("oom: suppress nodes that are not allowed from
meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which
resulted in build warnings on all architectures that implement their own
versions of show_mem():

lib/lib.a(show_mem.o): In function `show_mem':
show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

The fix is to remove __show_mem() and add its argument to show_mem() in
all implementations to prevent this breakage.

Architectures that implement their own show_mem() actually don't do
anything with the argument yet, but they could be made to filter nodes
that aren't allowed in the current context in the future just like the
generic implementation.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: James Bottomley <James.Bottomley@hansenpartnership.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# cae6292b 14-Feb-2011 Will Deacon <will@kernel.org>

ARM: 6672/1: LPAE: use phys_addr_t instead of unsigned long in mapping functions

The unsigned long datatype is not sufficient for mapping physical addresses
>= 4GB.

This patch ensures that the phys_addr_t datatype is used to represent physical
addresses when converting from a PFN.
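
Illustrative fragment only (example_pfn_to_phys is a made-up name): the
point of the change is that the PFN must be widened to phys_addr_t
before the shift, otherwise physical addresses at or above 4GB are
truncated on 32-bit ARM.

static inline phys_addr_t example_pfn_to_phys(unsigned long pfn)
{
	/* cast before shifting so bits above 32 are not lost */
	return (phys_addr_t)pfn << PAGE_SHIFT;
}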

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b0a2679d 30-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: initrd: disable initrd if passed address overlaps reserved region

Disable the initrd if the passed address already overlaps the reserved
region. This avoids oopses on Netwinders when NeTTrom tells the kernel
that an initrd is located at mem+4MB, but this overlaps the BSS,
resulting in the kernels in-use BSS being freed.

This should be applied to v2.6.37-stable.

Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f25b4b4c 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: memblock: move meminfo into find_limits directly

bootmem_init() no longer makes several uses of the membank
information, so move this into the one remaining called function
which does use it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# df4f14c7 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: memblock: convert free_highpages() to use memblock

Free the high pages using the memblock memory lists - and more
importantly, exclude any memblock allocations in highmem from the
free'd memory.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d0e775af 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: move freeing of highmem pages out of mem_init()

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 47ea3c15 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: memblock: convert memory detail printing to use memblock

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a801d276 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: memblock: use memblock to free memory into arm_bootmem_init()

Switch arm_bootmem_init() to use memblock instead of membank to
free memory into bootmem.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a2c54d2a 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: memblock: use memblock when initializing memory allocators

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7dc50ec7 27-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: ensure membank array is always sorted

This was missing from the noMMU code, so there was the possibility
of things not working as expected if out of order memory information
was passed.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c7fc2de0 12-Oct-2010 Yinghai Lu <yinghai@kernel.org>

memblock, bootmem: Round pfn properly for memory and reserved regions

We need to round memory regions correctly -- specifically, we need to
round reserved regions in the more expansive direction (lower limit
down, upper limit up) whereas usable memory regions need to be rounded
in the more restrictive direction (lower limit up, upper limit down).

This introduces two sets of inlines:

memblock_region_memory_base_pfn()
memblock_region_memory_end_pfn()
memblock_region_reserved_base_pfn()
memblock_region_reserved_end_pfn()

Although they are antisymmetric (and therefore are technically
duplicates), the use of the different inlines explicitly documents the
programmer's intention.

The lack of proper rounding caused a bug on ARM, which was then found
to also affect other architectures.
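
A stand-alone sketch of the rounding convention described above,
assuming 4KB pages; the example_* names are illustrative and not the
actual memblock inlines.

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

typedef uint64_t phys_addr_t;

/* usable memory: round inward (base up, end down) */
static unsigned long example_memory_base_pfn(phys_addr_t base)
{
	return (unsigned long)((base + PAGE_SIZE - 1) >> PAGE_SHIFT);
}

static unsigned long example_memory_end_pfn(phys_addr_t end)
{
	return (unsigned long)(end >> PAGE_SHIFT);
}

/* reserved regions: round outward (base down, end up) */
static unsigned long example_reserved_base_pfn(phys_addr_t base)
{
	return (unsigned long)(base >> PAGE_SHIFT);
}

static unsigned long example_reserved_end_pfn(phys_addr_t end)
{
	return (unsigned long)((end + PAGE_SIZE - 1) >> PAGE_SHIFT);
}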

Reported-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CB4CDFD.4020105@kernel.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>


# 842eab40 01-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: vmlinux.lds: Refer to start of .data using _sdata rather than _data

Use _sdata as the start of the data section, rather than _data.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7c996361 16-Sep-2010 Yinghai Lu <yinghai@kernel.org>

arm, memblock: Fix the sparsemem build

Stephen Rothwell reported this build failure:

arch/arm/mm/init.c: In function 'arm_memory_present':
arch/arm/mm/init.c:260: warning: ISO C90 forbids mixed declarations and code

Caused by commit 719c1514f2 ("memblock/arm: Use new accessors")
which forgot a closing brace on a new for_each_memblock() in
arm_memory_present().

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Russell King <linux@arm.linux.org.uk>
LKML-Reference: <4C91C544.5050907@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 719c1514 04-Aug-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org>

memblock/arm: Use new accessors

CC: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# 5e6f6aa1 03-Aug-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org>

memblock/arm: pfn_valid uses memblock_is_memory()

The implementation is much the same. There is a -small- added
overhead from the extra function call and the address shift.

If that becomes a concern, I suppose we could actually have memblock
itself expose a memblock_pfn_valid() which then ARM can use directly
with an appropriate #define...
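
A hedged sketch of the approach described (not the exact ARM
implementation): pfn_valid() shifts the PFN into a physical address and
asks memblock whether that address lies in registered memory.

int pfn_valid(unsigned long pfn)
{
	return memblock_is_memory((phys_addr_t)pfn << PAGE_SHIFT);
}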

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# e3239ff9 03-Aug-2010 Benjamin Herrenschmidt <benh@kernel.crashing.org>

memblock: Rename memblock_region to memblock_type and memblock_property to memblock_region

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# 1dbd30e9 12-Jul-2010 Linus Walleij <linus.walleij@stericsson.com>

ARM: 6225/1: make TCM allocation static and common for all archs

This changes the TCM handling so that a fixed area is reserved at
0xfffe0000-0xfffeffff for TCM. This area is used by XScale, but
XScale does not have TCM, so the mechanisms are mutually exclusive.

This change is needed to make TCM detection more dynamic while
still being able to compile code into it, and is a must for the
unified ARM goals: the current TCM allocation at different places
in memory for each machine would be a nightmare if you want to
compile a single image for more than one machine with TCM, so it
has to be nailed down in one place.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a9deb137 01-Jul-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Remove unnecessary call to find_limits()

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e07b9e08 30-Jun-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: LMB: convert pfn_valid to use LMB

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# eda2e5dc 30-Jun-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: LMB: Convert arm_memory_present() to use LMB memory information

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 8d717a52 22-May-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Convert platform reservations to use LMB rather than bootmem

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2778f620 09-Jul-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: initial LMB trial

Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 07d2a5c7 12-Jul-2010 Linus Walleij <linus.walleij@stericsson.com>

ARM: 6224/1: print TCM whereabouts in init message

If TCM is in use, we should display it in the virtual memory
layout along with everything else.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 98c672cf 22-May-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Move platform memory reservations out of generic code

Move the platform specific bootmem memory reservations out of
arch/arm/mm/mmu.c into their respective platform files.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b65b4781 22-May-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Remove 'node' argument from arch_adjust_zones()

Since we no longer support discontigmem, node is always zero, so
remove this argument.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# be370302 07-May-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Remove DISCONTIGMEM support

Everything should now be using sparsemem rather than discontigmem, so
remove the code supporting discontigmem from ARM.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3260e529 14-Jun-2010 Michael Bohan <mbohan@codeaurora.org>

arm: mm: Don't free prohibited memmap entries

The VM subsystem assumes that there are valid memmap entries up to
the bank end aligned to MAX_ORDER_NR_PAGES. It will try to read
these page structs, and so we cannot free any memmap entries that
it may inspect.
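
Sketch only, with illustrative variable names: the memmap freeing path
keeps entries up to the bank end rounded up to MAX_ORDER_NR_PAGES,
since the VM may inspect page structs up to that boundary.

	unsigned long aligned_end = ALIGN(bank_end_pfn, MAX_ORDER_NR_PAGES);

	/* only memmap beyond the aligned bank end may be freed */
	if (next_bank_start_pfn > aligned_end)
		free_memmap(aligned_end, next_bank_start_pfn);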

Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>


# ea208f64 26-May-2010 Linus Walleij <linus.walleij@stericsson.com>

ARM: 6144/1: TCM memory freeing bug

This fixes a bug in mm/init.c when freeing the TCM compile memory:
it was being referred to as a char *, which is incorrect because that
dereferences the pointer and feeds in the value at the location
instead of its address. Change it to a plain char and use & to
reference it.
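
A small illustration of the bug class described above; the symbol name
and helper are hypothetical. A linker-provided marker must be declared
as a plain char and referenced with &; declaring it as char * makes the
same expression read whatever bytes happen to sit at that address.

extern char __tcm_region_end;	/* hypothetical linker symbol */

static inline unsigned long tcm_end_address(void)
{
	/* take the symbol's address; a char * declaration would instead
	 * yield the value stored at that location */
	return (unsigned long)&__tcm_region_end;
}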

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a2227120 25-Mar-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Move memory mapping into mmu.c

Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ceb683d3 25-Mar-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Ensure meminfo is sorted prior to sanity_check_meminfo

Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ea056df7 04-May-2010 Catalin Marinas <catalin.marinas@arm.com>

ARM: 6093/1: Fix kernel memory printing for sparsemem

The show_mem() and mem_init() functions assume that the page map is
contiguous and calculate the start and end page of a bank using (map +
pfn). This fails with SPARSEMEM where pfn_to_page() must be used.
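
A hedged fragment contrasting the two forms (bank_first_page and
bank_start_pfn are illustrative names): with SPARSEMEM the page struct
must be looked up via pfn_to_page() rather than by offsetting a base
map pointer.

static struct page *bank_first_page(unsigned long bank_start_pfn)
{
	/* correct with SPARSEMEM; "mem_map + bank_start_pfn" is only
	 * valid when the page map is contiguous */
	return pfn_to_page(bank_start_pfn);
}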

Tested-by: Will Deacon <Will.Deacon@arm.com>
Tested-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5a0e3ad6 24-Mar-2010 Tejun Heo <tj@kernel.org>

include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, and for others adding it to an
implementation .h or embedding .c file was more appropriate. This step added
inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>


# a1839272 07-Feb-2010 Fenkart/Bostandzhyan <andreas.fenkart@streamunlimited.com>

ARM: 5929/1: Add checks to detect overlap of memory regions.

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>

Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c931b4f6 07-Feb-2010 Fenkart/Bostandzhyan <andreas.fenkart@streamunlimited.com>

ARM: 5928/1: Change type of VMALLOC_END to unsigned long.

Makes it consistent with VMALLOC_START

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a7bd08c8 07-Feb-2010 Fenkart/Bostandzhyan <andreas.fenkart@streamunlimited.com>

ARM: 5927/1: Make delimiters of DMA area globally visible.

Adds DMA area to 'virtual memory map' startup message

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# db9ef1af 07-Feb-2010 Fenkart/Bostandzhyan <andreas.fenkart@streamunlimited.com>

ARM: 5926/1: Add "Virtual kernel memory..." printout.

Code based on parisc and x86_32.

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2b0d8c25 11-Jan-2010 Jeremy Kerr <jeremy.kerr@canonical.com>

ARM: 5880/1: arm: use generic infrastructure for early params

The ARM setup code includes its own parser for early params, there's
also one in the generic init code.

This patch removes __early_init (and related code) from
arch/arm/kernel/setup.c, and changes users to the generic early_init
macro instead.

The generic macro takes a char * argument, rather than char **, so we
need to update the parser functions a little.

Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4b529401 08-Jan-2010 Andreas Fenkart <andreas.fenkart@streamunlimited.com>

mm: make totalhigh_pages unsigned long

Makes it consistent with the extern declaration, used when CONFIG_HIGHMEM
is set. Removes redundant casts in printout messages.

Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 657e12fd 29-Oct-2009 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Fix sparsemem with SPARSEMEM_EXTREME enabled

When SPARSEMEM_EXTREME is enabled, memory_present() wants to use bootmem
to allocate data structures. However, we call memory_present() after
declaring memory to bootmem, but before we've reserved areas.

This leads to sparsemem data structures being overwritten later in the
kernel's initialization (when slab initializes.)

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3257f43d 06-Oct-2009 Catalin Marinas <catalin.marinas@arm.com>

ARM: 5747/1: Fix the start_pg value in free_memmap()

If sparsemem is enabled, the start_pfn passed to the free_memmap()
function corresponds to an area of memory not known to the kernel and
pfn_to_page returns a wrong value. The (start_pfn - 1), however, is
known to the kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# cc013a88 21-Sep-2009 Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>

arches: drop superfluous casts in nr_free_pages() callers

Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'. This made the casts to 'unsigned long' in most callers superfluous,
so remove them.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# bc581770 15-Sep-2009 Linus Walleij <linus.walleij@stericsson.com>

ARM: 5580/2: ARM TCM (Tightly-Coupled Memory) support v3

This adds the TCM interface to Linux, when active, it will
detect and report TCM memories and sizes early in boot if
present, introduce generic TCM memory handling, provide a
generic TCM memory pool and select TCM memory for the U300
platform.

See the Documentation/arm/tcm.txt for documentation.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b7cfda9f 07-Sep-2009 Russell King <rmk@dyn-67.arm.linux.org.uk>

ARM: Fix pfn_valid() for sparse memory

On OMAP platforms, some people want to segment the memory
between the kernel and a separate application such that there is a hole
in the middle of the memory as far as Linux is concerned. However,
they want to be able to mmap() the hole.

This currently causes problems, because update_mmu_cache() thinks that
there are valid struct pages for the "hole". Fix this by making
pfn_valid() slightly more expensive, by checking whether the PFN is
contained within the meminfo array.
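
A rough sketch of the check described above, using the meminfo bank
fields of that era (start/size in bytes); treat the details as
illustrative rather than the exact patch.

int pfn_valid(unsigned long pfn)
{
	int i;

	for (i = 0; i < meminfo.nr_banks; i++) {
		unsigned long start = meminfo.bank[i].start >> PAGE_SHIFT;
		unsigned long end = start + (meminfo.bank[i].size >> PAGE_SHIFT);

		if (pfn >= start && pfn < end)
			return 1;
	}
	return 0;
}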

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>


# dde5828f 14-Aug-2009 Russell King <rmk@dyn-67.arm.linux.org.uk>

ARM: Fix broken highmem support

Currently, highmem is selectable, and you can request an increased
vmalloc area. However, none of this has any effect on the memory
layout since a patch in the highmem series was accidentally dropped.
Moreover, even if you did want highmem, all memory would still be
registered as lowmem, possibly resulting in overflow of the available
virtual mapping space.

The highmem boundary is determined by the highest allowed beginning
of the vmalloc area, which depends on its configurable minimum size
(see commit 60296c71f6c5063e3c1f1d2619ca0b60940162e7 for details on
this).

We should create mappings and initialize bootmem only for low memory,
while the zone allocator must still be told about highmem.

Currently, memory nodes which are completely located in high memory
are not supported. This is not a huge limitation since systems
relying on highmem support are unlikely to have discontiguous memory
with large holes.

[ A similar patch was meant to be merged before commit 5f0fbf9ecaf3
and be available in Linux v2.6.30, however some git rebase screw-up
of mine dropped the first commit of the series, and that goofage
escaped testing somehow as well. -- Nico ]

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>


# 3835f6cb 17-Sep-2008 Nicolas Pitre <nico@cam.org>

[ARM] mem_init(): make highmem pages available for use

Signed-off-by: Nicolas Pitre <nico@marvell.com>


# 1522ac3e 12-Mar-2009 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Fix virtual to physical translation macro corner cases

The current use of these macros works well when the conversion is
entirely linear. In this case, we can be assured that the following
holds true:

__va(p + s) - s = __va(p)

However, this is not always the case, especially when there is a
non-linear conversion (eg, when there is a 3.5GB hole in memory.)
In this case, if 's' is the size of the region (eg, PAGE_SIZE) and
'p' is the final page, the above is most definitely not true.

So, we must ensure that __va() and __pa() are only used with valid
kernel direct mapped RAM addresses. This patch tweaks the code
to achieve this.

Tested-by: Charles Moschel <fred99@carolina.rr.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 37efe642 01-Dec-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] use asm/sections.h

Update to use the asm/sections.h header rather than declaring these
symbols ourselves. Change __data_start to _data to conform with the
naming found within asm/sections.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6db015e4 17-Sep-2008 Nicolas Pitre <nico@cam.org>

[ARM] mem_init() cleanups

Make free_area() arguments pfn based, and return number of freed pages.
This will simplify highmem initialization later.

Also, codepages, datapages and initpages are actually codesize, datasize
and initsize.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4b5f32ce 06-Oct-2008 Nicolas Pitre <nico@cam.org>

[ARM] rationalize memory configuration code some more

Currently there are two instances of struct meminfo: one in
kernel/setup.c marked __initdata, and another in mm/init.c with
permanent storage. Let's keep only the later to directly populate
the permanent version from arm_add_memory().

Also move common validation tests between the MMU and non-MMU cases
into arm_add_memory() to remove some duplication. Protection against
overflowing the membank array is also moved in there in order to cover
the kernel cmdline parsing path as well.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b7a69ac3 01-Oct-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] mm: finish ARM sparsemem support

... including some comments about the ordering required to bring
sparsemem up. You have to repeatedly guess, test, reguess, try
again and again to work out what the right ordering is. Many
hours later...

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d2a38ef9 01-Oct-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] mm: provide helpers for accessing membanks

Provide helpers for getting physical addresses or pfns from the
meminfo array, and use them. Move for_each_nodebank() to
asm/setup.h alongside the meminfo structure definition.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# eca73214 30-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] mm: move validation of membanks to one place

The newly introduced sanity_check_meminfo() function should be
used to collect all validation of the meminfo array, which we
have in bootmem_init(). Move it there.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5ed5fdf5 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] clean up a load of old declarations

... some of which are now in linux/*.h headers.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 012d1f4a 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] move initrd code from kernel/setup.c to mm/init.c

This quietens some sparse warnings about phys_initrd_start and
phys_initrd_size.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b962a286 30-Jul-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] initrd: claim initrd memory exclusively

Claim the initrd memory exclusively, and order other memory
reservations beforehand. This allows us to determine whether
the initrd memory was overwritten, and disable the initrd in
that case.

This avoids a 'bad page state' bug.

Tested-by: Ralph Siemsen <ralphs@netwinder.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9109fb7b 23-Jul-2008 Johannes Weiner <hannes@saeurebad.de>

mm: drop unneeded pgdat argument from free_area_init_node()

free_area_init_node() gets passed in the node id as well as the node
descriptor. This is redundant as the function can trivially get the node
descriptor itself by means of NODE_DATA() and the node's id.

I checked all the users and NODE_DATA() seems to be usable everywhere
from where this function is called.

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# c48b2e90 18-Apr-2008 Johannes Weiner <hannes@saeurebad.de>

[ARM] remove redundant display of free swap space in show_mem()

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 72a7fe39 07-Feb-2008 Bernhard Walle <bwalle@suse.de>

Introduce flags for reserve_bootmem()

This patchset adds a flags variable to reserve_bootmem() and uses the
BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions
between the crashkernel area and already used memory.

This patch:

Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE.
If that flag is set, the function returns with -EBUSY if the memory already
has been reserved in the past. This is to avoid conflicts.

Because that code runs before SMP initialisation, there's no race condition
inside reserve_bootmem_core().
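
A usage sketch based on the description above (the crash_base and
crash_size variables, and the caller shape, are illustrative): a caller
that must not collide with existing reservations passes
BOOTMEM_EXCLUSIVE and treats -EBUSY as "pick another range".

	if (reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE) < 0) {
		/* -EBUSY: the range was already reserved, so refuse to
		 * place the crashkernel here instead of silently colliding */
	}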

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix powerpc build]
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: <linux-arch@vger.kernel.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 0f0a00be 03-Mar-2007 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Remove needless linux/ptrace.h includes

Lots of places in arch/arm were needlessly including linux/ptrace.h,
presumably because we used to pass a struct pt_regs to interrupt
handlers. Now that we don't, all these ptrace.h includes are
redundant.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 204ecae4 16-Jan-2007 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Fix show_mem() for discontigmem

show_mem() was assuming incorrectly that the mem_map for any
node started at PFN 0. This is obviously wrong; fix it to
take account of node_start_pfn.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5e709827 06-Nov-2006 Ray Lehtiniemi <rayl@com.rmk.(none)>

[ARM] 3927/1: Allow show_mem() to work with holes in memory map.

show_mem() was not correctly handling holes in the memory
map. It was treating the freed sections of the map as
though they contained valid struct page entries. This
could cause incorrect debugging output or even a kernel
panic.

This patch keeps the struct meminfo around after system
initialization so that show_mem() can use it when
scanning memory. show_mem() now walks over each bank
of each online node, rather than assuming that each node
contains a single contiguous bank.

Signed-off-by: Ray Lehtiniemi <rayl@mail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d111e8f9 27-Sep-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Split ARM MM initialisation for !mmu

Move the MMU specific code from init.c into mmu.c, and add nommu
fixups to nommu.c

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 456335e2 27-Sep-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Separate page table manipulation code from bootmem initialisation

nommu does not require the page table manipulation code in the
bootmem initialisation paths. Move this into separate inline
functions.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4052ebb7 22-Sep-2006 George G. Davis <davis_g@mvista.com>

[ARM] 3859/1: Fix devicemaps_init() XIP_KERNEL odd 1MiB XIP_PHYS_ADDR translation error

The ARM XIP_KERNEL map created in devicemaps_init() is wrong.
The map.pfn is rounded down to an even 1MiB section boundary
which results in va/pa translation errors when XIP_PHYS_ADDR
starts on an odd 1MiB boundary and this causes the kernel to
hang. This patch fixes ARM XIP_KERNEL translation errors for
the odd 1MiB XIP_PHYS_ADDR boundary case.

Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1b2e2b73 21-Aug-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Cleanup arch/arm/mm a little

Move top_pmd into arch/arm/mm/mm.h - nothing outside arch/arm/mm
references it.

Move the repeated definition of TOP_PTE into mm/mm.h, as well as
a few function prototypes.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6ab3d562 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de>

Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>


# 888e7bf1 24-Jun-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Remove TABLE_SIZE, and several unused function prototypes

TABLE_SIZE is never used in arch/arm/mm/init.c. create_memmap_holes(),
memtable_init(), and setup_io_desc() no longer exist in the kernel.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 74d02fb9 04-Apr-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Move FLUSH_BASE macros to asm/arch/memory.h

FLUSH_BASE must be visible to arch/arm/mm/init.c in order for the
memory region to be setup. Move these definitions from
asm-arm/arch-*/hardware.h into asm-arm/arch-*/memory.h where mm
stuff can see them.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7835e98b 22-Mar-2006 Nick Piggin <npiggin@suse.de>

[PATCH] remove set_page_count() outside mm/

set_page_count usage outside mm/ is limited to setting the refcount to 1.
Remove set_page_count from outside mm/, and replace those users with
init_page_count() and set_page_refcounted().

This allows more debug checking, and tighter control on how code is allowed
to play around with page->_count.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# f78f1043 04-Mar-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Remove unnecessary asm/hardware.h includes

asm/hardware.h is not required for the majority of processor support
files, ioremap support, mm initialisation, acorn IO support, nor
the debug code (which picks up its machine specific includes via
debug-macros.S)

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 02b30839 17-Nov-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Fix some corner cases in new mm initialisation

Document that the VMALLOC_END address must be aligned to 2MB since
it must align with a PGD boundary.

Allocate the vectors page early so that the flush_cache_all() later
will cause any dirty cache lines in the direct mapping to be safely
written back.

Move the flush_cache_all() to the second local_flush_cache_tlb() and
remove the now redundant first local_flush_cache_tlb().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6bf7bd69 02-Nov-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Fix mm initialisation with write buffered write allocate caches

It seems that without the extra tlb flush, we may end up faulting
during the early kernel initialisation because the TLB can't see
the updated page tables.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1a47ebc0 29-Oct-2005 Nicolas Pitre <nico@cam.org>

[ARM] 3059/1: fix XIP support

Patch from Nicolas Pitre

Fix XIP support after recent bootmem code refactoring.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9769c246 28-Oct-2005 Deepak Saxena <dsaxena@plexity.net>

[ARM] 3016/1: Replace map_desc.physical with map_desc.pfn

Patch from Deepak Saxena

Convert map_desc.physical to map_desc.pfn. This allows us to add
support for 36-bit addressed physical devices in the static maps
without having to resort to u64 variables.

Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 90072059 28-Oct-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Re-jig bootmem initialisation

Make ARM independent of the way bootmem operates internally. We
now map each node as we initialise it, and place the bootmem bitmap
inside each node, rather than all in the first node.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 564c90aa 28-Jun-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM SMP: Use local_flush_tlb* where we really want to be local

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a013053d 27-Jun-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: Move memmap freeing into init.c

It doesn't make sense for this to be in mm-armv.c now that 26-bit
ARM support is no longer integrated into arch/arm.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 92a8cbed 22-Jun-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: Remove explicit page-alignments in memory init

Since the meminfo.bank[] array contains page-aligned start/size, we
no longer need to explicitly round up/down the addresses when
converting to PFNs.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d42ce812 16-Apr-2005 akpm@osdl.org <akpm@osdl.org>

[PATCH] arm: add comment about max_low_pfn/max_pfn



From: Russell King <rmk+lkml@arm.linux.org.uk>

Oddly, max_low_pfn/max_pfn end up being the number of pages in the system,
rather than the maximum PFN on ARM. This doesn't seem to cause any problems,
so just add a note about it.

Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 1da177e4 16-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org>

Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!