/linux-master/arch/arm/common/
vlock.S
  8: * This algorithm is described in more detail in
  57: ldrb r2, [r0, #VLOCK_OWNER_OFFSET] @ check whether lock is held

/linux-master/arch/arm/crypto/
poly1305-armv4.pl
  18: # (*) this is for -march=armv6, i.e. with bunch of ldrb loading data;
  510: cmp r3,#-1 @ is value impossible?
  593: @ Result of multiplication of n-bit number by m-bit number is
  594: @ n+m bits wide. However! Even though 2^n is a n+1-bit number,
  595: @ m-bit number multiplied by 2^n is still n+m bits wide.
  597: @ Sum of two n-bit numbers is n+1 bits wide, sum of three - n+2,
  598: @ and so is sum of four. Sum of 2^m n-m-bit numbers and n-bit
  599: @ one is n+1 bits wide.
  606: @ of 52-bit numbers as long as the amount of addends is not a
  612: @ 5 * (2^52 + 2*2^32 + 2^12), which in turn is smalle [all...]
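The width bounds quoted at lines 593-599 can be restated compactly; the
formulation below is an illustration and not text from the source file:

    \begin{align*}
    x < 2^n,\; y < 2^m \;&\Rightarrow\; xy < 2^{n+m}
        && \text{(an $n$-bit times an $m$-bit value fits in $n+m$ bits)}\\
    x < 2^m \;&\Rightarrow\; x \cdot 2^n < 2^{n+m}
        && \text{($2^n$ is an $(n{+}1)$-bit value, yet the product still fits)}\\
    x,\, y < 2^n \;&\Rightarrow\; x + y < 2^{n+1}
        && \text{(two $n$-bit addends need $n+1$ bits; four need $n+2$)}\\
    x_1, \dots, x_{2^m} < 2^{n-m},\; y < 2^n \;&\Rightarrow\; \textstyle\sum_i x_i + y < 2^{n+1}
        && \text{($2^m$ addends of $n-m$ bits plus one $n$-bit value)}
    \end{align*}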
sha1-armv4-large.S
  4: @ This code is taken from the OpenSSL project but the author (Andy Polyakov)
  5: @ has relicensed it under the GPLv2. Therefore this program is free software;
  14: @ project. The module is, however, dual licensed under OpenSSL and
  41: @ performance is affected by prologue and epilogue overhead,
  43: @ [**] While each Thumb instruction is twice smaller, they are not as
  47: @ the same job in Thumb, therefore the code is never twice as
  49: @ [***] which is also ~35% better than compiler generated code. Dual-

/linux-master/arch/arm/include/asm/
cmpxchg.h
  11: * On the StrongARM, "swp" is terminally broken since it bypasses the
  13: * since we use normal loads/stores as well, this is really bad.
  19: * We choose (1) since its the "easiest" to achieve here and is not
  78: #error SMP is not supported on this platform
  110: /* Cause a link-time error, the xchg() size is not supported */
  129: #error "SMP is not supported on this platform"
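Line 110 points at a common kernel idiom: an unsupported xchg() size calls an
extern function that is never defined, so the mistake surfaces as an
unresolved symbol at link time rather than silently at run time. A minimal
sketch of the idiom, with an illustrative helper name rather than the
kernel's actual one:

    #include <stddef.h>

    /* Never defined anywhere on purpose: if a call to it survives
     * optimization, the final link fails with an undefined-reference error. */
    extern void __xchg_called_with_bad_size(void);

    static inline unsigned long xchg_sketch(volatile void *ptr, unsigned long x,
                                            size_t size)
    {
        switch (size) {
        case 1:
            return __atomic_exchange_n((volatile unsigned char *)ptr, x,
                                       __ATOMIC_SEQ_CST);
        case 4:
            return __atomic_exchange_n((volatile unsigned int *)ptr, x,
                                       __ATOMIC_SEQ_CST);
        default:
            /* size is normally a compile-time constant, so for valid sizes
             * this branch is dead code and the call disappears. */
            __xchg_called_with_bad_size();
            return 0;
        }
    }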
tls.h
  15: @ TLS register update is deferred until return to user space
  91: * is merely redundant.
  127: /* Since TPIDRURW is fully context-switched (unlike TPIDRURO),
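For context on line 127: TPIDRURW (CP15 c13, c0, 2) is the user read/write
thread ID register and TPIDRURO (c13, c0, 3) the user read-only one. A rough
sketch of how such registers are accessed from C on ARMv7, assuming an ARM
toolchain; these helpers are illustrative and not the kernel's accessors:

    #include <stdint.h>

    /* Read TPIDRURO, the user read-only thread ID register (CP15 c13, c0, 3). */
    static inline uint32_t read_tpidruro(void)
    {
        uint32_t val;

        __asm__ volatile("mrc p15, 0, %0, c13, c0, 3" : "=r" (val));
        return val;
    }

    /* Write TPIDRURW, the user read/write thread ID register (CP15 c13, c0, 2). */
    static inline void write_tpidrurw(uint32_t val)
    {
        __asm__ volatile("mcr p15, 0, %0, c13, c0, 2" : : "r" (val));
    }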

/linux-master/arch/arm/include/debug/
brcmstb.S
  49: ldr \rv, [\rp] @ linked addr is stored there
  56: mov \rv, #0 @ yes; record init is done
  152: * In the kernel proper, this data is located in arch/arm/mach-bcm/brcmstb.c.
  153: * That's because this header is included from multiple files, and we only
  155: * assumes it's running using physical addresses. This is true when this file
  156: * is included from head.o, but not when included from debug.o. So we need
  162: * even though it's really data, since .data is discarded from the
  163: * decompressor. Luckily, .text is writeable in the decompressor, unless
  164: * CONFIG_ZBOOT_ROM. That dependency is handled in arch/arm/Kconfig.debug.
omap2plus.S
  37: cmp \rp, #0 @ is port configured?
sa1100.S
  27: @ see if Ser3 is active
  32: @ if Ser3 is inactive, then try Ser1
  37: @ if Ser1 is inactive, then try Ser2
tegra.S
  40: * Must be section-aligned since a section mapping is used early on.
  69: ldr \rv, [\rp] @ linked addr is stored there
  76: mov \rv, #0 @ yes; record init is done
  157: * In the kernel proper, this data is located in arch/arm/mach-tegra/tegra.c.
  158: * That's because this header is included from multiple files, and we only
  160: * assumes it's running using physical addresses. This is true when this file
  161: * is included from head.o, but not when included from debug.o. So we need
  167: * .text even though it's really data, since .data is discarded from the
  168: * decompressor. Luckily, .text is writeable in the decompressor, unless
  169: * CONFIG_ZBOOT_ROM. That dependency is handle [all...]
zynq.S
  50: bne 1002b @ wait if FIFO is full

/linux-master/arch/arm/kernel/
entry-armv.S
  11: * Note: there is a StrongARM bug in the STMIA rn, {regs}^ instruction
  241: 1: bl preempt_schedule_irq @ irq en/disable is done inside
  249: @ Correct the PC such that it is pointing at the instruction
  252: @ subtract 4. Otherwise, it is Thumb, and the PC will be
  264: @ If a kprobe is about to simulate a "stmdb sp..." instruction,
  308: @ Taking a FIQ in abort mode is similar to taking a FIQ in SVC mode
  320: mrs r2, spsr @ Save spsr_abt, abort is now safe
  333: mov lr, r1 @ Restore lr_abt, abort is unsafe
  346: * EABI note: sp_svc is always 64-bit aligned here, so should PT_REGS_SIZE
  411: @ Make sure our user space atomic helper is restarte [all...]
entry-common.S
  35: * This is the fast syscall return path. We do as little as possible here,
  60: * or rseq debug is enabled. As we will need to call out to some C functions,
  127: * This is how we return from a fork.
  200: * value to determine if it is an EABI or an old ABI call.
  218: tst saved_psr, #PSR_T_BIT @ this is SPSR from save_user_regs
  236: * If the swi argument is zero, this is an EABI call and we do nothing.
  238: * If this is an old ABI call, get the syscall number into scno and
  294: * This is the really slow path. We're going to be doing
  349: * This is th [all...]
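On lines 200 and 236-238: an EABI syscall uses "swi 0" and passes the number
in a register, while an old-ABI (OABI) call encodes the number in the low 24
bits of the SWI instruction itself, offset by the historical base 0x900000.
A simplified sketch of that decode, not the kernel's actual entry code:

    #include <stdint.h>

    #define OABI_SYSCALL_BASE 0x900000u  /* historical base added to the number */

    /* Recover the OABI syscall number from the SWI instruction, which sits one
     * word before the return address; its low 24 bits are the immediate field.
     * A zero immediate means the call follows the EABI convention instead. */
    static inline uint32_t oabi_syscall_number(uintptr_t return_pc)
    {
        uint32_t insn = *(const uint32_t *)(return_pc - 4);
        uint32_t imm24 = insn & 0x00ffffffu;

        return imm24 ? imm24 - OABI_SYSCALL_BASE : 0;
    }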
entry-ftrace.S
  148: mov r3, #0 @ regs is NULL
entry-header.S
  29: * The SWI code relies on the fact that R0 is at the bottom of the stack
  65: * If exception is taken while in user mode, SP_main is
  66: * empty. Otherwise, SP_main is aligned to 64 bit automatically
  70: * exception handler and it may BUG if this is not the case. Interrupts
  73: * v7m_exception_slow_exit is used when returning from SVC or PendSV.
  79: @ exception happend that is either on the main or the process stack.
  99: @ r8-r12 is OK.
  108: @ another 32-bit value is included in the stack.
  137: @ an exception frame is alway [all...]
head-common.S
  26: * If CONFIG_DEBUG_LL is set we try to print out something about the error
  34: * that the ATAG_CORE marker is first and present. If CONFIG_OF_FLATTREE
  35: * is selected, then it will also accept a dtb pointer. Future revisions
  49: ldr r6, =OF_DT_MAGIC @ is it a DTB?
  53: cmp r5, #ATAG_CORE_SIZE @ is first tag ATAG_CORE?
  61: 2: ret lr @ atag/dtb pointer is ok
  68: * The following fragment of code is executed with the MMU on in MMU mode,
  69: * and uses absolute addresses; this is not position independent.
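Lines 49 and 53 show the two accepted boot-data formats being probed: a
flattened device tree (recognised by its big-endian magic 0xd00dfeed) or an
ATAG list whose first tag is ATAG_CORE (0x54410001). A rough C equivalent of
that check; the function name and struct are illustrative, and the kernel
additionally validates alignment and tag size:

    #include <stdint.h>

    #define FDT_MAGIC 0xd00dfeedu   /* device-tree blob magic, stored big-endian */
    #define ATAG_CORE 0x54410001u   /* tag id that must open an ATAG list */

    struct atag_header {
        uint32_t size;              /* tag length in words, including the header */
        uint32_t tag;               /* tag id */
    };

    static int looks_like_boot_data(const void *p)
    {
        uint32_t first = *(const uint32_t *)p;

        /* Is it a DTB?  The magic is big-endian, so accept either byte
         * order of the raw word. */
        if (first == FDT_MAGIC || __builtin_bswap32(first) == FDT_MAGIC)
            return 1;

        /* Otherwise: is the first tag ATAG_CORE? */
        return ((const struct atag_header *)p)->tag == ATAG_CORE;
    }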
head-nommu.S
  27: * This is normally called from the decompressor code. The requirements
  45: THUMB( badr r9, 1f ) @ Kernel is always entered in ARM.
  46: THUMB( bx r9 ) @ If this is a Thumb-2 kernel,
  88: * the processor type - there is no need to check the machine type
  280: /* Determine whether the D/I-side memory map is unified. We set the
  433: /* There is no alias for n == 4 */
  472: /* Determine whether the D/I-side memory map is unified. We set the
head.S
  27: * swapper_pg_dir is the virtual address of the initial page table.
  29: * make sure that KERNEL_RAM_VADDR is correctly set. Currently, we expect
  41: #define PMD_ENTRY_ORDER 3 /* PMD entry size is 2^PMD_ENTRY_ORDER */
  76: * This is normally called from the decompressor code. The requirements
  80: * This code is mostly position independent, so if you link the kernel at
  88: * circumstances, zImage) is for.
  96: THUMB( badr r9, 1f ) @ Kernel is always entered in ARM.
  97: THUMB( bx r9 ) @ If this is a Thumb-2 kernel,
  165: mov r8, r4, lsr #12 @ TTBR1 is swapper_pg_dir pfn
  206: * entry is 6 [all...]
hyp-stub.S
  19: * This is not in .bss, because we set it sufficiently early that the boot-time
  95: * is modified, it can't compare equal to the CPSR mode field any
  102: retne lr @ give up if the CPU is not in HYP mode
  111: * various coprocessor accesses. This is done when we switch to SVC
  132: @ Make sure NS-SVC is initialised appropriately
  187: bx lr @ The boot CPU mode is left in r4.
  212: * __hyp_set_vectors is only used when ZIMAGE must bounce between HYP
phys2virt.S
  23: * PHYS_OFFSET and PAGE_OFFSET, which is assumed to be
  79: @ second halfword of the opcode (the 16-bit immediate is encoded
  89: @ to a MVN instruction if the offset is negative. In this case, we
  91: @ it is MOVW or MOV/MVN, and to perform the MOV to MVN patching if
  92: @ needed. The encoding of the immediate is rather complex for values
  114: bne 0f @ skip to MOVW handling (Z flag is clear)
  119: @ Z flag is set
  157: @ immediate field of the opcode, which is emitted with the correct
  158: @ rotation value. (The effective value of the immediate is imm12<7:0>
  172: @ instruction if the offset is negativ [all...]
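Line 158 refers to the ARM "modified immediate" format: imm12<7:0> holds an
8-bit value and imm12<11:8> a rotation count, the effective value being the
8-bit field rotated right by twice that count. A small decode sketch for
illustration; the patching code itself works the other way, emitting an imm12
with the right rotation:

    #include <stdint.h>

    /* Effective value of an ARM data-processing "modified immediate":
     * bits <7:0> rotated right by 2 * bits <11:8>. */
    static inline uint32_t arm_decode_imm12(uint32_t imm12)
    {
        uint32_t val = imm12 & 0xff;
        uint32_t rot = 2 * ((imm12 >> 8) & 0xf);

        return rot ? (val >> rot) | (val << (32 - rot)) : val;
    }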
unwind.c
  9: * An ARM EABI version of gcc is required to generate the unwind
  19: #warning ARM unwind is known to compile only with EABI compilers.
  62: * 0 : save overhead if there is plenty of stack remaining.
  100: * origin = first entry with positive offset (or stop if there is no such entry)
  131: * As addr_prel31 is relative to start an offset is needed to
  232: /* Before poping a register check whether it is feasible or not */
  319: * loop until we get an instruction byte where bit 7 is not set.
  322: * max is 0xfffffff: that will cover a vsp increment of 1073742336, hence
  323: * it is sufficien [all...]
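Lines 319-323 describe a ULEB128-style operand: each byte contributes 7 bits
and the loop stops at the first byte whose bit 7 is clear. Capping the result
at 0xfffffff does bound the "vsp += 0x204 + (value << 2)" unwind operation at
0x204 + (0xfffffff << 2) = 1073742336, the figure quoted above. A generic
decoder sketch, not the kernel's unwinder:

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a little-endian base-128 value: 7 data bits per byte, terminated
     * by the first byte with bit 7 clear.  Returns the value and, via
     * *consumed, how many bytes were used. */
    static uint32_t decode_uleb128(const uint8_t *p, size_t *consumed)
    {
        uint32_t value = 0;
        unsigned int shift = 0;
        size_t n = 0;
        uint8_t byte;

        do {
            byte = p[n++];
            value |= (uint32_t)(byte & 0x7f) << shift;
            shift += 7;
        } while (byte & 0x80);      /* bit 7 set: more bytes follow */

        *consumed = n;
        return value;
    }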

/linux-master/arch/arm/lib/
backtrace-clang.S
  13: /* fp is 0 or stack frame */
  69: * The frame for c_backtrace has pointers to the code of dump_stack. This is
  70: * why the frame of c_backtrace is used to for the pc calculation of
  71: * dump_stack. This is why we must move back a frame to print dump_stack.
  77: * To print locals we must know where the function start is. If we read the
  101: movs frame, r0 @ if frame pointer is zero
  116: * sv_fp is the stack frame with the locals for the current considered
  119: * sv_pc is the saved lr frame the frame above. This is a pointer to a code
  120: * address within the current considered function, but it is no [all...]
backtrace.S
  14: @ fp is 0 or stack frame
  30: movs frame, r0 @ if frame pointer is zero
  104: @ frame is below the previous frame, accept it as long as it
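Both backtrace implementations above walk a chain of frame pointers, stopping
on a zero fp and sanity-checking each step. The sketch below is a deliberately
generic frame-pointer walk in C; the frame_record layout and the validity
checks are simplifications, not the APCS or clang ARM frame format:

    #include <stdio.h>

    /* Simplified frame record: saved caller frame pointer and return address. */
    struct frame_record {
        struct frame_record *fp;    /* caller's frame */
        void *lr;                   /* return address into the caller */
    };

    static void walk_frames(struct frame_record *frame, void *stack_top)
    {
        while (frame) {             /* fp == 0 terminates the chain */
            printf("pc %p\n", frame->lr);

            struct frame_record *next = frame->fp;

            /* Each caller frame must sit above the current one (the stack
             * grows down) and stay within the stack; otherwise give up. */
            if (next <= frame || (void *)next >= stack_top)
                break;
            frame = next;
        }
    }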
copy_template.S
  25: * 'ptr' to the next word. The 'abort' argument is used for fixup tables.
  32: * The'abort' argument is used for fixup tables.
  38: * "al" condition is assumed by default.
  44: * Same as their ldr* counterparts, but data is stored to 'ptr' location
  55: * Unwind annotation macro is corresponding for 'enter' macro.
  91: CALGN( sbcsne r4, r3, r2 ) @ C is always set here
  117: addne pc, pc, ip @ C is always clear here
  190: CALGN( sbcsne r4, ip, r2 ) @ C is always set here
  258: * If a fixup handler is required then those macros must surround it.
  259: * It is assume [all...]
csumpartialcopygeneric.S
  41: reteq lr @ dst is now 32bit aligned
  49: ret lr @ dst is now 32bit aligned
  99: * Ok, the dst pointer is now 32bit aligned, and we know
memmove.S
  21: * If the memory regions don't overlap, we simply branch to memcpy which is
  22: * normally a bit faster. Otherwise the copy is done going downwards. This
  23: * is a transposition of the code from copy_template.S but with the copy
  56: CALGN( sbcsne r4, ip, r2 ) @ C is always set here
  59: CALGN( subs r2, r2, ip ) @ C is set here
  80: addne pc, pc, ip @ C is always clear here
  140: CALGN( sbcsne r4, ip, r2 ) @ C is always set here
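The rule described at lines 21-22, spelled out in C: if the destination does
not overlap the tail of the source, a forward copy (memcpy) is safe; otherwise
copy from the end downwards so no source byte is clobbered before it is read.
A simplified sketch of the idea, not the optimized assembly:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static void *memmove_sketch(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;

        /* If dst is below src, or at least n bytes above it, a forward copy
         * never reads a byte it has already overwritten. */
        if ((uintptr_t)d - (uintptr_t)s >= n)
            return memcpy(dst, src, n);

        while (n--)                 /* overlapping with dst above src: go down */
            d[n] = s[n];
        return dst;
    }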