Searched refs:we (Results 26 - 50 of 144) sorted by relevance


/linux-master/arch/arm/lib/
backtrace-clang.S:31 * Clang does not store pc or sp in function prologues so we don't know exactly
35 * frame's lr as the current frame's lr, but we can't trace the most recent
39 * If the call instruction was a bl we can look at the callers branch
41 * but in cases such as calling function pointers we cannot. In this case,
45 * Unfortunately due to the stack frame layout we can't dump r0 - r3, but these
71 * dump_stack. This is why we must move back a frame to print dump_stack.
74 * to fully print dump_stack's frame we need both the frame for dump_stack (for
77 * To print locals we must know where the function start is. If we read the
78 * function prologue opcodes we ca
[all...]
io-readsw-armv3.S:30 teq r2, #0 @ do we have to check for the zero len?
io-writesb.S:44 teq r2, #0 @ do we have to check for the zero len?
io-readsl.S:11 teq r2, #0 @ do we have to check for the zero len?
backtrace.S:29 stmfd sp!, {r4 - r9, lr} @ Save an extra register so we have a location...
31 beq no_frame @ we have no stack frames
/linux-master/arch/arm/kernel/
hyp-stub.S:16 * For the kernel proper, we need to find out the CPU boot mode long after
17 * boot, so we need to store it in a writable variable.
19 * This is not in .bss, because we set it sufficiently early that the boot-time
57 * The zImage loader only runs on one CPU, so we don't bother with mult-CPU
93 * Once we have given up on one CPU, we do not try to install the
111 * various coprocessor accesses. This is done when we switch to SVC
118 @ Disable all traps, so we don't get any nasty surprise
phys2virt.S:78 @ instructions, where we need to patch in the offset into the
87 @ In the LPAE case, we also need to patch in the high word of the
89 @ to a MVN instruction if the offset is negative. In this case, we
93 @ of i:imm3 != 0b0000, but fortunately, we never need more than 8 lower
131 @ in BE8, we load data in BE, but instructions still in LE
156 @ instructions, where we need to patch in the offset into the
169 @ In the LPAE case, we use a MOVW instruction to carry the low offset
/linux-master/arch/arm/boot/compressed/
head-sa1100.S:31 @ memory to be sure we hit the same cache.
/linux-master/arch/alpha/lib/
ev6-memcpy.S:57 and $16, 7, $1 # E : Are we at 0mod8 yet?
62 cmple $18, 127, $1 # E : Can we unroll the loop?
80 cmple $18, 127, $1 # E : Can we go through the unrolled loop?
196 bne $1, $aligndest # U : go until we are aligned.
ev67-strncat.S:9 * This differs slightly from the semantics in libc in that we never write
63 cmplt $27, $24, $5 # E : did we fill the buffer completely?
83 1: /* Here we must clear the first byte of the next DST word */
memchr.S:45 # search til the end of the address space, we will overflow
46 # below when we find the address of the last byte. Given
47 # that we will never have a 56-bit address space, cropping
ev6-memchr.S:43 # search til the end of the address space, we will overflow
44 # below when we find the address of the last byte. Given
45 # that we will never have a 56-bit address space, cropping
90 * Since we are guaranteed to have set one of the bits, we don't
csum_ipv6_magic.S:26 extqh $18,1,$4 # e0 : byte swap len & proto while we wait
strrchr.S:41 bne t1, $eos # .. e1 : did we already hit the terminator?
52 beq t1, $loop # .. e1 : if we havnt seen a null, loop
ev67-strrchr.S:63 bne t1, $eos # U : did we already hit the terminator?
79 beq t1, $loop # U : if we havnt seen a null, loop
stxcpy.S:69 On entry to this basic block we have:
77 if we're not going to need it. */
112 beq t0, stxcpy_aligned # .. e1 : ... if we wont need it
147 mskql t6, a1, t6 # e0 : mask out the bits we have
152 /* Finally, we've got all the stupid leading edge cases taken care
153 of and we can set up to enter the main loop. */
168 prevent nastiness from accumulating in the very thing we want
190 /* We've found a zero somewhere in the source word we just read.
191 If it resides in the lower half, we have one (probably partial)
192 word to write out, and if it resides in the upper half, we
[all...]
/linux-master/arch/arm/nwfpe/
entry.S:69 mov sl, sp @ we access the registers via 'sl'
119 @ If yes, we need to call the relevant co-processor handler.
142 @ Test if we need to give access to iWMMXt coprocessors
/linux-master/drivers/gpu/drm/i915/gvt/
gtt.c:169 * table type, as we know l4 root entry doesn't have a PSE bit,
439 * it also works, so we need to treat root pointer entry
1070 struct intel_vgpu *vgpu, struct intel_gvt_gtt_entry *we)
1077 GEM_BUG_ON(!gtt_type_is_pt(get_next_pt_type(we->type)));
1079 if (we->type == GTT_TYPE_PPGTT_PDE_ENTRY)
1080 ips = vgpu_ips_enabled(vgpu) && ops->test_ips(we);
1082 spt = intel_vgpu_find_spt_by_gfn(vgpu, ops->get_pfn(we));
1098 int type = get_next_pt_type(we->type);
1105 spt = ppgtt_alloc_spt_gfn(vgpu, type, ops->get_pfn(we), ips);
1129 spt, we
1069 ppgtt_populate_spt_by_guest_entry( struct intel_vgpu *vgpu, struct intel_gvt_gtt_entry *we) argument
1365 ppgtt_handle_guest_entry_add(struct intel_vgpu_ppgtt_spt *spt, struct intel_gvt_gtt_entry *we, unsigned long index) argument
1570 ppgtt_handle_guest_write_page_table( struct intel_vgpu_ppgtt_spt *spt, struct intel_gvt_gtt_entry *we, unsigned long index) argument
1695 struct intel_gvt_gtt_entry we, se; local
[all...]
/linux-master/arch/sparc/kernel/
wof.S:79 /* Compute what the new %wim will be if we save the
106 * Basically if we are here, this means that we trapped
125 rett %t_npc ! we are done
137 * %glob_tmp. We cannot set the new %wim first because we
139 * a trap (traps are off, we'd get a watchdog wheee)...
173 /* The users stack is ok and we can safely save it at
183 /* We have spilled successfully, and we have properly stored
200 * how to proceed based upon whether we came from kernel mode
201 * or not. If we cam
[all...]
/linux-master/arch/arc/lib/
strchr-700.S:6 /* ARC700 has a relatively long pipeline and branch prediction, so we want
41 breq r7,0,.Loop ; For speed, we want this branch to be unaligned.
45 breq r12,0,.Loop ; For speed, we want this branch to be unaligned.
/linux-master/arch/powerpc/boot/
div64.S:29 cntlzw r0,r5 # we are shifting the dividend right
39 divwu r11,r11,r9 # then we divide the shifted quantities
/linux-master/arch/x86/boot/
pmjump.S:56 # The 32-bit code sets up its own stack, but this way we do have
/linux-master/arch/powerpc/kernel/vdso/
sigtramp64.S:303 # Do we really need to describe the frame at this point? ie. will
304 # we ever have some call chain that returns somewhere past the addi?
/linux-master/arch/arc/kernel/
entry.S:159 ; Do the Sys Call as we normally would.
251 ; If ret to user mode do we need to handle signals, schedule() et al.
265 ; (and we don't end up missing a NEED_RESCHED/SIGPENDING due to an
293 ; However, here we need to explicitly save callee regs because
307 ; Ideally we want to discard the Callee reg above, however if this was
/linux-master/arch/m68k/ifpsp060/src/
pfpsp.S:665 # maybe we can avoid the subroutine call.
710 # maybe we can make these entry points ONLY the OVFL entry points of each routine.
716 # we must save the default result regardless of whether
721 # the exceptional possibilities we have left ourselves with are ONLY overflow
738 # overflow is enabled AND overflow, of course, occurred. so, we have the EXOP
756 # we must jump to real_inex().
914 # now, what's left that's not dyadic is fsincos. we can distinguish it
953 # maybe we can make these entry points ONLY the OVFL entry points of each routine.
963 # underflow exception. Since this is incorrect, we need to check
969 # the exceptional possibilities we hav
[all...]

