Lines Matching refs:ops (only in /freebsd-13-stable/contrib/llvm-project/llvm/lib/Target/X86/)

531 // These might be better off as horizontal vector ops.
1065 // These might be better off as horizontal vector ops.
2237 /// For vector ops we check that the overall size isn't larger than our
2311 // NonTemporal vector memory ops must be aligned.
5170 // convert this to shl+add/sub and then still have to type legalize those ops.
5756 Vec->ops().slice(IdxVal, ElemsPerChunk));
5846 // Helper function to collect subvector ops that are concatenated together,
5850 assert(Ops.empty() && "Expected an empty ops vector");
5900 // Split a unary integer op into 2 half-sized ops.
5926 /// Break a binary integer operation into 2 half-sized ops and then
7319 // Attempt to decode ops that could be represented as a shuffle mask.
8564 // Combine vector ops (shuffles etc.) that are equal to build_vector load1,
9113 assert(VT.is256BitVector() && "Only use for matching partial 256-bit h-ops");
9461 // x86 256-bit horizontal ops are defined in a non-obvious way. Each 128-bit
9601 // Try harder to match 256-bit ops by using extract/concat.
9740 for (SDValue Elt : Op->ops()) {
10346 DAG.getBuildVector(HVT, dl, Op->ops().slice(0, NumElems / 2));
10348 HVT, dl, Op->ops().slice(NumElems / 2, NumElems / 2));
10505 ArrayRef<SDUse> Ops = Op->ops();
10594 ArrayRef<SDUse> Ops = Op->ops();
11983 // Rotate the 2 ops so we can access both ranges, then permute the result.
12371 // Use VSHLDQ/VSRLDQ ops to zero the ends of a vector and leave an
15944 // Always extract lowers when setting lower - these are all free subreg ops.
17541 // Try using bit ops for masking and blending before falling back to
19276 /// try to vectorize the cast ops. This will avoid an expensive round-trip
19316 // penalties) with cast ops.
21131 // Allow commuted 'hadd' ops.
21318 // TODO: If we had general constant folding for FP logic ops, this check
22957 // Lower FP selects into a CMP/AND/ANDN/OR sequence when the necessary SSE ops
23483 // Splitting volatile memory ops is not allowed unless the operation was not
23517 // Splitting volatile memory ops is not allowed unless the operation was not
23572 // If this is a 256-bit store of concatenated ops, we are better off splitting
23573 // that store into two 128-bit stores. This avoids spurious use of 256-bit ops
26332 // Decompose 256-bit ops into smaller 128-bit ops.
26336 // Decompose 512-bit ops into smaller 256-bit ops.
26514 // For AVX1 cases, split to use legal ops (everything but v4i64).
26561 // Decompose 256-bit ops into 128-bit ops.
26710 // Decompose 256-bit ops into 128-bit ops.
28468 // Decompose 256-bit ops into smaller 128-bit ops.
28472 // Decompose 512-bit ops into smaller 256-bit ops.
28516 // Decompose 256-bit ops into smaller 128-bit ops.
28563 // Decompose 256-bit ops into smaller 128-bit ops on pre-AVX2.
28652 "Used AtomicRMW ops other than Add should have been expanded!");
29152 // no-ops in the case of a null GC strategy (or a GC strategy which does not
33767 // TODO - handle target shuffle ops with different value types.
33904 // TODO - handle target shuffle ops with different value types.
35230 // Remove unused/repeated shuffle source ops.
35253 // Attempt to constant fold all of the constant source ops.
35487 // knowledge that we have about the mask sizes to replace div/rem ops with
35558 // Remove unused/repeated shuffle source ops.
35585 // Don't recurse if we already have more source ops than we can combine in
35605 // Attempt to constant fold all of the constant source ops.
36864 // subregister (zmm<->ymm or ymm<->xmm) ops. That leaves us with a shuffle
37187 // Aggressively peek through ops to get at the demanded elts.
37188 // TODO - we should do this for all target/faux shuffle ops.
37282 // TODO - we should do this for all target/faux shuffle ops.
37308 // For 256/512-bit ops that are 128/256-bit ops glued together, if we do not
37316 // See if 512-bit ops only use the bottom 128-bits.
37533 // Aggressively peek through ops to get at the demanded low bits.
37749 // Attempt to avoid multi-use ops if we don't need anything from them.
37906 // Bitmask that indicates which ops have only been accessed 'inline'.
38182 // Look for logic ops.
38242 // Convert build vector ops to MMX data in the bottom elements.
38292 // integer. If so, replace the scalar ops with bool vector equivalents back down
38847 // possible to reduce the number of vector ops.
39170 // Vector FP compares don't fit the pattern of FP math ops (propagate, not
39189 // Vector FP selects don't fit the pattern of FP math ops (because the
39211 // TODO: This switch could include FNEG and the x86-specific FP logic ops
39244 for (SDValue Op : Vec->ops())
39255 /// into horizontal ops.
39663 // We're going to use the condition bit in math or logic ops. We could allow
44012 // simplify ops leading up to it. We only demand the MSB of each lane.
44079 // simplify ops leading up to it. We only demand the MSB of each lane.
44161 StoredVal->ops().slice(0, 32));
44164 StoredVal->ops().slice(32, 32));
44591 /// Attempt to pre-truncate inputs to arithmetic ops if it will simplify
45014 // Attempt to pre-truncate inputs to arithmetic ops instead.
45293 // Fill in the non-negated ops with the original values.
45842 // Attempt to promote any comparison mask ops before moving the
45859 /// opportunities to combine math ops, use an LEA, or use a complex addressing
47910 /// Helper that combines an array of subvector ops as if they were the operands
47912 /// ISD::INSERT_SUBVECTOR). The ops are assumed to be of the same type.
48323 // split the 'and' into 128-bit ops to avoid the concatenate and extract.
48378 InVec.getNode()->ops().slice(IdxVal, VT.getVectorNumElements()));
49033 // TODO: Almost no 8-bit ops are desirable because they have no actual
49034 // size/speed advantages vs. 32-bit ops, but they do have a major
49038 // we have specializations to turn 32-bit multiply/shl into LEA or other ops.
49095 // using LEA and/or other ALU ops.
49225 // ops instead of emitting the bswap asm. For now, we don't support 486 or
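
A recurring theme in the matches above is splitting wide vector operations into narrower halves: the helper doc comments at source lines 5900 and 5926 ("Split a unary integer op into 2 half-sized ops", "Break a binary integer operation into 2 half-sized ops") and the many "Decompose 256-bit ops into smaller 128-bit ops" sites. Below is a minimal sketch of that idiom against the SelectionDAG API as shipped in this tree; it is not the file's actual helper, and the name splitBinaryIntOp is hypothetical:

    #include "llvm/CodeGen/SelectionDAG.h"
    using namespace llvm;

    // Hypothetical helper: split a wide vector binary op into two
    // half-width ops and reassemble the result.
    static SDValue splitBinaryIntOp(SDValue Op, SelectionDAG &DAG) {
      SDLoc dl(Op);
      EVT VT = Op.getValueType();
      EVT HalfVT = VT.getHalfNumVectorElementsVT(*DAG.getContext());
      unsigned HalfElts = HalfVT.getVectorNumElements();
      SDValue LHS = Op.getOperand(0), RHS = Op.getOperand(1);

      // Extract the low and high halves of each operand.
      SDValue LHSLo = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, HalfVT, LHS,
                                  DAG.getVectorIdxConstant(0, dl));
      SDValue LHSHi = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, HalfVT, LHS,
                                  DAG.getVectorIdxConstant(HalfElts, dl));
      SDValue RHSLo = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, HalfVT, RHS,
                                  DAG.getVectorIdxConstant(0, dl));
      SDValue RHSHi = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, HalfVT, RHS,
                                  DAG.getVectorIdxConstant(HalfElts, dl));

      // Apply the same opcode at half width, then concatenate the halves
      // back into the original type. For a v8i32 add on AVX1 this yields
      // two v4i32 adds plus a concat, avoiding an illegal 256-bit
      // integer op.
      SDValue Lo = DAG.getNode(Op.getOpcode(), dl, HalfVT, LHSLo, RHSLo);
      SDValue Hi = DAG.getNode(Op.getOpcode(), dl, HalfVT, LHSHi, RHSHi);
      return DAG.getNode(ISD::CONCAT_VECTORS, dl, VT, Lo, Hi);
    }

Splitting this way lets a target with only 128-bit integer support (AVX1, per the "For AVX1 cases, split to use legal ops" match at 26514) still legalize 256-bit operations, at the cost of an extract/concat pair that the subvector-combining helpers matched at 5846 and 47910 try to clean up afterwards.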