//===---------------------------------------------------------------------===//
// Random notes about and ideas for the SystemZ backend.
//===---------------------------------------------------------------------===//

The initial backend is deliberately restricted to z10.  We should add support
for later architectures at some point.

--

SystemZDAGToDAGISel::SelectInlineAsmMemoryOperand() is passed "m" for all
inline asm memory constraints; it doesn't get to see the original constraint.
This means that it must conservatively treat all inline asm constraints
as the most restricted type, "R".

--

If an inline asm ties an i32 "r" result to an i64 input, the input
will be treated as an i32, leaving the upper bits uninitialised.
For example:

define void @f4(i32 *%dst) {
  %val = call i32 asm "blah $0", "=r,0" (i64 103)
  store i32 %val, i32 *%dst
  ret void
}

from CodeGen/SystemZ/asm-09.ll will use LHI rather than LGHI
to load 103.  This seems to be a general target-independent problem.

--

The tuning of the choice between LOAD ADDRESS (LA) and addition in
SystemZISelDAGToDAG.cpp is suspect.  It should be tweaked based on
performance measurements.

--

There is no scheduling support.

--

We don't use the BRANCH ON INDEX instructions.
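A counted loop is the obvious candidate; in this made-up example the
induction-variable update, compare and branch could become a single
BRANCH ON INDEX:

    unsigned long f (unsigned long *a, unsigned long n)
    {
      unsigned long sum = 0;
      unsigned long i;

      for (i = 0; i < n; i++)
        sum += a[i];
      return sum;
    }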

--

We might want to use BRANCH ON CONDITION for conditional indirect calls
and conditional returns.
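For example, the tail call in this hypothetical function could become a
conditional BRANCH ON CONDITION to the function address:

    void f (int x, void (*fn)(void))
    {
      if (x)
        fn ();
    }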

--

We don't use the TEST DATA CLASS instructions.
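Floating-point class tests are the natural use; this hypothetical function
(using the GCC/Clang __builtin_isinf extension) could become a single TDC
plus a condition-code test:

    int f (double x)
    {
      return __builtin_isinf (x);
    }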

--

We could use the generic floating-point forms of LOAD COMPLEMENT,
LOAD NEGATIVE and LOAD POSITIVE in cases where we don't need the
condition codes.  For example, we could use LCDFR instead of LCDBR.
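A minimal case is a plain negation whose result never feeds a
condition-code check:

    double f (double x)
    {
      return -x;
    }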

--

We only use MVC, XC and CLC for constant-length block operations.
We could extend them to variable-length operations too,
using EXECUTE RELATIVE LONG.
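For instance, this made-up bounded copy could become an EXECUTE RELATIVE
LONG of an MVC rather than a call to memcpy:

    #include <string.h>

    void f (char *dst, const char *src, size_t n)
    {
      /* Lengths 1 to 256 fit a single MVC length field (n - 1).  */
      if (n > 0 && n <= 256)
        memcpy (dst, src, n);
    }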

MVCIN, MVCLE and CLCLE may be worthwhile too.

--

We don't use CUSE or the TRANSLATE family of instructions for string
operations.  The TRANSLATE ones are probably more difficult to exploit.
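For instance, TRANSLATE could handle a byte-wise table-lookup loop like
this made-up one, up to 256 bytes at a time:

    void f (unsigned char *buf, const unsigned char *table, int n)
    {
      int i;

      for (i = 0; i < n; i++)
        buf[i] = table[buf[i]];
    }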

--

We don't take full advantage of builtins like fabsl because the calling
conventions require f128s to be returned by invisible reference.
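For example (long double is a 128-bit format on SystemZ), the argument and
result here go through memory rather than staying in register pairs:

    #include <math.h>

    long double f (long double x)
    {
      return fabsl (x);
    }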

--

ADD LOGICAL WITH SIGNED IMMEDIATE could be useful when we need to
produce a carry.  SUBTRACT LOGICAL IMMEDIATE could be useful when we
need to produce a borrow.  (Note that there are no memory forms of
ADD LOGICAL WITH CARRY and SUBTRACT LOGICAL WITH BORROW, so the high
part of 128-bit memory operations would probably need to be done
via a register.)
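A 128-bit increment in memory is the kind of case meant here; in this
made-up example (using the GCC/Clang __int128 extension) the low-part
addition must produce a carry for the high part:

    void f (unsigned __int128 *p)
    {
      *p += 1;
    }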

--

We don't use the halfword forms of LOAD REVERSED and STORE REVERSED
(LRVH and STRVH).
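For example, a byte-swapping halfword load like this one (using the
GCC/Clang __builtin_bswap16 extension) could be a single LRVH:

    unsigned short f (unsigned short *p)
    {
      return __builtin_bswap16 (*p);
    }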

--

We don't use ICM or STCM.
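ICM with a mask of 7 would suit a big-endian 3-byte load like this
hypothetical one, once the upper byte of the register has been cleared:

    unsigned int f (const unsigned char *p)
    {
      return ((unsigned int) p[0] << 16) | (p[1] << 8) | p[2];
    }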

--

DAGCombiner doesn't yet fold truncations of extended loads.  Functions like:

    unsigned long f (unsigned long x, unsigned short *y)
    {
      return (x << 32) | *y;
    }

therefore end up as:

        sllg    %r2, %r2, 32
        llgh    %r0, 0(%r3)
        lr      %r2, %r0
        br      %r14

but truncating the load would give:

        sllg    %r2, %r2, 32
        lh      %r2, 0(%r3)
        br      %r14

--

Functions like:

define i64 @f1(i64 %a) {
  %and = and i64 %a, 1
  ret i64 %and
}

ought to be implemented as:

        lhi     %r0, 1
        ngr     %r2, %r0
        br      %r14

but two-address optimisations reverse the order of the AND and force:

        lhi     %r0, 1
        ngr     %r0, %r2
        lgr     %r2, %r0
        br      %r14

CodeGen/SystemZ/and-04.ll has several examples of this.

--

Out-of-range displacements are usually handled by loading the full
address into a register.  In many cases it would be better to create
an anchor point instead.  E.g. for:

define void @f4a(i128 *%aptr, i64 %base) {
  %addr = add i64 %base, 524288
  %bptr = inttoptr i64 %addr to i128 *
  %a = load volatile i128 *%aptr
  %b = load i128 *%bptr
  %add = add i128 %a, %b
  store i128 %add, i128 *%aptr
  ret void
}

(from CodeGen/SystemZ/int-add-08.ll) we load %base+524288 and %base+524296
into separate registers, rather than using %base+524288 as a base for both.

--

Dynamic stack allocations round the size to 8 bytes and then allocate
that rounded amount.  It would be simpler to subtract the unrounded
size from the copy of the stack pointer and then align the result.
See CodeGen/SystemZ/alloca-01.ll for an example.
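A minimal trigger, using the GCC/Clang __builtin_alloca extension:

    void g (char *);

    void f (unsigned long n)
    {
      g (__builtin_alloca (n));
    }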

--

Atomic loads and stores use the default compare-and-swap based implementation.
This is much too conservative in practice, since the architecture guarantees
that 1-, 2-, 4- and 8-byte loads and stores to aligned addresses are
inherently atomic.
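For example (written with C11 atomics), an aligned 8-byte atomic load
needs only a normal LG rather than a compare-and-swap loop:

    #include <stdatomic.h>

    long f (_Atomic long *p)
    {
      return atomic_load (p);
    }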

--

If needed, we can support 16-byte atomics using LPQ, STPQ and CSDG.
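For example, LPQ could implement this load (written with the GCC/Clang
__atomic builtins and the __int128 extension):

    unsigned __int128 f (unsigned __int128 *p)
    {
      return __atomic_load_n (p, __ATOMIC_SEQ_CST);
    }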

--

We might want to model all access registers and use them to spill
32-bit values.