1/* Operations with very long integers.  -*- C++ -*-
2   Copyright (C) 2012-2015 Free Software Foundation, Inc.
3
4This file is part of GCC.
5
6GCC is free software; you can redistribute it and/or modify it
7under the terms of the GNU General Public License as published by the
8Free Software Foundation; either version 3, or (at your option) any
9later version.
10
11GCC is distributed in the hope that it will be useful, but WITHOUT
12ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
14for more details.
15
16You should have received a copy of the GNU General Public License
17along with GCC; see the file COPYING3.  If not see
18<http://www.gnu.org/licenses/>.  */
19
20#ifndef WIDE_INT_H
21#define WIDE_INT_H
22
23/* wide-int.[cc|h] implements a class that efficiently performs
24   mathematical operations on finite precision integers.  wide_ints
25   are designed to be transient - they are not for long term storage
26   of values.  There is tight integration between wide_ints and the
27   other longer storage GCC representations (rtl and tree).
28
29   The actual precision of a wide_int depends on the flavor.  There
30   are three predefined flavors:
31
32     1) wide_int (the default).  This flavor does the math in the
33     precision of its input arguments.  It is assumed (and checked)
34     that the precisions of the operands and results are consistent.
35     This is the most efficient flavor.  It is not possible to examine
36     bits above the precision that has been specified.  Because of
37     this, the default flavor has semantics that are simple to
38     understand and in general model the underlying hardware that the
39     compiler is targetted for.
40
41     This flavor must be used at the RTL level of gcc because there
42     is, in general, not enough information in the RTL representation
43     to extend a value beyond the precision specified in the mode.
44
45     This flavor should also be used at the TREE and GIMPLE levels of
46     the compiler except for the circumstances described in the
47     descriptions of the other two flavors.
48
49     The default wide_int representation does not contain any
50     information inherent about signedness of the represented value,
51     so it can be used to represent both signed and unsigned numbers.
52     For operations where the results depend on signedness (full width
53     multiply, division, shifts, comparisons, and operations that need
54     overflow detected), the signedness must be specified separately.
55
56     2) offset_int.  This is a fixed size representation that is
57     guaranteed to be large enough to compute any bit or byte sized
58     address calculation on the target.  Currently the value is 64 + 4
59     bits rounded up to the next number even multiple of
60     HOST_BITS_PER_WIDE_INT (but this can be changed when the first
61     port needs more than 64 bits for the size of a pointer).
62
63     This flavor can be used for all address math on the target.  In
64     this representation, the values are sign or zero extended based
65     on their input types to the internal precision.  All math is done
66     in this precision and then the values are truncated to fit in the
67     result type.  Unlike most gimple or rtl intermediate code, it is
68     not useful to perform the address arithmetic at the same
69     precision in which the operands are represented because there has
70     been no effort by the front ends to convert most addressing
71     arithmetic to canonical types.
72
73     3) widest_int.  This representation is an approximation of
74     infinite precision math.  However, it is not really infinite
75     precision math as in the GMP library.  It is really finite
76     precision math where the precision is 4 times the size of the
77     largest integer that the target port can represent.
78
79     widest_int is supposed to be wider than any number that it needs to
80     store, meaning that there is always at least one leading sign bit.
81     All widest_int values are therefore signed.
82
83     There are several places in the GCC where this should/must be used:
84
85     * Code that does induction variable optimizations.  This code
86       works with induction variables of many different types at the
87       same time.  Because of this, it ends up doing many different
88       calculations where the operands are not compatible types.  The
89       widest_int makes this easy, because it provides a field where
90       nothing is lost when converting from any variable,
91
92     * There are a small number of passes that currently use the
93       widest_int that should use the default.  These should be
94       changed.
95
96   There are surprising features of offset_int and widest_int
97   that the users should be careful about:
98
99     1) Shifts and rotations are just weird.  You have to specify a
100     precision in which the shift or rotate is to happen in.  The bits
101     above this precision are zeroed.  While this is what you
102     want, it is clearly non obvious.
103
104     2) Larger precision math sometimes does not produce the same
105     answer as would be expected for doing the math at the proper
106     precision.  In particular, a multiply followed by a divide will
107     produce a different answer if the first product is larger than
108     what can be represented in the input precision.
109
110   The offset_int and the widest_int flavors are more expensive
111   than the default wide int, so in addition to the caveats with these
112   two, the default is the prefered representation.
113
114   All three flavors of wide_int are represented as a vector of
115   HOST_WIDE_INTs.  The default and widest_int vectors contain enough elements
116   to hold a value of MAX_BITSIZE_MODE_ANY_INT bits.  offset_int contains only
117   enough elements to hold ADDR_MAX_PRECISION bits.  The values are stored
118   in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
119   in element 0.
120
121   The default wide_int contains three fields: the vector (VAL),
122   the precision and a length (LEN).  The length is the number of HWIs
123   needed to represent the value.  widest_int and offset_int have a
124   constant precision that cannot be changed, so they only store the
125   VAL and LEN fields.
126
127   Since most integers used in a compiler are small values, it is
128   generally profitable to use a representation of the value that is
129   as small as possible.  LEN is used to indicate the number of
130   elements of the vector that are in use.  The numbers are stored as
131   sign extended numbers as a means of compression.  Leading
132   HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
133   as long as they can be reconstructed from the top bit that is being
134   represented.
135
136   The precision and length of a wide_int are always greater than 0.
137   Any bits in a wide_int above the precision are sign-extended from the
138   most significant bit.  For example, a 4-bit value 0x8 is represented as
139   VAL = { 0xf...fff8 }.  However, as an optimization, we allow other integer
140   constants to be represented with undefined bits above the precision.
141   This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
142   so that the INTEGER_CST representation can be used both in TYPE_PRECISION
143   and in wider precisions.
144
145   There are constructors to create the various forms of wide_int from
146   trees, rtl and constants.  For trees you can simply say:
147
148	     tree t = ...;
149	     wide_int x = t;
150
151   However, a little more syntax is required for rtl constants since
152   they do not have an explicit precision.  To make an rtl into a
153   wide_int, you have to pair it with a mode.  The canonical way to do
154   this is with std::make_pair as in:
155
156	     rtx r = ...
157	     wide_int x = std::make_pair (r, mode);
158
159   Similarly, a wide_int can only be constructed from a host value if
160   the target precision is given explicitly, such as in:
161
162	     wide_int x = wi::shwi (c, prec); // sign-extend C if necessary
163	     wide_int y = wi::uhwi (c, prec); // zero-extend C if necessary
164
165   However, offset_int and widest_int have an inherent precision and so
166   can be initialized directly from a host value:
167
168	     offset_int x = (int) c;          // sign-extend C
169	     widest_int x = (unsigned int) c; // zero-extend C
170
171   It is also possible to do arithmetic directly on trees, rtxes and
172   constants.  For example:
173
174	     wi::add (t1, t2);	  // add equal-sized INTEGER_CSTs t1 and t2
175	     wi::add (t1, 1);     // add 1 to INTEGER_CST t1
176	     wi::add (r1, r2);    // add equal-sized rtx constants r1 and r2
177	     wi::lshift (1, 100); // 1 << 100 as a widest_int
178
179   Many binary operations place restrictions on the combinations of inputs,
180   using the following rules:
181
182   - {tree, rtx, wide_int} op {tree, rtx, wide_int} -> wide_int
183       The inputs must be the same precision.  The result is a wide_int
184       of the same precision
185
186   - {tree, rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
187     (un)signed HOST_WIDE_INT op {tree, rtx, wide_int} -> wide_int
188       The HOST_WIDE_INT is extended or truncated to the precision of
189       the other input.  The result is a wide_int of the same precision
190       as that input.
191
192   - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
193       The inputs are extended to widest_int precision and produce a
194       widest_int result.
195
196   - offset_int op offset_int -> offset_int
197     offset_int op (un)signed HOST_WIDE_INT -> offset_int
198     (un)signed HOST_WIDE_INT op offset_int -> offset_int
199
200   - widest_int op widest_int -> widest_int
201     widest_int op (un)signed HOST_WIDE_INT -> widest_int
202     (un)signed HOST_WIDE_INT op widest_int -> widest_int
203
204   Other combinations like:
205
206   - widest_int op offset_int and
207   - wide_int op offset_int
208
209   are not allowed.  The inputs should instead be extended or truncated
210   so that they match.
211
212   The inputs to comparison functions like wi::eq_p and wi::lts_p
213   follow the same compatibility rules, although their return types
214   are different.  Unary functions on X produce the same result as
215   a binary operation X + X.  Shift functions X op Y also produce
216   the same result as X + X; the precision of the shift amount Y
217   can be arbitrarily different from X.  */
218
219#include "system.h"
220#include "hwint.h"
221#include "signop.h"
222#include "insn-modes.h"
223
224/* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
225   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
226   can accomodate at least 1 more bit so that unsigned numbers of that
227   mode can be represented as a signed value.  Note that it is still
228   possible to create fixed_wide_ints that have precisions greater than
229   MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
230   double-width multiplication result, for example.  */
231#define WIDE_INT_MAX_ELTS \
232  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)
233
234#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
235
236/* This is the max size of any pointer on any machine.  It does not
237   seem to be as easy to sniff this out of the machine description as
238   it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
239   multiple address sizes and may have different address sizes for
240   different address spaces.  However, currently the largest pointer
241   on any platform is 64 bits.  When that changes, then it is likely
242   that a target hook should be defined so that targets can make this
243   value larger for those targets.  */
244#define ADDR_MAX_BITSIZE 64
245
246/* This is the internal precision used when doing any address
247   arithmetic.  The '4' is really 3 + 1.  Three of the bits are for
248   the number of extra bits needed to do bit addresses and the other bit
249   is to allow everything to be signed without loosing any precision.
250   Then everything is rounded up to the next HWI for efficiency.  */
251#define ADDR_MAX_PRECISION \
252  ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
253   & ~(HOST_BITS_PER_WIDE_INT - 1))
254
255/* The number of HWIs needed to store an offset_int.  */
256#define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)
257
258/* The type of result produced by a binary operation on types T1 and T2.
259   Defined purely for brevity.  */
260#define WI_BINARY_RESULT(T1, T2) \
261  typename wi::binary_traits <T1, T2>::result_type
262
263/* The type of result produced by a unary operation on type T.  */
264#define WI_UNARY_RESULT(T) \
265  typename wi::unary_traits <T>::result_type
266
267/* Define a variable RESULT to hold the result of a binary operation on
268   X and Y, which have types T1 and T2 respectively.  Define VAL to
269   point to the blocks of RESULT.  Once the user of the macro has
270   filled in VAL, it should call RESULT.set_len to set the number
271   of initialized blocks.  */
272#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
273  WI_BINARY_RESULT (T1, T2) RESULT = \
274    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
275  HOST_WIDE_INT *VAL = RESULT.write_val ()
276
277/* Similar for the result of a unary operation on X, which has type T.  */
278#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
279  WI_UNARY_RESULT (T) RESULT = \
280    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
281  HOST_WIDE_INT *VAL = RESULT.write_val ()
282
283template <typename T> class generic_wide_int;
284template <int N> struct fixed_wide_int_storage;
285class wide_int_storage;
286
287/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
288#define FIXED_WIDE_INT(N) \
289  generic_wide_int < fixed_wide_int_storage <N> >
290
291typedef generic_wide_int <wide_int_storage> wide_int;
292typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
293typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
294
295template <bool SE>
296struct wide_int_ref_storage;
297
298typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;
299
300/* This can be used instead of wide_int_ref if the referenced value is
301   known to have type T.  It carries across properties of T's representation,
302   such as whether excess upper bits in a HWI are defined, and can therefore
303   help avoid redundant work.
304
305   The macro could be replaced with a template typedef, once we're able
306   to use those.  */
307#define WIDE_INT_REF_FOR(T) \
308  generic_wide_int \
309    <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended> >
310
311namespace wi
312{
313  /* Classifies an integer based on its precision.  */
314  enum precision_type {
315    /* The integer has both a precision and defined signedness.  This allows
316       the integer to be converted to any width, since we know whether to fill
317       any extra bits with zeros or signs.  */
318    FLEXIBLE_PRECISION,
319
320    /* The integer has a variable precision but no defined signedness.  */
321    VAR_PRECISION,
322
323    /* The integer has a constant precision (known at GCC compile time)
324       but no defined signedness.  */
325    CONST_PRECISION
326  };
327
328  /* This class, which has no default implementation, is expected to
329     provide the following members:
330
331     static const enum precision_type precision_type;
332       Classifies the type of T.
333
334     static const unsigned int precision;
335       Only defined if precision_type == CONST_PRECISION.  Specifies the
336       precision of all integers of type T.
337
338     static const bool host_dependent_precision;
339       True if the precision of T depends (or can depend) on the host.
340
341     static unsigned int get_precision (const T &x)
342       Return the number of bits in X.
343
344     static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
345					unsigned int precision, const T &x)
346       Decompose X as a PRECISION-bit integer, returning the associated
347       wi::storage_ref.  SCRATCH is available as scratch space if needed.
348       The routine should assert that PRECISION is acceptable.  */
349  template <typename T> struct int_traits;
350
351  /* This class provides a single type, result_type, which specifies the
352     type of integer produced by a binary operation whose inputs have
353     types T1 and T2.  The definition should be symmetric.  */
354  template <typename T1, typename T2,
355	    enum precision_type P1 = int_traits <T1>::precision_type,
356	    enum precision_type P2 = int_traits <T2>::precision_type>
357  struct binary_traits;
358
359  /* The result of a unary operation on T is the same as the result of
360     a binary operation on two values of type T.  */
361  template <typename T>
362  struct unary_traits : public binary_traits <T, T> {};
363
364  /* Specify the result type for each supported combination of binary
365     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
366     mixed, in order to give stronger type checking.  When both inputs
367     are CONST_PRECISION, they must have the same precision.  */
368  template <typename T1, typename T2>
369  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
370  {
371    typedef widest_int result_type;
372  };
373
374  template <typename T1, typename T2>
375  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
376  {
377    typedef wide_int result_type;
378  };
379
380  template <typename T1, typename T2>
381  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
382  {
383    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
384       so as not to confuse gengtype.  */
385    typedef generic_wide_int < fixed_wide_int_storage
386			       <int_traits <T2>::precision> > result_type;
387  };
388
389  template <typename T1, typename T2>
390  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
391  {
392    typedef wide_int result_type;
393  };
394
395  template <typename T1, typename T2>
396  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
397  {
398    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
399       so as not to confuse gengtype.  */
400    typedef generic_wide_int < fixed_wide_int_storage
401			       <int_traits <T1>::precision> > result_type;
402  };
403
404  template <typename T1, typename T2>
405  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
406  {
407    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
408       so as not to confuse gengtype.  */
409    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
410    typedef generic_wide_int < fixed_wide_int_storage
411			       <int_traits <T1>::precision> > result_type;
412  };
413
414  template <typename T1, typename T2>
415  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
416  {
417    typedef wide_int result_type;
418  };
419}
420
421/* Public functions for querying and operating on integers.  */
422namespace wi
423{
424  template <typename T>
425  unsigned int get_precision (const T &);
426
427  template <typename T1, typename T2>
428  unsigned int get_binary_precision (const T1 &, const T2 &);
429
430  template <typename T1, typename T2>
431  void copy (T1 &, const T2 &);
432
433#define UNARY_PREDICATE \
434  template <typename T> bool
435#define UNARY_FUNCTION \
436  template <typename T> WI_UNARY_RESULT (T)
437#define BINARY_PREDICATE \
438  template <typename T1, typename T2> bool
439#define BINARY_FUNCTION \
440  template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
441#define SHIFT_FUNCTION \
442  template <typename T1, typename T2> WI_UNARY_RESULT (T1)
443
444  UNARY_PREDICATE fits_shwi_p (const T &);
445  UNARY_PREDICATE fits_uhwi_p (const T &);
446  UNARY_PREDICATE neg_p (const T &, signop = SIGNED);
447
448  template <typename T>
449  HOST_WIDE_INT sign_mask (const T &);
450
451  BINARY_PREDICATE eq_p (const T1 &, const T2 &);
452  BINARY_PREDICATE ne_p (const T1 &, const T2 &);
453  BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
454  BINARY_PREDICATE lts_p (const T1 &, const T2 &);
455  BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
456  BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
457  BINARY_PREDICATE les_p (const T1 &, const T2 &);
458  BINARY_PREDICATE leu_p (const T1 &, const T2 &);
459  BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
460  BINARY_PREDICATE gts_p (const T1 &, const T2 &);
461  BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
462  BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
463  BINARY_PREDICATE ges_p (const T1 &, const T2 &);
464  BINARY_PREDICATE geu_p (const T1 &, const T2 &);
465
466  template <typename T1, typename T2>
467  int cmp (const T1 &, const T2 &, signop);
468
469  template <typename T1, typename T2>
470  int cmps (const T1 &, const T2 &);
471
472  template <typename T1, typename T2>
473  int cmpu (const T1 &, const T2 &);
474
475  UNARY_FUNCTION bit_not (const T &);
476  UNARY_FUNCTION neg (const T &);
477  UNARY_FUNCTION neg (const T &, bool *);
478  UNARY_FUNCTION abs (const T &);
479  UNARY_FUNCTION ext (const T &, unsigned int, signop);
480  UNARY_FUNCTION sext (const T &, unsigned int);
481  UNARY_FUNCTION zext (const T &, unsigned int);
482  UNARY_FUNCTION set_bit (const T &, unsigned int);
483
484  BINARY_FUNCTION min (const T1 &, const T2 &, signop);
485  BINARY_FUNCTION smin (const T1 &, const T2 &);
486  BINARY_FUNCTION umin (const T1 &, const T2 &);
487  BINARY_FUNCTION max (const T1 &, const T2 &, signop);
488  BINARY_FUNCTION smax (const T1 &, const T2 &);
489  BINARY_FUNCTION umax (const T1 &, const T2 &);
490
491  BINARY_FUNCTION bit_and (const T1 &, const T2 &);
492  BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
493  BINARY_FUNCTION bit_or (const T1 &, const T2 &);
494  BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
495  BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
496  BINARY_FUNCTION add (const T1 &, const T2 &);
497  BINARY_FUNCTION add (const T1 &, const T2 &, signop, bool *);
498  BINARY_FUNCTION sub (const T1 &, const T2 &);
499  BINARY_FUNCTION sub (const T1 &, const T2 &, signop, bool *);
500  BINARY_FUNCTION mul (const T1 &, const T2 &);
501  BINARY_FUNCTION mul (const T1 &, const T2 &, signop, bool *);
502  BINARY_FUNCTION smul (const T1 &, const T2 &, bool *);
503  BINARY_FUNCTION umul (const T1 &, const T2 &, bool *);
504  BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
505  BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop, bool * = 0);
506  BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
507  BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
508  BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop, bool * = 0);
509  BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
510  BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
511  BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop, bool * = 0);
512  BINARY_FUNCTION div_round (const T1 &, const T2 &, signop, bool * = 0);
513  BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
514				WI_BINARY_RESULT (T1, T2) *);
515  BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop, bool * = 0);
516  BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
517  BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
518  BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop, bool * = 0);
519  BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
520  BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop, bool * = 0);
521  BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop, bool * = 0);
522
523  template <typename T1, typename T2>
524  bool multiple_of_p (const T1 &, const T2 &, signop);
525
526  template <typename T1, typename T2>
527  bool multiple_of_p (const T1 &, const T2 &, signop,
528		      WI_BINARY_RESULT (T1, T2) *);
529
530  SHIFT_FUNCTION lshift (const T1 &, const T2 &);
531  SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
532  SHIFT_FUNCTION arshift (const T1 &, const T2 &);
533  SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
534  SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
535  SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);
536
537#undef SHIFT_FUNCTION
538#undef BINARY_PREDICATE
539#undef BINARY_FUNCTION
540#undef UNARY_PREDICATE
541#undef UNARY_FUNCTION
542
543  bool only_sign_bit_p (const wide_int_ref &, unsigned int);
544  bool only_sign_bit_p (const wide_int_ref &);
545  int clz (const wide_int_ref &);
546  int clrsb (const wide_int_ref &);
547  int ctz (const wide_int_ref &);
548  int exact_log2 (const wide_int_ref &);
549  int floor_log2 (const wide_int_ref &);
550  int ffs (const wide_int_ref &);
551  int popcount (const wide_int_ref &);
552  int parity (const wide_int_ref &);
553
554  template <typename T>
555  unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);
556
557  template <typename T>
558  unsigned int min_precision (const T &, signop);
559}
560
561namespace wi
562{
563  /* Contains the components of a decomposed integer for easy, direct
564     access.  */
565  struct storage_ref
566  {
567    storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);
568
569    const HOST_WIDE_INT *val;
570    unsigned int len;
571    unsigned int precision;
572
573    /* Provide enough trappings for this class to act as storage for
574       generic_wide_int.  */
575    unsigned int get_len () const;
576    unsigned int get_precision () const;
577    const HOST_WIDE_INT *get_val () const;
578  };
579}
580
581inline::wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
582				      unsigned int len_in,
583				      unsigned int precision_in)
584  : val (val_in), len (len_in), precision (precision_in)
585{
586}
587
588inline unsigned int
589wi::storage_ref::get_len () const
590{
591  return len;
592}
593
594inline unsigned int
595wi::storage_ref::get_precision () const
596{
597  return precision;
598}
599
600inline const HOST_WIDE_INT *
601wi::storage_ref::get_val () const
602{
603  return val;
604}
605
606/* This class defines an integer type using the storage provided by the
607   template argument.  The storage class must provide the following
608   functions:
609
610   unsigned int get_precision () const
611     Return the number of bits in the integer.
612
613   HOST_WIDE_INT *get_val () const
614     Return a pointer to the array of blocks that encodes the integer.
615
616   unsigned int get_len () const
617     Return the number of blocks in get_val ().  If this is smaller
618     than the number of blocks implied by get_precision (), the
619     remaining blocks are sign extensions of block get_len () - 1.
620
621   Although not required by generic_wide_int itself, writable storage
622   classes can also provide the following functions:
623
624   HOST_WIDE_INT *write_val ()
625     Get a modifiable version of get_val ()
626
627   unsigned int set_len (unsigned int len)
628     Set the value returned by get_len () to LEN.  */
629template <typename storage>
630class GTY(()) generic_wide_int : public storage
631{
632public:
633  generic_wide_int ();
634
635  template <typename T>
636  generic_wide_int (const T &);
637
638  template <typename T>
639  generic_wide_int (const T &, unsigned int);
640
641  /* Conversions.  */
642  HOST_WIDE_INT to_shwi (unsigned int) const;
643  HOST_WIDE_INT to_shwi () const;
644  unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
645  unsigned HOST_WIDE_INT to_uhwi () const;
646  HOST_WIDE_INT to_short_addr () const;
647
648  /* Public accessors for the interior of a wide int.  */
649  HOST_WIDE_INT sign_mask () const;
650  HOST_WIDE_INT elt (unsigned int) const;
651  unsigned HOST_WIDE_INT ulow () const;
652  unsigned HOST_WIDE_INT uhigh () const;
653  HOST_WIDE_INT slow () const;
654  HOST_WIDE_INT shigh () const;
655
656  template <typename T>
657  generic_wide_int &operator = (const T &);
658
659#define BINARY_PREDICATE(OP, F) \
660  template <typename T> \
661  bool OP (const T &c) const { return wi::F (*this, c); }
662
663#define UNARY_OPERATOR(OP, F) \
664  WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }
665
666#define BINARY_OPERATOR(OP, F) \
667  template <typename T> \
668    WI_BINARY_RESULT (generic_wide_int, T) \
669    OP (const T &c) const { return wi::F (*this, c); }
670
671#define ASSIGNMENT_OPERATOR(OP, F) \
672  template <typename T> \
673    generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
674
675#define INCDEC_OPERATOR(OP, DELTA) \
676  generic_wide_int &OP () { *this += DELTA; return *this; }
677
678  UNARY_OPERATOR (operator ~, bit_not)
679  UNARY_OPERATOR (operator -, neg)
680  BINARY_PREDICATE (operator ==, eq_p)
681  BINARY_PREDICATE (operator !=, ne_p)
682  BINARY_OPERATOR (operator &, bit_and)
683  BINARY_OPERATOR (and_not, bit_and_not)
684  BINARY_OPERATOR (operator |, bit_or)
685  BINARY_OPERATOR (or_not, bit_or_not)
686  BINARY_OPERATOR (operator ^, bit_xor)
687  BINARY_OPERATOR (operator +, add)
688  BINARY_OPERATOR (operator -, sub)
689  BINARY_OPERATOR (operator *, mul)
690  ASSIGNMENT_OPERATOR (operator &=, bit_and)
691  ASSIGNMENT_OPERATOR (operator |=, bit_or)
692  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
693  ASSIGNMENT_OPERATOR (operator +=, add)
694  ASSIGNMENT_OPERATOR (operator -=, sub)
695  ASSIGNMENT_OPERATOR (operator *=, mul)
696  INCDEC_OPERATOR (operator ++, 1)
697  INCDEC_OPERATOR (operator --, -1)
698
699#undef BINARY_PREDICATE
700#undef UNARY_OPERATOR
701#undef BINARY_OPERATOR
702#undef ASSIGNMENT_OPERATOR
703#undef INCDEC_OPERATOR
704
705  /* Debugging functions.  */
706  void dump () const;
707
708  static const bool is_sign_extended
709    = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
710};
711
712template <typename storage>
713inline generic_wide_int <storage>::generic_wide_int () {}
714
715template <typename storage>
716template <typename T>
717inline generic_wide_int <storage>::generic_wide_int (const T &x)
718  : storage (x)
719{
720}
721
722template <typename storage>
723template <typename T>
724inline generic_wide_int <storage>::generic_wide_int (const T &x,
725						     unsigned int precision)
726  : storage (x, precision)
727{
728}
729
730/* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
731   If THIS does not fit in PRECISION, the information is lost.  */
732template <typename storage>
733inline HOST_WIDE_INT
734generic_wide_int <storage>::to_shwi (unsigned int precision) const
735{
736  if (precision < HOST_BITS_PER_WIDE_INT)
737    return sext_hwi (this->get_val ()[0], precision);
738  else
739    return this->get_val ()[0];
740}
741
742/* Return THIS as a signed HOST_WIDE_INT, in its natural precision.  */
743template <typename storage>
744inline HOST_WIDE_INT
745generic_wide_int <storage>::to_shwi () const
746{
747  if (is_sign_extended)
748    return this->get_val ()[0];
749  else
750    return to_shwi (this->get_precision ());
751}
752
753/* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
754   PRECISION.  If THIS does not fit in PRECISION, the information
755   is lost.  */
756template <typename storage>
757inline unsigned HOST_WIDE_INT
758generic_wide_int <storage>::to_uhwi (unsigned int precision) const
759{
760  if (precision < HOST_BITS_PER_WIDE_INT)
761    return zext_hwi (this->get_val ()[0], precision);
762  else
763    return this->get_val ()[0];
764}
765
766/* Return THIS as an signed HOST_WIDE_INT, in its natural precision.  */
767template <typename storage>
768inline unsigned HOST_WIDE_INT
769generic_wide_int <storage>::to_uhwi () const
770{
771  return to_uhwi (this->get_precision ());
772}
773
774/* TODO: The compiler is half converted from using HOST_WIDE_INT to
775   represent addresses to using offset_int to represent addresses.
776   We use to_short_addr at the interface from new code to old,
777   unconverted code.  */
778template <typename storage>
779inline HOST_WIDE_INT
780generic_wide_int <storage>::to_short_addr () const
781{
782  return this->get_val ()[0];
783}
784
785/* Return the implicit value of blocks above get_len ().  */
786template <typename storage>
787inline HOST_WIDE_INT
788generic_wide_int <storage>::sign_mask () const
789{
790  unsigned int len = this->get_len ();
791  unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
792  if (!is_sign_extended)
793    {
794      unsigned int precision = this->get_precision ();
795      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
796      if (excess > 0)
797	high <<= excess;
798    }
799  return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
800}
801
802/* Return the signed value of the least-significant explicitly-encoded
803   block.  */
804template <typename storage>
805inline HOST_WIDE_INT
806generic_wide_int <storage>::slow () const
807{
808  return this->get_val ()[0];
809}
810
811/* Return the signed value of the most-significant explicitly-encoded
812   block.  */
813template <typename storage>
814inline HOST_WIDE_INT
815generic_wide_int <storage>::shigh () const
816{
817  return this->get_val ()[this->get_len () - 1];
818}
819
820/* Return the unsigned value of the least-significant
821   explicitly-encoded block.  */
822template <typename storage>
823inline unsigned HOST_WIDE_INT
824generic_wide_int <storage>::ulow () const
825{
826  return this->get_val ()[0];
827}
828
829/* Return the unsigned value of the most-significant
830   explicitly-encoded block.  */
831template <typename storage>
832inline unsigned HOST_WIDE_INT
833generic_wide_int <storage>::uhigh () const
834{
835  return this->get_val ()[this->get_len () - 1];
836}
837
838/* Return block I, which might be implicitly or explicit encoded.  */
839template <typename storage>
840inline HOST_WIDE_INT
841generic_wide_int <storage>::elt (unsigned int i) const
842{
843  if (i >= this->get_len ())
844    return sign_mask ();
845  else
846    return this->get_val ()[i];
847}
848
849template <typename storage>
850template <typename T>
851generic_wide_int <storage> &
852generic_wide_int <storage>::operator = (const T &x)
853{
854  storage::operator = (x);
855  return *this;
856}
857
858/* Dump the contents of the integer to stderr, for debugging.  */
859template <typename storage>
860void
861generic_wide_int <storage>::dump () const
862{
863  unsigned int len = this->get_len ();
864  const HOST_WIDE_INT *val = this->get_val ();
865  unsigned int precision = this->get_precision ();
866  fprintf (stderr, "[");
867  if (len * HOST_BITS_PER_WIDE_INT < precision)
868    fprintf (stderr, "...,");
869  for (unsigned int i = 0; i < len - 1; ++i)
870    fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
871  fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
872	   val[0], precision);
873}
874
875namespace wi
876{
877  template <typename storage>
878  struct int_traits < generic_wide_int <storage> >
879    : public wi::int_traits <storage>
880  {
881    static unsigned int get_precision (const generic_wide_int <storage> &);
882    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
883				      const generic_wide_int <storage> &);
884  };
885}
886
887template <typename storage>
888inline unsigned int
889wi::int_traits < generic_wide_int <storage> >::
890get_precision (const generic_wide_int <storage> &x)
891{
892  return x.get_precision ();
893}
894
895template <typename storage>
896inline wi::storage_ref
897wi::int_traits < generic_wide_int <storage> >::
898decompose (HOST_WIDE_INT *, unsigned int precision,
899	   const generic_wide_int <storage> &x)
900{
901  gcc_checking_assert (precision == x.get_precision ());
902  return wi::storage_ref (x.get_val (), x.get_len (), precision);
903}
904
905/* Provide the storage for a wide_int_ref.  This acts like a read-only
906   wide_int, with the optimization that VAL is normally a pointer to
907   another integer's storage, so that no array copy is needed.  */
908template <bool SE>
909struct wide_int_ref_storage : public wi::storage_ref
910{
911private:
912  /* Scratch space that can be used when decomposing the original integer.
913     It must live as long as this object.  */
914  HOST_WIDE_INT scratch[2];
915
916public:
917  wide_int_ref_storage (const wi::storage_ref &);
918
919  template <typename T>
920  wide_int_ref_storage (const T &);
921
922  template <typename T>
923  wide_int_ref_storage (const T &, unsigned int);
924};
925
926/* Create a reference from an existing reference.  */
927template <bool SE>
928inline wide_int_ref_storage <SE>::
929wide_int_ref_storage (const wi::storage_ref &x)
930  : storage_ref (x)
931{}
932
933/* Create a reference to integer X in its natural precision.  Note
934   that the natural precision is host-dependent for primitive
935   types.  */
936template <bool SE>
937template <typename T>
938inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x)
939  : storage_ref (wi::int_traits <T>::decompose (scratch,
940						wi::get_precision (x), x))
941{
942}
943
944/* Create a reference to integer X in precision PRECISION.  */
945template <bool SE>
946template <typename T>
947inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x,
948							unsigned int precision)
949  : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
950{
951}
952
953namespace wi
954{
955  template <bool SE>
956  struct int_traits <wide_int_ref_storage <SE> >
957  {
958    static const enum precision_type precision_type = VAR_PRECISION;
959    /* wi::storage_ref can be a reference to a primitive type,
960       so this is the conservatively-correct setting.  */
961    static const bool host_dependent_precision = true;
962    static const bool is_sign_extended = SE;
963  };
964}
965
966namespace wi
967{
968  unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
969			      unsigned int, unsigned int, unsigned int,
970			      signop sgn);
971  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
972			   unsigned int, unsigned int, bool = true);
973}
974
975/* The storage used by wide_int.  */
976class GTY(()) wide_int_storage
977{
978private:
979  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
980  unsigned int len;
981  unsigned int precision;
982
983public:
984  wide_int_storage ();
985  template <typename T>
986  wide_int_storage (const T &);
987
988  /* The standard generic_wide_int storage methods.  */
989  unsigned int get_precision () const;
990  const HOST_WIDE_INT *get_val () const;
991  unsigned int get_len () const;
992  HOST_WIDE_INT *write_val ();
993  void set_len (unsigned int, bool = false);
994
995  static wide_int from (const wide_int_ref &, unsigned int, signop);
996  static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
997			      unsigned int, bool = true);
998  static wide_int create (unsigned int);
999
1000  /* FIXME: target-dependent, so should disappear.  */
1001  wide_int bswap () const;
1002};
1003
1004namespace wi
1005{
1006  template <>
1007  struct int_traits <wide_int_storage>
1008  {
1009    static const enum precision_type precision_type = VAR_PRECISION;
1010    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
1011    static const bool host_dependent_precision = false;
1012    static const bool is_sign_extended = true;
1013    template <typename T1, typename T2>
1014    static wide_int get_binary_result (const T1 &, const T2 &);
1015  };
1016}
1017
1018inline wide_int_storage::wide_int_storage () {}
1019
1020/* Initialize the storage from integer X, in its natural precision.
1021   Note that we do not allow integers with host-dependent precision
1022   to become wide_ints; wide_ints must always be logically independent
1023   of the host.  */
1024template <typename T>
1025inline wide_int_storage::wide_int_storage (const T &x)
1026{
1027  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
1028  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
1029  WIDE_INT_REF_FOR (T) xi (x);
1030  precision = xi.precision;
1031  wi::copy (*this, xi);
1032}
1033
1034inline unsigned int
1035wide_int_storage::get_precision () const
1036{
1037  return precision;
1038}
1039
1040inline const HOST_WIDE_INT *
1041wide_int_storage::get_val () const
1042{
1043  return val;
1044}
1045
1046inline unsigned int
1047wide_int_storage::get_len () const
1048{
1049  return len;
1050}
1051
1052inline HOST_WIDE_INT *
1053wide_int_storage::write_val ()
1054{
1055  return val;
1056}
1057
1058inline void
1059wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
1060{
1061  len = l;
1062  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
1063    val[len - 1] = sext_hwi (val[len - 1],
1064			     precision % HOST_BITS_PER_WIDE_INT);
1065}
1066
1067/* Treat X as having signedness SGN and convert it to a PRECISION-bit
1068   number.  */
1069inline wide_int
1070wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
1071			signop sgn)
1072{
1073  wide_int result = wide_int::create (precision);
1074  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
1075				     x.precision, precision, sgn));
1076  return result;
1077}
1078
1079/* Create a wide_int from the explicit block encoding given by VAL and
1080   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
1081   true if the encoding may have redundant trailing blocks.  */
1082inline wide_int
1083wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
1084			      unsigned int precision, bool need_canon_p)
1085{
1086  wide_int result = wide_int::create (precision);
1087  result.set_len (wi::from_array (result.write_val (), val, len, precision,
1088				  need_canon_p));
1089  return result;
1090}
1091
1092/* Return an uninitialized wide_int with precision PRECISION.  */
1093inline wide_int
1094wide_int_storage::create (unsigned int precision)
1095{
1096  wide_int x;
1097  x.precision = precision;
1098  return x;
1099}
1100
1101template <typename T1, typename T2>
1102inline wide_int
1103wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
1104{
1105  /* This shouldn't be used for two flexible-precision inputs.  */
1106  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
1107		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
1108  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
1109    return wide_int::create (wi::get_precision (y));
1110  else
1111    return wide_int::create (wi::get_precision (x));
1112}
1113
1114/* The storage used by FIXED_WIDE_INT (N).  */
1115template <int N>
1116class GTY(()) fixed_wide_int_storage
1117{
1118private:
1119  HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
1120  unsigned int len;
1121
1122public:
1123  fixed_wide_int_storage ();
1124  template <typename T>
1125  fixed_wide_int_storage (const T &);
1126
1127  /* The standard generic_wide_int storage methods.  */
1128  unsigned int get_precision () const;
1129  const HOST_WIDE_INT *get_val () const;
1130  unsigned int get_len () const;
1131  HOST_WIDE_INT *write_val ();
1132  void set_len (unsigned int, bool = false);
1133
1134  static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
1135  static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
1136					bool = true);
1137};
1138
1139namespace wi
1140{
1141  template <int N>
1142  struct int_traits < fixed_wide_int_storage <N> >
1143  {
1144    static const enum precision_type precision_type = CONST_PRECISION;
1145    static const bool host_dependent_precision = false;
1146    static const bool is_sign_extended = true;
1147    static const unsigned int precision = N;
1148    template <typename T1, typename T2>
1149    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
1150  };
1151}
1152
1153template <int N>
1154inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
1155
1156/* Initialize the storage from integer X, in precision N.  */
1157template <int N>
1158template <typename T>
1159inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
1160{
1161  /* Check for type compatibility.  We don't want to initialize a
1162     fixed-width integer from something like a wide_int.  */
1163  WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
1164  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
1165}
1166
1167template <int N>
1168inline unsigned int
1169fixed_wide_int_storage <N>::get_precision () const
1170{
1171  return N;
1172}
1173
1174template <int N>
1175inline const HOST_WIDE_INT *
1176fixed_wide_int_storage <N>::get_val () const
1177{
1178  return val;
1179}
1180
1181template <int N>
1182inline unsigned int
1183fixed_wide_int_storage <N>::get_len () const
1184{
1185  return len;
1186}
1187
1188template <int N>
1189inline HOST_WIDE_INT *
1190fixed_wide_int_storage <N>::write_val ()
1191{
1192  return val;
1193}
1194
1195template <int N>
1196inline void
1197fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
1198{
1199  len = l;
1200  /* There are no excess bits in val[len - 1].  */
1201  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
1202}
1203
1204/* Treat X as having signedness SGN and convert it to an N-bit number.  */
1205template <int N>
1206inline FIXED_WIDE_INT (N)
1207fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
1208{
1209  FIXED_WIDE_INT (N) result;
1210  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
1211				     x.precision, N, sgn));
1212  return result;
1213}
1214
1215/* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
1216   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
1217   trailing blocks.  */
1218template <int N>
1219inline FIXED_WIDE_INT (N)
1220fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
1221					unsigned int len,
1222					bool need_canon_p)
1223{
1224  FIXED_WIDE_INT (N) result;
1225  result.set_len (wi::from_array (result.write_val (), val, len,
1226				  N, need_canon_p));
1227  return result;
1228}
1229
1230template <int N>
1231template <typename T1, typename T2>
1232inline FIXED_WIDE_INT (N)
1233wi::int_traits < fixed_wide_int_storage <N> >::
1234get_binary_result (const T1 &, const T2 &)
1235{
1236  return FIXED_WIDE_INT (N) ();
1237}
1238
1239/* A reference to one element of a trailing_wide_ints structure.  */
1240class trailing_wide_int_storage
1241{
1242private:
1243  /* The precision of the integer, which is a fixed property of the
1244     parent trailing_wide_ints.  */
1245  unsigned int m_precision;
1246
1247  /* A pointer to the length field.  */
1248  unsigned char *m_len;
1249
1250  /* A pointer to the HWI array.  There are enough elements to hold all
1251     values of precision M_PRECISION.  */
1252  HOST_WIDE_INT *m_val;
1253
1254public:
1255  trailing_wide_int_storage (unsigned int, unsigned char *, HOST_WIDE_INT *);
1256
1257  /* The standard generic_wide_int storage methods.  */
1258  unsigned int get_len () const;
1259  unsigned int get_precision () const;
1260  const HOST_WIDE_INT *get_val () const;
1261  HOST_WIDE_INT *write_val ();
1262  void set_len (unsigned int, bool = false);
1263
1264  template <typename T>
1265  trailing_wide_int_storage &operator = (const T &);
1266};
1267
1268typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;
1269
1270/* trailing_wide_int behaves like a wide_int.  */
1271namespace wi
1272{
1273  template <>
1274  struct int_traits <trailing_wide_int_storage>
1275    : public int_traits <wide_int_storage> {};
1276}
1277
1278/* An array of N wide_int-like objects that can be put at the end of
1279   a variable-sized structure.  Use extra_size to calculate how many
1280   bytes beyond the sizeof need to be allocated.  Use set_precision
1281   to initialize the structure.  */
1282template <int N>
1283class GTY(()) trailing_wide_ints
1284{
1285private:
1286  /* The shared precision of each number.  */
1287  unsigned short m_precision;
1288
1289  /* The shared maximum length of each number.  */
1290  unsigned char m_max_len;
1291
1292  /* The current length of each number.  */
1293  unsigned char m_len[N];
1294
1295  /* The variable-length part of the structure, which always contains
1296     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
1297  HOST_WIDE_INT m_val[1];
1298
1299public:
1300  void set_precision (unsigned int);
1301  trailing_wide_int operator [] (unsigned int);
1302  static size_t extra_size (unsigned int);
1303};
1304
1305inline trailing_wide_int_storage::
1306trailing_wide_int_storage (unsigned int precision, unsigned char *len,
1307			   HOST_WIDE_INT *val)
1308  : m_precision (precision), m_len (len), m_val (val)
1309{
1310}
1311
1312inline unsigned int
1313trailing_wide_int_storage::get_len () const
1314{
1315  return *m_len;
1316}
1317
1318inline unsigned int
1319trailing_wide_int_storage::get_precision () const
1320{
1321  return m_precision;
1322}
1323
1324inline const HOST_WIDE_INT *
1325trailing_wide_int_storage::get_val () const
1326{
1327  return m_val;
1328}
1329
1330inline HOST_WIDE_INT *
1331trailing_wide_int_storage::write_val ()
1332{
1333  return m_val;
1334}
1335
1336inline void
1337trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
1338{
1339  *m_len = len;
1340  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
1341    m_val[len - 1] = sext_hwi (m_val[len - 1],
1342			       m_precision % HOST_BITS_PER_WIDE_INT);
1343}
1344
1345template <typename T>
1346inline trailing_wide_int_storage &
1347trailing_wide_int_storage::operator = (const T &x)
1348{
1349  WIDE_INT_REF_FOR (T) xi (x, m_precision);
1350  wi::copy (*this, xi);
1351  return *this;
1352}
1353
1354/* Initialize the structure and record that all elements have precision
1355   PRECISION.  */
1356template <int N>
1357inline void
1358trailing_wide_ints <N>::set_precision (unsigned int precision)
1359{
1360  m_precision = precision;
1361  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
1362	       / HOST_BITS_PER_WIDE_INT);
1363}
1364
1365/* Return a reference to element INDEX.  */
1366template <int N>
1367inline trailing_wide_int
1368trailing_wide_ints <N>::operator [] (unsigned int index)
1369{
1370  return trailing_wide_int_storage (m_precision, &m_len[index],
1371				    &m_val[index * m_max_len]);
1372}
1373
1374/* Return how many extra bytes need to be added to the end of the structure
1375   in order to handle N wide_ints of precision PRECISION.  */
1376template <int N>
1377inline size_t
1378trailing_wide_ints <N>::extra_size (unsigned int precision)
1379{
1380  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
1381			  / HOST_BITS_PER_WIDE_INT);
1382  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
1383}
1384
1385/* This macro is used in structures that end with a trailing_wide_ints field
1386   called FIELD.  It declares get_NAME() and set_NAME() methods to access
1387   element I of FIELD.  */
1388#define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
1389  trailing_wide_int get_##NAME () { return FIELD[I]; } \
1390  template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }
1391
1392namespace wi
1393{
1394  /* Implementation of int_traits for primitive integer types like "int".  */
1395  template <typename T, bool signed_p>
1396  struct primitive_int_traits
1397  {
1398    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
1399    static const bool host_dependent_precision = true;
1400    static const bool is_sign_extended = true;
1401    static unsigned int get_precision (T);
1402    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
1403  };
1404}
1405
1406template <typename T, bool signed_p>
1407inline unsigned int
1408wi::primitive_int_traits <T, signed_p>::get_precision (T)
1409{
1410  return sizeof (T) * CHAR_BIT;
1411}
1412
1413template <typename T, bool signed_p>
1414inline wi::storage_ref
1415wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
1416						   unsigned int precision, T x)
1417{
1418  scratch[0] = x;
1419  if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
1420    return wi::storage_ref (scratch, 1, precision);
1421  scratch[1] = 0;
1422  return wi::storage_ref (scratch, 2, precision);
1423}
1424
1425/* Allow primitive C types to be used in wi:: routines.  */
1426namespace wi
1427{
1428  template <>
1429  struct int_traits <int>
1430    : public primitive_int_traits <int, true> {};
1431
1432  template <>
1433  struct int_traits <unsigned int>
1434    : public primitive_int_traits <unsigned int, false> {};
1435
1436  template <>
1437  struct int_traits <long>
1438    : public primitive_int_traits <long, true> {};
1439
1440  template <>
1441  struct int_traits <unsigned long>
1442    : public primitive_int_traits <unsigned long, false> {};
1443
1444#if defined HAVE_LONG_LONG
1445  template <>
1446  struct int_traits <long long>
1447    : public primitive_int_traits <long long, true> {};
1448
1449  template <>
1450  struct int_traits <unsigned long long>
1451    : public primitive_int_traits <unsigned long long, false> {};
1452#endif
1453}
1454
1455namespace wi
1456{
1457  /* Stores HWI-sized integer VAL, treating it as having signedness SGN
1458     and precision PRECISION.  */
1459  struct hwi_with_prec
1460  {
1461    hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
1462    HOST_WIDE_INT val;
1463    unsigned int precision;
1464    signop sgn;
1465  };
1466
1467  hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
1468  hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);
1469
1470  hwi_with_prec minus_one (unsigned int);
1471  hwi_with_prec zero (unsigned int);
1472  hwi_with_prec one (unsigned int);
1473  hwi_with_prec two (unsigned int);
1474}
1475
1476inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
1477					 signop s)
1478  : val (v), precision (p), sgn (s)
1479{
1480}
1481
1482/* Return a signed integer that has value VAL and precision PRECISION.  */
1483inline wi::hwi_with_prec
1484wi::shwi (HOST_WIDE_INT val, unsigned int precision)
1485{
1486  return hwi_with_prec (val, precision, SIGNED);
1487}
1488
1489/* Return an unsigned integer that has value VAL and precision PRECISION.  */
1490inline wi::hwi_with_prec
1491wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
1492{
1493  return hwi_with_prec (val, precision, UNSIGNED);
1494}
1495
1496/* Return a wide int of -1 with precision PRECISION.  */
1497inline wi::hwi_with_prec
1498wi::minus_one (unsigned int precision)
1499{
1500  return wi::shwi (-1, precision);
1501}
1502
1503/* Return a wide int of 0 with precision PRECISION.  */
1504inline wi::hwi_with_prec
1505wi::zero (unsigned int precision)
1506{
1507  return wi::shwi (0, precision);
1508}
1509
1510/* Return a wide int of 1 with precision PRECISION.  */
1511inline wi::hwi_with_prec
1512wi::one (unsigned int precision)
1513{
1514  return wi::shwi (1, precision);
1515}
1516
1517/* Return a wide int of 2 with precision PRECISION.  */
1518inline wi::hwi_with_prec
1519wi::two (unsigned int precision)
1520{
1521  return wi::shwi (2, precision);
1522}
1523
1524namespace wi
1525{
1526  template <>
1527  struct int_traits <wi::hwi_with_prec>
1528  {
1529    static const enum precision_type precision_type = VAR_PRECISION;
1530    /* hwi_with_prec has an explicitly-given precision, rather than the
1531       precision of HOST_WIDE_INT.  */
1532    static const bool host_dependent_precision = false;
1533    static const bool is_sign_extended = true;
1534    static unsigned int get_precision (const wi::hwi_with_prec &);
1535    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
1536				      const wi::hwi_with_prec &);
1537  };
1538}
1539
1540inline unsigned int
1541wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
1542{
1543  return x.precision;
1544}
1545
1546inline wi::storage_ref
1547wi::int_traits <wi::hwi_with_prec>::
1548decompose (HOST_WIDE_INT *scratch, unsigned int precision,
1549	   const wi::hwi_with_prec &x)
1550{
1551  gcc_checking_assert (precision == x.precision);
1552  scratch[0] = x.val;
1553  if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
1554    return wi::storage_ref (scratch, 1, precision);
1555  scratch[1] = 0;
1556  return wi::storage_ref (scratch, 2, precision);
1557}
1558
1559/* Private functions for handling large cases out of line.  They take
1560   individual length and array parameters because that is cheaper for
1561   the inline caller than constructing an object on the stack and
1562   passing a reference to it.  (Although many callers use wide_int_refs,
1563   we generally want those to be removed by SRA.)  */
1564namespace wi
1565{
1566  bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
1567		   const HOST_WIDE_INT *, unsigned int, unsigned int);
1568  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1569		    const HOST_WIDE_INT *, unsigned int);
1570  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1571		    const HOST_WIDE_INT *, unsigned int);
1572  int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1573		  const HOST_WIDE_INT *, unsigned int);
1574  int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1575		  const HOST_WIDE_INT *, unsigned int);
1576  unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1577			   unsigned int,
1578			   unsigned int, unsigned int);
1579  unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1580			   unsigned int,
1581			   unsigned int, unsigned int);
1582  unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1583			      unsigned int, unsigned int, unsigned int);
1584  unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1585			     unsigned int, unsigned int, unsigned int);
1586  unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1587			      unsigned int, unsigned int, unsigned int,
1588			      unsigned int);
1589  unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1590			      unsigned int, unsigned int, unsigned int,
1591			      unsigned int);
1592  unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1593			  const HOST_WIDE_INT *, unsigned int, unsigned int);
1594  unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1595			      unsigned int, const HOST_WIDE_INT *,
1596			      unsigned int, unsigned int);
1597  unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1598			 const HOST_WIDE_INT *, unsigned int, unsigned int);
1599  unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1600			     unsigned int, const HOST_WIDE_INT *,
1601			     unsigned int, unsigned int);
1602  unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1603			  const HOST_WIDE_INT *, unsigned int, unsigned int);
1604  unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1605			  const HOST_WIDE_INT *, unsigned int, unsigned int,
1606			  signop, bool *);
1607  unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1608			  const HOST_WIDE_INT *, unsigned int, unsigned int,
1609			  signop, bool *);
1610  unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1611			     unsigned int, const HOST_WIDE_INT *,
1612			     unsigned int, unsigned int, signop, bool *,
1613			     bool);
1614  unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
1615				HOST_WIDE_INT *, const HOST_WIDE_INT *,
1616				unsigned int, unsigned int,
1617				const HOST_WIDE_INT *,
1618				unsigned int, unsigned int,
1619				signop, bool *);
1620}
1621
1622/* Return the number of bits that integer X can hold.  */
1623template <typename T>
1624inline unsigned int
1625wi::get_precision (const T &x)
1626{
1627  return wi::int_traits <T>::get_precision (x);
1628}
1629
1630/* Return the number of bits that the result of a binary operation can
1631   hold when the input operands are X and Y.  */
1632template <typename T1, typename T2>
1633inline unsigned int
1634wi::get_binary_precision (const T1 &x, const T2 &y)
1635{
1636  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
1637			get_binary_result (x, y));
1638}
1639
1640/* Copy the contents of Y to X, but keeping X's current precision.  */
1641template <typename T1, typename T2>
1642inline void
1643wi::copy (T1 &x, const T2 &y)
1644{
1645  HOST_WIDE_INT *xval = x.write_val ();
1646  const HOST_WIDE_INT *yval = y.get_val ();
1647  unsigned int len = y.get_len ();
1648  unsigned int i = 0;
1649  do
1650    xval[i] = yval[i];
1651  while (++i < len);
1652  x.set_len (len, y.is_sign_extended);
1653}
1654
1655/* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */
1656template <typename T>
1657inline bool
1658wi::fits_shwi_p (const T &x)
1659{
1660  WIDE_INT_REF_FOR (T) xi (x);
1661  return xi.len == 1;
1662}
1663
1664/* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
1665   precision.  */
1666template <typename T>
1667inline bool
1668wi::fits_uhwi_p (const T &x)
1669{
1670  WIDE_INT_REF_FOR (T) xi (x);
1671  if (xi.precision <= HOST_BITS_PER_WIDE_INT)
1672    return true;
1673  if (xi.len == 1)
1674    return xi.slow () >= 0;
1675  return xi.len == 2 && xi.uhigh () == 0;
1676}
1677
1678/* Return true if X is negative based on the interpretation of SGN.
1679   For UNSIGNED, this is always false.  */
1680template <typename T>
1681inline bool
1682wi::neg_p (const T &x, signop sgn)
1683{
1684  WIDE_INT_REF_FOR (T) xi (x);
1685  if (sgn == UNSIGNED)
1686    return false;
1687  return xi.sign_mask () < 0;
1688}
1689
1690/* Return -1 if the top bit of X is set and 0 if the top bit is clear.  */
1691template <typename T>
1692inline HOST_WIDE_INT
1693wi::sign_mask (const T &x)
1694{
1695  WIDE_INT_REF_FOR (T) xi (x);
1696  return xi.sign_mask ();
1697}
1698
1699/* Return true if X == Y.  X and Y must be binary-compatible.  */
1700template <typename T1, typename T2>
1701inline bool
1702wi::eq_p (const T1 &x, const T2 &y)
1703{
1704  unsigned int precision = get_binary_precision (x, y);
1705  WIDE_INT_REF_FOR (T1) xi (x, precision);
1706  WIDE_INT_REF_FOR (T2) yi (y, precision);
1707  if (xi.is_sign_extended && yi.is_sign_extended)
1708    {
1709      /* This case reduces to array equality.  */
1710      if (xi.len != yi.len)
1711	return false;
1712      unsigned int i = 0;
1713      do
1714	if (xi.val[i] != yi.val[i])
1715	  return false;
1716      while (++i != xi.len);
1717      return true;
1718    }
1719  if (__builtin_expect (yi.len == 1, true))
1720    {
1721      /* XI is only equal to YI if it too has a single HWI.  */
1722      if (xi.len != 1)
1723	return false;
1724      /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
1725	 with 0 are simple.  */
1726      if (STATIC_CONSTANT_P (yi.val[0] == 0))
1727	return xi.val[0] == 0;
1728      /* Otherwise flush out any excess bits first.  */
1729      unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
1730      int excess = HOST_BITS_PER_WIDE_INT - precision;
1731      if (excess > 0)
1732	diff <<= excess;
1733      return diff == 0;
1734    }
1735  return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
1736}
1737
1738/* Return true if X != Y.  X and Y must be binary-compatible.  */
1739template <typename T1, typename T2>
1740inline bool
1741wi::ne_p (const T1 &x, const T2 &y)
1742{
1743  return !eq_p (x, y);
1744}
1745
1746/* Return true if X < Y when both are treated as signed values.  */
1747template <typename T1, typename T2>
1748inline bool
1749wi::lts_p (const T1 &x, const T2 &y)
1750{
1751  unsigned int precision = get_binary_precision (x, y);
1752  WIDE_INT_REF_FOR (T1) xi (x, precision);
1753  WIDE_INT_REF_FOR (T2) yi (y, precision);
1754  /* We optimize x < y, where y is 64 or fewer bits.  */
1755  if (wi::fits_shwi_p (yi))
1756    {
1757      /* Make lts_p (x, 0) as efficient as wi::neg_p (x).  */
1758      if (STATIC_CONSTANT_P (yi.val[0] == 0))
1759	return neg_p (xi);
1760      /* If x fits directly into a shwi, we can compare directly.  */
1761      if (wi::fits_shwi_p (xi))
1762	return xi.to_shwi () < yi.to_shwi ();
1763      /* If x doesn't fit and is negative, then it must be more
1764	 negative than any value in y, and hence smaller than y.  */
1765      if (neg_p (xi))
1766	return true;
1767      /* If x is positive, then it must be larger than any value in y,
1768	 and hence greater than y.  */
1769      return false;
1770    }
1771  /* Optimize the opposite case, if it can be detected at compile time.  */
1772  if (STATIC_CONSTANT_P (xi.len == 1))
1773    /* If YI is negative it is lower than the least HWI.
1774       If YI is positive it is greater than the greatest HWI.  */
1775    return !neg_p (yi);
1776  return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
1777}
1778
1779/* Return true if X < Y when both are treated as unsigned values.  */
1780template <typename T1, typename T2>
1781inline bool
1782wi::ltu_p (const T1 &x, const T2 &y)
1783{
1784  unsigned int precision = get_binary_precision (x, y);
1785  WIDE_INT_REF_FOR (T1) xi (x, precision);
1786  WIDE_INT_REF_FOR (T2) yi (y, precision);
1787  /* Optimize comparisons with constants.  */
1788  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
1789    return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
1790  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
1791    return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
1792  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
1793     for precisions greater than HOST_BITS_WIDE_INT, but sign-extending both
1794     values does not change the result.  */
1795  if (__builtin_expect (xi.len + yi.len == 2, true))
1796    {
1797      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1798      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1799      return xl < yl;
1800    }
1801  return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
1802}
1803
1804/* Return true if X < Y.  Signedness of X and Y is indicated by SGN.  */
1805template <typename T1, typename T2>
1806inline bool
1807wi::lt_p (const T1 &x, const T2 &y, signop sgn)
1808{
1809  if (sgn == SIGNED)
1810    return lts_p (x, y);
1811  else
1812    return ltu_p (x, y);
1813}
1814
1815/* Return true if X <= Y when both are treated as signed values.  */
1816template <typename T1, typename T2>
1817inline bool
1818wi::les_p (const T1 &x, const T2 &y)
1819{
1820  return !lts_p (y, x);
1821}
1822
1823/* Return true if X <= Y when both are treated as unsigned values.  */
1824template <typename T1, typename T2>
1825inline bool
1826wi::leu_p (const T1 &x, const T2 &y)
1827{
1828  return !ltu_p (y, x);
1829}
1830
1831/* Return true if X <= Y.  Signedness of X and Y is indicated by SGN.  */
1832template <typename T1, typename T2>
1833inline bool
1834wi::le_p (const T1 &x, const T2 &y, signop sgn)
1835{
1836  if (sgn == SIGNED)
1837    return les_p (x, y);
1838  else
1839    return leu_p (x, y);
1840}
1841
1842/* Return true if X > Y when both are treated as signed values.  */
1843template <typename T1, typename T2>
1844inline bool
1845wi::gts_p (const T1 &x, const T2 &y)
1846{
1847  return lts_p (y, x);
1848}
1849
1850/* Return true if X > Y when both are treated as unsigned values.  */
1851template <typename T1, typename T2>
1852inline bool
1853wi::gtu_p (const T1 &x, const T2 &y)
1854{
1855  return ltu_p (y, x);
1856}
1857
1858/* Return true if X > Y.  Signedness of X and Y is indicated by SGN.  */
1859template <typename T1, typename T2>
1860inline bool
1861wi::gt_p (const T1 &x, const T2 &y, signop sgn)
1862{
1863  if (sgn == SIGNED)
1864    return gts_p (x, y);
1865  else
1866    return gtu_p (x, y);
1867}
1868
1869/* Return true if X >= Y when both are treated as signed values.  */
1870template <typename T1, typename T2>
1871inline bool
1872wi::ges_p (const T1 &x, const T2 &y)
1873{
1874  return !lts_p (x, y);
1875}
1876
1877/* Return true if X >= Y when both are treated as unsigned values.  */
1878template <typename T1, typename T2>
1879inline bool
1880wi::geu_p (const T1 &x, const T2 &y)
1881{
1882  return !ltu_p (x, y);
1883}
1884
1885/* Return true if X >= Y.  Signedness of X and Y is indicated by SGN.  */
1886template <typename T1, typename T2>
1887inline bool
1888wi::ge_p (const T1 &x, const T2 &y, signop sgn)
1889{
1890  if (sgn == SIGNED)
1891    return ges_p (x, y);
1892  else
1893    return geu_p (x, y);
1894}
1895
1896/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
1897   as signed values.  */
1898template <typename T1, typename T2>
1899inline int
1900wi::cmps (const T1 &x, const T2 &y)
1901{
1902  unsigned int precision = get_binary_precision (x, y);
1903  WIDE_INT_REF_FOR (T1) xi (x, precision);
1904  WIDE_INT_REF_FOR (T2) yi (y, precision);
1905  if (wi::fits_shwi_p (yi))
1906    {
1907      /* Special case for comparisons with 0.  */
1908      if (STATIC_CONSTANT_P (yi.val[0] == 0))
1909	return neg_p (xi) ? -1 : !(xi.len == 1 && xi.val[0] == 0);
1910      /* If x fits into a signed HWI, we can compare directly.  */
1911      if (wi::fits_shwi_p (xi))
1912	{
1913	  HOST_WIDE_INT xl = xi.to_shwi ();
1914	  HOST_WIDE_INT yl = yi.to_shwi ();
1915	  return xl < yl ? -1 : xl > yl;
1916	}
1917      /* If x doesn't fit and is negative, then it must be more
1918	 negative than any signed HWI, and hence smaller than y.  */
1919      if (neg_p (xi))
1920	return -1;
1921      /* If x is positive, then it must be larger than any signed HWI,
1922	 and hence greater than y.  */
1923      return 1;
1924    }
1925  /* Optimize the opposite case, if it can be detected at compile time.  */
1926  if (STATIC_CONSTANT_P (xi.len == 1))
1927    /* If YI is negative it is lower than the least HWI.
1928       If YI is positive it is greater than the greatest HWI.  */
1929    return neg_p (yi) ? 1 : -1;
1930  return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
1931}
1932
1933/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
1934   as unsigned values.  */
1935template <typename T1, typename T2>
1936inline int
1937wi::cmpu (const T1 &x, const T2 &y)
1938{
1939  unsigned int precision = get_binary_precision (x, y);
1940  WIDE_INT_REF_FOR (T1) xi (x, precision);
1941  WIDE_INT_REF_FOR (T2) yi (y, precision);
1942  /* Optimize comparisons with constants.  */
1943  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
1944    {
1945      /* If XI doesn't fit in a HWI then it must be larger than YI.  */
1946      if (xi.len != 1)
1947	return 1;
1948      /* Otherwise compare directly.  */
1949      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1950      unsigned HOST_WIDE_INT yl = yi.val[0];
1951      return xl < yl ? -1 : xl > yl;
1952    }
1953  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
1954    {
1955      /* If YI doesn't fit in a HWI then it must be larger than XI.  */
1956      if (yi.len != 1)
1957	return -1;
1958      /* Otherwise compare directly.  */
1959      unsigned HOST_WIDE_INT xl = xi.val[0];
1960      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1961      return xl < yl ? -1 : xl > yl;
1962    }
1963  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
1964     for precisions greater than HOST_BITS_WIDE_INT, but sign-extending both
1965     values does not change the result.  */
1966  if (__builtin_expect (xi.len + yi.len == 2, true))
1967    {
1968      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1969      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1970      return xl < yl ? -1 : xl > yl;
1971    }
1972  return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
1973}
1974
1975/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Signedness of
1976   X and Y indicated by SGN.  */
1977template <typename T1, typename T2>
1978inline int
1979wi::cmp (const T1 &x, const T2 &y, signop sgn)
1980{
1981  if (sgn == SIGNED)
1982    return cmps (x, y);
1983  else
1984    return cmpu (x, y);
1985}
1986
1987/* Return ~x.  */
1988template <typename T>
1989inline WI_UNARY_RESULT (T)
1990wi::bit_not (const T &x)
1991{
1992  WI_UNARY_RESULT_VAR (result, val, T, x);
1993  WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
1994  for (unsigned int i = 0; i < xi.len; ++i)
1995    val[i] = ~xi.val[i];
1996  result.set_len (xi.len);
1997  return result;
1998}
1999
2000/* Return -x.  */
2001template <typename T>
2002inline WI_UNARY_RESULT (T)
2003wi::neg (const T &x)
2004{
2005  return sub (0, x);
2006}
2007
2008/* Return -x.  Indicate in *OVERFLOW if X is the minimum signed value.  */
2009template <typename T>
2010inline WI_UNARY_RESULT (T)
2011wi::neg (const T &x, bool *overflow)
2012{
2013  *overflow = only_sign_bit_p (x);
2014  return sub (0, x);
2015}
2016
2017/* Return the absolute value of x.  */
2018template <typename T>
2019inline WI_UNARY_RESULT (T)
2020wi::abs (const T &x)
2021{
2022  return neg_p (x) ? neg (x) : WI_UNARY_RESULT (T) (x);
2023}
2024
2025/* Return the result of sign-extending the low OFFSET bits of X.  */
2026template <typename T>
2027inline WI_UNARY_RESULT (T)
2028wi::sext (const T &x, unsigned int offset)
2029{
2030  WI_UNARY_RESULT_VAR (result, val, T, x);
2031  unsigned int precision = get_precision (result);
2032  WIDE_INT_REF_FOR (T) xi (x, precision);
2033
2034  if (offset <= HOST_BITS_PER_WIDE_INT)
2035    {
2036      val[0] = sext_hwi (xi.ulow (), offset);
2037      result.set_len (1, true);
2038    }
2039  else
2040    result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
2041  return result;
2042}
2043
2044/* Return the result of zero-extending the low OFFSET bits of X.  */
2045template <typename T>
2046inline WI_UNARY_RESULT (T)
2047wi::zext (const T &x, unsigned int offset)
2048{
2049  WI_UNARY_RESULT_VAR (result, val, T, x);
2050  unsigned int precision = get_precision (result);
2051  WIDE_INT_REF_FOR (T) xi (x, precision);
2052
2053  /* This is not just an optimization, it is actually required to
2054     maintain canonization.  */
2055  if (offset >= precision)
2056    {
2057      wi::copy (result, xi);
2058      return result;
2059    }
2060
2061  /* In these cases we know that at least the top bit will be clear,
2062     so no sign extension is necessary.  */
2063  if (offset < HOST_BITS_PER_WIDE_INT)
2064    {
2065      val[0] = zext_hwi (xi.ulow (), offset);
2066      result.set_len (1, true);
2067    }
2068  else
2069    result.set_len (zext_large (val, xi.val, xi.len, precision, offset), true);
2070  return result;
2071}
2072
2073/* Return the result of extending the low OFFSET bits of X according to
2074   signedness SGN.  */
2075template <typename T>
2076inline WI_UNARY_RESULT (T)
2077wi::ext (const T &x, unsigned int offset, signop sgn)
2078{
2079  return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
2080}
2081
2082/* Return an integer that represents X | (1 << bit).  */
2083template <typename T>
2084inline WI_UNARY_RESULT (T)
2085wi::set_bit (const T &x, unsigned int bit)
2086{
2087  WI_UNARY_RESULT_VAR (result, val, T, x);
2088  unsigned int precision = get_precision (result);
2089  WIDE_INT_REF_FOR (T) xi (x, precision);
2090  if (precision <= HOST_BITS_PER_WIDE_INT)
2091    {
2092      val[0] = xi.ulow () | ((unsigned HOST_WIDE_INT) 1 << bit);
2093      result.set_len (1);
2094    }
2095  else
2096    result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
2097  return result;
2098}
2099
2100/* Return the mininum of X and Y, treating them both as having
2101   signedness SGN.  */
2102template <typename T1, typename T2>
2103inline WI_BINARY_RESULT (T1, T2)
2104wi::min (const T1 &x, const T2 &y, signop sgn)
2105{
2106  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
2107  unsigned int precision = get_precision (result);
2108  if (wi::le_p (x, y, sgn))
2109    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
2110  else
2111    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
2112  return result;
2113}
2114
2115/* Return the minimum of X and Y, treating both as signed values.  */
2116template <typename T1, typename T2>
2117inline WI_BINARY_RESULT (T1, T2)
2118wi::smin (const T1 &x, const T2 &y)
2119{
2120  return wi::min (x, y, SIGNED);
2121}
2122
2123/* Return the minimum of X and Y, treating both as unsigned values.  */
2124template <typename T1, typename T2>
2125inline WI_BINARY_RESULT (T1, T2)
2126wi::umin (const T1 &x, const T2 &y)
2127{
2128  return wi::min (x, y, UNSIGNED);
2129}
2130
2131/* Return the maxinum of X and Y, treating them both as having
2132   signedness SGN.  */
2133template <typename T1, typename T2>
2134inline WI_BINARY_RESULT (T1, T2)
2135wi::max (const T1 &x, const T2 &y, signop sgn)
2136{
2137  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
2138  unsigned int precision = get_precision (result);
2139  if (wi::ge_p (x, y, sgn))
2140    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
2141  else
2142    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
2143  return result;
2144}
2145
2146/* Return the maximum of X and Y, treating both as signed values.  */
2147template <typename T1, typename T2>
2148inline WI_BINARY_RESULT (T1, T2)
2149wi::smax (const T1 &x, const T2 &y)
2150{
2151  return wi::max (x, y, SIGNED);
2152}
2153
2154/* Return the maximum of X and Y, treating both as unsigned values.  */
2155template <typename T1, typename T2>
2156inline WI_BINARY_RESULT (T1, T2)
2157wi::umax (const T1 &x, const T2 &y)
2158{
2159  return wi::max (x, y, UNSIGNED);
2160}
2161
2162/* Return X & Y.  */
2163template <typename T1, typename T2>
2164inline WI_BINARY_RESULT (T1, T2)
2165wi::bit_and (const T1 &x, const T2 &y)
2166{
2167  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2168  unsigned int precision = get_precision (result);
2169  WIDE_INT_REF_FOR (T1) xi (x, precision);
2170  WIDE_INT_REF_FOR (T2) yi (y, precision);
2171  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2172  if (__builtin_expect (xi.len + yi.len == 2, true))
2173    {
2174      val[0] = xi.ulow () & yi.ulow ();
2175      result.set_len (1, is_sign_extended);
2176    }
2177  else
2178    result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
2179			       precision), is_sign_extended);
2180  return result;
2181}
2182
2183/* Return X & ~Y.  */
2184template <typename T1, typename T2>
2185inline WI_BINARY_RESULT (T1, T2)
2186wi::bit_and_not (const T1 &x, const T2 &y)
2187{
2188  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2189  unsigned int precision = get_precision (result);
2190  WIDE_INT_REF_FOR (T1) xi (x, precision);
2191  WIDE_INT_REF_FOR (T2) yi (y, precision);
2192  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2193  if (__builtin_expect (xi.len + yi.len == 2, true))
2194    {
2195      val[0] = xi.ulow () & ~yi.ulow ();
2196      result.set_len (1, is_sign_extended);
2197    }
2198  else
2199    result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
2200				   precision), is_sign_extended);
2201  return result;
2202}
2203
2204/* Return X | Y.  */
2205template <typename T1, typename T2>
2206inline WI_BINARY_RESULT (T1, T2)
2207wi::bit_or (const T1 &x, const T2 &y)
2208{
2209  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2210  unsigned int precision = get_precision (result);
2211  WIDE_INT_REF_FOR (T1) xi (x, precision);
2212  WIDE_INT_REF_FOR (T2) yi (y, precision);
2213  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2214  if (__builtin_expect (xi.len + yi.len == 2, true))
2215    {
2216      val[0] = xi.ulow () | yi.ulow ();
2217      result.set_len (1, is_sign_extended);
2218    }
2219  else
2220    result.set_len (or_large (val, xi.val, xi.len,
2221			      yi.val, yi.len, precision), is_sign_extended);
2222  return result;
2223}
2224
2225/* Return X | ~Y.  */
2226template <typename T1, typename T2>
2227inline WI_BINARY_RESULT (T1, T2)
2228wi::bit_or_not (const T1 &x, const T2 &y)
2229{
2230  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2231  unsigned int precision = get_precision (result);
2232  WIDE_INT_REF_FOR (T1) xi (x, precision);
2233  WIDE_INT_REF_FOR (T2) yi (y, precision);
2234  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2235  if (__builtin_expect (xi.len + yi.len == 2, true))
2236    {
2237      val[0] = xi.ulow () | ~yi.ulow ();
2238      result.set_len (1, is_sign_extended);
2239    }
2240  else
2241    result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
2242				  precision), is_sign_extended);
2243  return result;
2244}
2245
2246/* Return X ^ Y.  */
2247template <typename T1, typename T2>
2248inline WI_BINARY_RESULT (T1, T2)
2249wi::bit_xor (const T1 &x, const T2 &y)
2250{
2251  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2252  unsigned int precision = get_precision (result);
2253  WIDE_INT_REF_FOR (T1) xi (x, precision);
2254  WIDE_INT_REF_FOR (T2) yi (y, precision);
2255  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2256  if (__builtin_expect (xi.len + yi.len == 2, true))
2257    {
2258      val[0] = xi.ulow () ^ yi.ulow ();
2259      result.set_len (1, is_sign_extended);
2260    }
2261  else
2262    result.set_len (xor_large (val, xi.val, xi.len,
2263			       yi.val, yi.len, precision), is_sign_extended);
2264  return result;
2265}
2266
2267/* Return X + Y.  */
2268template <typename T1, typename T2>
2269inline WI_BINARY_RESULT (T1, T2)
2270wi::add (const T1 &x, const T2 &y)
2271{
2272  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2273  unsigned int precision = get_precision (result);
2274  WIDE_INT_REF_FOR (T1) xi (x, precision);
2275  WIDE_INT_REF_FOR (T2) yi (y, precision);
2276  if (precision <= HOST_BITS_PER_WIDE_INT)
2277    {
2278      val[0] = xi.ulow () + yi.ulow ();
2279      result.set_len (1);
2280    }
2281  /* If the precision is known at compile time to be greater than
2282     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
2283     knowing that (a) all bits in those HWIs are significant and
2284     (b) the result has room for at least two HWIs.  This provides
2285     a fast path for things like offset_int and widest_int.
2286
2287     The STATIC_CONSTANT_P test prevents this path from being
2288     used for wide_ints.  wide_ints with precisions greater than
2289     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
2290     point handling them inline.  */
2291  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
2292	   && __builtin_expect (xi.len + yi.len == 2, true))
2293    {
2294      unsigned HOST_WIDE_INT xl = xi.ulow ();
2295      unsigned HOST_WIDE_INT yl = yi.ulow ();
2296      unsigned HOST_WIDE_INT resultl = xl + yl;
2297      val[0] = resultl;
2298      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
2299      result.set_len (1 + (((resultl ^ xl) & (resultl ^ yl))
2300			   >> (HOST_BITS_PER_WIDE_INT - 1)));
2301    }
2302  else
2303    result.set_len (add_large (val, xi.val, xi.len,
2304			       yi.val, yi.len, precision,
2305			       UNSIGNED, 0));
2306  return result;
2307}
2308
2309/* Return X + Y.  Treat X and Y as having the signednes given by SGN
2310   and indicate in *OVERFLOW whether the operation overflowed.  */
2311template <typename T1, typename T2>
2312inline WI_BINARY_RESULT (T1, T2)
2313wi::add (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2314{
2315  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2316  unsigned int precision = get_precision (result);
2317  WIDE_INT_REF_FOR (T1) xi (x, precision);
2318  WIDE_INT_REF_FOR (T2) yi (y, precision);
2319  if (precision <= HOST_BITS_PER_WIDE_INT)
2320    {
2321      unsigned HOST_WIDE_INT xl = xi.ulow ();
2322      unsigned HOST_WIDE_INT yl = yi.ulow ();
2323      unsigned HOST_WIDE_INT resultl = xl + yl;
2324      if (sgn == SIGNED)
2325	*overflow = (((resultl ^ xl) & (resultl ^ yl))
2326		     >> (precision - 1)) & 1;
2327      else
2328	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
2329		     < (xl << (HOST_BITS_PER_WIDE_INT - precision)));
2330      val[0] = resultl;
2331      result.set_len (1);
2332    }
2333  else
2334    result.set_len (add_large (val, xi.val, xi.len,
2335			       yi.val, yi.len, precision,
2336			       sgn, overflow));
2337  return result;
2338}
2339
2340/* Return X - Y.  */
2341template <typename T1, typename T2>
2342inline WI_BINARY_RESULT (T1, T2)
2343wi::sub (const T1 &x, const T2 &y)
2344{
2345  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2346  unsigned int precision = get_precision (result);
2347  WIDE_INT_REF_FOR (T1) xi (x, precision);
2348  WIDE_INT_REF_FOR (T2) yi (y, precision);
2349  if (precision <= HOST_BITS_PER_WIDE_INT)
2350    {
2351      val[0] = xi.ulow () - yi.ulow ();
2352      result.set_len (1);
2353    }
2354  /* If the precision is known at compile time to be greater than
2355     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
2356     knowing that (a) all bits in those HWIs are significant and
2357     (b) the result has room for at least two HWIs.  This provides
2358     a fast path for things like offset_int and widest_int.
2359
2360     The STATIC_CONSTANT_P test prevents this path from being
2361     used for wide_ints.  wide_ints with precisions greater than
2362     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
2363     point handling them inline.  */
2364  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
2365	   && __builtin_expect (xi.len + yi.len == 2, true))
2366    {
2367      unsigned HOST_WIDE_INT xl = xi.ulow ();
2368      unsigned HOST_WIDE_INT yl = yi.ulow ();
2369      unsigned HOST_WIDE_INT resultl = xl - yl;
2370      val[0] = resultl;
2371      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
2372      result.set_len (1 + (((resultl ^ xl) & (xl ^ yl))
2373			   >> (HOST_BITS_PER_WIDE_INT - 1)));
2374    }
2375  else
2376    result.set_len (sub_large (val, xi.val, xi.len,
2377			       yi.val, yi.len, precision,
2378			       UNSIGNED, 0));
2379  return result;
2380}
2381
2382/* Return X - Y.  Treat X and Y as having the signednes given by SGN
2383   and indicate in *OVERFLOW whether the operation overflowed.  */
2384template <typename T1, typename T2>
2385inline WI_BINARY_RESULT (T1, T2)
2386wi::sub (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2387{
2388  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2389  unsigned int precision = get_precision (result);
2390  WIDE_INT_REF_FOR (T1) xi (x, precision);
2391  WIDE_INT_REF_FOR (T2) yi (y, precision);
2392  if (precision <= HOST_BITS_PER_WIDE_INT)
2393    {
2394      unsigned HOST_WIDE_INT xl = xi.ulow ();
2395      unsigned HOST_WIDE_INT yl = yi.ulow ();
2396      unsigned HOST_WIDE_INT resultl = xl - yl;
2397      if (sgn == SIGNED)
2398	*overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
2399      else
2400	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
2401		     > (xl << (HOST_BITS_PER_WIDE_INT - precision)));
2402      val[0] = resultl;
2403      result.set_len (1);
2404    }
2405  else
2406    result.set_len (sub_large (val, xi.val, xi.len,
2407			       yi.val, yi.len, precision,
2408			       sgn, overflow));
2409  return result;
2410}
2411
2412/* Return X * Y.  */
2413template <typename T1, typename T2>
2414inline WI_BINARY_RESULT (T1, T2)
2415wi::mul (const T1 &x, const T2 &y)
2416{
2417  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2418  unsigned int precision = get_precision (result);
2419  WIDE_INT_REF_FOR (T1) xi (x, precision);
2420  WIDE_INT_REF_FOR (T2) yi (y, precision);
2421  if (precision <= HOST_BITS_PER_WIDE_INT)
2422    {
2423      val[0] = xi.ulow () * yi.ulow ();
2424      result.set_len (1);
2425    }
2426  else
2427    result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
2428				  precision, UNSIGNED, 0, false));
2429  return result;
2430}
2431
2432/* Return X * Y.  Treat X and Y as having the signednes given by SGN
2433   and indicate in *OVERFLOW whether the operation overflowed.  */
2434template <typename T1, typename T2>
2435inline WI_BINARY_RESULT (T1, T2)
2436wi::mul (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2437{
2438  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2439  unsigned int precision = get_precision (result);
2440  WIDE_INT_REF_FOR (T1) xi (x, precision);
2441  WIDE_INT_REF_FOR (T2) yi (y, precision);
2442  result.set_len (mul_internal (val, xi.val, xi.len,
2443				yi.val, yi.len, precision,
2444				sgn, overflow, false));
2445  return result;
2446}
2447
2448/* Return X * Y, treating both X and Y as signed values.  Indicate in
2449   *OVERFLOW whether the operation overflowed.  */
2450template <typename T1, typename T2>
2451inline WI_BINARY_RESULT (T1, T2)
2452wi::smul (const T1 &x, const T2 &y, bool *overflow)
2453{
2454  return mul (x, y, SIGNED, overflow);
2455}
2456
2457/* Return X * Y, treating both X and Y as unsigned values.  Indicate in
2458   *OVERFLOW whether the operation overflowed.  */
2459template <typename T1, typename T2>
2460inline WI_BINARY_RESULT (T1, T2)
2461wi::umul (const T1 &x, const T2 &y, bool *overflow)
2462{
2463  return mul (x, y, UNSIGNED, overflow);
2464}
2465
2466/* Perform a widening multiplication of X and Y, extending the values
2467   according to SGN, and return the high part of the result.  */
2468template <typename T1, typename T2>
2469inline WI_BINARY_RESULT (T1, T2)
2470wi::mul_high (const T1 &x, const T2 &y, signop sgn)
2471{
2472  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2473  unsigned int precision = get_precision (result);
2474  WIDE_INT_REF_FOR (T1) xi (x, precision);
2475  WIDE_INT_REF_FOR (T2) yi (y, precision);
2476  result.set_len (mul_internal (val, xi.val, xi.len,
2477				yi.val, yi.len, precision,
2478				sgn, 0, true));
2479  return result;
2480}
2481
2482/* Return X / Y, rouding towards 0.  Treat X and Y as having the
2483   signedness given by SGN.  Indicate in *OVERFLOW if the result
2484   overflows.  */
2485template <typename T1, typename T2>
2486inline WI_BINARY_RESULT (T1, T2)
2487wi::div_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2488{
2489  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2490  unsigned int precision = get_precision (quotient);
2491  WIDE_INT_REF_FOR (T1) xi (x, precision);
2492  WIDE_INT_REF_FOR (T2) yi (y);
2493
2494  quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
2495				     precision,
2496				     yi.val, yi.len, yi.precision,
2497				     sgn, overflow));
2498  return quotient;
2499}
2500
2501/* Return X / Y, rouding towards 0.  Treat X and Y as signed values.  */
2502template <typename T1, typename T2>
2503inline WI_BINARY_RESULT (T1, T2)
2504wi::sdiv_trunc (const T1 &x, const T2 &y)
2505{
2506  return div_trunc (x, y, SIGNED);
2507}
2508
2509/* Return X / Y, rouding towards 0.  Treat X and Y as unsigned values.  */
2510template <typename T1, typename T2>
2511inline WI_BINARY_RESULT (T1, T2)
2512wi::udiv_trunc (const T1 &x, const T2 &y)
2513{
2514  return div_trunc (x, y, UNSIGNED);
2515}
2516
2517/* Return X / Y, rouding towards -inf.  Treat X and Y as having the
2518   signedness given by SGN.  Indicate in *OVERFLOW if the result
2519   overflows.  */
2520template <typename T1, typename T2>
2521inline WI_BINARY_RESULT (T1, T2)
2522wi::div_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2523{
2524  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2525  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2526  unsigned int precision = get_precision (quotient);
2527  WIDE_INT_REF_FOR (T1) xi (x, precision);
2528  WIDE_INT_REF_FOR (T2) yi (y);
2529
2530  unsigned int remainder_len;
2531  quotient.set_len (divmod_internal (quotient_val,
2532				     &remainder_len, remainder_val,
2533				     xi.val, xi.len, precision,
2534				     yi.val, yi.len, yi.precision, sgn,
2535				     overflow));
2536  remainder.set_len (remainder_len);
2537  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2538    return quotient - 1;
2539  return quotient;
2540}
2541
2542/* Return X / Y, rouding towards -inf.  Treat X and Y as signed values.  */
2543template <typename T1, typename T2>
2544inline WI_BINARY_RESULT (T1, T2)
2545wi::sdiv_floor (const T1 &x, const T2 &y)
2546{
2547  return div_floor (x, y, SIGNED);
2548}
2549
2550/* Return X / Y, rouding towards -inf.  Treat X and Y as unsigned values.  */
2551/* ??? Why do we have both this and udiv_trunc.  Aren't they the same?  */
2552template <typename T1, typename T2>
2553inline WI_BINARY_RESULT (T1, T2)
2554wi::udiv_floor (const T1 &x, const T2 &y)
2555{
2556  return div_floor (x, y, UNSIGNED);
2557}
2558
2559/* Return X / Y, rouding towards +inf.  Treat X and Y as having the
2560   signedness given by SGN.  Indicate in *OVERFLOW if the result
2561   overflows.  */
2562template <typename T1, typename T2>
2563inline WI_BINARY_RESULT (T1, T2)
2564wi::div_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2565{
2566  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2567  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2568  unsigned int precision = get_precision (quotient);
2569  WIDE_INT_REF_FOR (T1) xi (x, precision);
2570  WIDE_INT_REF_FOR (T2) yi (y);
2571
2572  unsigned int remainder_len;
2573  quotient.set_len (divmod_internal (quotient_val,
2574				     &remainder_len, remainder_val,
2575				     xi.val, xi.len, precision,
2576				     yi.val, yi.len, yi.precision, sgn,
2577				     overflow));
2578  remainder.set_len (remainder_len);
2579  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2580    return quotient + 1;
2581  return quotient;
2582}
2583
2584/* Return X / Y, rouding towards nearest with ties away from zero.
2585   Treat X and Y as having the signedness given by SGN.  Indicate
2586   in *OVERFLOW if the result overflows.  */
2587template <typename T1, typename T2>
2588inline WI_BINARY_RESULT (T1, T2)
2589wi::div_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2590{
2591  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2592  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2593  unsigned int precision = get_precision (quotient);
2594  WIDE_INT_REF_FOR (T1) xi (x, precision);
2595  WIDE_INT_REF_FOR (T2) yi (y);
2596
2597  unsigned int remainder_len;
2598  quotient.set_len (divmod_internal (quotient_val,
2599				     &remainder_len, remainder_val,
2600				     xi.val, xi.len, precision,
2601				     yi.val, yi.len, yi.precision, sgn,
2602				     overflow));
2603  remainder.set_len (remainder_len);
2604
2605  if (remainder != 0)
2606    {
2607      if (sgn == SIGNED)
2608	{
2609	  WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
2610	  if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
2611	    {
2612	      if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2613		return quotient - 1;
2614	      else
2615		return quotient + 1;
2616	    }
2617	}
2618      else
2619	{
2620	  if (wi::geu_p (remainder, wi::sub (y, remainder)))
2621	    return quotient + 1;
2622	}
2623    }
2624  return quotient;
2625}
2626
2627/* Return X / Y, rouding towards 0.  Treat X and Y as having the
2628   signedness given by SGN.  Store the remainder in *REMAINDER_PTR.  */
2629template <typename T1, typename T2>
2630inline WI_BINARY_RESULT (T1, T2)
2631wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
2632		  WI_BINARY_RESULT (T1, T2) *remainder_ptr)
2633{
2634  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2635  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2636  unsigned int precision = get_precision (quotient);
2637  WIDE_INT_REF_FOR (T1) xi (x, precision);
2638  WIDE_INT_REF_FOR (T2) yi (y);
2639
2640  unsigned int remainder_len;
2641  quotient.set_len (divmod_internal (quotient_val,
2642				     &remainder_len, remainder_val,
2643				     xi.val, xi.len, precision,
2644				     yi.val, yi.len, yi.precision, sgn, 0));
2645  remainder.set_len (remainder_len);
2646
2647  *remainder_ptr = remainder;
2648  return quotient;
2649}
2650
2651/* Compute X / Y, rouding towards 0, and return the remainder.
2652   Treat X and Y as having the signedness given by SGN.  Indicate
2653   in *OVERFLOW if the division overflows.  */
2654template <typename T1, typename T2>
2655inline WI_BINARY_RESULT (T1, T2)
2656wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2657{
2658  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2659  unsigned int precision = get_precision (remainder);
2660  WIDE_INT_REF_FOR (T1) xi (x, precision);
2661  WIDE_INT_REF_FOR (T2) yi (y);
2662
2663  unsigned int remainder_len;
2664  divmod_internal (0, &remainder_len, remainder_val,
2665		   xi.val, xi.len, precision,
2666		   yi.val, yi.len, yi.precision, sgn, overflow);
2667  remainder.set_len (remainder_len);
2668
2669  return remainder;
2670}
2671
2672/* Compute X / Y, rouding towards 0, and return the remainder.
2673   Treat X and Y as signed values.  */
2674template <typename T1, typename T2>
2675inline WI_BINARY_RESULT (T1, T2)
2676wi::smod_trunc (const T1 &x, const T2 &y)
2677{
2678  return mod_trunc (x, y, SIGNED);
2679}
2680
2681/* Compute X / Y, rouding towards 0, and return the remainder.
2682   Treat X and Y as unsigned values.  */
2683template <typename T1, typename T2>
2684inline WI_BINARY_RESULT (T1, T2)
2685wi::umod_trunc (const T1 &x, const T2 &y)
2686{
2687  return mod_trunc (x, y, UNSIGNED);
2688}
2689
2690/* Compute X / Y, rouding towards -inf, and return the remainder.
2691   Treat X and Y as having the signedness given by SGN.  Indicate
2692   in *OVERFLOW if the division overflows.  */
2693template <typename T1, typename T2>
2694inline WI_BINARY_RESULT (T1, T2)
2695wi::mod_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2696{
2697  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2698  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2699  unsigned int precision = get_precision (quotient);
2700  WIDE_INT_REF_FOR (T1) xi (x, precision);
2701  WIDE_INT_REF_FOR (T2) yi (y);
2702
2703  unsigned int remainder_len;
2704  quotient.set_len (divmod_internal (quotient_val,
2705				     &remainder_len, remainder_val,
2706				     xi.val, xi.len, precision,
2707				     yi.val, yi.len, yi.precision, sgn,
2708				     overflow));
2709  remainder.set_len (remainder_len);
2710
2711  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2712    return remainder + y;
2713  return remainder;
2714}
2715
2716/* Compute X / Y, rouding towards -inf, and return the remainder.
2717   Treat X and Y as unsigned values.  */
2718/* ??? Why do we have both this and umod_trunc.  Aren't they the same?  */
2719template <typename T1, typename T2>
2720inline WI_BINARY_RESULT (T1, T2)
2721wi::umod_floor (const T1 &x, const T2 &y)
2722{
2723  return mod_floor (x, y, UNSIGNED);
2724}
2725
2726/* Compute X / Y, rouding towards +inf, and return the remainder.
2727   Treat X and Y as having the signedness given by SGN.  Indicate
2728   in *OVERFLOW if the division overflows.  */
2729template <typename T1, typename T2>
2730inline WI_BINARY_RESULT (T1, T2)
2731wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2732{
2733  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2734  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2735  unsigned int precision = get_precision (quotient);
2736  WIDE_INT_REF_FOR (T1) xi (x, precision);
2737  WIDE_INT_REF_FOR (T2) yi (y);
2738
2739  unsigned int remainder_len;
2740  quotient.set_len (divmod_internal (quotient_val,
2741				     &remainder_len, remainder_val,
2742				     xi.val, xi.len, precision,
2743				     yi.val, yi.len, yi.precision, sgn,
2744				     overflow));
2745  remainder.set_len (remainder_len);
2746
2747  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2748    return remainder - y;
2749  return remainder;
2750}
2751
2752/* Compute X / Y, rouding towards nearest with ties away from zero,
2753   and return the remainder.  Treat X and Y as having the signedness
2754   given by SGN.  Indicate in *OVERFLOW if the division overflows.  */
2755template <typename T1, typename T2>
2756inline WI_BINARY_RESULT (T1, T2)
2757wi::mod_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2758{
2759  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2760  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2761  unsigned int precision = get_precision (quotient);
2762  WIDE_INT_REF_FOR (T1) xi (x, precision);
2763  WIDE_INT_REF_FOR (T2) yi (y);
2764
2765  unsigned int remainder_len;
2766  quotient.set_len (divmod_internal (quotient_val,
2767				     &remainder_len, remainder_val,
2768				     xi.val, xi.len, precision,
2769				     yi.val, yi.len, yi.precision, sgn,
2770				     overflow));
2771  remainder.set_len (remainder_len);
2772
2773  if (remainder != 0)
2774    {
2775      if (sgn == SIGNED)
2776	{
2777	  WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
2778	  if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
2779	    {
2780	      if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2781		return remainder + y;
2782	      else
2783		return remainder - y;
2784	    }
2785	}
2786      else
2787	{
2788	  if (wi::geu_p (remainder, wi::sub (y, remainder)))
2789	    return remainder - y;
2790	}
2791    }
2792  return remainder;
2793}
2794
2795/* Return true if X is a multiple of Y.  Treat X and Y as having the
2796   signedness given by SGN.  */
2797template <typename T1, typename T2>
2798inline bool
2799wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn)
2800{
2801  return wi::mod_trunc (x, y, sgn) == 0;
2802}
2803
2804/* Return true if X is a multiple of Y, storing X / Y in *RES if so.
2805   Treat X and Y as having the signedness given by SGN.  */
2806template <typename T1, typename T2>
2807inline bool
2808wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
2809		   WI_BINARY_RESULT (T1, T2) *res)
2810{
2811  WI_BINARY_RESULT (T1, T2) remainder;
2812  WI_BINARY_RESULT (T1, T2) quotient
2813    = divmod_trunc (x, y, sgn, &remainder);
2814  if (remainder == 0)
2815    {
2816      *res = quotient;
2817      return true;
2818    }
2819  return false;
2820}
2821
2822/* Return X << Y.  Return 0 if Y is greater than or equal to
2823   the precision of X.  */
2824template <typename T1, typename T2>
2825inline WI_UNARY_RESULT (T1)
2826wi::lshift (const T1 &x, const T2 &y)
2827{
2828  WI_UNARY_RESULT_VAR (result, val, T1, x);
2829  unsigned int precision = get_precision (result);
2830  WIDE_INT_REF_FOR (T1) xi (x, precision);
2831  WIDE_INT_REF_FOR (T2) yi (y);
2832  /* Handle the simple cases quickly.   */
2833  if (geu_p (yi, precision))
2834    {
2835      val[0] = 0;
2836      result.set_len (1);
2837    }
2838  else
2839    {
2840      unsigned int shift = yi.to_uhwi ();
2841      /* For fixed-precision integers like offset_int and widest_int,
2842	 handle the case where the shift value is constant and the
2843	 result is a single nonnegative HWI (meaning that we don't
2844	 need to worry about val[1]).  This is particularly common
2845	 for converting a byte count to a bit count.
2846
2847	 For variable-precision integers like wide_int, handle HWI
2848	 and sub-HWI integers inline.  */
2849      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
2850	  ? (STATIC_CONSTANT_P (shift < HOST_BITS_PER_WIDE_INT - 1)
2851	     && xi.len == 1
2852	     && xi.val[0] <= (HOST_WIDE_INT) ((unsigned HOST_WIDE_INT)
2853					      HOST_WIDE_INT_MAX >> shift))
2854	  : precision <= HOST_BITS_PER_WIDE_INT)
2855	{
2856	  val[0] = xi.ulow () << shift;
2857	  result.set_len (1);
2858	}
2859      else
2860	result.set_len (lshift_large (val, xi.val, xi.len,
2861				      precision, shift));
2862    }
2863  return result;
2864}
2865
2866/* Return X >> Y, using a logical shift.  Return 0 if Y is greater than
2867   or equal to the precision of X.  */
2868template <typename T1, typename T2>
2869inline WI_UNARY_RESULT (T1)
2870wi::lrshift (const T1 &x, const T2 &y)
2871{
2872  WI_UNARY_RESULT_VAR (result, val, T1, x);
2873  /* Do things in the precision of the input rather than the output,
2874     since the result can be no larger than that.  */
2875  WIDE_INT_REF_FOR (T1) xi (x);
2876  WIDE_INT_REF_FOR (T2) yi (y);
2877  /* Handle the simple cases quickly.   */
2878  if (geu_p (yi, xi.precision))
2879    {
2880      val[0] = 0;
2881      result.set_len (1);
2882    }
2883  else
2884    {
2885      unsigned int shift = yi.to_uhwi ();
2886      /* For fixed-precision integers like offset_int and widest_int,
2887	 handle the case where the shift value is constant and the
2888	 shifted value is a single nonnegative HWI (meaning that all
2889	 bits above the HWI are zero).  This is particularly common
2890	 for converting a bit count to a byte count.
2891
2892	 For variable-precision integers like wide_int, handle HWI
2893	 and sub-HWI integers inline.  */
2894      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
2895	  ? (shift < HOST_BITS_PER_WIDE_INT
2896	     && xi.len == 1
2897	     && xi.val[0] >= 0)
2898	  : xi.precision <= HOST_BITS_PER_WIDE_INT)
2899	{
2900	  val[0] = xi.to_uhwi () >> shift;
2901	  result.set_len (1);
2902	}
2903      else
2904	result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
2905				       get_precision (result), shift));
2906    }
2907  return result;
2908}
2909
2910/* Return X >> Y, using an arithmetic shift.  Return a sign mask if
2911   Y is greater than or equal to the precision of X.  */
2912template <typename T1, typename T2>
2913inline WI_UNARY_RESULT (T1)
2914wi::arshift (const T1 &x, const T2 &y)
2915{
2916  WI_UNARY_RESULT_VAR (result, val, T1, x);
2917  /* Do things in the precision of the input rather than the output,
2918     since the result can be no larger than that.  */
2919  WIDE_INT_REF_FOR (T1) xi (x);
2920  WIDE_INT_REF_FOR (T2) yi (y);
2921  /* Handle the simple cases quickly.   */
2922  if (geu_p (yi, xi.precision))
2923    {
2924      val[0] = sign_mask (x);
2925      result.set_len (1);
2926    }
2927  else
2928    {
2929      unsigned int shift = yi.to_uhwi ();
2930      if (xi.precision <= HOST_BITS_PER_WIDE_INT)
2931	{
2932	  val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
2933	  result.set_len (1, true);
2934	}
2935      else
2936	result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
2937				       get_precision (result), shift));
2938    }
2939  return result;
2940}
2941
2942/* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
2943   logical shift otherwise.  */
2944template <typename T1, typename T2>
2945inline WI_UNARY_RESULT (T1)
2946wi::rshift (const T1 &x, const T2 &y, signop sgn)
2947{
2948  if (sgn == UNSIGNED)
2949    return lrshift (x, y);
2950  else
2951    return arshift (x, y);
2952}
2953
2954/* Return the result of rotating the low WIDTH bits of X left by Y
2955   bits and zero-extending the result.  Use a full-width rotate if
2956   WIDTH is zero.  */
2957template <typename T1, typename T2>
2958WI_UNARY_RESULT (T1)
2959wi::lrotate (const T1 &x, const T2 &y, unsigned int width)
2960{
2961  unsigned int precision = get_binary_precision (x, x);
2962  if (width == 0)
2963    width = precision;
2964  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
2965  WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
2966  WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
2967  if (width != precision)
2968    return wi::zext (left, width) | wi::zext (right, width);
2969  return left | right;
2970}
2971
2972/* Return the result of rotating the low WIDTH bits of X right by Y
2973   bits and zero-extending the result.  Use a full-width rotate if
2974   WIDTH is zero.  */
2975template <typename T1, typename T2>
2976WI_UNARY_RESULT (T1)
2977wi::rrotate (const T1 &x, const T2 &y, unsigned int width)
2978{
2979  unsigned int precision = get_binary_precision (x, x);
2980  if (width == 0)
2981    width = precision;
2982  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
2983  WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
2984  WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
2985  if (width != precision)
2986    return wi::zext (left, width) | wi::zext (right, width);
2987  return left | right;
2988}
2989
2990/* Return 0 if the number of 1s in X is even and 1 if the number of 1s
2991   is odd.  */
2992inline int
2993wi::parity (const wide_int_ref &x)
2994{
2995  return popcount (x) & 1;
2996}
2997
2998/* Extract WIDTH bits from X, starting at BITPOS.  */
2999template <typename T>
3000inline unsigned HOST_WIDE_INT
3001wi::extract_uhwi (const T &x, unsigned int bitpos, unsigned int width)
3002{
3003  unsigned precision = get_precision (x);
3004  if (precision < bitpos + width)
3005    precision = bitpos + width;
3006  WIDE_INT_REF_FOR (T) xi (x, precision);
3007
3008  /* Handle this rare case after the above, so that we assert about
3009     bogus BITPOS values.  */
3010  if (width == 0)
3011    return 0;
3012
3013  unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
3014  unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
3015  unsigned HOST_WIDE_INT res = xi.elt (start);
3016  res >>= shift;
3017  if (shift + width > HOST_BITS_PER_WIDE_INT)
3018    {
3019      unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
3020      res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
3021    }
3022  return zext_hwi (res, width);
3023}
3024
3025/* Return the minimum precision needed to store X with sign SGN.  */
3026template <typename T>
3027inline unsigned int
3028wi::min_precision (const T &x, signop sgn)
3029{
3030  if (sgn == SIGNED)
3031    return get_precision (x) - clrsb (x);
3032  else
3033    return get_precision (x) - clz (x);
3034}
3035
3036template<typename T>
3037void
3038gt_ggc_mx (generic_wide_int <T> *)
3039{
3040}
3041
3042template<typename T>
3043void
3044gt_pch_nx (generic_wide_int <T> *)
3045{
3046}
3047
3048template<typename T>
3049void
3050gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
3051{
3052}
3053
3054template<int N>
3055void
3056gt_ggc_mx (trailing_wide_ints <N> *)
3057{
3058}
3059
3060template<int N>
3061void
3062gt_pch_nx (trailing_wide_ints <N> *)
3063{
3064}
3065
3066template<int N>
3067void
3068gt_pch_nx (trailing_wide_ints <N> *, void (*) (void *, void *), void *)
3069{
3070}
3071
3072namespace wi
3073{
3074  /* Used for overloaded functions in which the only other acceptable
3075     scalar type is a pointer.  It stops a plain 0 from being treated
3076     as a null pointer.  */
3077  struct never_used1 {};
3078  struct never_used2 {};
3079
3080  wide_int min_value (unsigned int, signop);
3081  wide_int min_value (never_used1 *);
3082  wide_int min_value (never_used2 *);
3083  wide_int max_value (unsigned int, signop);
3084  wide_int max_value (never_used1 *);
3085  wide_int max_value (never_used2 *);
3086
3087  /* FIXME: this is target dependent, so should be elsewhere.
3088     It also seems to assume that CHAR_BIT == BITS_PER_UNIT.  */
3089  wide_int from_buffer (const unsigned char *, unsigned int);
3090
3091#ifndef GENERATOR_FILE
3092  void to_mpz (const wide_int_ref &, mpz_t, signop);
3093#endif
3094
3095  wide_int mask (unsigned int, bool, unsigned int);
3096  wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
3097  wide_int set_bit_in_zero (unsigned int, unsigned int);
3098  wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
3099		   unsigned int);
3100
3101  template <typename T>
3102  T mask (unsigned int, bool);
3103
3104  template <typename T>
3105  T shifted_mask (unsigned int, unsigned int, bool);
3106
3107  template <typename T>
3108  T set_bit_in_zero (unsigned int);
3109
3110  unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
3111  unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
3112			     bool, unsigned int);
3113  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
3114			   unsigned int, unsigned int, bool);
3115}
3116
3117/* Return a PRECISION-bit integer in which the low WIDTH bits are set
3118   and the other bits are clear, or the inverse if NEGATE_P.  */
3119inline wide_int
3120wi::mask (unsigned int width, bool negate_p, unsigned int precision)
3121{
3122  wide_int result = wide_int::create (precision);
3123  result.set_len (mask (result.write_val (), width, negate_p, precision));
3124  return result;
3125}
3126
3127/* Return a PRECISION-bit integer in which the low START bits are clear,
3128   the next WIDTH bits are set, and the other bits are clear,
3129   or the inverse if NEGATE_P.  */
3130inline wide_int
3131wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
3132		  unsigned int precision)
3133{
3134  wide_int result = wide_int::create (precision);
3135  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
3136				precision));
3137  return result;
3138}
3139
3140/* Return a PRECISION-bit integer in which bit BIT is set and all the
3141   others are clear.  */
3142inline wide_int
3143wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
3144{
3145  return shifted_mask (bit, 1, false, precision);
3146}
3147
3148/* Return an integer of type T in which the low WIDTH bits are set
3149   and the other bits are clear, or the inverse if NEGATE_P.  */
3150template <typename T>
3151inline T
3152wi::mask (unsigned int width, bool negate_p)
3153{
3154  STATIC_ASSERT (wi::int_traits<T>::precision);
3155  T result;
3156  result.set_len (mask (result.write_val (), width, negate_p,
3157			wi::int_traits <T>::precision));
3158  return result;
3159}
3160
3161/* Return an integer of type T in which the low START bits are clear,
3162   the next WIDTH bits are set, and the other bits are clear, or the
3163   inverse if NEGATE_P.  */
3164template <typename T>
3165inline T
3166wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
3167{
3168  STATIC_ASSERT (wi::int_traits<T>::precision);
3169  T result;
3170  result.set_len (shifted_mask (result.write_val (), start, width,
3171				negate_p,
3172				wi::int_traits <T>::precision));
3173  return result;
3174}
3175
3176/* Return an integer of type T in which bit BIT is set and all the
3177   others are clear.  */
3178template <typename T>
3179inline T
3180wi::set_bit_in_zero (unsigned int bit)
3181{
3182  return shifted_mask <T> (bit, 1, false);
3183}
3184
3185#endif /* WIDE_INT_H */
3186