<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>Design</title><meta name="generator" content="DocBook XSL Stylesheets Vsnapshot" /><meta name="keywords" content="C++, library, parallel" /><meta name="keywords" content="ISO C++, library" /><meta name="keywords" content="ISO C++, runtime, library" /><link rel="home" href="../index.html" title="The GNU C++ Library" /><link rel="up" href="parallel_mode.html" title="Chapter&#160;18.&#160;Parallel Mode" /><link rel="prev" href="parallel_mode_using.html" title="Using" /><link rel="next" href="parallel_mode_test.html" title="Testing" /></head><body><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Design</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="parallel_mode_using.html">Prev</a>&#160;</td><th width="60%" align="center">Chapter&#160;18.&#160;Parallel Mode</th><td width="20%" align="right">&#160;<a accesskey="n" href="parallel_mode_test.html">Next</a></td></tr></table><hr /></div><div class="section"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a id="manual.ext.parallel_mode.design"></a>Design</h2></div></div></div><p>
  </p><div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="parallel_mode.design.intro"></a>Interface Basics</h3></div></div></div><p>
All parallel algorithms are intended to have signatures that are
equivalent to the ISO C++ algorithms they replace. For instance, the
<code class="function">std::adjacent_find</code> function is declared as:
</p><pre class="programlisting">
namespace std
{
  template&lt;typename _FIter&gt;
    _FIter
    adjacent_find(_FIter, _FIter);
}
</pre><p>
This means that there should be something equivalent for the parallel
version. Indeed, this is the case:
</p><pre class="programlisting">
namespace std
{
  namespace __parallel
  {
    template&lt;typename _FIter&gt;
      _FIter
      adjacent_find(_FIter, _FIter);

    ...
  }
}
</pre><p>But... why the ellipsis?
</p><p> The ellipsis in the example above represents additional overloads
required for the parallel version of the function. These additional
overloads are used to dispatch calls from the ISO C++ function
signature to the appropriate parallel function (or sequential
function, if no parallel function is deemed worthwhile), based on either
compile-time or run-time conditions.
</p><p> The available signature options are specific to the different
algorithms/algorithm classes.</p><p> The general view of overloads for the parallel algorithms looks like this:
</p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>ISO C++ signature</p></li><li class="listitem"><p>ISO C++ signature + sequential_tag argument</p></li><li class="listitem"><p>ISO C++ signature + algorithm-specific tag type
    (several signatures)</p></li></ul></div><p> Please note that the implementation may use additional functions
(designated with the <code class="code">_switch</code> suffix) to dispatch from the
ISO C++ signature to the correct parallel version. Also, some of the
algorithms do not have support for run-time conditions, so the last
overload is missing.
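</p><p>
As a concrete sketch of the second bullet above (the exact declarations live
in the parallel mode headers and may differ slightly), the
<code class="code">sequential_tag</code> overload for
<code class="function">adjacent_find</code> has this shape:
</p><pre class="programlisting">
namespace std
{
  namespace __parallel
  {
    // ISO C++ signature plus a trailing sequential_tag argument;
    // calling this overload selects the sequential implementation.
    template&lt;typename _FIter&gt;
      _FIter
      adjacent_find(_FIter, _FIter, __gnu_parallel::sequential_tag);
  }
}
</pre><p>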
</p></div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="parallel_mode.design.tuning"></a>Configuration and Tuning</h3></div></div></div><div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="parallel_mode.design.tuning.omp"></a>Setting up the OpenMP Environment</h4></div></div></div><p>
Several aspects of the overall runtime environment can be manipulated
by standard OpenMP function calls.
</p><p>
To specify the number of threads to be used for the algorithms globally,
use the function <code class="function">omp_set_num_threads</code>. An example:
</p><pre class="programlisting">
#include &lt;stdlib.h&gt;
#include &lt;omp.h&gt;

int main()
{
  // Explicitly set number of threads.
  const int threads_wanted = 20;
  omp_set_dynamic(false);
  omp_set_num_threads(threads_wanted);

  // Call parallel mode algorithms.

  return 0;
}
</pre><p>
 Some algorithms allow the number of threads to be set for a particular
 call by augmenting the algorithm variant.
 See the next section for further information.
</p><p>
Other parts of the runtime environment that can be manipulated include
nested parallelism (<code class="function">omp_set_nested</code>), the schedule kind
(<code class="function">omp_set_schedule</code>), and others. See the OpenMP
documentation for more information.
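</p><p>
For instance (a minimal sketch; the values chosen here are arbitrary, not
recommendations), these settings could be adjusted before calling any
parallel mode algorithms:
</p><pre class="programlisting">
#include &lt;omp.h&gt;

int main()
{
  // Allow nested parallel regions.
  omp_set_nested(1);

  // Request a dynamic schedule with the default chunk size.
  omp_set_schedule(omp_sched_dynamic, 0);

  // Call parallel mode algorithms.

  return 0;
}
</pre><p>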
</p></div><div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="parallel_mode.design.tuning.compile"></a>Compile Time Switches</h4></div></div></div><p>
To force an algorithm to execute sequentially, even though parallelism
is switched on in general via the macro <code class="constant">_GLIBCXX_PARALLEL</code>,
add <code class="classname">__gnu_parallel::sequential_tag()</code> to the end
of the algorithm's argument list.
</p><p>
Like so:
</p><pre class="programlisting">
std::sort(v.begin(), v.end(), __gnu_parallel::sequential_tag());
</pre><p>
Some parallel algorithm variants can be excluded from compilation by
preprocessor defines. See the doxygen documentation on
<code class="code">compiletime_settings.h</code> and <code class="code">features.h</code> for details.
</p><p>
For some algorithms, the desired variant can be chosen at compile-time by
appending a tag object. The available options are specific to the particular
algorithm (class).
</p><p>
For the "embarrassingly parallel" algorithms, there is only one "tag object
type", the enum _Parallelism.
It takes one of the following values:
<code class="code">__gnu_parallel::parallel_tag</code>,
<code class="code">__gnu_parallel::balanced_tag</code>,
<code class="code">__gnu_parallel::unbalanced_tag</code>,
<code class="code">__gnu_parallel::omp_loop_tag</code>, or
<code class="code">__gnu_parallel::omp_loop_static_tag</code>.
This means that the actual parallelization strategy is chosen at run-time.
(Choosing the variants at compile-time will come soon.)
</p><p>
For the following algorithms in general, we have
<code class="code">__gnu_parallel::parallel_tag</code> and
<code class="code">__gnu_parallel::default_parallel_tag</code>, in addition to
<code class="code">__gnu_parallel::sequential_tag</code>.
<code class="code">__gnu_parallel::default_parallel_tag</code> chooses the default
algorithm at compile time, as does omitting the tag.
<code class="code">__gnu_parallel::parallel_tag</code> postpones the decision to run time
(see the next section).
For all tags, the number of threads desired for this call can optionally be
passed to the respective tag's constructor.
</p><p>
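As an illustration (a sketch only, assuming <code class="code">v</code> is a
random access container; 4 is an arbitrary thread count, not a
recommendation), a specific call could look like this:
</p><pre class="programlisting">
// Let the library pick the default parallel algorithm at compile time.
std::sort(v.begin(), v.end(), __gnu_parallel::default_parallel_tag());

// Postpone the choice of the parallel algorithm to run time,
// and request 4 threads for this particular call.
std::sort(v.begin(), v.end(), __gnu_parallel::parallel_tag(4));
</pre><p>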
The <code class="code">multiway_merge</code> algorithm comes with the additional choices
<code class="code">__gnu_parallel::exact_tag</code> and
<code class="code">__gnu_parallel::sampling_tag</code>.
Exact and sampling are the two available splitting strategies.
</p><p>
For the <code class="code">sort</code> and <code class="code">stable_sort</code> algorithms, there are
several additional choices, namely
<code class="code">__gnu_parallel::multiway_mergesort_tag</code>,
<code class="code">__gnu_parallel::multiway_mergesort_exact_tag</code>,
<code class="code">__gnu_parallel::multiway_mergesort_sampling_tag</code>,
<code class="code">__gnu_parallel::quicksort_tag</code>, and
<code class="code">__gnu_parallel::balanced_quicksort_tag</code>.
Multiway mergesort comes with the two splitting strategies for multi-way
merging. The quicksort options cannot be used for <code class="code">stable_sort</code>.
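</p><p>
For example (a sketch, assuming <code class="code">v</code> is a random access
container; any of the tags listed above could be substituted), a specific
sorting variant can be requested per call:
</p><pre class="programlisting">
// Use parallel balanced quicksort for this call.
std::sort(v.begin(), v.end(), __gnu_parallel::balanced_quicksort_tag());

// Use parallel multiway mergesort for this call.
std::stable_sort(v.begin(), v.end(), __gnu_parallel::multiway_mergesort_tag());
</pre><p>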
</p></div><div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="parallel_mode.design.tuning.settings"></a>Run Time Settings and Defaults</h4></div></div></div><p>
The default parallelization strategy, the choice of specific algorithm
strategy, the minimum threshold limits for individual parallel
algorithms, and aspects of the underlying hardware can be specified as
desired via manipulation
of <code class="classname">__gnu_parallel::_Settings</code> member data.
</p><p>
First off, the choice of parallelization strategy: serial, parallel,
or heuristically deduced. This corresponds
to <code class="code">__gnu_parallel::_Settings::algorithm_strategy</code> and is a
value of enum <span class="type">__gnu_parallel::_AlgorithmStrategy</span>
type. Choices
include: <span class="type">heuristic</span>, <span class="type">force_sequential</span>,
and <span class="type">force_parallel</span>. The default is <span class="type">heuristic</span>.
</p><p>
Next, the sub-choices for algorithm variant, if not fixed at compile-time.
Specific algorithms like <code class="function">find</code> or <code class="function">sort</code>
can be implemented in multiple ways: when this is the case,
a <code class="classname">__gnu_parallel::_Settings</code> member exists to
pick the default strategy. For
example, <code class="code">__gnu_parallel::_Settings::sort_algorithm</code> can
have any value of
enum <span class="type">__gnu_parallel::_SortAlgorithm</span>: <span class="type">MWMS</span>, <span class="type">QS</span>,
or <span class="type">QS_BALANCED</span>.
</p><p>
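For instance (a minimal sketch; see the example at the end of this section
for the general pattern), the balanced quicksort variant can be made the
default for <code class="function">sort</code>:
</p><pre class="programlisting">
#include &lt;parallel/algorithm&gt;
#include &lt;parallel/settings.h&gt;

int main()
{
  // Start from the current global settings.
  __gnu_parallel::_Settings s = __gnu_parallel::_Settings::get();
  s.sort_algorithm = __gnu_parallel::QS_BALANCED;
  __gnu_parallel::_Settings::set(s);

  // Subsequent parallel sorts use balanced quicksort by default.

  return 0;
}
</pre><p>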
Likewise for setting the minimal threshold for algorithm
parallelization.  Parallelism always incurs some overhead. Thus, it is
not helpful to parallelize operations on very small sets of
data. Because of this, measures are taken to avoid parallelizing below
a certain, pre-determined threshold. For each algorithm, a minimum
problem size is encoded as a variable in the
active <code class="classname">__gnu_parallel::_Settings</code> object.  This
threshold variable follows the naming scheme
<code class="code">__gnu_parallel::_Settings::[algorithm]_minimal_n</code>.  So,
for <code class="function">fill</code>, the threshold variable
is <code class="code">__gnu_parallel::_Settings::fill_minimal_n</code>.
</p><p>
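These thresholds can be raised or lowered in the same way as the other
settings. A brief sketch (1000 is an arbitrary value, not a tuned
recommendation):
</p><pre class="programlisting">
// Same includes and pattern as in the previous example.
__gnu_parallel::_Settings s = __gnu_parallel::_Settings::get();
s.fill_minimal_n = 1000;   // do not parallelize std::fill below 1000 elements
__gnu_parallel::_Settings::set(s);
</pre><p>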
Finally, hardware details like L1/L2 cache size can be hardwired
via <code class="code">__gnu_parallel::_Settings::L1_cache_size</code> and friends.
</p><p>
All these configuration variables can be changed by the user, if
desired.
There exists one global instance of the class <code class="classname">_Settings</code>,
i.e. it is a singleton. It can be read and written by calling
<code class="code">__gnu_parallel::_Settings::get</code> and
<code class="code">__gnu_parallel::_Settings::set</code>, respectively.
Please note that the first call returns a const object, so direct manipulation
is forbidden.
See <a class="link" href="http://gcc.gnu.org/onlinedocs/libstdc++/latest-doxygen/index.html" target="_top">
  <code class="filename">&lt;parallel/settings.h&gt;</code></a>
for complete details.
</p><p>
A small example of tuning the default:
</p><pre class="programlisting">
#include &lt;parallel/algorithm&gt;
#include &lt;parallel/settings.h&gt;

int main()
{
  __gnu_parallel::_Settings s;
  s.algorithm_strategy = __gnu_parallel::force_parallel;
  __gnu_parallel::_Settings::set(s);

  // Do work... all algorithms will be parallelized, always.

  return 0;
}
</pre></div></div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="parallel_mode.design.impl"></a>Implementation Namespaces</h3></div></div></div><p> One namespace contains versions of code that are always
explicitly sequential:
<code class="code">__gnu_serial</code>.
</p><p> Two namespaces contain the parallel mode:
<code class="code">std::__parallel</code> and <code class="code">__gnu_parallel</code>.
</p><p> Parallel implementations of standard components, including
template helpers to select parallelism, are defined in <code class="code">namespace
std::__parallel</code>. For instance, <code class="function">std::transform</code> from <code class="filename">algorithm</code> has a parallel counterpart in
<code class="function">std::__parallel::transform</code> from <code class="filename">parallel/algorithm</code>. In addition, these parallel
implementations are injected into <code class="code">namespace
__gnu_parallel</code> with using declarations.
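</p><p>
For example (a sketch, assuming the program is built with OpenMP support and
that <code class="code">v</code>, <code class="code">w</code>, and <code class="code">op</code>
are suitably defined), the parallel counterpart can be named explicitly:
</p><pre class="programlisting">
#include &lt;parallel/algorithm&gt;

// Call the parallel transform directly via its namespace.
std::__parallel::transform(v.begin(), v.end(), w.begin(), op);
</pre><p>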
</p><p> Support and general infrastructure is in <code class="code">namespace
__gnu_parallel</code>.
</p><p> More information, and an organized index of types and functions
related to the parallel mode on a per-namespace basis, can be found in
the generated source documentation.
</p></div></div><div class="navfooter"><hr /><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="parallel_mode_using.html">Prev</a>&#160;</td><td width="20%" align="center"><a accesskey="u" href="parallel_mode.html">Up</a></td><td width="40%" align="right">&#160;<a accesskey="n" href="parallel_mode_test.html">Next</a></td></tr><tr><td width="40%" align="left" valign="top">Using&#160;</td><td width="20%" align="center"><a accesskey="h" href="../index.html">Home</a></td><td width="40%" align="right" valign="top">&#160;Testing</td></tr></table></div></body></html>