/**
@page tutorial Getting Started
@author Marius Zwicker / MLBA
@section tut_intro Introduction
This page gives a short introduction to the concepts used throughout XDispatch and helps you get started by first integrating libxdispatch into your project and then writing your first lines of code. libxdispatch aims to provide an environment in which you can write parallelized code using the concepts of Grand Central Dispatch while keeping cross-platform compatibility, leaving it up to you which operating systems you intend to target.
@section tut_conf Configuring your environment
The quickest way to get started is to obtain a binary suitable for your development environment from the <a href="http://opensource.mlba-team.de/xdispatch/files">download section</a>. There you will find binaries for Linux, Windows and Mac OS X. Download the appropriate archive and install it on your system.
@subsection tut_conf_mac Mac OS X
On Mac OS X you only have to run the provided package installer. It automatically copies xdispatch.framework and (if selected) QtDispatch.framework to the '/Library' directory. Afterwards you can use the libraries by including one of the headers
@code
#include <dispatch/dispatch.h>
#include <xdispatch/dispatch>
@endcode
As with any other framework you will need to link your executable against it, so either pass '-framework xdispatch' when invoking gcc or configure your IDE to link against the xdispatch framework.
@remarks As Grand Central Dispatch is an operating system component on Mac OS you will NOT have to link against libdispatch by hand. This happens automatically.
@subsection tut_conf_win Windows
Extract the archive to a location of your choice. It contains three directories:
<ul>
<li><i>include</i> - Location of the header files</li>
<li><i>lib</i> - Location of the lib files you will have to link against</li>
<li><i>bin</i> - Location of the dll files.</li>
</ul>
To use xdispatch, make sure the three directories listed above are added to the INCLUDE, LIB and PATH (for the bin directory) environment variables. When using Visual Studio you may also want to add the include and linker directories to your project configuration. Afterwards you can use the libraries by including one of the headers
@code
#include <dispatch/dispatch.h>
#include <xdispatch/dispatch>
@endcode
Configure your project to link against the xdispatch and dispatch libs.
@subsection tut_conf_linux Linux
The recommended way on Linux is to use the provided packages by subscribing to the <a href="https://launchpad.net/~mlba-team/+archive/stable">PPA on Launchpad</a>:
@code
sudo apt-add-repository ppa:mlba_team/stable
sudo apt-get update
sudo apt-get install libxdispatch-dev libdispatch-dev
@endcode
Packages for Debian and RPM packages for openSUSE and other distributions will be released in the future.
In the meantime, when not using Ubuntu, download the binary tarball matching your architecture and extract the headers and libraries to their corresponding places. Afterwards you can use the libraries by including one of the headers
@code
#include <dispatch/dispatch.h>
#include <xdispatch/dispatch>
@endcode
As with any other shared library you will need to link your executable against them, so either pass '-lxdispatch -ldispatch' when invoking gcc or configure your IDE to link against the xdispatch and dispatch libraries.
@remarks When using clang and lambdas, you will also have to link against the <a href="http://mackyle.github.com/blocksruntime/">BlocksRuntime</a>. It is installed as a dependency on Ubuntu by default and is included in the tarball for all other Linux distributions. You will have to call clang with the parameters '-lxdispatch -ldispatch -lBlocksRuntime -fblocks'.
@section tut_first First Steps
Using libXDispatch within your source code is straightforward, as all you need to do is include the headers within your source files - that's it.
@code
#include <xdispatch/dispatch>
@endcode
All functions are located in the xdispatch namespace. In the following I will demonstrate some common use cases that occur when parallelizing code. I will assume that you are using gcc 4.5+, Visual Studio 2010 or clang as your compiler, as this enables us to utilize lambdas. If you cannot use a "modern" compiler, please have a look at \ref tut_first_operations.
\subsection tut_first_lambdas Parallel code using lambdas
The most obvious use case is that you want to move some heavy calculation work off the main thread and into a background worker. Without using libXDispatch, you would probably write something similar to this:
@code
#include <pthread.h>
#include <iostream>

// declared somewhere else
class SomeData {
public:
    bool finished;
    pthread_mutex_t lock;
    ...
};

/*
 The worker function doing all the work
 */
void* do_work(void* dt){
    SomeData* data = (SomeData*)dt;

    // execute the heavy code
    do_calculations(data);

    // notify the main thread we are finished
    pthread_mutex_lock(&data->lock);
    data->finished = true;
    pthread_mutex_unlock(&data->lock);
    return NULL;
}

/*
 This function is called from your main
 thread which also powers the user interface
 */
void some_function(){
    SomeData* sd = new SomeData();
    fill_data(sd);

    pthread_t worker;
    if(pthread_create(&worker, NULL, do_work, (void*)sd)){
        std::cerr << "Failed to create worker thread" << std::endl;
        return;
    }

    pthread_mutex_lock(&sd->lock);
    while(!sd->finished){
        pthread_mutex_unlock(&sd->lock);
        // process all events on the main thread
        process_events();
        pthread_mutex_lock(&sd->lock);
    }
    pthread_mutex_unlock(&sd->lock);

    // ok, now the worker has finished, show the results within the gui
    show_calc_results(sd);
    delete sd;
}
@endcode
This is an example using pthreads. To support Windows as well, we would either need to write a second version using Windows threads or use a library such as OpenThreads or boost::threads. Using libXDispatch, we can express this code much more concisely - and still maintain cross-platform compatibility:
@code
#include <xdispatch/dispatch>
// declared somewhere else
class SomeData {
...
};
/*
 This function is called from your main
 thread which also powers the user interface
 */
void some_function(){
    SomeData* sd = new SomeData();
    fill_data(sd);

    xdispatch::global_queue().async(${
        // execute the heavy code
        do_calculations(sd);

        // notify the gui that we are finished
        xdispatch::main_queue().async(${
            show_calc_results(sd);
            delete sd;
        });
    });
}
@endcode
There is no need for manual thread creation any more. Also note that we can use all variables declared within <i>some_function()</i> inside our lambda code <i>${ .. }</i>. Parallelizing a loop is just as easy. Let's assume the following piece of code (note this is still a very simple calculation):
@code
#include <vector>
#include <cmath>
// declared somewhere else
class SomeData {
...
std::vector<double> a;
std::vector<double> b;
std::vector<double> c;
std::vector<double> results;
};
void do_calculations(SomeData* sd){
    // our output will go in here
    sd->results = std::vector<double>(sd->a.size());

    // the calculation - running on one thread only
    for(unsigned int i = 0; i < sd->a.size(); i++){
        sd->results[i] = 0;
        for(unsigned int j = 0; j < sd->b.size(); j++){
            for(unsigned int z = 0; z < sd->c.size(); z++){
                sd->results[i] += std::pow(sd->b[j], sd->a[i]) * std::sin(sd->c[z]);
            }
        }
    }
}
@endcode
Now to parallelize this piece of code using libXDispatch you can simply write:
@code
#include <vector>
#include <cmath>
#include <xdispatch/dispatch>
// declared somewhere else
class SomeData {
...
std::vector<double> a;
std::vector<double> b;
std::vector<double> c;
std::vector<double> results;
};
void do_calculations(SomeData* sd){
    // our output will go in here
    sd->results = std::vector<double>(sd->a.size());

    // the calculation - running on multiple threads
    xdispatch::global_queue().apply($(size_t i){
        sd->results[i] = 0;
        for(unsigned int j = 0; j < sd->b.size(); j++){
            for(unsigned int z = 0; z < sd->c.size(); z++){
                sd->results[i] += std::pow(sd->b[j], sd->a[i]) * std::sin(sd->c[z]);
            }
        }
    }, sd->a.size());
}
@endcode
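To illustrate what such an apply() does underneath, here is a hand-rolled sketch using only std::thread, splitting the index range across hardware threads. The name parallel_apply is hypothetical; this is a simplified analogue, not the xdispatch implementation:

```cpp
#include <functional>
#include <thread>
#include <vector>

// run body(i) for every i in [0, count), distributed across
// hardware threads -- a simplified stand-in for queue::apply()
void parallel_apply(size_t count, const std::function<void(size_t)>& body) {
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0)
        workers = 1; // hardware_concurrency() may be unknown

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([=, &body] {
            // each worker handles a strided slice of the indices,
            // so every index is processed exactly once
            for (size_t i = w; i < count; i += workers)
                body(i);
        });
    }
    for (auto& t : pool)
        t.join(); // like apply(), return only when all iterations ran
}
```

As with apply(), iterations run concurrently, so the body must only touch data belonging to its own index.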
libXDispatch also provides mechanisms for making a piece of code threadsafe. Again, assume the following piece of code:
@code
#include <pthread.h>
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
/*
So this function is called from several threads
*/
void worker(){
    // some work
    ...

    pthread_mutex_lock(&lock);
    // do some critical work
    if(already_done){ // we might have finished here
        pthread_mutex_unlock(&lock);
        return;
    }
    // do some other critical work
    pthread_mutex_unlock(&lock);

    // some other work
    ...
}
@endcode
We have to make sure the mutex is cleanly unlocked on every path leaving the critical section. And what happens if an exception we do not catch is thrown from within it? This might result in a deadlock. All of this can easily be resolved by using the following expression:
@code
#include <xdispatch/dispatch>
/*
So this function is called from several threads
*/
void worker(){
    // some work
    ...

    synchronized {
        // do some critical work
        if(already_done) // we might have finished here
            return;
        // do some other critical work
    }

    // some other work
    ...
}
@endcode
There is no need to handle the locking yourself - it is ensured that the lock is automatically released whenever you leave the section marked by the brackets. For further details, please see the documentation on xdispatch::synclock. Please note that this functionality is available on compilers without lambda support as well.
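The same scope-based guarantee can be reproduced with plain C++11 RAII, which is conceptually what happens behind the synchronized keyword (a sketch of the idea, not the actual xdispatch::synclock implementation):

```cpp
#include <mutex>

static std::mutex section_lock;
static bool already_done = false;

void worker() {
    // some non-critical work ...

    {
        // lock_guard locks in its constructor and unlocks in its
        // destructor, so every exit path -- return, exception or
        // falling off the end of the scope -- releases the mutex
        std::lock_guard<std::mutex> guard(section_lock);
        if (already_done)
            return;            // unlocks automatically
        // do some critical work; even a throw here cannot deadlock
        already_done = true;
    }

    // some other non-critical work ...
}
```

The braces delimit the critical section exactly as the synchronized block does above.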
\subsection tut_first_operations Parallel code using xdispatch::operations
All the examples shown above can also be written without using lambdas. So for example the parallel loop can also be expressed using an xdispatch::iteration_operation:
@code
#include <vector>
#include <cmath>
#include <xdispatch/dispatch>
// declared somewhere else
class SomeData {
...
std::vector<double> a;
std::vector<double> b;
std::vector<double> c;
std::vector<double> results;
};
class InnerCalculation : public xdispatch::iteration_operation {
    SomeData* sd;

public:
    InnerCalculation(SomeData* d) : sd(d) {}

    void operator()(size_t i){
        sd->results[i] = 0;
        for(unsigned int j = 0; j < sd->b.size(); j++){
            for(unsigned int z = 0; z < sd->c.size(); z++){
                sd->results[i] += std::pow(sd->b[j], sd->a[i]) * std::sin(sd->c[z]);
            }
        }
    }
};

void do_calculations(SomeData* sd){
    // our output will go in here
    sd->results = std::vector<double>(sd->a.size());

    // the calculation - running on multiple threads
    xdispatch::global_queue().apply(new InnerCalculation(sd), sd->a.size());
}
@endcode
There is no need to worry about memory leaks - xdispatch will automatically delete the iteration_operation once it has finished execution.
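This run-and-delete contract can be sketched in plain C++: the queue takes ownership of the heap-allocated operation and destroys it after the last invocation, which is why callers never free the pointer themselves. The names iteration_op and apply below are hypothetical stand-ins illustrating the semantics, not xdispatch's code:

```cpp
#include <cstddef>
#include <memory>

// minimal stand-in for xdispatch::iteration_operation
struct iteration_op {
    virtual ~iteration_op() {}
    virtual void operator()(size_t i) = 0;
};

// sketch of apply(): invoke the operation for every index, then
// delete it -- ownership of 'op' passes to this function
void apply(iteration_op* op, size_t count) {
    std::unique_ptr<iteration_op> owned(op); // take ownership
    for (size_t i = 0; i < count; ++i)
        (*owned)(i);
    // 'owned' goes out of scope here and deletes the operation
}
```

A caller therefore writes apply(new SomeOp(...), n) and never touches the pointer again.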
@section tut_conc Concepts
The examples above showed only some of the functionality and power of libXDispatch. There is of course also a plain C interface, and Qt integration is provided within QtDispatch. For further exploration, we recommend browsing the API documentation and having a look at the various unit tests.
There are many more concepts to explore. For example, you can create your own queues instead of only using the automatically provided global queues. To understand the idea of serial and concurrent queues and the use of setting a target for a queue, we recommend reading the <a href="http://opensource.mlba-team.de/xdispatch/GrandCentral_TB_brief_20090608.pdf">"Apple Technical Brief on Grand Central Dispatch"</a> and having a look at <a href="http://developer.apple.com/library/mac/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40008091">Apple's Concurrency Programming Guide</a>.
*/