.output postface.wd
.bookmark postface
+ Postface
++ Tales from Out There
I asked some of the contributors to this book to tell us what they were doing with 0MQ. Here are their stories.
+++ Rob Gagnon's Story
"We use 0MQ to assist in aggregating thousands of events occurring every minute across our global network of telecommunications servers so that we can accurately report and monitor for situations that require our attention. 0MQ made the development of the system not only easier, but faster to develop and more robust and fault-tolerant than we had originally planned in our original design.
"We're able to easily add and remove clients from the network without the loss of any message. If we need to enhance the server portion of our system, we can stop and restart it as well without having to worry about stopping all of the clients first. The built-in buffering of 0MQ makes this all possible."
+++ Tom van Leeuwen's Story
"I was looking at creating some kind of service bus connecting all kinds of services together. There were already some products that implemented a broker, but they did not have the functionality I needed. By accident, I stumbled upon 0MQ, which is awesome. It's very lightweight, lean, simple and easy to follow because the guide is very complete and reads very well. I've actually implemented the Titanic pattern and the Majordomo broker with some additions (client/worker authentication and workers sending a catalog explaining what they provide and how they should be addressed).
"The beautiful thing about 0MQ is the fact that it is a library and not an application. You can mold it however you like and it simply puts boring things like queuing, reconnecting, TCP sockets and such to the background, making sure you can concentrate on what is important to you. I've implemented all kinds of workers/clients and the broker in Ruby, because that is the main language we use for development, but also some PHP clients to connect to the bus from existing PHP webapps. We use this service bus for cloud services, connecting all kinds of platform devices to a service bus exposing functionality for automation.
"0MQ is very easy to understand and if you spend a day with the guide, you'll have good knowledge of how it works. I'm a network engineer, not a software developer, but managed to create a very nice solution for our automation needs! 0MQ: Thank you very much!"
+++ Michael Jakl's Story
"We use 0MQ for distributing millions of documents per day in our distributed processing pipeline. We started out with big message queuing brokers that had their own respective issues and problems. In the quest of simplifying our architecture, we chose 0MQ to do the wiring. So far it had a huge impact in how our architecture scales and how easy it is to change and move the components. The plethora of language bindings lets us choose the right tool for the job without sacrificing interoperability in our system. We don't use a lot of sockets (less than 10 in our whole application), but that's all we needed to split a huge monolithic application into small independent parts.
"All in all, 0MQ lets me keep my sanity and helps my customers stay within budget."
+++ Vadim Shalts's Story
"I am team leader in the company ActForex, which develops software for financial markets. Due to the nature of our domain, we need to process large volumes of prices quickly. In addition, it's extremely critical to minimize latency in processing orders and prices. Achieving a high throughput is not enough. Everything must be handled in a soft real time with a predictable ultra low latency per price. The system consists of multiple components exchanging messages. Each price can take a lot of processing stages, each of which increases total latency. As a consequence, low and predictable latency of messaging between components becomes a key factor of our architecture.
"We investigated different solutions to find something suitable for our needs. We tried different message brokers (RabbitMQ, ActiveMQ Apollo, Kafka), but failed to reach a low and predictable latency with any of them. In the end, we chose ZeroMQ used in conjunction with ZooKeeper for service discovery. Complex coordination with ZeroMQ requires a relatively large effort and a good understanding, as a result of the natural complexity of multithreading. We found that an external agent like ZooKeeper is better choice for service discovery and coordination while ZeroMQ can be used primarily for simple messaging. ZeroMQ fit perfectly into our architecture. It allowed us to achieve the desired latency using minimal efforts. It saved us from a bottleneck in the processing of messages and made processing time very stable and predictable.
"I can decidedly recommend ZeroMQ for solutions where low latency is important."
++ How This Book Happened
When I set out to write a 0MQ book, we were still debating the pros and cons of forks and pull requests in the 0MQ community. Today, for what it's worth, this argument seems settled: the "liberal" policy that we adopted for {{libzmq}} in early 2012 broke our dependency on a single prime author, and opened the floor to dozens of new contributors. More profoundly, it allowed us to move to a gently organic evolutionary model that was very different from the older forced-march model.
The reason I was confident this would work was that our work on the Guide had, for a year or more, shown the way. True, the text is my own work, which is perhaps as it should be. Writing is not programming. When we write, we tell a story and one doesn't want different voices telling one tale; it feels strange.
For me the real long-term value of the book is the repository of examples: about 65,000 lines of code in 24 different languages. It's partly about making 0MQ accessible to more people. People already refer to the Python and PHP example repositories--two of the most complete--when they want to tell others how to learn 0MQ. But it's also about learning programming languages.
Here's a loop of code in Tcl:
[[code language="Tcl"]]
while {1} {
    # Process all parts of the message
    zmq message message
    frontend recv_msg message
    set more [frontend getsockopt RCVMORE]
    backend send_msg message [expr {$more?"SNDMORE":""}]
    message close
    if {!$more} {
        break ; # Last message part
    }
}
[[/code]]
And here's the same loop in Lua:
[[code language="Lua"]]
while true do
    -- Process all parts of the message
    local msg = frontend:recv()
    if (frontend:getopt(zmq.RCVMORE) == 1) then
        backend:send(msg, zmq.SNDMORE)
    else
        backend:send(msg, 0)
        break; -- Last message part
    end
end
[[/code]]
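And here, as a sketch rather than a verbatim copy of the repository version, is roughly how the same loop reads in Python with pyzmq (the socket setup is illustrative, loosely matching the broker's ROUTER/DEALER pair):
[[code language="Python"]]
import zmq

# Illustrative setup, roughly as the broker example does it
context = zmq.Context()
frontend = context.socket(zmq.ROUTER)
backend = context.socket(zmq.DEALER)
frontend.bind("tcp://*:5559")
backend.bind("tcp://*:5560")

while True:
    # Process all parts of the message
    message = frontend.recv()
    if frontend.getsockopt(zmq.RCVMORE):
        backend.send(message, zmq.SNDMORE)
    else:
        backend.send(message)
        break  # Last message part
[[/code]]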
And this particular example ({{rrbroker}}) exists in C#, C++, CL, Clojure, Erlang, F#, Go, Haskell, Haxe, Java, Lua, Node.js, Perl, PHP, Python, Ruby, Scala, Tcl, and of course C. This code base, all provided as open source under the MIT/X11 license, may form the basis for other books or projects.
But what this collection of translations says most profoundly is this: the language you choose is a detail, even a distraction. The power of 0MQ lies in the patterns it gives you and lets you build, and these transcend the comings and goings of languages. My goal as a software and social architect is to build structures that can last generations. There seems no point in aiming for mere decades.
++ Removing Friction
I'll explain the technical tool chain we used in terms of the friction we removed. In this book we're telling a story and the goal is to reach as many people as possible, as cheaply and smoothly as we can.
The core idea was to host the text and examples on GitHub and make it easy for anyone to contribute. It turned out to be more complex than that, however.
Let's start with the division of labor. I'm a good writer and can produce endless amounts of decent text quickly. But what was impossible for me was to provide the examples in other languages. Because the core 0MQ API is in C, it seemed logical to write the original examples in C. Also, C is a neutral choice; it's perhaps the only language that doesn't create strong emotions.
How to encourage people to make translations of the examples? We tried a few approaches and finally what worked best was to offer a "choose your language" link on every single example in the text, which took people either to the translation or to a page explaining how they could contribute. The way it usually works is that as people learn 0MQ in their preferred language, they contribute a handful of translations or fixes to the existing ones.
At the same time, I noticed a few people quite determinedly translating //every single// example. This was mainly binding authors who realized that the examples were a great way to encourage people to use their bindings. For their efforts, I extended the scripts to produce language-specific versions of the book. Instead of including the C code, we'd include the Python or PHP code. Lua and Haxe also got their dedicated versions.
Once we have an idea of who works on what, we know how to structure the work itself. It's clear that to write and test an example, what you want to work on is //source code//. So we import this source code when we build the book, and that's how we make language-specific versions.
I like to write in a plain text format. It's fast and works well with source control systems like git. Because the main platform for our websites is Wikidot, I write using Wikidot's very readable markup format.
At least in the first chapters, it was important to draw pictures to explain the flow of messages between peers. Making diagrams by hand is a lot of work, and when we want to get final output in different formats, image conversion becomes a chore. I started with Ditaa, which turns text diagrams into PNGs, and later switched to asciitosvg, which produces SVG files; these are rather better. Because the figures are text diagrams embedded in the prose, it's remarkably easy to work with them.
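A figure in the source text is simply box art, something like this illustrative fragment (not an actual figure from the book), which the toolchain then converts to an image:
[[code]]
+------------+        +------------+
|   Client   |        |   Server   |
|   (REQ)    |------->|   (REP)    |
+------------+        +------------+
[[/code]]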
By now you'll realize that the toolchain we use is highly customized, though it uses a lot of external tools. All are available on Ubuntu, which is a mercy, and the whole custom toolchain is in the zguide repository in the bin subdirectory.
Let's walk through the editing and publishing process. Here is how we produce the online version:
[[code]]
bin/buildguide
[[/code]]
Which works as follows:
* The original text sits in a series of text files (one per chapter).
* The examples sit in the examples subdirectory, classified per language.
* We take the text and run it through a custom Perl script, mkwikidot, to produce a set of Wikidot-ready files.
* We do this for each of the languages that get their own version.
* We extract the graphics and call asciitosvg and rasterize on each one to produce image files, which we store in the images subdirectory.
* We extract inline listings (which are not translated) and store these in the listings subdirectory.
* We use pygmentize on each example and listing to create a marked-up page in Wikidot format.
* We upload all changed files to the online wiki using the Wikidot API.
Doing this from scratch takes a while, so we store the SHA1 signatures of every image, listing, example, and text file, and only process and upload what has changed. That makes it easy to publish a new version of the text when people make new contributions.
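The caching idea itself is simple. Here is a minimal sketch in Python (the real scripts are Perl, and the cache file name is hypothetical):
[[code language="Python"]]
import hashlib
import json
import os

CACHE = ".checksums.json"   # Hypothetical cache file name

def sha1(path):
    # Hash a file's contents so changes can be detected cheaply
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def changed_files(paths):
    # Return only the files whose content changed since the last run,
    # then remember the new signatures for next time
    old = {}
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            old = json.load(f)
    new = {path: sha1(path) for path in paths}
    with open(CACHE, "w") as f:
        json.dump(new, f)
    return [path for path in paths if old.get(path) != new[path]]
[[/code]]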
To produce the PDF and Epub formats, we do the following:
[[code]]
bin/buildpdfs
[[/code]]
Which works as follows:
* We use the custom mkdocbook Perl program on the input files to produce a DocBook output.
* We push the DocBook format through docbook2ps and ps2pdf to create clean PDFs in each language.
* We push the DocBook format through db2epub to create Epub books in each language.
* We upload the PDFs to the public wiki using the Wikidot API.
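In other words, the PDF/Epub pipeline boils down to something like this sketch in Python (the real driver scripts live in the bin subdirectory; the file names and exact tool invocations here are assumptions):
[[code language="Python"]]
import subprocess

def build_book(lang):
    # Hypothetical file naming; the actual scripts differ
    base = "zguide-%s" % lang
    # DocBook -> PostScript -> PDF
    subprocess.run(["docbook2ps", base + ".xml"], check=True)
    subprocess.run(["ps2pdf", base + ".ps", base + ".pdf"], check=True)
    # DocBook -> Epub (exact db2epub invocation is assumed)
    subprocess.run(["db2epub", base + ".xml"], check=True)

for lang in ["C", "Python", "PHP", "Lua", "Haxe"]:
    build_book(lang)
[[/code]]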
When creating a community project, it's important to lower the "change latency", which is the time it takes for people to see their work live or, at least, to see that you've accepted their pull request. If that is more than a day or two, you've often lost your contributor's interest.
++ Licensing
I want people to reuse this text in their own work: in presentations, articles, and even other books. However, the deal is that if they remix my work, others can remix theirs. I'd like credit, and have no argument against others making money from their remixes. Thus, the text is licensed under cc-by-sa.
For the examples, we started with GPL, but it rapidly became clear this wasn't workable. The point of examples is to give people reusable code fragments so they will use 0MQ more widely, and if these are GPL, that won't happen. We switched to MIT/X11, even for the larger and more complex examples that conceivably would work as LGPL.
However, when we started turning the examples into standalone projects (as with Majordomo), we used the LGPL. Again, remixability trumps dissemination. Licenses are tools; use them with intent, not ideology.