<html>
<head>
<title>ACL 2016 First Conference on Machine Translation (WMT16)</title>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<style> h3 { margin-top: 2em; } </style>
</head>
<body>
<center>
<script src="title.js"></script>
<h2>Home</h2>
<script src="menu.js"></script>
</center>
<!--<p>
<center><font color="red"><b><a href="https://www.softconf.com/acl2016/WMT16/">Submission</a> is open</b></font></center>
</p>-->
<p>
This conference builds on ten previous workshops on statistical machine
translation:
<UL id="">
<LI>the <a href="http://www.statmt.org/wmt06/">NAACL-2006 Workshop on
Statistical Machine Translation</a>,
<LI>the <a href="http://www.statmt.org/wmt07/">ACL-2007 Workshop on Statistical
Machine Translation</a>,
<LI>the <a href="http://www.statmt.org/wmt08/">ACL-2008
Workshop on Statistical Machine Translation</a>,
<LI> the <a href="http://www.statmt.org/wmt09/">EACL-2009 Workshop on
Statistical Machine Translation</a>,
<LI> the <a href="http://www.statmt.org/wmt10/">ACL-2010 Workshop on
Statistical Machine Translation</a>,
<LI> the <a href="http://www.statmt.org/wmt11/">EMNLP-2011 Workshop on
Statistical Machine Translation</a>,
<LI> the <a href="http://www.statmt.org/wmt12/">NAACL-2012 Workshop on
Statistical Machine Translation</a>,
<LI> the <a href="http://www.statmt.org/wmt13/">ACL-2013 Workshop on
Statistical Machine Translation</a>,
<LI> the <a href="http://www.statmt.org/wmt14/">ACL-2014 Workshop on
Statistical Machine Translation</a>, and
<LI> the <a href="http://www.statmt.org/wmt15/">EMNLP-2015 Workshop on
Statistical Machine Translation</a>.
</UL>
</p>
<h3>IMPORTANT DATES</h3>
<table>
<tr><td>Release of training data for shared tasks</td><td>January 2016</td></tr>
<tr><td>Evaluation periods for shared tasks</td><td>April 2016</td></tr>
<tr><td>Paper submission deadline (Research Papers)</td><td>May 8, 2016</td></tr>
<tr><td>Paper submission deadline (System Papers)</td><td>May 15, 2016</td></tr>
<tr><td>Notification of acceptance</td><td>June 5, 2016</td></tr>
<tr><td>Camera-ready deadline</td><td>June 22, 2016</td></tr>
<tr><td>Conference in Berlin</td><td>August 11-12, 2016</td></tr>
<!-- <tr><td>Papers available online</td><td>TBA</td></tr> -->
</table>
<h3>OVERVIEW</h3>
<p>
This year's conference will feature ten shared tasks:
<ul><li>a news translation task,
<li>an IT domain translation task (<font color=red><b>NEW</b></font>),
<li>a biomedical translation task (<font color=red><b>NEW</b></font>),
<li>an automatic post-editing task,
<li>a metrics task (assess MT quality given a reference translation),
<li>a quality estimation task (assess MT quality without access to any reference),
<li>a tuning task (optimize a given MT system),
<li>a pronoun translation task,
<li>a bilingual document alignment task (<font color=red><b>NEW</b></font>), and
<li>a multimodal translation task (<font color=red><b>NEW</b></font>).
</ul>
</p>
<p>
In addition to the shared tasks, the conference will also feature scientific papers on topics related to MT.
Topics of interest include, but are not limited to:
<ul>
<li> word-based, phrase-based, syntax-based, and semantics-based SMT</li>
<li> neural machine translation</li>
<li> using comparable corpora for SMT</li>
<li> incorporating linguistic information into SMT</li>
<li> decoding</li>
<li> system combination</li>
<li> error analysis</li>
<li> manual and automatic methods for evaluating MT</li>
<li> scaling MT to very large data sets</li>
</ul>
We encourage authors to evaluate their approaches to the above topics
using the common data sets created for the shared tasks.
</p>
<h3>REGISTRATION</h3>
<p>
Registration will be handled by <a href="http://acl2016.org/">ACL 2016</a>.
</p>
<h3>NEWS TRANSLATION TASK</h3>
<p>
The first shared task will examine translation between the
following language pairs:
<ul>
<li> English-German and German-English</li>
<li> English-Finnish and Finnish-English </li>
<li> English-Czech and Czech-English</li>
<li> English-Romanian and Romanian-English <font color=red><b>NEW</b></font></li>
<li> English-Russian and Russian-English </li>
<li> English-Turkish and Turkish-English <font color=red><b>NEW</b></font></li>
</ul>
The text for all the test sets will be drawn from news articles.
Participants may submit translations for any or all of the language
directions. In addition to the common test sets, the conference organizers
will provide optional training resources.
</p>
<p>
All participants who submit entries will have their translations
evaluated. We will evaluate translation performance by human judgment. To
facilitate the human evaluation, we will require participants in the
shared tasks to manually judge some of the submitted translations. For each team,
this will amount to ranking 300 sets of 5 translations per language pair submitted.
</p>
<p>
We also provide baseline machine translation systems, with performance
comparable to the best systems from last year's shared task.
</p>
<h3>IT TRANSLATION TASK</h3>
<p>This task focuses on domain adaptation of MT to the <abbr title="information technologies">IT</abbr> domain for the following language pairs:</p>
<ul>
<li>English-to-Bulgarian (EN-BG)</li>
<li>English-to-Czech (EN-CS)</li>
<li>English-to-German (EN-DE)</li>
<li>English-to-Spanish (EN-ES)</li>
<li>English-to-Basque (EN-EU)</li>
<li>English-to-Dutch (EN-NL)</li>
<li>English-to-Portuguese (EN-PT)</li>
</ul>
<p>Parallel corpora (including in-domain training data) are available.
Evaluation will be carried out both automatically and manually.
See <a href="it-translation-task.html">detailed information about the task</a>.</p>
<h3>BIOMEDICAL TRANSLATION TASK</h3>
<p>In the first edition of this task, we will evaluate systems for the translation of scientific abstracts in the biological and health sciences for the following language pairs:</p>
<ul>
<li> English-French and French-English</li>
<li> English-Spanish and Spanish-English</li>
<li> English-Portuguese and Portuguese-English </li>
</ul>
<p>Parallel corpora will be available for the above language pairs, as well as monolingual corpora for each of the four languages.
Evaluation will be carried out both automatically and manually.</p>
<h3>AUTOMATIC POST-EDITING TASK</h3>
<p>This shared task will examine automatic methods for correcting errors produced by machine translation (MT) systems. Automatic Post-editing (APE) aims at improving MT output in black box scenarios, in which the MT system is used "as is" and cannot be modified.
From the application point of view, APE components would make it possible to:</p>
<ul>
<li> Cope with systematic errors of an MT system whose decoding process is not accessible </li>
<li> Provide professional translators with improved MT output quality to reduce (human) post-editing effort </li>
</ul>
<p>In this second edition of the task, the evaluation will focus on one language pair (English-German), measuring systems' capability to reduce the distance (HTER) that separates an automatic translation from its human-revised version approved for publication. This edition will focus on IT domain data, and will provide post-editions (of MT output) collected from professional translators.</p>
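<p>For illustration, the sketch below computes a simplified HTER-style score: the word-level edit distance between an MT output and its human post-edit, normalised by the length of the post-edit. The official HTER computation is based on TER, which additionally permits block shifts, so this is only a rough approximation; the function and variable names are ours, not part of any released evaluation tool.</p>
<pre>
# Simplified HTER-style score (illustrative only): word-level edit distance
# between an MT output and its post-edit, normalised by post-edit length.
# Real HTER uses TER, which also allows block shifts.

def word_edit_distance(hyp, ref):
    """Levenshtein distance over token lists (insertions, deletions, substitutions)."""
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        cur = [i]
        for j, r in enumerate(ref, start=1):
            cur.append(min(prev[j - 1] + (h != r),  # substitution / match
                           prev[j] + 1,             # deletion
                           cur[j - 1] + 1))         # insertion
        prev = cur
    return prev[-1]

def simplified_hter(mt_output, post_edit):
    hyp, ref = mt_output.split(), post_edit.split()
    return word_edit_distance(hyp, ref) / max(len(ref), 1)

print(simplified_hter("the system translate the sentence",
                      "the system translates the sentence"))  # 1 edit / 5 words = 0.2
</pre>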
<h3>METRICS TASK</h3>
<p>
The metrics task (also called the evaluation task) will assess automatic evaluation metrics' ability to:
</p>
<ul>
<li> Rank systems on their overall performance on the test set </li>
<li> Rank systems on a sentence-by-sentence level </li>
</ul>
<p>
Participants in the shared evaluation task will use their automatic evaluation metrics to score the output from the translation task and the tuning task. In addition to MT outputs from the other two tasks, the participants will be provided with reference translations. We will measure the correlation of automatic evaluation metrics with the human judgments.
</p>
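<p>As a minimal sketch of the system-level setting, the snippet below computes the Pearson correlation between hypothetical metric scores and human scores for the same set of systems. The numbers are invented for illustration, and the official scoring may also report other correlation statistics (e.g. at the sentence level).</p>
<pre>
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of system-level scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical scores for four MT systems on the same test set:
metric_scores = [0.31, 0.27, 0.35, 0.22]    # automatic metric
human_scores  = [0.10, -0.05, 0.20, -0.15]  # manual evaluation
print(pearson(metric_scores, human_scores))
</pre>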
<h3>QUALITY ESTIMATION TASK</h3>
<p> Quality estimation systems aim at producing an estimate of the quality of a given translation at system run-time, without access to a reference translation. This topic is particularly relevant from a user perspective. Among other applications, it can (i) help decide whether a given translation is good enough for publishing as is; (ii) filter out sentences that are not good enough for post-editing; (iii) select the best translation among options from multiple MT and/or translation memory systems; (iv) inform readers of the target language of whether or not they can rely on a translation; and (v) spot parts (words or phrases) of a translation that are potentially incorrect.</p>
<p>Research on this topic has shown promising results in the last couple of years. Building on previous years' experience, the Quality Estimation track of WMT16 will focus on English, Spanish, and German, and will provide new training and test sets, along with evaluation metrics and baseline systems, for variants of the task at three different levels of prediction: word, sentence, and document.</p>
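<p>As a rough sketch of the sentence-level variant, quality estimation can be cast as regression from features of a source/translation pair to a quality label (e.g. an HTER score derived from post-edited data). The example below uses a few toy features and scikit-learn's SVR; the official baseline features and data formats are defined on the task page, and everything named here is purely illustrative.</p>
<pre>
# Sentence-level quality estimation as regression (illustrative only).
# train_data: list of (source, mt_output, quality_label) triples.
from sklearn.svm import SVR

def toy_features(source, translation):
    """A few simple black-box features; real QE systems use far richer sets."""
    src_len = len(source.split())
    tgt_len = len(translation.split())
    return [src_len, tgt_len, tgt_len / max(src_len, 1)]

def train_qe(train_data):
    X = [toy_features(src, mt) for src, mt, _ in train_data]
    y = [label for _, _, label in train_data]
    model = SVR(kernel="rbf")
    model.fit(X, y)
    return model

def predict_quality(model, source, translation):
    return model.predict([toy_features(source, translation)])[0]
</pre>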
<h3>TUNING TASK</h3>
This task will assess your team's ability to <b>optimize the parameters</b> of a given hierarchical MT system (Moses).
<p>
Participants in the tuning task will be given complete Moses models for English-to-Czech and Czech-to-English translation and the standard development sets from the translation task. The participants are expected to submit the <code>moses.ini</code> for one or both of the translation directions. We will use the configuration and a fixed revision of Moses to translate the official WMT16 test set. The outputs of the various configurations of the system will be scored using the standard manual evaluation procedure.
</p>
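<p>For orientation, a <code>moses.ini</code> ties together the feature functions of a Moses model and the weights assigned to them; tuning adjusts the <code>[weight]</code> section. The fragment below is a generic, phrase-based-style illustration only: the actual files distributed for the tuning task will contain the organisers' own feature functions and paths, and the hierarchical models will differ in their components.</p>
<pre>
# Illustrative moses.ini fragment (not the files distributed for the task).
[input-factors]
0

[mapping]
0 T 0

[feature]
UnknownWordPenalty
WordPenalty
PhrasePenalty
PhraseDictionaryMemory name=TranslationModel0 num-features=4 path=phrase-table.gz input-factor=0 output-factor=0
KENLM name=LM0 factor=0 path=lm.bin order=5
Distortion

[weight]
UnknownWordPenalty0= 1
WordPenalty0= -1
PhrasePenalty0= 0.2
TranslationModel0= 0.2 0.2 0.2 0.2
Distortion0= 0.3
LM0= 0.5
</pre>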
<h3>CROSS-LINGUAL PRONOUN PREDICTION TASK</h3>
<p>Pronoun translation poses a problem for current state-of-the-art SMT systems as pronoun systems do not map well across languages, e.g., due to differences in gender, number, case, formality, or humanness, and to differences in where pronouns may be used. Translation divergences typically lead to mistakes in SMT, as when translating the English "it" into French ("il", "elle", or "cela"?) or into German ("er", "sie", or "es"?). One way to model pronoun translation is to treat it as a cross-lingual pronoun prediction task.</p>
<p>We propose such a task, which asks participants to predict a target-language pronoun given a source-language pronoun in the context of a sentence. We further provide a lemmatised target-language human-authored translation of the source sentence, and automatic word alignments between the source sentence words and the target-language lemmata. In the translation, the words aligned to a subset of the source-language third-person pronouns are substituted by placeholders. The aim of the task is to predict, for each placeholder, the word that should replace it from a small, closed set of classes, using any type of information that can be extracted from the documents.</p>
<p>The cross-lingual pronoun prediction task will be similar to the task of the same name at DiscoMT 2015:</p>
<a href="http://www.idiap.ch/workshop/DiscoMT/shared-task">http://www.idiap.ch/workshop/DiscoMT/shared-task</a>
<p>Participants are invited to submit systems for the English-French and English-German language pairs, for both directions.</p>
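<p>To make the setup concrete, a trivial baseline for this task predicts, for each placeholder, the target class most frequently aligned to the given source pronoun in the training data. The sketch below, with invented example data and an assumed fallback class, only illustrates the input/output of the prediction problem, not a competitive system.</p>
<pre>
from collections import Counter, defaultdict

def train_most_frequent(examples):
    """examples: (source_pronoun, target_class) pairs extracted from the training data."""
    counts = defaultdict(Counter)
    for src_pronoun, target_class in examples:
        counts[src_pronoun][target_class] += 1
    return {src: c.most_common(1)[0][0] for src, c in counts.items()}

def predict(model, source_pronoun, fallback="OTHER"):
    """Predict the replacement class for a placeholder aligned to source_pronoun."""
    return model.get(source_pronoun, fallback)

# Invented toy data for English-German:
model = train_most_frequent([("it", "es"), ("it", "es"), ("it", "sie"), ("he", "er")])
print(predict(model, "it"))    # -> es
print(predict(model, "they"))  # -> OTHER (fallback for unseen pronouns)
</pre>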
<h3>BILINGUAL DOCUMENT ALIGNMENT TASK</h3>
Details TBC
<h3>MULTIMODAL TRANSLATION TASK</h3>
This is a new task where participants are requested to generate a description for an image in a target language, given the image itself and one or more descriptions in a different (source) language.
<h3>PAPER SUBMISSION INFORMATION</h3>
<p>
Submissions will consist of regular full papers of 6-10 pages, plus
additional pages for references, formatted following the
<a href="http://acl2016.org">ACL 2016
guidelines</a>. In addition, shared task participants will be invited to
submit short papers (suggested length: 4-6 pages, plus references) describing their systems or their
evaluation metrics. Both submission and review processes will be handled
<a href="https://www.softconf.com/acl2016/WMT16/">electronically</a>.
Note that regular papers must be anonymized, while system descriptions
do not need to be.
</p>
<p>
We encourage individuals who are submitting research papers to evaluate
their approaches using the training resources provided by this conference
and past workshops, so that their experiments can be repeated by others
using these publicly available corpora.
</p>
<h3>POSTER FORMAT</h3>
A0, vertical. For details on posters, please check with the local ACL organisers.
<h3>ANNOUNCEMENTS</h3>
<table border=0 cellspacing=0>
<tr><td style="padding-left: 5px">
Subscribe to the announcement list for WMT by entering your e-mail address below. This list will be used to announce when the test sets are released, to indicate any corrections to the training sets, and to amend the deadlines as needed.
</td></tr>
<form action="http://groups.google.com/group/wmt-tasks/boxsubscribe">
<tr><td style="padding-left: 5px;">
Email: <input type=text name=email>
<input type=submit name="sub" value="Subscribe">
</td></tr>
</form>
</table>
<p>
<table border=0 cellspacing=0>
<tr><td>
You can read <a href="http://groups.google.com/group/wmt-tasks">past
announcements</a> on the Google Groups page for WMT. These also
include an archive of announcements from earlier workshops.</td><td style="padding-left: 25px" align=right nowrap>
<a href="http://groups.google.com/group/wmt-tasks"><img src="http://groups.google.com/groups/img/3nb/groups_bar.gif"
height=26 width=132 alt="Google Groups"></a>
</td></tr>
</table>
</p>
<h3>INVITED TALK</h3>
TBC
<!-- Jacob Devlin (Microsoft Research)<br>
<i>A Practical Guide to Real-Time Neural Translation</i>
<br> -->
<h3>ORGANIZERS</h3>
Ondřej Bojar (Charles University in Prague)<br>
Christian Buck (University of Edinburgh)<br>
Rajen Chatterjee (FBK)<br>
Christian Federmann (MSR)<br>
Liane Guillou (University of Edinburgh)<br>
Barry Haddow (University of Edinburgh)<br>
Matthias Huck (University of Edinburgh)<br>
Antonio Jimeno Yepes (IBM Research Australia)<br>
<!--Varvara Logacheva (University of Sheffield)<br>-->
Aurélie Névéol (LIMSI, CNRS)<br>
Mariana Neves (Hasso-Plattner Institute)<br>
Pavel Pecina (Charles University in Prague)<br>
Martin Popel (Charles University in Prague)<br>
Philipp Koehn (University of Edinburgh / Johns Hopkins University)<br>
Christof Monz (University of Amsterdam)<br>
Matteo Negri (FBK)<br>
Matt Post (Johns Hopkins University)<br>
<!--Carolina Scarton (University of Sheffield)<br>-->
Lucia Specia (University of Sheffield)<br>
Karin Verspoor (University of Melbourne)<br>
Jörg Tiedemann (University of Helsinki)<br>
Marco Turchi (FBK)<br>
<h3>PROGRAM COMMITTEE</h3>
<!--
<ul>
<li>Alexandre Allauzen (Universite Paris-Sud / LIMSI-CNRS)</li>
<li>Tim Anderson (Air Force Research Laboratory)</li>
<li>Eleftherios Avramidis (German Research Center for Artificial Intelligence (DFKI)</li>
<li>Amittai Axelrod (University of Maryland)</li>
<li>Loic Barrault (LIUM, University of Le Mans)</li>
<li>Fernando Batista (INESC-ID, ISCTE-IUL)</li>
<li>Daniel Beck (University of Sheffield)</li>
<li>Jose Miguel Benedi (Universitat Politecnica de Valencia)</li>
<li>Nicola Bertoldi (FBK)</li>
<li>Arianna Bisazza (University of Amsterdam)</li>
<li>Graeme Blackwood (IBM Research)</li>
<li>Fabienne Braune (University of Stuttgart)</li>
<li>Chris Brockett (Microsoft Research)</li>
<li>Christian Buck (University of Edinburgh)</li>
<li>Hailong Cao (Harbin Institute of Technology)</li>
<li>Michael Carl (Copenhagen Business School)</li>
<li>Marine Carpuat (University of Maryland)</li>
<li>Francisco Casacuberta (Universitat Politecnica de Valencia)</li>
<li>Daniel Cer (Google)</li>
<li>Mauro Cettolo (FBK)</li>
<li>Rajen Chatterjee (Fondazione Bruno Kessler)</li>
<li>Boxing Chen (NRC)</li>
<li>Colin Cherry (NRC)</li>
<li>David Chiang (University of Notre Dame)</li>
<li>Kyunghyun Cho (New York University)</li>
<li>Vishal Chowdhary (Microsoft)</li>
<li>Steve DeNeefe (SDL Language Weaver)</li>
<li>Michael Denkowski (Carnegie Mellon University)</li>
<li>Jacob Devlin (Microsoft Research)</li>
<li>Markus Dreyer (SDL Language Weaver)</li>
<li>Kevin Duh (Nara Institute of Science and Technology)</li>
<li>Nadir Durrani (QCRI)</li>
<li>Marc Dymetman (Xerox Research Centre Europe)</li>
<li>Marcello Federico (FBK)</li>
<li>Minwei Feng (IBM Watson Group)</li>
<li>Yang Feng (Baidu)</li>
<li>Andrew Finch (NICT)</li>
<li>Jose A. R. Fonollosa (Universitat Politecnica de Catalunya)</li>
<li>Mikel Forcada (Universitat d'Alacant)</li>
<li>George Foster (NRC)</li>
<li>Alexander Fraser (Ludwig-Maximilians-Universität München)</li>
<li>Markus Freitag (RWTH Aachen University)</li>
<li>Ekaterina Garmash (University of Amsterdam)</li>
<li>Ulrich Germann (University of Edinburgh)</li>
<li>Kevin Gimpel (Toyota Technological Institute at Chicago)</li>
<li>Jesus Gonzalez-Rubio (Universitat Politecnica de Valencia)</li>
<li>Francisco Guzman (Qatar Computing Research Institute)</li>
<li>Nizar Habash (New York University Abu Dhabi)</li>
<li>Jan Hajic (Charles University in Prague)</li>
<li>Greg Hanneman (Carnegie Mellon University)</li>
<li>Eva Hasler (University of Cambridge)</li>
<li>Yifan He (New York University)</li>
<li>Kenneth Heafield (University of Edinburgh)</li>
<li>John Henderson (MITRE)</li>
<li>Teresa Herrmann (Karlsruhe Institute of Technology)</li>
<li>Felix Hieber (Amazon Research)</li>
<li>Stephane Huet (Universite d'Avignon)</li>
<li>Young-Sook Hwang (SKPlanet)</li>
<li>Gonzalo Iglesias (University of Cambridge)</li>
<li>Abe Ittycheriah (IBM)</li>
<li>Laura Jehl (Heidelberg University)</li>
<li>Maxim Khalilov (BMMT)</li>
<li>Roland Kuhn (National Research Council of Canada)</li>
<li>Shankar Kumar (Google)</li>
<li>David Langlois (LORIA, Universite de Lorraine)</li>
<li>Gennadi Lembersky (NICE Systems)</li>
<li>Lemao Liu (NICT)</li>
<li>Qun Liu (Dublin City University)</li>
<li>Zhanyi Liu (Baidu)</li>
<li>Wolfgang Macherey (Google)</li>
<li>Saab Mansour (RWTH Aachen University)</li>
<li>Yuval Marton (Microsoft)</li>
<li>Arne Mauser (Google, Inc)</li>
<li>Wolfgang Menzel (Hamburg University)</li>
<li>Abhijit Mishra (Indian Institute of Technology Bombay)</li>
<li>Dragos Munteanu (SDL Language Technologies)</li>
<li>Maria Nadejde (University of Edinburgh)</li>
<li>Preslav Nakov (Qatar Computing Research Institute, HBKU)</li>
<li>Graham Neubig (Nara Institute of Science and Technology)</li>
<li>Jan Niehues (Karlsruhe Institute of Technology)</li>
<li>Kemal Oflazer (Carnegie Mellon University - Qatar)</li>
<li>Daniel Ortiz-Martínez (Technical University of Valencia)</li>
<li>Santanu Pal (Saarland University)</li>
<li>Stephan Peitz (RWTH Aachen University)</li>
<li>Sergio Penkale (Lingo24)</li>
<li>Daniele Pighin (Google Inc)</li>
<li>Maja Popovic (Humboldt University of Berlin)</li>
<li>Stefan Riezler (Heidelberg University)</li>
<li>Johann Roturier (Symantec)</li>
<li>Raphael Rubino (Prompsit Language Engineering)</li>
<li>Alexander M. Rush (MIT)</li>
<li>Hassan Sawaf (eBay Inc.)</li>
<li>Jean Senellart (SYSTRAN)</li>
<li>Rico Sennrich (University of Edinburgh)</li>
<li>Wade Shen (MIT)</li>
<li>Patrick Simianer (Heidelberg University)</li>
<li>Linfeng Song (University of Rochester)</li>
<li>Sara Stymne (Uppsala University)</li>
<li>Katsuhito Sudoh (NTT Communication Science Laboratories / Kyoto University)</li>
<li>Felipe Sanchez-Martinez (Universitat d'Alacant)</li>
<li>Jörg Tiedemann (Uppsala University)</li>
<li>Christoph Tillmann (IBM Research)</li>
<li>Antonio Toral (Dublin City Unversity)</li>
<li>Yulia Tsvetkov (Carnegie Mellon University)</li>
<li>Marco Turchi (Fondazione Bruno Kessler)</li>
<li>Ferhan Ture (BBN Technologies)</li>
<li>Masao Utiyama (NICT)</li>
<li>Ashish Vaswani (University of Southern California Information Sciences Institute)</li>
<li>David Vilar (Nuance)</li>
<li>Martin Volk (University of Zurich)</li>
<li>Aurelien Waite (University of Cambridge)</li>
<li>Taro Watanabe (NICT)</li>
<li>Marion Weller (Universität Stuttgart)</li>
<li>Philip Williams (University of Edinburgh)</li>
<li>Shuly Wintner (University of Haifa)</li>
<li>Hua Wu (Baidu)</li>
<li>Joern Wuebker (RWTH Aachen University)</li>
<li>Peng Xu (Google Inc.)</li>
<li>Wenduan Xu (Cambridge University)</li>
<li>Francois Yvon (LIMSI/CNRS)</li>
<li>Feifei Zhai (The City University of New York)</li>
<li>Joy Ying Zhang (Carnegie Mellon University)</li>
<li>Tiejun Zhao (Harbin Institute of Technology)</li>
<li>Yinggong Zhao (State Key Laboratory for Novel Software Technology at Nanjing University)</li>
</ul>
-->
<h3>CONTACT</h3>
<p>
For general questions, comments, etc. please send email
to <a href="mailto:[email protected]">[email protected]</a>.<br>
For task-specific questions, please contact the relevant organisers.
</p>
<h3>ACKNOWLEDGEMENTS</h3>
This conference has received funding
from the European Union’s Horizon 2020 research
and innovation programme under grant agreements
645452 (<a href="http://www.qt21.eu/">QT21</a>) and 645357 (<a href="http://www.meta-net.eu/projects/cracker/">Cracker</a>).
<br>
We thank <a href="http://www.yandex.com">Yandex</a> for their donation of data for the Russian-English and Turkish-English news tasks, and the University of Helsinki for their donation of data for the Finnish-English news tasks.
<!-- Supported by the European Commision<br> under the
<a href="http://www.mosescore.eu"><img align=right src="http://www.statmt.org/mosescore/pub/img/mosescore-logo-transp.png" border=0 width=100 height=20></a> <a href="http://www.mosescore.eu/">MosesCore</a> project <br>project (grant number 288487) <p>
-->
</body>
</html>