Commit local changes to Jekyll sources
emjotde committed Jun 3, 2017
1 parent 024fe83 commit 21f869c
Showing 6 changed files with 21 additions and 20 deletions.
3 changes: 2 additions & 1 deletion LICENSE
@@ -1,6 +1,7 @@
MIT License

Copyright (c) 2017 AmuNMT
Copyright (c) 2017 Marcin Junczys-Dowmunt, the University of Edinburgh, Adam
Mickiewicz University, the World Intellectual Property Organization

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
4 changes: 2 additions & 2 deletions README.md
@@ -1,4 +1,4 @@
# AmuNMT Website
# Marian Website

The website is built with Jekyll - a static site generator.
The content is created and updated on branch `jekyll`, then the static pages
@@ -48,4 +48,4 @@ message.
| Tag | Description |
| --- | --- |
| `[Text](/permalink/)` | An active link to another subpage of the website identified by its permalink. |
| `{% github_link <repository>/<path/to/file> %}` | An active link to a file/directory `<path/to/file>` in the given repository, i.e. `http://amunmt.github.io/amunmt/<repository>/tree/master/<path/to/file>`. |
| `{% github_link <repository>/<path/to/file> %}` | An active link to a file/directory `<path/to/file>` in the given repository, i.e. `http://marian-nmt.github.io/marian/<repository>/tree/master/<path/to/file>`. |
14 changes: 7 additions & 7 deletions features.md
@@ -57,20 +57,20 @@ the [IWSLT paper](http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_4.pdf

We ran our experiments on an Intel Xeon E5-2620 2.40GHz server with four NVIDIA
GeForce GTX 1080 GPUs. We present the words-per-second ratio for our NMT models
using AmuNMT and Nematus, executed on the CPU and GPU. For the CPU version we
using Marian and Nematus, executed on the CPU and GPU. For the CPU version we
use 16 threads, translating one sentence per thread. We restrict the number of
OpenBLAS threads to 1 per main Nematus thread. For the GPU version of Nematus
we use 5 processes to maximize GPU saturation. As a baseline, the phrase-based
model reaches 455 words per second using 16 threads.
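
To make the metric concrete, the sketch below shows one way such throughput numbers can be computed: translate a test set, time it, and divide the word count by the elapsed wall-clock time. It is illustrative only; `translate_fn` is a hypothetical stand-in for the decoder, and counting target-side rather than source-side words is an assumption here, not taken from the benchmark setup.

```python
import time

def words_per_second(translate_fn, sentences):
    """Illustrative throughput measurement: translated words per wall-clock second."""
    start = time.time()
    outputs = [translate_fn(sentence) for sentence in sentences]
    elapsed = time.time() - start
    # Count whitespace-separated tokens in the produced translations.
    return sum(len(output.split()) for output in outputs) / elapsed
```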

The CPU-bound execution of Nematus reaches 47 words per second while the
GPU-bound achieved 270 words per second. In similar settings, CPU-bound AmuNMT
GPU-bound achieved 270 words per second. In similar settings, CPU-bound Marian
is three times faster than Nematus CPU, but three times slower than Moses. With
vocabulary selection (systems with asterisks) we can nearly double the speed of
AmuNMT CPU. The GPU-executed version of AmuNMT is more than three times faster
Marian CPU. The GPU-executed version of Marian is more than three times faster
than Nematus and nearly twice as fast as Moses, achieving 865 words per second;
with vocabulary selection we reach 1,192. Even the speed of the CPU version
would already allow to replace a Moses-based SMT system with an AmuNMT-based
would already allow to replace a Moses-based SMT system with a Marian-based
NMT system in a production environment without severely affecting translation
throughput.
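
The gain from vocabulary selection comes from scoring only a small, per-sentence shortlist of likely target words instead of the full output vocabulary, which is what dominates decoding time on the CPU. A minimal NumPy sketch of the idea follows; the names are hypothetical and the actual amun implementation differs.

```python
import numpy as np

def shortlist_softmax(decoder_state, W_out, b_out, candidate_ids):
    """Sketch of vocabulary selection: softmax over a shortlist of K words, K << V."""
    W_short = W_out[candidate_ids]              # (K, d) slice of the (V, d) output matrix
    logits = W_short @ decoder_state + b_out[candidate_ids]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Map shortlist positions back to full-vocabulary word ids.
    return dict(zip(candidate_ids, probs))
```

Because the shortlist is only a small fraction of the vocabulary, the cost of the output layer drops sharply, which is where the CPU speed-up reported above comes from.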

@@ -88,8 +88,8 @@ graph.

### Training speed in words per second

We also compare training speed between a number of popular toolkits and AmuNMT.
As AmuNMT is still early work, we expect speed to improve with future optimizations.
We also compare training speed between a number of popular toolkits and Marian.
As Marian is still early work, we expect speed to improve with future optimizations.

<div class="multiple-images">
<img alt="Training speed #1" src="{{ site.baseurl }}/assets/images/training_speed.png" />
@@ -105,7 +105,7 @@ on German-English WMT data.

### Multi-GPU training

AmuNMT's training framework provides multi-GPU training via asynchronous SGD and
Marian's training framework provides multi-GPU training via asynchronous SGD and
data parallelism (copies of the full model on each GPU). We benchmarked
the [Romanian-English example](/examples/training/) on a machine with
8 NVIDIA GTX 1080 GPUs. Training speed increases with each GPU instance, but currently
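
As a rough illustration of the scheme described above, the toy Python sketch below runs one worker per (simulated) GPU; each worker holds a replica of the parameters, computes gradients on its own data shard, and applies updates to the shared parameters without waiting for the other workers. All names here are hypothetical; the actual Marian trainer is implemented in C++/CUDA and differs in detail.

```python
import threading
import numpy as np

def async_sgd(shared_params, data_shards, grad_fn, lr=0.01, steps=100):
    """Toy asynchronous SGD with data parallelism (one thread per simulated GPU)."""
    lock = threading.Lock()

    def worker(params, shard):
        for step in range(steps):
            local_copy = params.copy()           # per-worker model replica
            batch = shard[step % len(shard)]
            grad = grad_fn(local_copy, batch)    # gradient on this worker's batch
            with lock:                           # guard only the in-place write;
                params -= lr * grad              # workers never wait for each other's steps

    workers = [threading.Thread(target=worker, args=(shared_params, shard))
               for shard in data_shards]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return shared_params
```

Because the workers never wait for one another, adding GPUs raises throughput, but individual updates may be computed against slightly stale parameters, which is the usual trade-off of asynchronous SGD.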
2 changes: 1 addition & 1 deletion index.md
@@ -30,7 +30,7 @@ permalink: /
</p>

<div class="cta-container">
<a class="btn btn-primary btn-cta btn-blue" href="{{ site.github }}/amunmt" target="_blank">
<a class="btn btn-primary btn-cta btn-blue" href="{{ site.github }}/marian" target="_blank">
<i class="fa fa-github"></i>
Download from GitHub
</a>
4 changes: 2 additions & 2 deletions publications.md
@@ -9,7 +9,7 @@ menu: 6
## Citation

Please cite the following [IWSLT paper](http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_4.pdf)
if you use AmuNMT or Marian in your research:
if you use Marian (formerly AmuNMT) in your research:

```tex
@InProceedings{junczys2016neural,
@@ -24,7 +24,7 @@ if you use AmuNMT or Marian in your research:
}
```

## Work using AmuNMT
## Work using Marian/AmuNMT

{% bibliography %}

14 changes: 7 additions & 7 deletions quickstart.md
@@ -43,28 +43,28 @@ Tested on different machines and distributions:

Clone a fresh copy from github:

git clone https://github.com/amunmt/amunmt
git clone https://github.com/marian-nmt/marian

The project is a standard CMake out-of-source build:

cd amunmt
cd marian
mkdir build
cd build
cmake ..
make -j

If run for the first time, this will also download Marian -- the training
framework for AmuNMT.
framework for Marian.

## Running AmuNMT
## Running Marian

### Training

Marian is the training framework of AmuNMT. Assuming `corpus.en` and `corpus.ro` are
Marian is the training framework of Marian. Assuming `corpus.en` and `corpus.ro` are
corresponding and preprocessed files of an English-Romanian parallel corpus, the
following command will create a Nematus-compatible neural machine translation model.

./amunmt/build/marian \
./marian/build/marian \
--train-sets corpus.en corpus.ro \
--vocabs vocab.en vocab.ro \
--model model.npz
@@ -77,7 +77,7 @@ a WMT-grade model.

If a trained model is available, run:

./amunmt/build/amun -m model.npz -s vocab.en -t vocab.ro <<< "This is a test ."
./marian/build/amun -m model.npz -s vocab.en -t vocab.ro <<< "This is a test ."

See the [documentation](/docs/#amun) for a full list of command line options
or the [examples](/examples/translating) for a full example of how to use
