# opus1m-2021-05-16.zip

* dataset: opus1m
* model: transformer-align
* source language(s): cat fra gcf
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: opus1m-2021-05-16.zip
* test set translations: opus1m-2021-05-16.test.txt
* test set scores: opus1m-2021-05-16.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|------:|------:|------:|-------:|------:|
| Tatoeba-test.cat-heb | 100.0 | 1.000 | 1 | 7 | 1.000 |
| Tatoeba-test.fra-heb | 33.2 | 0.547 | 3281 | 20645 | 1.000 |
| Tatoeba-test.gcf-heb | 0.0 | 1.000 | 1 | 2 | 1.000 |
| Tatoeba-test.ita-heb | 2.6 | 0.157 | 1706 | 9790 | 1.000 |
| Tatoeba-test.lad-heb | 1.0 | 0.149 | 137 | 715 | 1.000 |
| Tatoeba-test.lat-heb | 0.8 | 0.119 | 224 | 1307 | 1.000 |
| Tatoeba-test.multi-heb | 33.6 | 0.552 | 3283 | 20664 | 1.000 |
| Tatoeba-test.osp-heb | 9.5 | 0.219 | 2 | 7 | 1.000 |
| Tatoeba-test.por-heb | 1.8 | 0.171 | 702 | 4336 | 1.000 |
| Tatoeba-test.ron-heb | 16.0 | 0.122 | 1 | 3 | 1.000 |
| Tatoeba-test.spa-heb | 6.4 | 0.241 | 1849 | 12105 | 1.000 |
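The BP column is BLEU's brevity penalty: 1.0 when the system output is at least as long as the reference, and exp(1 − r/c) when it is shorter (r = reference length, c = candidate length, in tokens). A minimal sketch of that formula (function name is illustrative, not part of any evaluation toolkit):

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty.

    Returns 1.0 when the candidate is at least as long as the
    reference, exp(1 - r/c) when the candidate is shorter.
    """
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

# Every row above reports BP = 1.000, meaning the model's output was
# never shorter than the reference on these test sets.
print(brevity_penalty(20645, 20645))  # → 1.0
print(brevity_penalty(10000, 20000))  # shorter output → penalty < 1.0
```

Note that with only 1–3 sentences (cat-heb, gcf-heb, ron-heb, osp-heb), BLEU and chr-F scores are essentially noise; the fra-heb and multi-heb rows, with over 3000 sentences each, are the meaningful benchmarks here.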