The multilingual evaluation features a wide array of websites: news outlets, online magazines, blogs, government and company pages. Archived versions of the pages are sometimes used in order to test whether extraction remains consistent over time.
The benchmark focuses on decisive text parts, mostly at the beginning and the end of the main text, where errors often occur. Further difficult segments throughout the document are chosen to improve the detection of false positives, and segments from particular elements (e.g. quotes or lists) are included to check whether all necessary parts of a document are present in the output.
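As an illustration, such a benchmark can be scored by checking whether expected segments appear in the extracted text and unwanted segments do not. The following is a minimal sketch of this containment-based idea with hypothetical data and names, not the actual evaluation code:

```python
# Hypothetical gold standard: "with" lists segments that must appear in the
# extracted text, "without" lists segments that must not (e.g. boilerplate).
gold_standard = {
    "example.html": {
        "with": ["First sentence of the main text.", "Last sentence of the main text."],
        "without": ["Related articles", "Leave a comment"],
    },
}

def score(extracted: dict[str, str]) -> tuple[int, int, int, int]:
    """Count true/false positives and negatives over all annotated segments."""
    tp = fp = tn = fn = 0
    for page, annotations in gold_standard.items():
        text = extracted.get(page, "")
        for segment in annotations["with"]:
            if segment in text:
                tp += 1   # expected segment found
            else:
                fn += 1   # expected segment missing
        for segment in annotations["without"]:
            if segment in text:
                fp += 1   # unwanted segment leaked into the output
            else:
                tn += 1   # unwanted segment correctly left out
    return tp, fp, tn, fn
```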
This type of evaluation does not probe for duplicate segments, but Trafilatura features an LRU cache to detect duplicate text parts.
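On the Python side, this deduplication can be switched on at extraction time. A short example, with the URL used purely as a placeholder:

```python
from trafilatura import fetch_url, extract

# deduplicate=True activates duplicate detection on repeated text segments
downloaded = fetch_url("https://example.org")  # placeholder URL
if downloaded is not None:
    text = extract(downloaded, deduplicate=True)
    print(text)  # may be None if nothing can be extracted
```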
Whether the extracted segments appear in the right order is not evaluated, although the chosen segments are generally few and far apart.
These decisions are prompted by the need to find cost-efficient ways to define a gold standard and annotate a series of documents. More comprehensive evaluations are available, mostly focusing on English and/or a particular text type.
The results and a list of comparable benchmarks are available on the evaluation page of the docs.
The following steps allow for comparing changes made to Trafilatura, for example in a new version or a pull request:
- Install Trafilatura
- Run the script comparison_small.py
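In practice this boils down to two commands; here is a sketch using Python's subprocess module, assuming a plain pip install and that it is run from the directory containing the script (for a pull request one would typically install the modified source instead):

```python
import subprocess
import sys

# install or upgrade Trafilatura, then run the small benchmark script
subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "trafilatura"], check=True)
subprocess.run([sys.executable, "comparison_small.py"], check=True)
```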
A comparison with similar software is run periodically. As the packages tend to evolve, the script may not always be up-to-date and not all packages may be available. If that happens, commenting out the corresponding sections is the most efficient solution (an alternative is sketched after the list below). Fixes to the file can be submitted as pull requests.
- Install the packages specified in eval-requirements.txt
- Run the script comparison.py (some packages are slow, so this can take a while)
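Besides commenting out the sections for unavailable packages, another option is to guard the import and treat the missing package as producing empty output. This is only a sketch with a placeholder package name, not the actual structure of the comparison script:

```python
# Hypothetical guard: package and function names are placeholders.
try:
    from some_extractor import extract_text
except ImportError:
    extract_text = None

def run_some_extractor(html: str) -> str:
    """Return the extracted text, or an empty string if the package is unavailable."""
    if extract_text is None:
        return ""
    return extract_text(html)
```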
- BBAW collection (multilingual): Adrien Barbaresi, Lukas Kozmus, Lena Klink.
- Polish news: tsolewski.
- Additional German news sites: diskursmonitor.de, courtesy of Jan Oliver Rüdiger.