
DBpedia Information Extraction Framework


Homepage: http://dbpedia.org
Documentation: http://dev.dbpedia.org/Extraction
Get in touch with DBpedia: https://wiki.dbpedia.org/join/get-in-touch
Slack: join the #dev-team Slack channel within the DBpedia Slack workspace - the main point for development updates and discussions

Contents

  • About DBpedia
  • Getting Started
  • The DBpedia Extraction Framework
  • Contribution Guidelines
  • License

About DBpedia

DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
To check out the projects of DBpedia, visit the official DBpedia website.

Getting Started

The Easy Way - Execution using the MARVIN release bot

Running the extraction framework is a relatively complex task, documented in detail in the advanced QuickStart guide. To run the extraction process the same way the DBpedia core team does, use the MARVIN release bot. The MARVIN bot automates the overall extraction process, from downloading the ontology, mappings, and Wikipedia dumps, to extracting and post-processing the data.

git clone https://git.informatik.uni-leipzig.de/dbpedia-assoc/marvin-config
cd marvin-config
./setup-or-reset-dief.sh
# test run Romanian extraction, very small
./marvin_extraction_run.sh test
# around 4-7 days
./marvin_extraction_run.sh generic

Standalone Execution

If you plan to work on improving the codebase of the framework, you will need to run the extraction framework on its own, as described in the QuickStart guide. This is highly recommended, since during this process you will learn a lot about the extraction framework.

  • Extractors represent the core of the extraction framework. So far, many extractors have been developed to extract particular kinds of information from different Wikimedia projects. To learn more, check the New Extractors guide, which explains the process of writing a new extractor; a simplified sketch of the concept follows this list.

  • Check the Debugging Guide and learn how to debug the extraction framework.
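To make the idea concrete, here is a minimal Scala sketch of what an extractor conceptually does. It is only a sketch: the type and method names (PageNode, Quad, extract) are simplified placeholders and do not match the framework's actual API exactly.

// Illustrative sketch only; simplified placeholders, not the framework's real types.

// The root of a parsed wiki page (abstract syntax tree), reduced here to a title
// and a flat map of template properties.
case class PageNode(title: String, properties: Map[String, String])

// A single RDF statement about a resource.
case class Quad(subject: String, predicate: String, value: String)

// An extractor is a mapping from a page node to a set of statements about it.
trait Extractor {
  def extract(page: PageNode, subjectUri: String): Seq[Quad]
}

// Example extractor: emit one statement per template property found on the page.
class PropertyExtractor extends Extractor {
  private val ns = "http://example.org/property/" // hypothetical predicate namespace

  override def extract(page: PageNode, subjectUri: String): Seq[Quad] =
    page.properties.toSeq.map { case (key, value) =>
      Quad(subjectUri, ns + key, value)
    }
}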

Execution using Apache Spark

In order to speed up the extraction process, the extraction framework has been adapted to run on Apache Spark. Currently, more than half of the extractors can be executed using Spark. The extraction process using Spark is slightly different and requires a different execution setup. Check the QuickStart guide on how to run the extraction using Apache Spark.

Note: if possible, new extractors should be implemented using Apache Spark. To learn more, check the New Extractors guide, which explains the process of writing a new extractor.
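As a rough illustration of why Spark fits this workload (and not a description of the framework's actual Spark integration), the sketch below distributes parsed pages over a Spark RDD and applies a side-effect-free per-page extractor in parallel. PageNode, Quad, and extract are simplified placeholders.

// Illustrative sketch only; the framework's real Spark execution differs in detail.
import org.apache.spark.sql.SparkSession

object SparkExtractionSketch {
  // Simplified placeholders for a parsed page and an RDF statement.
  case class PageNode(title: String, properties: Map[String, String])
  case class Quad(subject: String, predicate: String, value: String)

  // A purely functional per-page extractor can safely be mapped over a distributed collection.
  def extract(node: PageNode): Seq[Quad] = {
    val subject = "http://dbpedia.org/resource/" + node.title.replace(' ', '_')
    node.properties.toSeq.map { case (k, v) =>
      Quad(subject, "http://example.org/property/" + k, v) // hypothetical predicate namespace
    }
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("extraction-sketch").master("local[*]").getOrCreate()
    val pages = Seq(PageNode("Example_Page", Map("population" -> "12345")))

    // Distribute the pages and run the extractor on each of them in parallel.
    val quads = spark.sparkContext.parallelize(pages).flatMap(extract).collect()
    quads.foreach(println)
    spark.stop()
  }
}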

The DBpedia Extraction Framework

The DBpedia community uses a flexible and extensible framework to extract different kinds of structured information from Wikipedia. The DBpedia extraction framework is written in Scala. The framework is available from the DBpedia GitHub repository (GNU GPL License). The change log may reveal more recent developments. More recent configuration options can be found here: https://github.com/dbpedia/extraction-framework/wiki

The DBpedia extraction framework is structured into different modules:

  • Core Module : Contains the core components of the framework.
  • Dump extraction Module : Contains the DBpedia dump extraction application.

Core Module

Data flow in the core module: http://www4.wiwiss.fu-berlin.de/dbpedia/wiki/DataFlow.png

Components

  • Source : The Source package provides an abstraction over a source of MediaWiki pages.
  • WikiParser : The Wiki Parser package specifies a parser, which transforms a MediaWiki page source into an Abstract Syntax Tree (AST).
  • Extractor : An Extractor is a mapping from a page node to a graph of statements about it.
  • Destination : The Destination package provides an abstraction over a destination of RDF statements.

In addition to the core components, a number of utility packages offer essential functionality used by the extraction code.
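To show how these components could fit together, here is a hedged, self-contained Scala sketch of the core data flow (source, parser, extractors, destination). The trait and class names are illustrative stand-ins, not the core module's real interfaces.

// Conceptual sketch of the core data flow; names are illustrative, not the real API.
case class WikiPage(title: String, source: String)                   // raw MediaWiki markup
case class PageNode(title: String, properties: Map[String, String])  // root of the parsed AST
case class Quad(subject: String, predicate: String, value: String)   // one RDF statement

trait Source      { def pages: Iterator[WikiPage] }                  // where pages come from
trait WikiParser  { def parse(page: WikiPage): Option[PageNode] }    // wiki markup to AST
trait Extractor   { def extract(node: PageNode, subjectUri: String): Seq[Quad] }
trait Destination { def write(quads: Seq[Quad]): Unit }              // where statements go

// An extraction job wires the four components together: every page from the
// source is parsed, passed through all extractors, and the resulting
// statements are written to the destination.
class ExtractionJob(source: Source, parser: WikiParser,
                    extractors: Seq[Extractor], destination: Destination) {
  def run(): Unit =
    for (page <- source.pages; node <- parser.parse(page)) {
      val subjectUri = "http://dbpedia.org/resource/" + node.title.replace(' ', '_')
      destination.write(extractors.flatMap(_.extract(node, subjectUri)))
    }
}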

Dump extraction Module

More recent configuration options can be found here: https://github.com/dbpedia/extraction-framework/wiki/Extraction-Instructions.

To learn more about the extraction framework, see the wiki linked above.

Contribution Guidelines

If you want to work on one of the issues, assign yourself to it or at least leave a comment that you are working on it and how.
If you have an idea for a new feature, make an issue first, assign yourself to it, then start working.
Please make sure you have read the Developer's Certificate of Origin, further down on this page!

  1. Fork the main extraction-framework repository on GitHub.
  2. Clone this fork onto your machine (git clone <your_repo_url_on_github>).
  3. Switch to the dev branch (git checkout dev).
  4. From the latest revision of the dev branch, make a new development branch. Name the branch something meaningful, for example fixRestApiParams (git checkout dev -b fixRestApiParams).
  5. Make changes and commit them to this branch.
  • Please commit regularly in small batches of things "that go together" (for example, changing a constructor and all the instance-creating calls). Putting a huge batch of changes in one commit is bad for code reviews.
  • In the commit messages, summarize the commit in the first line using not more than 70 characters. Leave one line blank and describe the details in the following lines, preferably in bullet points, like in 7776e31....
  6. When you are done with a bugfix or feature, rebase your branch onto extraction-framework/dev (git pull --rebase git://github.com/dbpedia/extraction-framework.git). Resolve possible conflicts and commit.
  7. Push your branch to GitHub (git push origin fixRestApiParams).
  8. Send a pull request from your branch into extraction-framework/dev via GitHub.
  • In the description, reference the associated commit (for example, "Fixes #123 by ..." for issue number 123).
  • Your changes will be reviewed and discussed on GitHub.
  • In addition, Travis-CI will test if the merged version passes the build.
  • If there are further changes you need to make, because Travis said the build fails or because somebody caught something you overlooked, go back to item 5. Stay on the same branch (if it is still related to the same issue). GitHub will add the new commits to the same pull request.
  • When everything is fine, your changes will be merged into extraction-framework/dev; eventually, dev, together with your improvements, will be merged into the master branch.


Important: Developer's Certificate of Origin

By sending a pull request to the extraction-framework repository on GitHub, you implicitly accept the Developer's Certificate of Origin 1.1.

License

The source code is under the terms of the GNU General Public License, version 2.
