Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
Natural Language Processing Pipeline - Sentence Splitting, Tokenization, Lemmatization, Part-of-speech Tagging and Dependency Parsing
A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology tags, lemmas, and dependency trees.
Reference code for SyntaxNet
spaCy + UDPipe
HuSpaCy: industrial-strength Hungarian natural language processing
A minimal, pure Python library to interface with CoNLL-U format files.
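The CoNLL-U format these libraries interface with is simple enough to sketch in pure Python: each token is a line of 10 tab-separated fields, comment lines start with `#`, and blank lines separate sentences. The parser below is a minimal illustrative sketch of that format, not the API of any library listed here.

```python
# Minimal sketch of reading CoNLL-U: 10 tab-separated fields per token line,
# '#' comment lines, blank lines between sentences.
SAMPLE = """\
# text = Dogs bark.
1\tDogs\tdog\tNOUN\t_\tNumber=Plur\t2\tnsubj\t_\t_
2\tbark\tbark\tVERB\t_\t_\t0\troot\t_\t_
3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_
"""

# The ten standard CoNLL-U columns.
FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_conllu(text):
    """Yield each sentence as a list of {field: value} token dicts."""
    sentence = []
    for line in text.splitlines():
        line = line.strip()
        if not line:                       # blank line ends a sentence
            if sentence:
                yield sentence
                sentence = []
        elif not line.startswith("#"):     # skip comment/metadata lines
            sentence.append(dict(zip(FIELDS, line.split("\t"))))
    if sentence:                           # flush a final unterminated sentence
        yield sentence

sentences = list(parse_conllu(SAMPLE))
```

A real library adds validation, multiword-token handling, and round-trip serialization on top of this skeleton.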
Repository for the Georgetown University Multilayer Corpus (GUM)
A framework to convert Universal Dependencies to Logical Forms
BERT fine-tuning for the POS tagging task (Keras)
Python framework for processing Universal Dependencies data
The Universal Dependencies (UD) Portuguese treebank
Yet Another (natural language) Parser
COMBO is a jointly trained tagger, lemmatizer, and dependency parser.
An NLP pipeline for Hebrew
CoNLL-U to pandas DataFrame
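Converting CoNLL-U to a DataFrame amounts to flattening the token lines into rows and tagging each with a sentence index. The snippet below is a hedged sketch of that idea using an inline sample; it does not reproduce the API of the repository above.

```python
import pandas as pd

# Hypothetical sample: two sentences, 10 tab-separated fields per token line,
# blank line as sentence separator.
CONLLU = """\
1\tShe\tshe\tPRON\t_\t_\t2\tnsubj\t_\t_
2\truns\trun\tVERB\t_\t_\t0\troot\t_\t_

1\tHi\thi\tINTJ\t_\t_\t0\troot\t_\t_
"""

COLS = ["id", "form", "lemma", "upos", "xpos",
        "feats", "head", "deprel", "deps", "misc"]

rows, sent_id = [], 0
for line in CONLLU.splitlines():
    line = line.strip()
    if not line:
        sent_id += 1                 # blank line: next sentence
    elif not line.startswith("#"):
        rows.append([sent_id] + line.split("\t"))

# One row per token, with a sentence index column prepended.
df = pd.DataFrame(rows, columns=["sentence"] + COLS)
```

With the tokens in a DataFrame, the usual pandas operations (filtering by `upos`, grouping by `sentence`, counting `deprel` labels) apply directly.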
A manually annotated corpus of the Ukrainian language with morphological, syntactic, and coreference annotation
Framework for probing tasks
A JSON API to tag a sentence with part-of-speech tags. Uses UDPipe, so it supports a wide range of languages.