Multi Language Tokenization Support #298
How can I join the development of support for multiple languages? I am good at Chinese and English.
Hi @taotecode, thanks for your interest in contributing to the project! Here are the unit tests for the Tokenizers implemented in the library: https://github.com/RubixML/ML/tree/master/tests/Tokenizers We need help from native speakers to ensure that we have test coverage for different languages and that the current tests are correct.
@andrewdalpino I can help with Hindi. I am not sure how it is going to work, though. Here is the problem:
Expected array:
Actual array:
I only tested for
This is because Hindi and many other languages are based on Complex Text Layout (CTL), so you will need to account for partial words that become full words at the end. In general terms, they fall under complex script languages. I'm pretty sure there are many projects in Python for tokenizing these languages; PHP also needs implementations like these, such as hindi-tokenizer, but for other languages as well to support further development. AFAIK the tokenizers come from NLTK and its derivative works, so there needs to be an equivalent implementation in PHP, or an FFI wrapper, in order to make this work.
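To make the CTL point concrete, here is a small Python sketch (the sample sentence and patterns are illustrative only, not RubixML's actual tokenizer code) showing why ASCII-oriented word patterns break on Devanagari, and why even code-point-level splitting is risky when letters carry combining marks:

```python
import re
import unicodedata

# Illustrative Hindi sample: "यह एक परीक्षण है" ("this is a test").
text = "यह एक परीक्षण है"

# An ASCII-only word pattern finds no tokens at all in Devanagari text.
ascii_tokens = re.findall(r"[A-Za-z']+", text)
print(ascii_tokens)  # []

# Naive whitespace splitting at least keeps orthographic words intact.
words = text.split()
print(words)  # ['यह', 'एक', 'परीक्षण', 'है']

# The deeper problem: a single written "letter" is often a base consonant
# (Unicode category Lo) plus combining vowel signs (Mc) or marks (Mn), so a
# tokenizer that treats code points independently can split mid-letter.
print([unicodedata.category(c) for c in "ही"])  # ['Lo', 'Mc']
```

This is the same class of problem a PHP implementation would face: a PCRE `\w`-based pattern without Unicode-aware options can split a word at every combining mark.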
I'm hoping that we can get to the point where we fully support the following languages.
I started adding unit tests for these languages for a few tokenizers here: https://github.com/RubixML/ML/tree/master/tests/Tokenizers - however, it doesn't look like we support all the languages. I only speak English, so it's hard for me to tell. Could we get some help from the community to verify that our Tokenizers support all of these languages and, if not, contribute a fix?
https://github.com/RubixML/ML/tree/master/src/Tokenizers
Thank you!
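One low-effort starting point for community contributors is a script-agnostic sanity check: whatever a word tokenizer does, re-joining its output should lose no non-whitespace characters. A minimal Python sketch (the sample sentences and the `word_tokenize` baseline are illustrative assumptions, not the library's implementation; the real tests live under tests/Tokenizers):

```python
samples = {
    "English": "The quick brown fox",
    "Chinese": "今天天气很好",          # no spaces: whitespace splitting yields one token
    "Hindi":   "यह एक परीक्षण है",
    "Arabic":  "هذا اختبار بسيط",
}

def word_tokenize(text: str) -> list:
    # Baseline whitespace tokenizer; real tokenizers need script-aware rules
    # (e.g. segmentation for Chinese, combining-mark handling for Hindi).
    return text.split()

for language, sentence in samples.items():
    tokens = word_tokenize(sentence)
    # Invariant: tokenization must not drop or alter any non-space character.
    assert "".join(tokens) == sentence.replace(" ", ""), language
```

The Chinese sample also makes a limitation visible: with no spaces in the text, whitespace splitting returns a single token, so passing this invariant is necessary but not sufficient for correct segmentation.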