Reduce resources to load language models #121
Which files? If you need the processing in Python or in JavaScript (Node), I can work on a Google Protocol Buffers format; I am quite sure the persisted model would be much lighter, and maybe the processing would be faster too, I do not know.
I know this is only halfway there, as you were asking for a better structure to gain processing time. But for a big model in memory, here is a solution: I changed the format a little from the regular one. Here is a working example in JavaScript/Node: https://github.com/bacloud23/lingua-rs-bigrams
Drawback: a new protobufjs dependency.
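The comment above proposes persisting the models as Protocol Buffers instead of JSON. As a rough sketch of what such a schema could look like (the message and field names here are my assumptions, not the actual format used in lingua-rs-bigrams):

```proto
// Hypothetical schema sketch, not the schema actually used by the linked repo.
syntax = "proto3";

message NgramModel {
  string language = 1;
  // Parallel arrays instead of map<string, double>: keys and values are
  // packed contiguously, without per-entry map overhead in the wire format.
  repeated string ngrams = 2;
  repeated double frequencies = 3 [packed = true];
}
```

Packed repeated numeric fields are where most of the size savings would come from, since a `map<string, double>` encodes each entry as a separate key/value submessage.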
@ghost: By how much does your solution reduce the binary size?
Currently, the language models are parsed from JSON files and loaded into simple maps at runtime. Even though accessing the maps is pretty fast, they consume a significant amount of memory. The goal is to investigate whether there are more suitable data structures available that require less memory, similar to what NumPy provides for Python.
One promising candidate could be ndarray.