
How much memory does this require while training? #17

Open
lukesalamone opened this issue Apr 8, 2021 · 3 comments

Comments

@lukesalamone

We are running this experiment on Google Colab Pro (16 GB machines) and are receiving out of memory errors using the same vocabulary size (89500). How much memory would you recommend for us to have to run this experiment?

@Avmb
Collaborator

Avmb commented Apr 8, 2021

Strange, what framework are you using?

@lukesalamone
Author

We are training with Nematus. When we try to run with the defaults we get

Resource exhausted: OOM when allocating tensor with shape[5980,89500]

When we check our machine specs we get

Gen RAM Free: 26.2 GB  | Proc size: 168.6 MB
GPU RAM Free: 16280MB | Used: 0MB | Util   0% | Total 16280MB
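A quick back-of-the-envelope check shows why that tensor shape is already expensive on its own. This is a rough sketch, assuming float32 (4 bytes per element) — the dtype is an assumption, since the OOM message doesn't state it:

```python
# Rough size of the tensor named in the OOM message,
# assuming float32 (4 bytes per element).
rows, vocab = 5980, 89500
bytes_needed = rows * vocab * 4
print(f"{bytes_needed / 1e9:.2f} GB")  # ~2.14 GB for this single tensor
```

Since training also needs gradients, activations for other layers, and optimizer state, a single ~2 GB output tensor can easily push total usage past the 16 GB of GPU RAM shown above.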

@Avmb
Collaborator

Avmb commented Apr 9, 2021

Nematus has since been reimplemented from scratch in TensorFlow, and I haven't trained on this dataset with that version.

You can try reducing the vocabulary size, e.g. to 32000 or 16000.
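To illustrate how much that saves, here is a rough sketch of the output-layer tensor size at different vocabulary sizes, keeping the other dimension (5980) from the OOM message and again assuming float32 (4 bytes per element):

```python
# Rough output-layer tensor size at different vocabulary sizes,
# keeping the 5980 dimension from the OOM message and assuming
# float32 (4 bytes per element).
rows = 5980
for vocab in (89500, 32000, 16000):
    gb = rows * vocab * 4 / 1e9
    print(f"vocab {vocab:>6}: {gb:.2f} GB")
```

Cutting the vocabulary from 89500 to 32000 shrinks this tensor from roughly 2.14 GB to about 0.77 GB, and 16000 halves it again — which can be the difference between fitting in 16 GB of GPU RAM or not.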
