How to reduce GPU memory? #58
What a wonderful project! I have used it to solve some problems. But there is one problem that always bothers me.

In one of my cases, I have to use `rnn_size=512`, `num_layers=2`, and `seq_length=1200`. Other arguments: `batch_size=10`, `num_epochs=50`, `grad_clip=5.0`, and so on. With these settings it allocates 7.23 GiB on the GPU, which has only 8 GB free. So I wonder whether I can reduce the GPU memory usage to 7 GiB or less; if so, I could run it on the GPU. `rnn_size`, `num_layers`, and `seq_length` cannot be modified. Here is some of the output: […]

I also wonder why it needs 7.23 GiB of memory. Can anyone explain that?

Sorry for my poor English, and thanks a lot!
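(A side note relevant to the "7 GiB or less" goal, not taken from the thread: TensorFlow 1.x reserves nearly all free GPU memory up front by default, and the allocator can be capped. A minimal sketch, assuming TF 1.x; the 0.85 fraction is an arbitrary example value.)

```python
import tensorflow as tf

# Capping the allocator does not shrink the graph itself, but it bounds
# what TF may reserve, instead of letting it grab the whole 8 GB card.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate only as actually needed
config.gpu_options.per_process_gpu_memory_fraction = 0.85  # hard cap, ~6.8 GB of 8 GB

with tf.Session(config=config) as sess:
    pass  # build the graph and run training as usual
```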
I think your sequence length is very high; 1200 is quite long. TensorFlow has an issue with the way it builds these kinds of graphs using seq2seq (see this issue). You may try to remake it using `dynamic_rnn`, as they suggest in the comments. A quick fix might be to lower your batch size, though it is already low, so your loss may be noisier.
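To make the distinction concrete, here is a small illustration of the two graph styles (my sketch, assuming TensorFlow 1.x; the sizes match the issue, and the `static`/`dynamic` scope names are just placeholders):

```python
import tensorflow as tf

batch_size, seq_length, rnn_size = 10, 1200, 512
inputs = tf.placeholder(tf.float32, [batch_size, seq_length, rnn_size])

# Static unrolling (what the legacy seq2seq-style decoder does): the graph
# gets seq_length copies of the cell's ops, so 1200 steps means a huge graph.
static_cell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
static_out, _ = tf.nn.static_rnn(
    static_cell, tf.unstack(inputs, axis=1), dtype=tf.float32, scope='static')

# Dynamic unrolling: one copy of the cell's ops executed in a tf.while_loop,
# so graph size stays constant regardless of seq_length.
dynamic_cell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
dynamic_out, _ = tf.nn.dynamic_rnn(
    dynamic_cell, inputs, dtype=tf.float32, scope='dynamic')
```

The static version materializes all 1200 step subgraphs before training even starts; the dynamic version builds one step and iterates it at runtime, which is where the memory saving comes from.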
@fujimotomh Thank you for your reply. I just tried to use But if I tried So how can I modify the code? |
@ckcz123 You almost have it:

    outputs, last_state = tf.nn.dynamic_rnn(cell, tf.nn.embedding_lookup(embedding, self.input_data), initial_state=self.initial_state, scope='rnnlm')

To confirm correctness, I think the best thing to do would be to run it with the default parameters and see whether you can get a low loss on the training set. I would suspect this would work, though, as […]
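For context, here is a sketch of how that line might slot into the surrounding model code (my reconstruction, not quoted from the thread or the repo; `build_rnn` and its parameters are hypothetical stand-ins for the model's attributes):

```python
import tensorflow as tf

def build_rnn(cell, embedding, input_data, initial_state, rnn_size,
              softmax_w, softmax_b):
    """Hypothetical helper mirroring the model wiring; names are assumptions.

    softmax_w is assumed to have shape [rnn_size, vocab_size].
    """
    inputs = tf.nn.embedding_lookup(embedding, input_data)
    # One while_loop-based unroll instead of seq_length copies of the cell.
    outputs, last_state = tf.nn.dynamic_rnn(
        cell, inputs, initial_state=initial_state, scope='rnnlm')
    # outputs: [batch_size, seq_length, rnn_size] -> flatten batch and time
    # so every timestep's vector goes through the same softmax weights.
    output = tf.reshape(outputs, [-1, rnn_size])
    logits = tf.matmul(output, softmax_w) + softmax_b
    return logits, last_state
```

The key change from the static path is typically that `dynamic_rnn` returns a single `[batch, time, size]` tensor, so the per-timestep list concatenation is replaced by one `tf.reshape`.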
@fujimotomh Oh, it works! Only 1.1 GiB of GPU memory used!