memory leak at each session.run() #29
Comments
It does make sense that transferring the
I don't think `.eval()` is the cause; it's `session.run()` and the default graph, etc.
Yes, both functions should IMO.
Yes, you are right: `image_batch` creates a new stack instance every time `classify()` is called...
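A minimal sketch of why this leaks (plain Python, no TensorFlow; `FakeGraph`, `classify_leaky`, and `classify_fixed` are hypothetical stand-ins, not code from this repository): building an op inside `classify()` appends a new node to the shared graph on every call, while building it once at startup keeps the node count constant.

```python
class FakeGraph:
    """Stand-in for a TF default graph: it only ever grows."""
    def __init__(self):
        self.nodes = []

    def add_op(self, name):
        self.nodes.append(name)
        return name

GRAPH = FakeGraph()

def classify_leaky(image):
    # Anti-pattern: creates a new "stack" op on every call,
    # so GRAPH.nodes grows without bound.
    batch = GRAPH.add_op("stack")
    return ("run", batch, image)

# Fix: build the op once at startup and reuse it on every call.
IMAGE_BATCH = GRAPH.add_op("stack")

def classify_fixed(image):
    return ("run", IMAGE_BATCH, image)

for i in range(100):
    classify_leaky(i)
nodes_after_leaky = len(GRAPH.nodes)  # 101: 1 fixed op + 100 leaked ops

for i in range(100):
    classify_fixed(i)
nodes_after_fixed = len(GRAPH.nodes)  # still 101: no new ops were added
```

The same pattern applies to the real code: create placeholders and the `image_batch` op once, outside `classify()`, and have `classify()` only call `session.run()` with a `feed_dict`.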
Hello. I am using your latest code and have adapted it to run continuously, so I can feed images in for classification without re-running the script. Because of this, when I put a large number of images through the system, memory climbs up and up instead of leveling off. I have also seen that as images are processed and memory grows, overall performance degrades: the first image might take 500 ms, but after around 20 images each one can take up to around 1.4 seconds, and so on. I have been testing this on an i7 6-core with 128 GB RAM and 2 x GTX 1080. Any ideas/guidance on how to deal with this memory/performance issue? (I have been trying different things for the last 2 weeks.)
Any response yet about possibly using a single stack instance? I have been able to minimise the leak, but I cannot remove it completely without this fix.
I have not tried to run this code on video; as I mentioned, I normally run it on batches of images. However, I have code in the repository to export to a SavedModelBundle, which can then be run in TensorFlow Serving. I'm starting to do more of this and haven't noticed issues with it yet. Have you seen that code? It adds decode_jpeg and resample operations on the front of the graph (so it becomes a single graph). I am open to making this an option if it will solve the issue (or another approach if people have a better idea); I just have not had much time lately to try this out. Hopefully I will get to testing video in the next couple of weeks. In the meantime I am open to suggestions or PRs.
Also, I am not sure whether you are feeding data faster than inference can keep up. You may wish to throttle, or better yet, use a ring buffer or some other data structure with back-pressure if you are producing data much faster than you are consuming it.
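The back-pressure idea above can be sketched with Python's standard library alone: a bounded `queue.Queue` makes the producer block once the consumer falls a fixed number of frames behind. The names and the `frame * 2` "inference" are illustrative stand-ins, not code from this repository.

```python
import queue
import threading

# A bounded queue: put() blocks once maxsize items are waiting,
# so a fast producer cannot outrun the inference consumer.
frames = queue.Queue(maxsize=4)
results = []

def consumer():
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: shut down
            break
        results.append(frame * 2)  # stand-in for session.run()

t = threading.Thread(target=consumer)
t.start()

for i in range(20):
    frames.put(i)   # blocks whenever the consumer is 4 frames behind
frames.put(None)    # tell the consumer to stop
t.join()
```

For video, dropping stale frames when the queue is full (`put_nowait` plus catching `queue.Full`) is often preferable to blocking, since classifying the newest frame usually matters more than classifying every frame.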
I am not using the latest version of your code (the version without the progress bar), and I see a memory leak at session.run(). In your current version it is here:
https://github.com/dpressel/rude-carnie/blob/master/guess.py#L77
I built an HTTP service, but after each new call to classify, free RAM drops by about 15 MB...
What do you think?
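One way to confirm that the growth comes from the service's own Python allocations (independent of TensorFlow) is the standard-library `tracemalloc` module: take a snapshot before and after a batch of calls and diff them. The `classify()` below is a dummy stand-in whose list append simulates graph nodes accumulating across calls.

```python
import tracemalloc

_cache = []

def classify(image):
    # Dummy stand-in for the real classify(); the appended list
    # simulates graph nodes accumulating on every call.
    _cache.append([0] * 1000)
    return image

tracemalloc.start()
before = tracemalloc.take_snapshot()
for i in range(100):
    classify(i)
after = tracemalloc.take_snapshot()

# Positive net growth here means allocations survive each call,
# i.e. something is accumulating rather than being freed.
stats = after.compare_to(before, "lineno")
leaked = sum(s.size_diff for s in stats)
print(f"net allocation growth over 100 calls: {leaked} bytes")
```

If the graph itself is growing, TensorFlow 1.x also offers `graph.finalize()`, which raises an error as soon as any code tries to add a node after startup, pinpointing the leaking call site.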