memory leak at each session.run() #29

Open
vinnitu opened this issue Jun 20, 2017 · 8 comments

@vinnitu

vinnitu commented Jun 20, 2017

I'm not using the latest version of your code (the one without the progress bar).

I see a memory leak at session.run().

In your current version it is here:
https://github.com/dpressel/rude-carnie/blob/master/guess.py#L77

I built an HTTP service, and after each new call to classify there is roughly 15 MB less free RAM than before...

What do you think about this?
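For illustration, here is a minimal sketch of the pattern I mean (not the exact code from guess.py, and the sizes are just placeholders), assuming TF 1.x: the preprocessing ops are created inside the handler, so the default graph grows on every request.

```python
import tensorflow as tf

sess = tf.Session()

def classify(image_path):
    # Every call adds new decode/resize/stack ops to the default graph,
    # so the graph (and the process's memory) grows with each request.
    image = tf.image.decode_jpeg(tf.read_file(image_path), channels=3)
    image = tf.image.resize_images(image, [227, 227])
    image_batch = tf.stack([image])
    return sess.run(image_batch)
```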

@dpressel
Owner

It does make sense that transferring the image_batch.eval() result should cause an allocation. And once memory is allocated to a process, the OS doesn't return it. However, I would expect that memory to be reclaimed and reused within the process, so I would not necessarily expect a "leak".

@vinnitu
Author

vinnitu commented Jun 20, 2017

I feel it's not .eval() that is the cause, but session.run() (the default graph, etc.).

@dpressel
Owner

Yes, IMO both of those functions should cause an allocation.

@vinnitu
Author

vinnitu commented Jun 21, 2017

Yes, you are right: image_batch creates a new stack instance every time classify() is called...
Maybe you know how to reuse a single stack instance?
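I imagine something like the following sketch, building the ops once and feeding the path through a placeholder (the names are just illustrative, assuming TF 1.x):

```python
import tensorflow as tf

# Build the preprocessing ops once, up front.
path_ph = tf.placeholder(tf.string, shape=[])
image = tf.image.decode_jpeg(tf.read_file(path_ph), channels=3)
image = tf.image.resize_images(image, [227, 227])
image_batch = tf.stack([image])

sess = tf.Session()

def classify(image_path):
    # Reuse the same ops on every call; the graph no longer grows.
    return sess.run(image_batch, feed_dict={path_ph: image_path})
```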

@s3571423

Hello. I am using your latest code and have adapted it to run continuously so I can feed images in for classification without re-running the script. Because of this, when I start putting a large number of images through, system memory just keeps climbing instead of climbing and then remaining constant.

I have also noticed that as images are processed and memory grows, overall performance degrades: the first image might take 500 ms, but after around 20 images each one can take up to around 1.4 seconds, and so on.

I have been testing this on an i7 6-core with 128 GB RAM and 2 x GTX 1080.

Any ideas/guidance on how to deal with this memory/performance issue? (I have been trying different things for the last 2 weeks.)

@s3571423

Any response yet about possibly using a single stack instance? I have been able to minimise the leak, but I cannot remove it completely without this fix.

@dpressel
Owner

I have not tried to run this code on video. As I have mentioned, I normally run this on batches of images. However, I have code in the repository to export to a SavedModelBundle, which can then be run in TensorFlow Serving. I'm starting to do more of this and haven't noticed issues with it yet. Have you seen that code? It adds decode_jpeg and resample operations on the front of the graph (and it becomes a single graph). I am open to making this an option if it will solve the issue (or to another approach if people have a better idea). I just have not had much time lately to try this out. Hopefully I will get to testing video in the next couple of weeks.
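Roughly, the idea behind the export path looks like the sketch below (simplified, not the actual export script; the stand-in model, sizes, and paths are only illustrative): the preprocessing is part of the one graph that gets exported.

```python
import tensorflow as tf

# Serialized JPEG bytes come in; decode and resize are part of the graph,
# so the whole thing can be exported as a single SavedModel and run in
# TensorFlow Serving.
jpeg_bytes = tf.placeholder(tf.string, shape=[], name='jpeg_bytes')
image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
image = tf.image.resize_images(image, [227, 227])
image_batch = tf.expand_dims(image, 0)

# Stand-in for the real inference network (real weights would be restored here).
weights = tf.Variable(tf.ones([227 * 227 * 3, 2]), name='w')
logits = tf.matmul(tf.reshape(image_batch, [1, -1]), weights, name='logits')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder('/tmp/guess_export/1')
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()
```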

In the meantime I am open to suggestions or PRs

@dpressel
Owner

Also, I am not sure whether you are feeding data faster than inference can keep up. You may wish to throttle, or better yet use a ring buffer or some other data structure with back-pressure, if you are producing data much faster than you are consuming it.
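For example, the simplest form of back-pressure is a bounded queue between the producer and the inference loop (a rough sketch, not tied to this code; the names and sizes are illustrative):

```python
import queue
import threading

# A bounded queue: put() blocks when the consumer falls behind, so the
# producer cannot run ahead of inference and pile frames up in memory.
frames = queue.Queue(maxsize=32)

def producer(video_source):
    for frame in video_source:
        frames.put(frame)      # blocks while the queue is full
    frames.put(None)           # sentinel: no more frames

def consumer(classify):
    while True:
        frame = frames.get()
        if frame is None:
            break
        classify(frame)

t = threading.Thread(target=consumer, args=(print,))
t.start()
producer(range(100))           # stand-in for a real frame source
t.join()
```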
