Hello,
The software runs out of memory on common GPUs. The problem appears only when the session is actually run; it does not occur while weights are loaded in detect.py or saved in convert_weights.py.
The behaviour is the same for both yolov3 and yolov3-tiny, and on different GPUs.
On CPU it runs fine and uses little memory (a few hundred megabytes).
When running yolov3-tiny on a Titan X GPU with the allow_growth option, only a few hundred megabytes are used while the weights are loaded, but memory usage grows to 8 GB (out of 12 GB) once sess.run is called.
This looks like anomalous behaviour.
My main GPU is an MX150 with 2 GB of memory, but I also tried other GPUs with the same amount of memory.
Environment: Python 3.7 on Windows, TensorFlow 1.13.1, CUDA 10.1, driver 425.
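For reference, this is roughly how the session was configured during testing. It is a minimal sketch using the standard TF 1.x GPUOptions API; the actual session setup in detect.py may differ.

```python
import tensorflow as tf

# Minimal sketch of the session configuration used while reproducing the issue
# (assumes the standard TF 1.13 API; detect.py may build its session differently).
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory incrementally
# Optionally cap how much GPU memory TensorFlow may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    # ... load the converted YOLOv3 weights and run inference here ...
```

Even with allow_growth enabled, memory usage jumps to several gigabytes as soon as sess.run is called, which is why the 2 GB cards fail.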