When running inference with some of the classifiers with GPU support enabled, all available GPU memory is allocated. This is known TensorFlow behaviour that can normally be controlled by setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH=true, or by directives in Python code, and that has worked for me so far. However, when performing inference through Essentia this seems to have no effect, and my processes are killed by Slurm on an HPC cluster, reporting CUDA_ERROR_OUT_OF_MEMORY.
Could I control GPU memory usage with Essentia in some other way?
pip show essentia-tensorflow
Name: essentia-tensorflow
Version: 2.1b6.dev1110
Summary: Library for audio and music analysis, description and synthesis, with TensorFlow support
Home-page: http://essentia.upf.edu
Author: Dmitry Bogdanov
Author-email: [email protected]
License: AGPLv3
Location: /usr/local/lib/python3.10/dist-packages
Requires: numpy, pyyaml, six
Required-by:
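For reference, this is how I have been trying to apply the variable. A minimal sketch, assuming Essentia's embedded TensorFlow runtime reads TF_FORCE_GPU_ALLOW_GROWTH at session-creation time (as stock TensorFlow does), so it must be exported before the first essentia import; the model filename and algorithm shown in the comments are just illustrative:

```python
import os

# Set BEFORE importing essentia, so the TensorFlow runtime linked into
# Essentia can pick it up when it creates its GPU session (assumption:
# the embedded runtime honours this variable like stock TensorFlow).
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Only now import Essentia and run inference, e.g. (illustrative names):
# from essentia.standard import MonoLoader, TensorflowPredictMusiCNN
# audio = MonoLoader(filename="audio.wav", sampleRate=16000)()
# predictions = TensorflowPredictMusiCNN(graphFilename="msd-musicnn-1.pb")(audio)

# Confirm the variable is present in the process environment.
print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
```

Alternatively the variable can be exported in the Slurm batch script (export TF_FORCE_GPU_ALLOW_GROWTH=true) so it is set for the whole job, but in my case neither approach changed the allocation behaviour.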