
Confirmation: --no DLIB_USE_CUDA for GPU support? #6

Open

Ianmcmill opened this issue Feb 24, 2018 · 5 comments
Comments

Ianmcmill commented Feb 24, 2018

@MotorCityCobra You state in your readme the compile option -DDLIB_USE_CUDA=0 and, when installing, --no DLIB_USE_CUDA.

Are you sure? When I compile with this argument and run the process, my GPU isn't used at all. The extract process still runs fine, but the CPU is maxed out.
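For what it's worth, a quick way to check whether the installed dlib build actually has CUDA enabled, assuming a dlib version recent enough to expose the DLIB_USE_CUDA flag in its Python bindings (that part is an assumption on my side), is:

python3 -c "import dlib; print(dlib.DLIB_USE_CUDA)"  # prints True only if dlib was compiled with CUDA

If this prints False, the extract will fall back to the CPU no matter what nvidia-smi shows.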

@Ianmcmill Ianmcmill changed the title from "Confirmation: --no DLIB_USE_CUDA when for GPU extract" to "Confirmation: --no DLIB_USE_CUDA for GPU support?" Feb 24, 2018
MotorCityCobra (Owner) commented Feb 25, 2018

It's basically just something I copied and pasted that got dlib (and thereby face_recognition) to use my GPU.

You ran all the commands for it in the readme?
You copied, pasted, and hit enter for each individual line?

What kind of system do you have?
I have Ubuntu 16.04 and an Nvidia 1080ti
I type nvidia-smi in the terminal when face_recognition is running and see that it's using the GPU.
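If it helps to watch utilization continuously while the extract is running, a standard way to do it (plain nvidia-smi usage, nothing specific to this repo) is:

watch -n 1 nvidia-smi  # refresh the GPU utilization and process table every second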

Ianmcmill (Author) commented Feb 25, 2018

I have copied and pasted everything as in the readme. No errors. nvidia-smi shows that the python3 process is using 69 MB, but the volatile GPU utilization is only 1-4%.

Ubuntu 16.04, CUDA 8, GTX 980 4 GB.
When compiling dlib it doesn't say anything about CUDA. But when I drop one D (-DLIB_USE_CUDA instead of -DDLIB_USE_CUDA) it compiles with CUDA.

If I compile it like this:

mkdir build; cd build; cmake .. -DLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1; cmake --build .
cd ..
python3 setup.py install --yes USE_AVX_INSTRUCTIONS

I get CUDA support. But I wonder how your build supports CUDA when you explicitly tell the compiler not to compile with CUDA support (-DDLIB_USE_CUDA=0).
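A likely explanation, based on how CMake treats -D flags rather than on anything stated in this thread: -DLIB_USE_CUDA=1 defines a cache variable named LIB_USE_CUDA, which dlib's build never reads, so dlib falls back to its default of DLIB_USE_CUDA=ON and enables CUDA automatically whenever the CUDA toolkit and cuDNN are found. Conversely, -DDLIB_USE_CUDA=0 really does switch CUDA off. The intended GPU build would then look something like:

mkdir build; cd build
cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1   # note the double D: -D followed by DLIB_USE_CUDA
cmake --build .
cd ..
python3 setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA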

Ianmcmill (Author) commented Feb 25, 2018

I just found this on GitHub Gist: https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf

These instructions assume you don't have an nVidia GPU and don't have Cuda and cuDNN installed and don't want GPU acceleration (since none of the current Mac models support this).

Clone the code from github:

git clone https://github.com/davisking/dlib.git

Build the main dlib library (optional if you just want to use Python):

cd dlib
mkdir build; cd build; cmake .. -DDLIB_USE_CUDA=0 -DUSE_AVX_INSTRUCTIONS=1; cmake --build .

Build and install the Python extensions:

cd ..
python3 setup.py install --yes USE_AVX_INSTRUCTIONS --no DLIB_USE_CUDA

So your instructions clearly do not compile for CUDA.

EDIT:

I managed to compile dlib with CUDA support. I tried a fresh virtualenv. It seems I had already managed to compile it before but was not aware CUDA was being used, as nvidia-smi showed only 300 MB of VRAM in use. Anyway, here it goes:

mkdir build; cd build; cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1; cmake --build .

In the process I updated my cuDNN from 6.0.21 to 7.0.5 using this:

$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Read more at: http://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#ixzz587Lymz7E
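To double-check which cuDNN version the copy actually installed (a generic check against the header, not something taken from the linked guide), this should do:

grep -A 2 CUDNN_MAJOR /usr/local/cuda/include/cudnn.h  # shows the major/minor/patchlevel defines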

After that:

python3 setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA

The log shows that CUDA support is compiled in and that it checks whether the correct cuDNN library is present and working.
Now it runs on CUDA, though not as fast as I thought it would ;)

@Ianmcmill Ianmcmill reopened this Feb 25, 2018

Ianmcmill (Author) commented Feb 25, 2018
I think I closed this too early, because I still think it does not utilize CUDA.
Just for clarification. When I run the script and check GPU load with nvidia-smi I get the following output:

Sun Feb 25 13:32:46 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     Off  | 00000000:01:00.0  On |                  N/A |
|  0%   51C    P2    45W / 196W |   1168MiB /  4036MiB |      5%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1338      G   /usr/lib/xorg/Xorg                           132MiB |
|    0      2444      G   compiz                                       122MiB |
|    0     17005      C   python3.5                                    301MiB |
|    0     17564      C   python3.5                                    301MiB |
|    0     29950      C   python3                                      301MiB |
+-----------------------------------------------------------------------------+

I am running three instances and the GPU fan isn't even spinning. GPU utilization is between 0 and 5%.
When starting demo.py I get the following message:


Tolerance: 0.6.
Number of confirmed faces saved from each video: 1000.
[ INFO:0] Initialize OpenCL runtime...
Using OpenCL: True.

and it starts checking the frames and writing images.
Why does it initialize OpenCL?
This does not look like the GPU is being utilized.
@MotorCityCobra what is your nvidia-smi output?
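On the "Initialize OpenCL runtime" line: that message most likely comes from OpenCV rather than from dlib (my interpretation, not something confirmed in this thread). OpenCV initializes OpenCL for its own image operations, which is independent of whether dlib was built with CUDA. To rule it out as a factor, OpenCL can be turned off in OpenCV, for example by adding these lines near the top of demo.py (hypothetical placement, assuming the script imports cv2):

import cv2
cv2.ocl.setUseOpenCL(False)  # stop OpenCV from using its OpenCL path; dlib's CUDA use is unaffected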

MotorCityCobra (Owner) commented Feb 25, 2018

Your nvidia-smi looks like it's saying it's using your GPU.
When mine uses my GPU it doesn't use as much memory as I expected.

I wish I had my Linux machine right now. I won't have it hooked up until tomorrow.

The terminal won't display whether or not the GPU is being used when you run demo.py.
I think your nvidia-smi output shows that Python is using your GPU. That's the 'python3.5' and 'python3' entries in the process list.

I have a 1080 Ti, and checking and cropping the images in this script is one of the slower things I get the GPU to do in Python.
I think it's just because there is a lot going on with each image: checking it for faces, checking that it's the right face, and cropping it.
