When running main.py I get the two errors shown below (a series of CUDA_ERROR_OUT_OF_MEMORY allocation failures and a missing weights file). Could you please share hand_detection_weights.h5 with me?
mona@Mona:~/code/gesture/hand_gest$ python main.py
/home/mona/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
comeback
2018-06-21 19:48:10.503141: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-21 19:48:10.504002: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-21 19:48:10.508211: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-21 19:48:10.509255: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-21 19:48:10.930165: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 11.90GiB freeMemory: 10.90GiB
2018-06-21 19:48:10.930198: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-21 19:48:10.930348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 11.90GiB freeMemory: 10.90GiB
2018-06-21 19:48:10.930370: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-21 19:48:10.932635: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 11.90GiB freeMemory: 10.65GiB
2018-06-21 19:48:10.932656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-21 19:48:10.932725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 11.90GiB freeMemory: 10.65GiB
2018-06-21 19:48:10.932745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-21 19:48:11.172375: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-21 19:48:11.172413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-06-21 19:48:11.172422: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-06-21 19:48:11.172677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10161 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-06-21 19:48:11.184769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-21 19:48:11.184813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-06-21 19:48:11.184822: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-06-21 19:48:11.185013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 303 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-06-21 19:48:11.196904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-21 19:48:11.196938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-06-21 19:48:11.196947: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-06-21 19:48:11.197012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-21 19:48:11.197032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-06-21 19:48:11.197038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-06-21 19:48:11.197117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 220 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-06-21 19:48:11.197158: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 220 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-06-21 19:48:11.198264: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 220.31M (231014400 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.198732: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 220.31M (231014400 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.199957: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 198.28M (207912960 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.200842: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 178.45M (187121664 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.201468: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 160.61M (168409600 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.202117: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 144.55M (151568640 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.202699: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 130.09M (136411904 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.203271: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 117.08M (122770944 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.203842: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 105.38M (110493952 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.204406: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 94.84M (99444736 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.204970: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 85.35M (89500416 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.205535: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 76.82M (80550400 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.206111: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 69.14M (72495360 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.206749: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 62.22M (65245952 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.207368: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 56.00M (58721536 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.208040: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 50.40M (52849408 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.208652: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 45.36M (47564544 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.209250: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 40.83M (42808320 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.209885: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 36.74M (38527488 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.210506: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 33.07M (34674944 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.211112: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 29.76M (31207680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.211689: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 26.79M (28087040 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.212265: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 24.11M (25278464 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.212879: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 21.70M (22750720 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.213545: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 19.53M (20475648 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.214200: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 17.57M (18428160 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.214903: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 15.82M (16585472 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.215624: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 14.24M (14927104 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.216408: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 12.81M (13434624 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.217128: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 11.53M (12091392 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.217832: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 10.38M (10882304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.218541: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 9.34M (9794304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.219181: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 8.41M (8815104 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.219840: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 7.57M (7933696 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.220490: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 6.81M (7140352 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.221140: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 6.13M (6426368 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.221826: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 5.52M (5783808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.222513: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 4.96M (5205504 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.223206: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 4.47M (4685056 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-06-21 19:48:11.223972: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 4.02M (4216576 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
Process ForkPoolWorker-4:
Traceback (most recent call last):
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/pool.py", line 103, in worker
initializer(*initargs)
File "main.py", line 27, in worker
gesture.load_model()
File "/home/mona/code/gesture/hand_gest/predict_gesture.py", line 21, in load_model
self.model.load_weights('training/hand_detection_weights.h5')
File "/home/mona/anaconda3/lib/python3.6/site-packages/keras/engine/network.py", line 1171, in load_weights
with h5py.File(filepath, mode='r') as f:
File "/home/mona/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 269, in __init__
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/mona/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'training/hand_detection_weights.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
[the identical traceback is then raised by ForkPoolWorker-1, ForkPoolWorker-2, and ForkPoolWorker-3]
^CTraceback (most recent call last):
File "main.py", line 106, in <module>
output_details = output_q.get()
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/queues.py", line 94, in get
res = self._recv_bytes()
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/util.py", line 262, in _run_finalizers
finalizer()
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/util.py", line 186, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/home/mona/anaconda3/lib/python3.6/multiprocessing/queues.py", line 191, in _finalize_join
thread.join()
File "/home/mona/anaconda3/lib/python3.6/threading.py", line 1056, in join
self._wait_for_tstate_lock()
File "/home/mona/anaconda3/lib/python3.6/threading.py", line 1072, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
mona@Mona:~/code/gesture/hand_gest$ ls
hand_inference_graph __init__.py LICENSE main.py model-checkpoint predict_gesture.py protos __pycache__ README.md test test.jpg test.png test.py training utils
mona@Mona:~/code/gesture/hand_gest$ ls training/
data/ .~lock.hand_detection_weights_3.h5# test/
.gitignore model.json training.ipynb
__init__.py model.png training_temp.ipynb
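As the listing shows, training/ contains only a stale lock file (.~lock.hand_detection_weights_3.h5#) alongside model.json, but no hand_detection_weights.h5, which matches the OSError above. As a stopgap I added a guard before the load so the missing file fails with a clearer message; this is just a sketch against the path and names taken from the traceback, not the repository's actual code:

import os

weights_path = 'training/hand_detection_weights.h5'  # path from the traceback

# Fail early with a readable message instead of the raw h5py OSError;
# this would go at the top of load_model() in predict_gesture.py.
if not os.path.isfile(weights_path):
    raise FileNotFoundError('Missing weights file: ' + weights_path +
                            ' (please download it or train the model first)')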
For reference, the GPU has 12 GB of memory, so capacity itself should not be the problem:
mona@Mona:~/code/gesture/hand_gest$ nvidia-smi
Thu Jun 21 19:51:27 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.67 Driver Version: 390.67 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN Xp Off | 00000000:01:00.0 On | N/A |
| 23% 39C P5 26W / 250W | 624MiB / 12187MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1072 G /usr/lib/xorg/Xorg 312MiB |
| 0 1867 G compiz 251MiB |
| 0 18196 G ...el-token=87D61E3CF639ED54EF0FA2EE25A5C8 50MiB |
| 0 29083 G evolution 3MiB |
| 0 31971 G /usr/lib/firefox/firefox 3MiB |
+-----------------------------------------------------------------------------+
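For the CUDA_ERROR_OUT_OF_MEMORY part, my guess is that main.py starts four ForkPoolWorker processes and the first one grabs nearly the whole card (the log shows one device created with 10161 MB and the later ones with only a few hundred MB), so the remaining workers fail to allocate. Below is a sketch of the workaround I plan to try in each worker before loading the model; it assumes Keras on the TensorFlow 1.x backend, as the log indicates, and is not part of the repository's code:

import tensorflow as tf
from keras import backend as K

# Let each worker process allocate GPU memory on demand instead of
# reserving (almost) the entire GPU up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, hard-cap each of the four workers to a fraction of the card:
# config.gpu_options.per_process_gpu_memory_fraction = 0.2
K.set_session(tf.Session(config=config))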