- Python >= 3.6 (3.9 preferred)
- Gazebo: `sudo apt install gazebo`
- ROS Noetic: `sudo apt install ros-noetic-desktop-full`
- Clone the Gesture Recognition repository into `/catkin_ws/src/` as `VOWEL`.
- Open a terminal, navigate to the catkin workspace, and build it:

  ```
  cd catkin_ws/
  catkin_make
  ```
- TensorFlow
- TensorFlow Addons
- scikit-learn
- pandas
- NumPy
- imutils
- OpenCV
- absl-py
- rospy
- MediaPipe
- PyYAML
- rospkg
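The list above likely maps onto pip package names along these lines in the provided `requirements.txt` (the exact names and version pins are defined by the repository; this mapping is an assumption, and `rospy` normally ships with the ROS Noetic desktop installation rather than pip):

```
tensorflow
tensorflow-addons
scikit-learn
pandas
numpy
imutils
opencv-python
absl-py
mediapipe
pyyaml
rospkg
```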
Steps:
- Create a virtual environment using conda:

  ```
  conda create -n ENV python=3.9
  ```

- Use `ENV` as the default environment for running this entire codebase:

  ```
  conda activate ENV
  ```

- Navigate to the script folder in your terminal and install the provided dependencies:

  ```
  cd catkin_ws/src/VOWEL/script/
  pip install -r requirements.txt
  ```
- Finally, to run the setup, navigate to the workspace root, source it, and launch the world:

  ```
  cd catkin_ws/
  source devel/setup.bash
  roslaunch atom world.launch
  ```
- Open a terminal and navigate to your home directory (`cd ~/`).
- Make sure a camera is connected to the system.
- Make the desired changes in `model/model_specification.json`, then navigate to the script folder in your terminal and train the model:

  ```
  cd catkin_ws/src/VOWEL/script
  python train.py
  ```
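The fields inside `model/model_specification.json` are defined by this repository; purely as an illustration, a training script might read such a file as follows (every field name below is hypothetical):

```python
import json

# Hypothetical contents of model/model_specification.json -- the real field
# names are whatever this repository's train.py actually reads.
spec_text = """
{
    "learning_rate": 0.001,
    "batch_size": 32,
    "epochs": 50
}
"""
spec = json.loads(spec_text)
# train.py would then pass values like spec["batch_size"] into the model setup.
```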
Steps:
- Navigate to the catkin workspace, source it, and launch the world:

  ```
  cd catkin_ws
  source devel/setup.bash
  roslaunch atom world.launch
  ```

- In a new terminal, navigate to the script folder and start the recognizer:

  ```
  cd ~/catkin_ws/src/VOWEL/script
  python main.py
  ```
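Conceptually, `main.py` translates recognized gestures into robot commands. A minimal sketch of such a lookup, assuming hypothetical gesture labels and action names (the real ones live in this repository's `main.py`):

```python
# Hypothetical sketch only: the actual gesture labels and robot actions are
# defined inside this repository's main.py, not here.
GESTURE_ACTIONS = {
    "open_palm": "stop",
    "fist": "move_forward",
    "point_left": "turn_left",
    "point_right": "turn_right",
}

def action_for(gesture: str) -> str:
    # Unknown gestures fall back to "stop" -- a safe default for a robot.
    return GESTURE_ACTIONS.get(gesture, "stop")
```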
- The robot and camera operations will initialize, and startup instruction messages will be printed in the terminal.
- To monitor training, navigate to the repository root in your terminal and start TensorBoard:

  ```
  cd ROOT_PATH
  tensorboard --logdir EXP_PATH
  ```

  `EXP_PATH` is the directory where model checkpoints are saved, e.g. `logs/DATE_TIME/train`, where `DATE_TIME` is the date and time at which the first epoch was initialized.
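The `DATE_TIME` folder name suggests the training script timestamps each run. A sketch of how such a log directory could be built, assuming a timestamp format (the actual format is whatever `train.py` uses):

```python
from datetime import datetime

# Build a run-specific TensorBoard log directory like logs/DATE_TIME/train.
# The timestamp format below is an assumption for illustration.
date_time = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
exp_path = f"logs/{date_time}/train"
```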
Steps:
- Open `catkin_ws/src/VOWEL/script/robot.py`.
- The values of the variables `SPEED`, `ANGULAR_SPEED`, `DIST`, and `ANGLE` can be changed to adjust the corresponding movement per prediction frame.
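As a rough illustration of how these constants interact per prediction frame (the numbers and units below are assumptions, not the repository's defaults):

```python
import math

# Hypothetical values mirroring the tuning constants in robot.py; the units
# and numbers here are assumptions for illustration only.
SPEED = 0.2          # linear speed, m/s (assumed)
ANGULAR_SPEED = 0.5  # angular speed, rad/s (assumed)
DIST = 0.5           # distance moved per prediction frame, m (assumed)
ANGLE = 90           # rotation per prediction frame, degrees (assumed)

# How long the robot would drive or turn for each prediction frame.
move_duration = DIST / SPEED                         # 2.5 s
turn_duration = math.radians(ANGLE) / ANGULAR_SPEED  # pi s
```

Raising `SPEED` or `ANGULAR_SPEED` shortens each motion; raising `DIST` or `ANGLE` makes each recognized gesture move the robot further.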