This project is an update of the SimDRLSR (Deep Reinforcement Learning and Social Robotics Simulator) simulator. It includes compatibility upgrades and, as groundwork for future improvements, the design of an API that extends its existing functions.
The simulation environment enables efficient, safe, and controlled testing and validation of reinforcement learning models. The simulator allows the robot (Pepper) to interact with an avatar (human), performing four main tasks: greeting (handshake), following the human with its gaze, waving, and waiting.
The simulator requirements are:
- Ubuntu 22.04 LTS
- Unity 2022.3.32f1
pyMDQN requirements (listed in the requirements file):
- Python 3.12
- Pytorch 2.3.1
Hardware Requirements:
- GPU with 6 GB VRAM
Cloning the repository:
git clone -b dev --single-branch [email protected]:HBRS-SDP/ss24-reinforcement-learning-simulator.git
The following steps generate the Unity simulator executable (simDRLSR.x86_64):
- In Unity, go to File > Open Project and select the "simDRLSR Unity" folder. If Unity Hub is being used, click Add and select the same folder.
- Go to File > Open Scene and open "Library.unity", located in simDRLSR_Unity/Assets/Scenes/.
- With Library.unity opened, click on Library in the Hierarchy.
- Go to Window > AI > Navigation (Obsolete) and click "Bake".
- Go to Window > Rendering > Lighting and verify that the "Auto Generate" checkbox is unchecked.
- Go to File > Build Settings and click "Build".
- Navigate to the "ss24-reinforcement-learning-simulator/" folder and save the build with the name "simDRLSR.x86_64".
The config.xml file stores the settings for the simulation. The configuration options are:
- Simulation Quality:
- Very Low
- Low
- Medium
- High
- Very High
The default value is "Medium".
- Frames Per Second (FPS): 60
- Screen Width: 640
- Screen Height: 480
- Fullscreen: True or False (default)
- Path Prob Folder: folder with human avatar probabilities files.
- ⚠️ Path WorkDir: directory with the pyMDQN files. Change to ss24-reinforcement-learning-simulator/pyMDQN if you need to use pyMDQN.
- ⚠️ Total Steps: defines the number of interactions that the robot performs at each data collection. Note: set a higher value than the one set in pyMDQN.
- IP Address: set to the localhost IP 127.0.0.1. You can change this in the environment.py script in the pyMDQN folder.

Note: to use pyMDQN it is not necessary to use a Docker container; however, we encourage the use of Conda to install modules such as pytorch, pip, numpy, etc.
Labels:
- ‼️ : fundamental importance, needs to be configured;
- ⚠️ : needs to be configured if the pyMDQN parameters are modified.
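As a sanity check, the settings above can be read and validated programmatically. The sketch below assumes a hypothetical config.xml layout — element names such as `SimulationQuality` and the `TotalSteps` value are illustrative, not the simulator's exact schema; check the file shipped with the simulator for the real element names:

```python
import xml.etree.ElementTree as ET

# Illustrative config.xml content; the real file ships with the simulator
# and its element names may differ.
CONFIG_XML = """
<config>
    <SimulationQuality>Medium</SimulationQuality>
    <FPS>60</FPS>
    <ScreenWidth>640</ScreenWidth>
    <ScreenHeight>480</ScreenHeight>
    <Fullscreen>False</Fullscreen>
    <PathWorkDir>ss24-reinforcement-learning-simulator/pyMDQN</PathWorkDir>
    <TotalSteps>64</TotalSteps>
    <IPAddress>127.0.0.1</IPAddress>
</config>
"""

QUALITY_LEVELS = {"Very Low", "Low", "Medium", "High", "Very High"}

def validate_config(xml_text: str) -> dict:
    """Parse the XML and check the constraints documented above."""
    root = ET.fromstring(xml_text)
    cfg = {child.tag: child.text for child in root}
    assert cfg["SimulationQuality"] in QUALITY_LEVELS, "unknown quality level"
    assert int(cfg["FPS"]) > 0, "FPS must be positive"
    assert cfg["Fullscreen"] in ("True", "False"), "Fullscreen is a boolean"
    return cfg

cfg = validate_config(CONFIG_XML)
print(cfg["SimulationQuality"], cfg["IPAddress"])  # Medium 127.0.0.1
```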
After setting up the simulator and before running a full training session, it's important to validate your configuration files. Follow these steps to run the configuration tests:
- Ensure you are in the Unity project directory:
cd ss24-reinforcement-learning-simulator/'simDRLSR Unity'
- Open the Unity project and navigate to the Test Runner: in Unity, go to Window > General > Test Runner.
- In the Test Runner window, select "Edit Mode" and click "Run All" to execute all configuration tests.
- Verify that all tests pass. These tests check:
- Proper loading of probability files
- Correct JSON format of configuration files
- Validity of probability distributions
- Proper initialization of avatar behaviors
By running these tests, you can ensure that your configuration is correct before starting a time-consuming training process.
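For intuition, a check like the probability-file validation above can be sketched as follows. The file layout, key names, and action labels here are assumptions made for illustration — the actual Unity tests define the real format of the files in the "Path Prob Folder":

```python
import json
import math

# Illustrative probability-file content; the real files live in the
# "Path Prob Folder" configured in config.xml and may use another layout.
PROB_JSON = """
{
    "avatar_actions": {
        "greet": 0.25,
        "gaze": 0.25,
        "wave": 0.25,
        "wait": 0.25
    }
}
"""

def validate_probabilities(text: str) -> dict:
    """Check that the file is valid JSON and encodes a proper distribution."""
    data = json.loads(text)          # raises ValueError on malformed JSON
    probs = data["avatar_actions"]
    assert all(0.0 <= p <= 1.0 for p in probs.values()), "value out of range"
    assert math.isclose(sum(probs.values()), 1.0, abs_tol=1e-9), "must sum to 1"
    return probs

probs = validate_probabilities(PROB_JSON)
print(sorted(probs))  # ['gaze', 'greet', 'wait', 'wave']
```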
Before creating the Conda environment, add the necessary channels:
conda config --add channels conda-forge
conda config --add channels defaults
- Create a new conda environment with the following command:
conda create --name simDRLSR --file requirements.txt
- Activate the conda environment:
conda activate simDRLSR
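conda's --file option reads one package spec per line. As a rough illustration of what such a file contains — the specs below are examples, not the actual pins in the repository's requirements.txt:

```python
# Illustrative spec-file contents; the repository's requirements.txt
# pins the real versions (e.g. python 3.12, pytorch 2.3.1).
SPECS = """\
# comment lines are ignored
python=3.12
pytorch=2.3.1
numpy
pip
"""

def parse_specs(text: str) -> dict:
    """Split 'name=version' lines into a {name: version} map."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("=")
        out[name] = version or None  # bare names carry no version pin
    return out

print(parse_specs(SPECS))
```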
Follow the next steps to run the simulator:
- Change to the ss24-reinforcement-learning-simulator directory:
cd ss24-reinforcement-learning-simulator
- Execute the simulation file:
./simDRLSR.x86_64
Follow the next steps to run the validation script:
- Open a new terminal
- Change to the ss24-reinforcement-learning-simulator/pyMDQN directory:
cd ss24-reinforcement-learning-simulator/pyMDQN
- Execute the validation script using python:
python3 API_test.py
- Run the simulator, as described above:
cd ss24-reinforcement-learning-simulator
./simDRLSR.x86_64
- On a new terminal, erase old training files and images:
cd ss24-reinforcement-learning-simulator/pyMDQN
rm -rf dataset/RGB/ep*
rm -rf dataset/Depth/ep*
- If you are using a Conda environment:
conda activate simDRLSR
- Initialize the files and models:
python3 makefiles.py
python3 init_model.py
- Run the main script:
python3 mdqn.py
Distributed under the GNU GPL 3.0. See LICENSE for more information.
This simulator is based on the following works:
[1] Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa and Hiroshi Ishiguro "Robot gains social intelligence through Multimodal Deep Reinforcement Learning" Proceedings of IEEE-RAS International Conference on Humanoid Robots (Humanoids) pp. 745-751, Cancun, Mexico 2016.