SimDRLSR: Deep Reinforcement Learning Simulator for Social Robotics


Table of Contents
  1. About The Project
  2. Prerequisites
  3. Installation
  4. License
  5. References

About The Project

This project is an update of the SimDRLSR (Deep Reinforcement Learning and Social Robotics) simulator: it brings compatibility upgrades and, as groundwork for future improvements, the design of an API around its existing functions.

The simulation environment enables efficient, safe, and controlled testing and validation of reinforcement learning models. The simulator lets the robot (Pepper) interact with a human avatar, performing four main tasks: waiting, following the human with its gaze, waving, and greeting with a handshake.

Prerequisites

The simulator requirements are:

  • Ubuntu 22.04 LTS
  • Unity 2022.3.32f1

pyMDQN requirements (see the requirements file):

  • Python 3.12
  • Pytorch 2.3.1

Hardware Requirements:

  • GPU with 6 GB VRAM

Installation

  1. Clone the repository:
    git clone -b dev --single-branch [email protected]:HBRS-SDP/ss24-reinforcement-learning-simulator.git

Unity Configuration

The following steps configure the Unity project and generate the simulator executable (simDRLSR.x86_64):

  1. In Unity, go to File > Open Project and select the folder "simDRLSR unity". If Unity Hub is being used, click on Add and select the same folder.

  2. Go to File > Open Scene and open "Library.unity", located in simDRLSR_Unity/Assets/Scenes/.

  3. With Library.unity open, click on Library in the Hierarchy.

  4. Go to Window > AI > Navigation (Obsolete) and click on "Bake".


  5. Go to Window > Rendering > Lighting and verify that the "Auto Generate" checkbox is unchecked.


  6. Go to File > Build Settings and click on "Build".

  7. Navigate to the "ss24-reinforcement-learning-simulator/" folder and save the build with the name "simDRLSR.x86_64".

SimDRLSR Configuration

The config.xml file stores the settings for the simulation. The configuration options are:

  • Simulation Quality (default: Medium):
    • Very Low
    • Low
    • Medium
    • High
    • Very High
  • Frames Per Second (FPS): 60
  • Screen Width: 640
  • Screen Height: 480
  • Fullscreen: True or False (default)
  • Path Prob Folder: folder with the human avatar probability files.
  • ⚠️ Path WorkDir: directory with the pyMDQN files. Change it to ss24-reinforcement-learning-simulator/pyMDQN if you need to use pyMDQN.
  • ⚠️ Total Steps: defines the number of interactions the robot performs during each data collection. Note: set this to a higher value than the one set in pyMDQN.
  • ⚠️ To use pyMDQN it is not necessary to use a Docker container; however, we encourage using Conda to install modules such as PyTorch, pip, NumPy, etc.
  • The IP address is set to the localhost IP 127.0.0.1. You can change this in the environment.py script in the pyMDQN folder.

Labels:

  • ‼️: of fundamental importance, must be configured;
  • ⚠️: needs to be configured if the pyMDQN parameters are modified.
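As an illustration, a config.xml with the defaults listed above might look like the following. The element names and the Total Steps value here are hypothetical; consult the actual file shipped with the simulator for the real schema:

```xml
<config>
  <quality>Medium</quality>
  <fps>60</fps>
  <screenWidth>640</screenWidth>
  <screenHeight>480</screenHeight>
  <fullscreen>False</fullscreen>
  <!-- folder with the human avatar probability files -->
  <pathProb>./prob</pathProb>
  <!-- change to the pyMDQN directory when using pyMDQN -->
  <workDir>./pyMDQN</workDir>
  <!-- example value only; must be higher than the value set in pyMDQN -->
  <totalSteps>100</totalSteps>
</config>
```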

Testing Configuration Files

After setting up the simulator and before running a full training session, it's important to validate your configuration files. Follow these steps to run the configuration tests:

  1. Ensure you are in the Unity project directory:

    cd ss24-reinforcement-learning-simulator/'simDRLSR Unity'
  2. Open the Unity project and navigate to the Test Runner: in Unity, go to Window > General > Test Runner.

  3. In the Test Runner window, select "Edit Mode" and click "Run All" to execute all configuration tests.

  4. Verify that all tests pass. These tests check:

  • Proper loading of probability files
  • Correct JSON format of configuration files
  • Validity of probability distributions
  • Proper initialization of avatar behaviors

By running these tests, you can ensure that your configuration is correct before starting a time-consuming training process.

How to set up the Conda environment

Adding Conda Channels

Before creating the Conda environment, add the necessary channels:

conda config --add channels conda-forge
conda config --add channels defaults
  1. Create a new Conda environment with the following command:

    conda create --name simDRLSR --file requirements.txt
  2. Activate the Conda environment:

    conda activate simDRLSR
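To confirm the environment is complete before running anything, a small check like the following reports which required modules are missing. The `required` list is an assumption; adjust it to match the actual requirements.txt:

```python
import importlib.util

def missing_packages(names):
    """Return the module names that cannot be imported in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Module names pyMDQN is assumed to need; adjust to match requirements.txt
required = ["torch", "numpy"]
print(missing_packages(required))  # an empty list means the environment is ready
```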

How to run the simulator

Follow these steps to run the simulator:

  1. Change to the ss24-reinforcement-learning-simulator directory:

    cd ss24-reinforcement-learning-simulator
  2. Execute the simulation file:

    ./simDRLSR.x86_64

Validate the simulator

Follow these steps to run the validation script:

  1. Open a new terminal.
  2. Change to the ss24-reinforcement-learning-simulator/pyMDQN directory:

    cd ss24-reinforcement-learning-simulator/pyMDQN
  3. Execute the validation script using Python:

    python3 API_test.py
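The simulator and pyMDQN communicate over a TCP connection to the IP address configured above. A minimal hand-rolled connectivity check might look like the following sketch; the port number and the command string are placeholders (check environment.py for the real values), and the actual protocol used by API_test.py may differ:

```python
import socket

HOST = "127.0.0.1"   # must match the IP configured for the simulator
PORT = 12375         # assumed port; check the value used in environment.py

def send_command(cmd, host=HOST, port=PORT, timeout=5.0):
    """Open a TCP connection, send a command string, and return the raw reply.

    Raises OSError (e.g. ConnectionRefusedError) if the simulator is not running.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(cmd.encode("utf-8"))
        return s.recv(1024).decode("utf-8")
```

If the call raises ConnectionRefusedError, the simulator binary is not running or is listening on a different port.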

Training with pyMDQN

  1. Run the simulator, as described above:
cd ss24-reinforcement-learning-simulator
./simDRLSR.x86_64
  2. On a new terminal:

⚠️ Make sure to back up your trained models before performing the next steps.

⚠️ Remember to configure the WorkDir of simDRLSR to the pyMDQN directory!

  • Erasing old train files and images:

    cd ss24-reinforcement-learning-simulator/pyMDQN
    rm -rf dataset/RGB/ep*
    rm -rf dataset/Depth/ep*
  • If you are using Conda environment:

    conda activate simDRLSR
  • Initializing files and models:

    python3 makefiles.py
    python3 init_model.py
  • Run the main script:

    python3 mdqn.py
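During training, the MDQN of Qureshi et al. [1] chooses among the four robot actions with an epsilon-greedy policy. The following sketch illustrates the idea; the action names come from the cited paper, but the decay schedule and default parameters here are assumptions, not the values used by mdqn.py:

```python
import random

# The four robot actions from Qureshi et al. [1]
ACTIONS = ["wait", "look", "wave", "handshake"]

def epsilon_greedy(q_values, step, eps_start=1.0, eps_end=0.1, decay_steps=10000):
    """Return an action index: random with probability eps (linearly decayed
    from eps_start to eps_end over decay_steps), otherwise the greedy action."""
    eps = max(eps_end, eps_start - (eps_start - eps_end) * step / decay_steps)
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: q_values[i])
```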

License

Distributed under the GNU GPL 3.0. See LICENSE for more information.

References

This simulator is based on the following works:

[1] A. H. Qureshi, Y. Nakamura, Y. Yoshikawa, and H. Ishiguro, "Robot gains social intelligence through multimodal deep reinforcement learning," in Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 2016, pp. 745-751.
