# gym-short-corridor

This is a Gym environment implementing Example 13.1, "Short corridor with switched actions", from *Reinforcement Learning: An Introduction*, 2nd ed. (Sutton & Barto, 2018).

## Example 13.1

There are four states. The goal is to reach the terminal state "G" on the right, and the reward is -1 for every step. You start in state "S", the first state on the left. In the second state the actions are reversed: taking a step left moves you right, and taking a step right moves you left. The agent cannot tell the states apart; the observation is always 0.
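For reference, the dynamics fit in a few lines. The following is a minimal stand-alone sketch of them (`step_corridor` is a hypothetical helper for illustration, not the package's actual implementation):

```python
# Minimal sketch of the corridor dynamics, assuming states 0..3 with
# state 3 as the terminal goal "G" and state 1 as the switched state.
# Illustrative only; not the package's implementation.
def step_corridor(state, action):
    move = -1 if action == 0 else 1  # 0 = left, 1 = right
    if state == 1:                   # switched state: actions are reversed
        move = -move
    next_state = min(max(state + move, 0), 3)  # can't step off the left end
    return next_state, -1, next_state == 3     # (state, reward, done)
```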

## Installation

```bash
cd gym-short-corridor
pip install -e .
```

## Usage

Here is an example that exercises the environment with a random policy:

```python
import gym
import gym_short_corridor  # registers ShortCorridorEnv-v0

env = gym.make("ShortCorridorEnv-v0")

state = env.reset()
env.render()

while True:
    # Sample a random action: 0 = left, 1 = right
    action = env.action_space.sample()
    step = "left" if action == 0 else "right"
    print("Try to go " + step)
    state, reward, done, _ = env.step(action)
    env.render()

    if done:
        break
```

A sample run of the above may produce the following:

```
[x]{ }[ ][ ]
Try to go left
[x]{ }[ ][ ]
Try to go right
[ ]{x}[ ][ ]
Try to go right
[x]{ }[ ][ ]
Try to go right
[ ]{x}[ ][ ]
Try to go right
[x]{ }[ ][ ]
Try to go right
[ ]{x}[ ][ ]
Try to go left
[ ]{ }[x][ ]
Try to go right
[ ]{ }[ ][x]
```

Here "x" is the current true state of the environment (not known to the agent - which only sees one state). The "{ }" indicates a state in which actions are reversed. The rightmost state is the terminal state.