diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index fab8c6599..8092f9827 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -23,6 +23,7 @@ repos: - id: clang-format args: - --style=Google + exclude: '^.*\.ipynb$' - repo: https://github.com/pre-commit/pre-commit-hooks rev: v5.0.0 hooks: diff --git a/examples/notebooks/acrobot_urdf.ipynb b/examples/notebooks/acrobot_urdf.ipynb index 09e8bc7e7..a65d181ed 100644 --- a/examples/notebooks/acrobot_urdf.ipynb +++ b/examples/notebooks/acrobot_urdf.ipynb @@ -1,331 +1,324 @@ { - "cells" : [ - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "# The Acrobot\n", "\n", - "*We recommend you look at the [Introduction to " - "Crocoddyl](introduction_to_crocoddyl.ipynb) example before this " - "one.*\n", - "\n", - "In the example, we model the acrobot control problem using Crocoddyl. " - "An acrobot is a two joint planar robot with only one actuator. It is " - "a canonnical example of an underactuated system and so presents an " - "interesting control problem.\n", - "\n", "We demonstrate how to:\n", "\n", - "1. Load a model from an urdf.\n", - "1. Define an actuation mapping for the system.\n", - "1. Construct and solve the control problem.\n", "\n", - "## Loading the model\n", - "A standalone double pendulum robot urdf is provided in the " - "[example-robot-data](https://github.com/Gepetto/example-robot-data) " - "repository, this comes bundled with Crocoddyl. Let's load the model " - "and inspect its properties." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "import os\n", - "import pathlib\n", - "\n", - "import numpy as np\n", - "import pinocchio\n", - "\n", - "# Get the path to the urdf\n", - "from example_robot_data.path import EXAMPLE_ROBOT_DATA_MODEL_DIR\n", - "\n", - "import crocoddyl\n", - "\n", - "urdf_model_path = pathlib.Path(\n", - " \"double_pendulum_description\", \"urdf\", " - "\"double_pendulum_simple.urdf\"\n", - ")\n", - "urdf_model_path = os.path.join(EXAMPLE_ROBOT_DATA_MODEL_DIR, " - "urdf_model_path)\n", - "\n", - "# Now load the model (using pinocchio)\n", - "robot = " - "pinocchio.robot_wrapper.RobotWrapper.BuildFromURDF(str(urdf_model_" - "path))\n", - "\n", - "# The model loaded from urdf (via pinicchio)\n", - "print(robot.model)\n", - "\n", - "# Create a multibody state from the pinocchio model.\n", - "state = crocoddyl.StateMultibody(robot.model)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "`pinocchio` comes with some handy wrappers that load a robot along " - "with vizual and collision models. These are all defined via that " - "urdf. `robot.model` is the model is a model of the DAEs (Differential " - "Algebraic Equations).\n", - "\n", - "You will notice that the there are two joint configurations `nq` and " - "velocities `nv`.\n", - "\n", "## Actuation Mapping\n", - "In order to create an underactuated double pendulum, the acrobot, we " - "will create mapping between control inputs and joint torques. This is " - "done by inheriting from `ActuationModelAbstract`. See also " - "`ActuationModelFloatingBase` and `ActuationModelFull` for other " - "options." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Define the control signal to actuated joint mapping\n", - "class AcrobotActuationModel(crocoddyl.ActuationModelAbstract):\n", - " def __init__(self, state):\n", - " nu = 1 # Control dimension\n", - " crocoddyl.ActuationModelAbstract.__init__(self, state, " - "nu=nu)\n", - "\n", - " def calc(self, data, x, u):\n", - " assert len(data.tau) == 2\n", - " # Map the control dimensions to the joint torque\n", - " data.tau[0] = 0\n", - " data.tau[1] = u\n", - "\n", - " def calcDiff(self, data, x, u):\n", - " # Specify the actuation jacobian\n", - " data.dtau_du[0] = 0\n", - " data.dtau_du[1] = 1\n", - "\n", - "\n", - "# Also see ActuationModelFloatingBase and ActuationModelFull\n", - "actuationModel = AcrobotActuationModel(state)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## Constructing the Problem\n", "\n", - "Before we solve the control problem, we need to construct the cost " - "models and action models." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "dt = 1e-3 # Time step\n", - "T = 1000 # Number of knots\n", - "\n", - "# Cost models\n", - "runningCostModel = crocoddyl.CostModelSum(state, " - "nu=actuationModel.nu)\n", - "terminalCostModel = crocoddyl.CostModelSum(state, " - "nu=actuationModel.nu)\n", - "\n", - "# Add a cost for the configuration positions and velocities\n", - "xref = np.array([0, 0, 0, 0]) # Desired state\n", - "stateResidual = crocoddyl.ResidualModelState(state, xref=xref, " - "nu=actuationModel.nu)\n", - "stateCostModel = crocoddyl.CostModelResidual(state, stateResidual)\n", - "runningCostModel.addCost(\"state_cost\", cost=stateCostModel, " - "weight=1e-5 / dt)\n", - "terminalCostModel.addCost(\"state_cost\", cost=stateCostModel, " - "weight=1000)\n", - "\n", - "# Add a cost on control\n", - "controlResidual = crocoddyl.ResidualModelControl(state, " - "nu=actuationModel.nu)\n", - "bounds = crocoddyl.ActivationBounds(np.array([-1.0]), " - "np.array([1.0]))\n", - "activation = crocoddyl.ActivationModelQuadraticBarrier(bounds)\n", - "controlCost = crocoddyl.CostModelResidual(\n", - " state, activation=activation, residual=controlResidual\n", - ")\n", - "runningCostModel.addCost(\"control_cost\", cost=controlCost, " - "weight=1e-1 / dt)\n", - "\n", - "# Create the action models for the state\n", - "runningModel = crocoddyl.IntegratedActionModelEuler(\n", - " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", - " state, actuationModel, runningCostModel\n", - " ),\n", - " dt,\n", - ")\n", - "terminalModel = crocoddyl.IntegratedActionModelEuler(\n", - " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", - " state, actuationModel, terminalCostModel\n", - " ),\n", - " 0.0,\n", - ")" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Now we define the control problem."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Define a shooting problem\n", - "q0 = np.zeros((state.nq,)) # Inital joint configurations\n", - "q0[0] = np.pi / 2 # Down\n", - "v0 = np.zeros((state.nv,)) # Initial joint velocities\n", - "x0 = np.concatenate((q0, v0)) # Inital robot state\n", - "problem = crocoddyl.ShootingProblem(x0, [runningModel] * T, " - "terminalModel)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" 
: {}, - "source" : ["Let's test the system with a rollout."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Test the problem with a rollout\n", - "us = [0.01 * np.ones((1,))] * T\n", "xs = problem.rollout(us)\n", "\n", - "# Handy to blat up the state and control trajectories\n", - "crocoddyl.plotOCSolution(xs, us, show=False, figIndex=99, " - "figTitle=\"Test rollout\")\n", - "\n", "# Put a grid on the plots\n", - "import matplotlib.pyplot as plt\n", "\n", "fig = plt.gcf()\n", - "axs = fig.axes\n", "for ax in axs:\n", " ax.grid()" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Now we can solve the optimal control problem."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Now stabilize the acrobot using FDDP\n", - "solver = crocoddyl.SolverFDDP(problem)\n", "\n", "# Solve\n", - "callbacks = []\n", "callbacks.append(crocoddyl.CallbackLogger())\n", - "callbacks.append(crocoddyl.CallbackVerbose())\n", - "solver.setCallbacks(callbacks)\n", - "solver.solve([], [], 300, False, 1e-5)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["We can visualize the trajectory with `meshcat` or using " - "`gepetto-gui` (you will need to install [gepetto-viewer]() " - "and [gepetto-viewer-corba]() and start the process in a " - "separate terminal.)"] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Display using meshcat\n", - "robot_display = crocoddyl.MeshcatDisplay(robot, -1, 1, False)\n", - "display(robot_display.robot.viewer.jupyter_cell())\n", - "robot_display.displayFromSolver(solver)\n", "\n", - "# Display using gepetto-gui\n", "if False:\n", - " robot_display = crocoddyl.GepettoDisplay(robot, floor=False)\n", - " robot_display.displayFromSolver(solver)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["We can plot the trajectory and the solver's convergence " - "properties."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Plotting the solution and the DDP convergence\n", - "log = solver.getCallbacks()[0]\n", - "\n", - "import matplotlib.pyplot as plt\n", - "\n", - "crocoddyl.plotOCSolution(\n", - " xs=log.xs, us=log.us, show=False, figIndex=1, " - "figTitle=\"Solution\"\n", - ")\n", - "fig = plt.gcf()\n", - "axs = fig.axes\n", - "for ax in axs:\n", - " ax.grid(True)\n", - "\n", - "crocoddyl.plotConvergence(\n", - " log.costs,\n", - " log.pregs,\n", - " log.dregs,\n", - " log.grads,\n", - " log.stops,\n", - " log.steps,\n", - " show=False,\n", - " figIndex=2,\n", - ")\n", - "fig = plt.gcf()\n", - "axs = fig.axes\n", - "for ax in axs:\n", - " ax.grid(True)\n", - "\n", - "plt.show()" - ] - } - ], - "metadata" : { - "interpreter" : { - "hash" : - "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" - }, - "kernelspec" : { - "display_name" : "Python 3.8.10 64-bit", - "language" : "python", - "name" : "python3" - }, - "language_info" : { - "codemirror_mode" : {"name" : "ipython", "version" : 3}, - "file_extension" : ".py", - "mimetype" : "text/x-python", - "name" : "python", - "nbconvert_exporter" : "python", - "pygments_lexer" : "ipython3", - "version" : "3.8.10" - }, - "orig_nbformat" : 4 + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + 
"source": [ + "# The Acrobot\n", + "\n", + "*We recommend you look at the [Introduction to Crocoddyl](introduction_to_crocoddyl.ipynb) example before this one.*\n", + "\n", + "In the example, we model the acrobot control problem using Crocoddyl. An acrobot is a two joint planar robot with only one actuator. It is a canonnical example of an underactuated system and so presents an interesting control problem.\n", + "\n", + "We demonstrate how to:\n", + "\n", + "1. Load a model from an urdf.\n", + "1. Define an actuation mapping for the system.\n", + "1. Construct and solve the control problem.\n", + "\n", + "## Loading the model\n", + "A standalone double pendulum robot urdf is provided in the [example-robot-data](https://github.com/Gepetto/example-robot-data) repository, this comes bundled with Crocoddyl. Let's load the model and inspect its properties." + ] }, - "nbformat" : 4, - "nbformat_minor" : 2 + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import pathlib\n", + "\n", + "import numpy as np\n", + "import pinocchio\n", + "\n", + "# Get the path to the urdf\n", + "from example_robot_data.path import EXAMPLE_ROBOT_DATA_MODEL_DIR\n", + "\n", + "import crocoddyl\n", + "\n", + "urdf_model_path = pathlib.Path(\n", + " \"double_pendulum_description\", \"urdf\", \"double_pendulum_simple.urdf\"\n", + ")\n", + "urdf_model_path = os.path.join(EXAMPLE_ROBOT_DATA_MODEL_DIR, urdf_model_path)\n", + "\n", + "# Now load the model (using pinocchio)\n", + "robot = pinocchio.robot_wrapper.RobotWrapper.BuildFromURDF(str(urdf_model_path))\n", + "\n", + "# The model loaded from urdf (via pinicchio)\n", + "print(robot.model)\n", + "\n", + "# Create a multibody state from the pinocchio model.\n", + "state = crocoddyl.StateMultibody(robot.model)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`pinocchio` comes with some handy wrappers that load a robot along with vizual and collision models. These are all defined via that urdf. `robot.model` is the model is a model of the DAEs (Differential Algebraic Equations).\n", + "\n", + "You will notice that the there are two joint configurations `nq` and velocities `nv`.\n", + "\n", + "## Actuation Mapping\n", + "In order to create an underactuated double pendulum, the acrobot, we will create mapping between control inputs and joint torques. This is done by inheriting from `ActuationModelAbstract`. See also `ActuationModelFloatingBase` and `ActuationModelFull` for other options." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Define the control signal to actuated joint mapping\n", + "class AcrobotActuationModel(crocoddyl.ActuationModelAbstract):\n", + " def __init__(self, state):\n", + " nu = 1 # Control dimension\n", + " crocoddyl.ActuationModelAbstract.__init__(self, state, nu=nu)\n", + "\n", + " def calc(self, data, x, u):\n", + " assert len(data.tau) == 2\n", + " # Map the control dimensions to the joint torque\n", + " data.tau[0] = 0\n", + " data.tau[1] = u\n", + "\n", + " def calcDiff(self, data, x, u):\n", + " # Specify the actuation jacobian\n", + " data.dtau_du[0] = 0\n", + " data.dtau_du[1] = 1\n", + "\n", + "\n", + "# Also see ActuationModelFloatingBase and ActuationModelFull\n", + "actuationModel = AcrobotActuationModel(state)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Constructing the Problem\n", + "\n", + "Before we solve the control problem, we need to construct the cost models and action models." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dt = 1e-3 # Time step\n", + "T = 1000 # Number of knots\n", + "\n", + "# Cost models\n", + "runningCostModel = crocoddyl.CostModelSum(state, nu=actuationModel.nu)\n", + "terminalCostModel = crocoddyl.CostModelSum(state, nu=actuationModel.nu)\n", + "\n", + "# Add a cost for the configuration positions and velocities\n", + "xref = np.array([0, 0, 0, 0]) # Desired state\n", + "stateResidual = crocoddyl.ResidualModelState(state, xref=xref, nu=actuationModel.nu)\n", + "stateCostModel = crocoddyl.CostModelResidual(state, stateResidual)\n", + "runningCostModel.addCost(\"state_cost\", cost=stateCostModel, weight=1e-5 / dt)\n", + "terminalCostModel.addCost(\"state_cost\", cost=stateCostModel, weight=1000)\n", + "\n", + "# Add a cost on control\n", + "controlResidual = crocoddyl.ResidualModelControl(state, nu=actuationModel.nu)\n", + "bounds = crocoddyl.ActivationBounds(np.array([-1.0]), np.array([1.0]))\n", + "activation = crocoddyl.ActivationModelQuadraticBarrier(bounds)\n", + "controlCost = crocoddyl.CostModelResidual(\n", + " state, activation=activation, residual=controlResidual\n", + ")\n", + "runningCostModel.addCost(\"control_cost\", cost=controlCost, weight=1e-1 / dt)\n", + "\n", + "# Create the action models for the state\n", + "runningModel = crocoddyl.IntegratedActionModelEuler(\n", + " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", + " state, actuationModel, runningCostModel\n", + " ),\n", + " dt,\n", + ")\n", + "terminalModel = crocoddyl.IntegratedActionModelEuler(\n", + " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", + " state, actuationModel, terminalCostModel\n", + " ),\n", + " 0.0,\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we define the control problem." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Define a shooting problem\n", + "q0 = np.zeros((state.nq,)) # Inital joint configurations\n", + "q0[0] = np.pi / 2 # Down\n", + "v0 = np.zeros((state.nv,)) # Initial joint velocities\n", + "x0 = np.concatenate((q0, v0)) # Inital robot state\n", + "problem = crocoddyl.ShootingProblem(x0, [runningModel] * T, terminalModel)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's test the system with a rollout." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Test the problem with a rollout\n", + "us = [0.01 * np.ones((1,))] * T\n", + "xs = problem.rollout(us)\n", + "\n", + "# Handy to blat up the state and control trajectories\n", + "crocoddyl.plotOCSolution(xs, us, show=False, figIndex=99, figTitle=\"Test rollout\")\n", + "\n", + "# Put a grid on the plots\n", + "import matplotlib.pyplot as plt\n", + "\n", + "fig = plt.gcf()\n", + "axs = fig.axes\n", + "for ax in axs:\n", + " ax.grid()" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can solve the optimal control problem." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Now stabilize the acrobot using FDDP\n", + "solver = crocoddyl.SolverFDDP(problem)\n", + "\n", + "# Solve\n", + "callbacks = []\n", + "callbacks.append(crocoddyl.CallbackLogger())\n", + "callbacks.append(crocoddyl.CallbackVerbose())\n", + "solver.setCallbacks(callbacks)\n", + "solver.solve([], [], 300, False, 1e-5)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can visualize the trajectory with `meshcat` or using `gepetto-gui` (you will need to install [gepetto-viewer]() and [gepetto-viewer-corba]() and start the process in a separate terminal.)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Display using meshcat\n", + "robot_display = crocoddyl.MeshcatDisplay(robot, -1, 1, False)\n", + "display(robot_display.robot.viewer.jupyter_cell())\n", + "robot_display.displayFromSolver(solver)\n", + "\n", + "# Display using gepetto-gui\n", + "if False:\n", + " robot_display = crocoddyl.GepettoDisplay(robot, floor=False)\n", + " robot_display.displayFromSolver(solver)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can plot the trajectory and the solver's convergence properties." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Plotting the solution and the DDP convergence\n", + "log = solver.getCallbacks()[0]\n", + "\n", + "import matplotlib.pyplot as plt\n", + "\n", + "crocoddyl.plotOCSolution(\n", + " xs=log.xs, us=log.us, show=False, figIndex=1, figTitle=\"Solution\"\n", + ")\n", + "fig = plt.gcf()\n", + "axs = fig.axes\n", + "for ax in axs:\n", + " ax.grid(True)\n", + "\n", + "crocoddyl.plotConvergence(\n", + " log.costs,\n", + " log.pregs,\n", + " log.dregs,\n", + " log.grads,\n", + " log.stops,\n", + " log.steps,\n", + " show=False,\n", + " figIndex=2,\n", + ")\n", + "fig = plt.gcf()\n", + "axs = fig.axes\n", + "for ax in axs:\n", + " ax.grid(True)\n", + "\n", + "plt.show()" + ] + } + ], + "metadata": { + "interpreter": { + "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" + }, + "kernelspec": { + "display_name": "Python 3.8.10 64-bit", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.10" + }, + "orig_nbformat": 4 + }, + "nbformat": 4, + "nbformat_minor": 2 } diff --git a/examples/notebooks/arm_manipulation.ipynb b/examples/notebooks/arm_manipulation.ipynb index e5a9353f3..b5817ffae 100644 --- a/examples/notebooks/arm_manipulation.ipynb +++ b/examples/notebooks/arm_manipulation.ipynb @@ -1,404 +1,355 @@ { - "cells" : [ - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "# Reaching multiple targets with a manipulator\n", - "The objective of this exercise is to reach multiple targets with a " - "manipulator.\n", - "\n", - "We provide a basic example for reaching one point, and you have to " - "modify it for sequence of multiple targets. 
Below it is the basic " - "example, there we'll guide you to the final result.\n" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# %load arm_example.py\n", - "import example_robot_data\n", - "import numpy as np\n", - "import pinocchio\n", - "\n", - "import crocoddyl\n", - "\n", - "robot = example_robot_data.load(\"talos_arm\")\n", - "robot_model = robot.model\n", - "\n", - "DT = 1e-3\n", - "T = 25\n", - "target = np.array([0.4, 0.0, 0.4])\n", - "\n", - "display = crocoddyl.GepettoDisplay(robot)\n", - "display.robot.viewer.gui.addSphere(\n", - " \"world/point\", 0.05, [1.0, 0.0, 0.0, 1.0]\n", - ") # radius = .1, RGBA=1001\n", - "display.robot.viewer.gui.applyConfiguration(\n", - " \"world/point\", target.tolist() + [0.0, 0.0, 0.0, 1.0]\n", - ") # xyz+quaternion\n", - "display.robot.viewer.gui.refresh()\n", - "\n", - "# Create the cost functions\n", - "state = crocoddyl.StateMultibody(robot.model)\n", - "goalTrackingCost = crocoddyl.CostModelResidual(\n", - " state,\n", - " crocoddyl.ResidualModelFrameTranslation(\n", - " state, robot_model.getFrameId(\"gripper_left_joint\"), " - "target\n", - " ),\n", - ")\n", - "xRegCost = crocoddyl.CostModelResidual(state, " - "crocoddyl.ResidualModelState(state))\n", - "uRegCost = crocoddyl.CostModelResidual(state, " - "crocoddyl.ResidualModelControl(state))\n", - "\n", - "# Create cost model per each action model\n", - "runningCostModel = crocoddyl.CostModelSum(state)\n", - "terminalCostModel = crocoddyl.CostModelSum(state)\n", - "\n", - "# Then let's added the running and terminal cost functions\n", - "runningCostModel.addCost(\"gripperPose\", goalTrackingCost, 1e2)\n", - "runningCostModel.addCost(\"stateReg\", xRegCost, 1e-4)\n", - "runningCostModel.addCost(\"ctrlReg\", uRegCost, 1e-7)\n", - "terminalCostModel.addCost(\"gripperPose\", goalTrackingCost, 1e5)\n", - "terminalCostModel.addCost(\"stateReg\", xRegCost, 1e-4)\n", - "terminalCostModel.addCost(\"ctrlReg\", uRegCost, 1e-7)\n", - "\n", - "# Create the actuation model\n", - "actuationModel = crocoddyl.ActuationModelFull(state)\n", - "\n", - "# Create the action model\n", - "runningModel = crocoddyl.IntegratedActionModelEuler(\n", - " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", - " state, actuationModel, runningCostModel\n", - " ),\n", - " DT,\n", - ")\n", - "terminalModel = crocoddyl.IntegratedActionModelEuler(\n", - " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", - " state, actuationModel, terminalCostModel\n", - " )\n", - ")\n", - "# runningModel.differential.armature = 0.2 * np.ones(state.nv)\n", - "# terminalModel.differential.armature = 0.2 * np.ones(state.nv)\n", - "\n", - "# Create the problem\n", - "q0 = np.array([2.0, 1.5, -2.0, 0.0, 0.0, 0.0, 0.0])\n", - "x0 = np.concatenate([q0, pinocchio.utils.zero(state.nv)])\n", - "problem = crocoddyl.ShootingProblem(x0, [runningModel] * T, " - "terminalModel)\n", - "\n", - "# Creating the DDP solver for this OC problem, defining a logger\n", - "ddp = crocoddyl.SolverDDP(problem)\n", - "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n", - "\n", - "# Solving it with the DDP algorithm\n", - "ddp.solve()\n", - "\n", - "# Visualizing the solution in gepetto-viewer\n", - "display.displayFromSolver(ddp)\n", - "\n", - "robot_data = robot_model.createData()\n", - "xT = ddp.xs[-1]\n", - "pinocchio.forwardKinematics(robot_model, robot_data, xT[: " - "state.nq])\n", - "pinocchio.updateFramePlacements(robot_model, robot_data)\n", - "print(\n", - " \"Finally reached = 
\",\n", - " " - "robot_data.oMf[robot_model.getFrameId(\"gripper_left_joint\")]." - "translation.T,\n", - ")" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## I. DifferentialActionModel for Pinocchio ABA\n", - "This scenario uses an action model that computes 2nd order " - "differential dynamics with Pinocchio. Note that it can accept several " - "cost models. This action model is tailored for robot applications, " - "and at the same time, it's modular since:\n", - " - you can modify the robot dynamics by changing Pinocchio model, " - "and\n", - " - you can formulate any cost function by simply adding running a " - "terminal costs.\n", - "\n", "## II. Cost models\n", "\n", - "A cost model computes a scalar cost value and its gradient and " - "Hessian. All the models implemented are computing a cost residual and " - "are computing the Hessian with the Gauss approximation.\n", - "\n", "We implemented reusable cost models for controlling \n", - " - a frame placement (translation or velocity),\n", - " - the center of mass position, and \n", - " - state and control spaces.\n", "\n", - "In the example above, we used the CostModelFrameTranslation which " - "defines a 3d position task, and the state and control regularizers.\n", - "\n", - "As for any cost model in crocoddyl, if you write your own cost model " - "you need to create a data class for your cost function. The cost data " - "must be created from a pinocchio data (the rational is that the " - "pinocchio data used to compute the dynamics should be re-used to " - "compute the cost).\n" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "dataCollector = crocoddyl.DataCollectorMultibody(robot.data)\n", - "trackData = goalTrackingCost.createData(dataCollector)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "### II.a Frame position cost\n", "\n", - "You define a frame ID and the reference position as a 3D array. The " - "cost is the distance between the frame and the target. This cost " - "depends on $\\mathbf{x}$ (specifically the configuration " - "$\\mathbf{q}$). You can double check the 0s in its gradient." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "pinocchio.updateFramePlacements(robot.model, robot.data)\n", - "pinocchio.computeJointJacobians(robot.model, robot.data, xT[: " - "state.nq])\n", - "goalTrackingCost.calc(trackData, x0)\n", - "goalTrackingCost.calcDiff(trackData, x0)\n", - "print(trackData.Lx, trackData.Lu)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "### II.b State cost\n", - "In this part of the tutorial you must define a State model. It " - "defines \n", - " - the dimension of the state and its tangent, and\n", - " - the exponential/integrate and difference/log operators.\n", - "The operators can described using Pinocchio functions. And the " - "exercite consists on adding them into your State class. Please note " - "crocoddyl has abstract functions for this.\n", - "\n", - "The state cost uses a reference in state space (State.zero() by " - "default). The cost is the distance, computed with state.difference " - "between the current state and the reference. Hence, with this cost, " - "we regularize both position and velocity.\n", - "\n", "### II.c Control cost\n", "\n", - "The control cost uses a control reference as in the state cost. 
The " - "cost is the distance the current control and the reference. Hence the " - "cost regularizes torque commands." - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "### II.d Add cost models to the differential action model\n", - "Each time we want to include a new cost function, we use addCost " - "function inside our DAM. In this function you're also able its weight." - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## III. Create the problem with integrated action model\n", - "Differential action models describe cost and dynamics in " - "continuous-time, however our optimal control solvers work in " - "discrete-time. We have created the integrated action model in order " - "to deal with this.\n", - "\n", - "In the previous code, we have used an abstract class that uses " - "simpletic Euler rules. In the cartpole exercise you have learnt how " - "to use integrated action models for your problem." - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## IV. Callbacks\n", "\n", - "Callback functions are needed for analysing and debugging the " - "performance of the solver for your specific problem.\n", - "For problems defined with Pinocchio, you can display the robot " - "trajectory per each iterate by including CallbackDisplay. With this " - "callback, you can display robot motions with different rates. " - "Additionally, CallbackVerbose prints a message that allows us to " - "understand the behaviour of the solver.\n", - "\n", - "Generally speaking, an user is able to describe any callback " - "function. This function will be run once per iterate and it has " - "access to all data." - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## VI. Modifying the example\n", "\n", - "Start by defining several targets (let's say 4 targets, all at x=0.4, " - "and at y and z being either 0 or 0.4), and display then in the " - "viewer.\n" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "The shooting problem will be composed of 4 sequences of action " - "models. Each sequence consists on T shooting \"running\" nodes and 1 " - "terminal node. The running nodes mostly have regularization terms, " - "while the terminal nodes have a strong cost toward the respective " - "target.\n", - "\n", - "[ R1,R1,R1 ... R1,T1, R2,R2 .... R2, T2, R3 ... R3, T3, R4 ... R4 ] , " - "T4\n", - "\n", "First create 4 running models and 4 terminal models." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : - ["Then you need to add a position cost, and state and control " - "regularization to each running action model. Please note that for " - "terminal action model is only needed the position cost. 
" - "Additionally, in the running models, the position cost should be " - "low, and it should be high in the terminal models."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Now create a shooting problem."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "seq0 = [runningModels[0]] * T + [terminalModels[0]]\n", - "seq1 = [runningModels[1]] * T + [terminalModels[1]]\n", - "seq2 = [runningModels[2]] * T + [terminalModels[2]]\n", - "seq3 = [runningModels[3]] * T\n", - "problem = crocoddyl.ShootingProblem(x0, seq0 + seq1 + seq2 + seq3, " - "terminalmodel[3])" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Create a DDP solver for this problem and run it. "] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ "ddp = crocoddyl.SolverDDP(problem)\n", "ddp.solve()" ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "Well, it should not work, at least no on the first shot. The DDP " - "solver is likely not strong enough to accept the random weights that " - "you have selected. \n", - "\n", - "If it is working nicely from the first shot, display it in the viewer " - "and go take a coffee. But you will likely have to tweak the gains to " - "make it work.\n", - "\n", - "**It is suggested to first optimize only sequence 1. When you are " - "happy with it, add sequence 2 and optimize again, etc.**\n" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## V. Penalty\n", - "The solver works with double precisions, so it is quite robust to " - "high weight. 10000 is likely to be accepted for example. But if you " - "make the problem too difficult, the solver will break. \n", - "\n", - "In that case, you can implement a simple penalty solver by setting " - "the weight to be 10**i, and creating a for loop to explore i from 0 " - "to 5. At each iteration of the loop, run the solver from the previous " - "solution and for few iterations only." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "for i in range(1, 6):\n", " for m in terminalModels:\n", - " m.costs.costs[\"gripperPose\"].weight = 10**i\n", - " ddp.solve(ddp.xs, ddp.us, 10)" - ] - } - ], - "metadata" : { - "kernelspec" : { - "display_name" : "Python 3", - "language" : "python", - "name" : "python3" - }, - "language_info" : { - "codemirror_mode" : {"name" : "ipython", "version" : 2}, - "file_extension" : ".py", - "mimetype" : "text/x-python", - "name" : "python", - "nbconvert_exporter" : "python", - "pygments_lexer" : "ipython2", - "version" : "3.10.4 (v3.10.4:9d38120e33, Mar 23 2022, 17:29:05) [Clang " - "13.0.0 (clang-1300.0.29.30)]" - }, - "vscode" : { - "interpreter" : { - "hash" : - "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" - } - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Reaching multiple targets with a manipulator\n", + "The objective of this exercise is to reach multiple targets with a manipulator.\n", + "\n", + "We provide a basic example for reaching one point, and you have to modify it for sequence of multiple targets. 
Below it is the basic example, there we'll guide you to the final result.\n" + ] }, - "nbformat" : 4, - "nbformat_minor" : 2 + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %load arm_example.py\n", + "import example_robot_data\n", + "import numpy as np\n", + "import pinocchio\n", + "\n", + "import crocoddyl\n", + "\n", + "robot = example_robot_data.load(\"talos_arm\")\n", + "robot_model = robot.model\n", + "\n", + "DT = 1e-3\n", + "T = 25\n", + "target = np.array([0.4, 0.0, 0.4])\n", + "\n", + "display = crocoddyl.GepettoDisplay(robot)\n", + "display.robot.viewer.gui.addSphere(\n", + " \"world/point\", 0.05, [1.0, 0.0, 0.0, 1.0]\n", + ") # radius = .1, RGBA=1001\n", + "display.robot.viewer.gui.applyConfiguration(\n", + " \"world/point\", target.tolist() + [0.0, 0.0, 0.0, 1.0]\n", + ") # xyz+quaternion\n", + "display.robot.viewer.gui.refresh()\n", + "\n", + "# Create the cost functions\n", + "state = crocoddyl.StateMultibody(robot.model)\n", + "goalTrackingCost = crocoddyl.CostModelResidual(\n", + " state,\n", + " crocoddyl.ResidualModelFrameTranslation(\n", + " state, robot_model.getFrameId(\"gripper_left_joint\"), target\n", + " ),\n", + ")\n", + "xRegCost = crocoddyl.CostModelResidual(state, crocoddyl.ResidualModelState(state))\n", + "uRegCost = crocoddyl.CostModelResidual(state, crocoddyl.ResidualModelControl(state))\n", + "\n", + "# Create cost model per each action model\n", + "runningCostModel = crocoddyl.CostModelSum(state)\n", + "terminalCostModel = crocoddyl.CostModelSum(state)\n", + "\n", + "# Then let's added the running and terminal cost functions\n", + "runningCostModel.addCost(\"gripperPose\", goalTrackingCost, 1e2)\n", + "runningCostModel.addCost(\"stateReg\", xRegCost, 1e-4)\n", + "runningCostModel.addCost(\"ctrlReg\", uRegCost, 1e-7)\n", + "terminalCostModel.addCost(\"gripperPose\", goalTrackingCost, 1e5)\n", + "terminalCostModel.addCost(\"stateReg\", xRegCost, 1e-4)\n", + "terminalCostModel.addCost(\"ctrlReg\", uRegCost, 1e-7)\n", + "\n", + "# Create the actuation model\n", + "actuationModel = crocoddyl.ActuationModelFull(state)\n", + "\n", + "# Create the action model\n", + "runningModel = crocoddyl.IntegratedActionModelEuler(\n", + " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", + " state, actuationModel, runningCostModel\n", + " ),\n", + " DT,\n", + ")\n", + "terminalModel = crocoddyl.IntegratedActionModelEuler(\n", + " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", + " state, actuationModel, terminalCostModel\n", + " )\n", + ")\n", + "# runningModel.differential.armature = 0.2 * np.ones(state.nv)\n", + "# terminalModel.differential.armature = 0.2 * np.ones(state.nv)\n", + "\n", + "# Create the problem\n", + "q0 = np.array([2.0, 1.5, -2.0, 0.0, 0.0, 0.0, 0.0])\n", + "x0 = np.concatenate([q0, pinocchio.utils.zero(state.nv)])\n", + "problem = crocoddyl.ShootingProblem(x0, [runningModel] * T, terminalModel)\n", + "\n", + "# Creating the DDP solver for this OC problem, defining a logger\n", + "ddp = crocoddyl.SolverDDP(problem)\n", + "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n", + "\n", + "# Solving it with the DDP algorithm\n", + "ddp.solve()\n", + "\n", + "# Visualizing the solution in gepetto-viewer\n", + "display.displayFromSolver(ddp)\n", + "\n", + "robot_data = robot_model.createData()\n", + "xT = ddp.xs[-1]\n", + "pinocchio.forwardKinematics(robot_model, robot_data, xT[: state.nq])\n", + "pinocchio.updateFramePlacements(robot_model, robot_data)\n", + "print(\n", + " \"Finally reached = 
\",\n", + " robot_data.oMf[robot_model.getFrameId(\"gripper_left_joint\")].translation.T,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## I. DifferentialActionModel for Pinocchio ABA\n", + "This scenario uses an action model that computes 2nd order differential dynamics with Pinocchio. Note that it can accept several cost models. This action model is tailored for robot applications, and at the same time, it's modular since:\n", + " - you can modify the robot dynamics by changing Pinocchio model, and\n", + " - you can formulate any cost function by simply adding running a terminal costs.\n", + "\n", + "## II. Cost models\n", + "\n", + "A cost model computes a scalar cost value and its gradient and Hessian. All the models implemented are computing a cost residual and are computing the Hessian with the Gauss approximation.\n", + "\n", + "We implemented reusable cost models for controlling \n", + " - a frame placement (translation or velocity),\n", + " - the center of mass position, and \n", + " - state and control spaces.\n", + "\n", + "In the example above, we used the CostModelFrameTranslation which defines a 3d position task, and the state and control regularizers.\n", + "\n", + "As for any cost model in crocoddyl, if you write your own cost model you need to create a data class for your cost function. The cost data must be created from a pinocchio data (the rational is that the pinocchio data used to compute the dynamics should be re-used to compute the cost).\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataCollector = crocoddyl.DataCollectorMultibody(robot.data)\n", + "trackData = goalTrackingCost.createData(dataCollector)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### II.a Frame position cost\n", + "\n", + "You define a frame ID and the reference position as a 3D array. The cost is the distance between the frame and the target. This cost depends on $\\mathbf{x}$ (specifically the configuration $\\mathbf{q}$). You can double check the 0s in its gradient." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pinocchio.updateFramePlacements(robot.model, robot.data)\n", + "pinocchio.computeJointJacobians(robot.model, robot.data, xT[: state.nq])\n", + "goalTrackingCost.calc(trackData, x0)\n", + "goalTrackingCost.calcDiff(trackData, x0)\n", + "print(trackData.Lx, trackData.Lu)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### II.b State cost\n", + "In this part of the tutorial you must define a State model. It defines \n", + " - the dimension of the state and its tangent, and\n", + " - the exponential/integrate and difference/log operators.\n", + "The operators can described using Pinocchio functions. And the exercite consists on adding them into your State class. Please note crocoddyl has abstract functions for this.\n", + "\n", + "The state cost uses a reference in state space (State.zero() by default). The cost is the distance, computed with state.difference between the current state and the reference. Hence, with this cost, we regularize both position and velocity.\n", + "\n", + "### II.c Control cost\n", + "\n", + "The control cost uses a control reference as in the state cost. The cost is the distance the current control and the reference. Hence the cost regularizes torque commands." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### II.d Add cost models to the differential action model\n", + "Each time we want to include a new cost function, we use addCost function inside our DAM. In this function you're also able its weight." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## III. Create the problem with integrated action model\n", + "Differential action models describe cost and dynamics in continuous-time, however our optimal control solvers work in discrete-time. We have created the integrated action model in order to deal with this.\n", + "\n", + "In the previous code, we have used an abstract class that uses simpletic Euler rules. In the cartpole exercise you have learnt how to use integrated action models for your problem." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## IV. Callbacks\n", + "\n", + "Callback functions are needed for analysing and debugging the performance of the solver for your specific problem.\n", + "For problems defined with Pinocchio, you can display the robot trajectory per each iterate by including CallbackDisplay. With this callback, you can display robot motions with different rates. Additionally, CallbackVerbose prints a message that allows us to understand the behaviour of the solver.\n", + "\n", + "Generally speaking, an user is able to describe any callback function. This function will be run once per iterate and it has access to all data." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## VI. Modifying the example\n", + "\n", + "Start by defining several targets (let's say 4 targets, all at x=0.4, and at y and z being either 0 or 0.4), and display then in the viewer.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The shooting problem will be composed of 4 sequences of action models. Each sequence consists on T shooting \"running\" nodes and 1 terminal node. The running nodes mostly have regularization terms, while the terminal nodes have a strong cost toward the respective target.\n", + "\n", + "[ R1,R1,R1 ... R1,T1, R2,R2 .... R2, T2, R3 ... R3, T3, R4 ... R4 ] , T4\n", + "\n", + "First create 4 running models and 4 terminal models." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Then you need to add a position cost, and state and control regularization to each running action model. Please note that for terminal action model is only needed the position cost. Additionally, in the running models, the position cost should be low, and it should be high in the terminal models." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now create a shooting problem." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "seq0 = [runningModels[0]] * T + [terminalModels[0]]\n", + "seq1 = [runningModels[1]] * T + [terminalModels[1]]\n", + "seq2 = [runningModels[2]] * T + [terminalModels[2]]\n", + "seq3 = [runningModels[3]] * T\n", + "problem = crocoddyl.ShootingProblem(x0, seq0 + seq1 + seq2 + seq3, terminalmodel[3])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Create a DDP solver for this problem and run it. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ddp = crocoddyl.SolverDDP(problem)\n", + "ddp.solve()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Well, it should not work, at least no on the first shot. The DDP solver is likely not strong enough to accept the random weights that you have selected. \n", + "\n", + "If it is working nicely from the first shot, display it in the viewer and go take a coffee. But you will likely have to tweak the gains to make it work.\n", + "\n", + "**It is suggested to first optimize only sequence 1. When you are happy with it, add sequence 2 and optimize again, etc.**\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## V. Penalty\n", + "The solver works with double precisions, so it is quite robust to high weight. 10000 is likely to be accepted for example. But if you make the problem too difficult, the solver will break. \n", + "\n", + "In that case, you can implement a simple penalty solver by setting the weight to be 10**i, and creating a for loop to explore i from 0 to 5. At each iteration of the loop, run the solver from the previous solution and for few iterations only." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "for i in range(1, 6):\n", + " for m in terminalModels:\n", + " m.costs.costs[\"gripperPose\"].weight = 10**i\n", + " ddp.solve(ddp.xs, ddp.us, 10)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 2 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython2", + "version": "3.10.4 (v3.10.4:9d38120e33, Mar 23 2022, 17:29:05) [Clang 13.0.0 (clang-1300.0.29.30)]" + }, + "vscode": { + "interpreter": { + "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" + } + } + }, + "nbformat": 4, + "nbformat_minor": 2 } diff --git a/examples/notebooks/bipedal_walking.ipynb b/examples/notebooks/bipedal_walking.ipynb index 39c57ea10..bbd22425c 100644 --- a/examples/notebooks/bipedal_walking.ipynb +++ b/examples/notebooks/bipedal_walking.ipynb @@ -1,231 +1,222 @@ { - "cells" : [ - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## Multi-Contact dynamics for biped walking (Talos legs)\n", - "In this example, we describe the multi-contact dynamics through " - "holonomic constraints for the support legs. 
From the Gauss principle, " - "we have derived the model as:\n", - "$$\n", - "\\left[\\begin{matrix}\n", - " \\mathbf{M} & \\mathbf{J}^{\\top}_c \\\\\n", - " {\\mathbf{J}_{c}} & \\mathbf{0} \\\\\n", - "\\end{matrix}\\right]\n", - "\\left[\\begin{matrix}\n", - " \\dot{\\mathbf{v}} \\\\ -\\boldsymbol{\\lambda}\n", - "\\end{matrix}\\right]\n", - " = \n", - "\\left[\\begin{matrix}\n", - " \\boldsymbol{\\tau} - \\mathbf{h} \\\\\n", - " -\\dot{\\mathbf{J}}_c \\mathbf{v} \\\\\n", - "\\end{matrix}\\right]$$.\n", - "\n", - "\n", - "Base on a predefined walking gait, we build per each phase a specific " - "multi-contact dynamics. Indeed we need to describe multi-phase " - "optimal control problem. One can formulate the multi-contact optimal " - "control problem (MCOP) as follows:\n", - "\n", - "\n", - "$$\\mathbf{X}^*,\\mathbf{U}^*=\n", - "\\begin{Bmatrix} \\mathbf{x}^*_0,\\cdots,\\mathbf{x}^*_N \\\\\n", - "\t\t\t\t \\mathbf{u}^*_0,\\cdots,\\mathbf{u}^*_N\n", - "\\end{Bmatrix} =\n", - "\\arg\\min_{\\mathbf{X},\\mathbf{U}} \\sum_{p=0}^P \\sum_{k=1}^{N(p)} " - "\\int_{t_k}^{t_k+\\Delta t} l_p(\\mathbf{x},\\mathbf{u})dt$$\n", - "subject to\n", - "$$ \\mathbf{\\dot{x}} = \\mathbf{f}_p(\\mathbf{x},\\mathbf{u}), " - "\\text{for } t \\in [\\tau_p,\\tau_{p+1}]$$\n", - "\n", - "$$ \\mathbf{g}(\\mathbf{v}^{p+1},\\mathbf{v}^p) = \\mathbf{0}$$\n", - "\n", - "$$ \\mathbf{x}\\in\\mathcal{X}_p, \\mathbf{u}\\in\\mathcal{U}_p, " - "\\boldsymbol{\\lambda}\\in\\mathcal{K}_p.$$\n", - "\n", - "where $\\mathbf{g}(\\cdot,\\cdot,\\cdot)$ describes the contact " - "dynamics, and they represents terminal constraints in each walking " - "phase. In this example we use the following impact model:\n", - "\n", - "$$\\mathbf{M}(\\mathbf{v}_{next}-\\mathbf{v}) = " - "\\mathbf{J}_{impulse}^T$$\n", - "\n", - "$$\\mathbf{J}_{impulse} \\mathbf{v}_{next} = \\mathbf{0}$$\n", - "\n", - "$$\\mathbf{J}_{c} \\mathbf{v}_{next} = \\mathbf{J}_{c} \\mathbf{v}$$" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "`SimpleBipedGaitProblem` class builds action models for each " - "locomotion phase:\n", - " - `createSwingFootModel`: defines an action model for the swing " - "phase\n", - " - `createFootSwitchModel`: defines an action model for switch knots " - "between phases\n", - " \n", - "Then we build a walking by combining a set of contact phases and " - "their contact switches. This is defined by `createFootstepModel`\n", - "\n", "Now let's create a walking OC problem for the Talos legs." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "import example_robot_data\n", - "import numpy as np\n", - "import pinocchio\n", - "\n", - "import crocoddyl\n", - "from crocoddyl.utils.biped import SimpleBipedGaitProblem\n", - "\n", - "# Creating the lower-body part of Talos\n", - "talos_legs = example_robot_data.load(\"talos_legs\")\n", - "\n", - "# Setting up the 3d walking problem\n", - "rightFoot = \"right_sole_link\"\n", - "leftFoot = \"left_sole_link\"\n", - "gait = SimpleBipedGaitProblem(talos_legs.model, rightFoot, " - "leftFoot)\n", - "\n", - "\n", - "# Create the initial state\n", - "q0 = talos_legs.q0.copy()\n", - "v0 = pinocchio.utils.zero(talos_legs.model.nv)\n", - "x0 = np.concatenate([q0, v0])\n", - "\n", - "\n", - "# Creating the walking problem\n", - "stepLength = 0.6 # meters\n", - "stepHeight = 0.1 # meters\n", - "timeStep = 0.0375 # seconds\n", - "stepKnots = 20\n", - "supportKnots = 10\n", - "problem = gait.createWalkingProblem(\n", - " x0, stepLength, stepHeight, timeStep, stepKnots, supportKnots\n", - ")\n", - "\n", - "\n", - "# Solving the 3d walking problem using Feasibility-prone DDP\n", - "ddp = crocoddyl.SolverFDDP(problem)\n", - "\n", - "# Using the meshcat displayer, you could enable gepetto viewer for " - "nicer view\n", - "# display = crocoddyl.GepettoDisplay(talos_legs, 4, 4)\n", - "display = crocoddyl.MeshcatDisplay(talos_legs, 4, 4, False)\n", - "ddp.setCallbacks(\n", - " [\n", - " crocoddyl.CallbackLogger(),\n", - " crocoddyl.CallbackVerbose(),\n", - " crocoddyl.CallbackDisplay(display),\n", - " ]\n", - ")" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Emdebbed meshcat in this cell\n", - "display.robot.viewer.jupyter_cell()" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Solve the optimal control problem\n", "ddp.th_stop = 1e-9\n", - "init_xs = [x0] * (problem.T + 1)\n", "init_us = []\n", - "maxiter = 1000\n", "regInit = 0.1\n", - "ddp.solve(init_xs, init_us, maxiter, False, regInit)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "With the following commands we can plot \n", - " - the state and control trajectories\n", " - the DDP performance" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Plotting the solution and the DDP convergence\n", - "log = ddp.getCallbacks()[0]\n", - "crocoddyl.plotOCSolution(log.xs, log.us)\n", - "crocoddyl.plotConvergence(\n", - " log.costs, log.pregs, log.dregs, log.grads, log.stops, " - "log.steps\n", - ")" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Finally we can visualize the solution."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Visualization of the DDP solution in meshcat\n", - "display.rate = -1\n", "display.freq = 1\n", - "display.displayFromSolver(ddp)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## VI. Understanding the walking problem\n", "\n", - "In this problem we pre-defined a 20 and 10 knots for the step and " - "double support phases with $dt=$3.75e-2, repectively. \n", - "\n", - " 1. Could you tell us how much is the foot step and double support " - "duration?\n", - " 2. 
What happens when do we change $dt$ (e.g. 2e-2 secs)?\n", - " 3. What happens when do we change the number of step knots (e.g. " - "10)?\n" - ] - } - ], - "metadata" : { - "kernelspec" : { - "display_name" : "Python 3", - "language" : "python", - "name" : "python3" - }, - "language_info" : { - "codemirror_mode" : {"name" : "ipython", "version" : 3}, - "file_extension" : ".py", - "mimetype" : "text/x-python", - "name" : "python", - "nbconvert_exporter" : "python", - "pygments_lexer" : "ipython3", - "version" : "3.8.3" - } + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Multi-Contact dynamics for biped walking (Talos legs)\n", + "In this example, we describe the multi-contact dynamics through holonomic constraints for the support legs. From the Gauss principle, we have derived the model as:\n", + "$$\n", + "\\left[\\begin{matrix}\n", + " \\mathbf{M} & \\mathbf{J}^{\\top}_c \\\\\n", + " {\\mathbf{J}_{c}} & \\mathbf{0} \\\\\n", + "\\end{matrix}\\right]\n", + "\\left[\\begin{matrix}\n", + " \\dot{\\mathbf{v}} \\\\ -\\boldsymbol{\\lambda}\n", + "\\end{matrix}\\right]\n", + " = \n", + "\\left[\\begin{matrix}\n", + " \\boldsymbol{\\tau} - \\mathbf{h} \\\\\n", + " -\\dot{\\mathbf{J}}_c \\mathbf{v} \\\\\n", + "\\end{matrix}\\right]$$.\n", + "\n", + "\n", + "Base on a predefined walking gait, we build per each phase a specific multi-contact dynamics. Indeed we need to describe multi-phase optimal control problem. One can formulate the multi-contact optimal control problem (MCOP) as follows:\n", + "\n", + "\n", + "$$\\mathbf{X}^*,\\mathbf{U}^*=\n", + "\\begin{Bmatrix} \\mathbf{x}^*_0,\\cdots,\\mathbf{x}^*_N \\\\\n", + "\t\t\t\t \\mathbf{u}^*_0,\\cdots,\\mathbf{u}^*_N\n", + "\\end{Bmatrix} =\n", + "\\arg\\min_{\\mathbf{X},\\mathbf{U}} \\sum_{p=0}^P \\sum_{k=1}^{N(p)} \\int_{t_k}^{t_k+\\Delta t} l_p(\\mathbf{x},\\mathbf{u})dt$$\n", + "subject to\n", + "$$ \\mathbf{\\dot{x}} = \\mathbf{f}_p(\\mathbf{x},\\mathbf{u}), \\text{for } t \\in [\\tau_p,\\tau_{p+1}]$$\n", + "\n", + "$$ \\mathbf{g}(\\mathbf{v}^{p+1},\\mathbf{v}^p) = \\mathbf{0}$$\n", + "\n", + "$$ \\mathbf{x}\\in\\mathcal{X}_p, \\mathbf{u}\\in\\mathcal{U}_p, \\boldsymbol{\\lambda}\\in\\mathcal{K}_p.$$\n", + "\n", + "where $\\mathbf{g}(\\cdot,\\cdot,\\cdot)$ describes the contact dynamics, and they represents terminal constraints in each walking phase. In this example we use the following impact model:\n", + "\n", + "$$\\mathbf{M}(\\mathbf{v}_{next}-\\mathbf{v}) = \\mathbf{J}_{impulse}^T$$\n", + "\n", + "$$\\mathbf{J}_{impulse} \\mathbf{v}_{next} = \\mathbf{0}$$\n", + "\n", + "$$\\mathbf{J}_{c} \\mathbf{v}_{next} = \\mathbf{J}_{c} \\mathbf{v}$$" + ] }, - "nbformat" : 4, - "nbformat_minor" : 2 + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`SimpleBipedGaitProblem` class builds action models for each locomotion phase:\n", + " - `createSwingFootModel`: defines an action model for the swing phase\n", + " - `createFootSwitchModel`: defines an action model for switch knots between phases\n", + " \n", + "Then we build a walking by combining a set of contact phases and their contact switches. This is defined by `createFootstepModel`\n", + "\n", + "Now let's create a walking OC problem for the Talos legs." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import example_robot_data\n", + "import numpy as np\n", + "import pinocchio\n", + "\n", + "import crocoddyl\n", + "from crocoddyl.utils.biped import SimpleBipedGaitProblem\n", + "\n", + "# Creating the lower-body part of Talos\n", + "talos_legs = example_robot_data.load(\"talos_legs\")\n", + "\n", + "# Setting up the 3d walking problem\n", + "rightFoot = \"right_sole_link\"\n", + "leftFoot = \"left_sole_link\"\n", + "gait = SimpleBipedGaitProblem(talos_legs.model, rightFoot, leftFoot)\n", + "\n", + "\n", + "# Create the initial state\n", + "q0 = talos_legs.q0.copy()\n", + "v0 = pinocchio.utils.zero(talos_legs.model.nv)\n", + "x0 = np.concatenate([q0, v0])\n", + "\n", + "\n", + "# Creating the walking problem\n", + "stepLength = 0.6 # meters\n", + "stepHeight = 0.1 # meters\n", + "timeStep = 0.0375 # seconds\n", + "stepKnots = 20\n", + "supportKnots = 10\n", + "problem = gait.createWalkingProblem(\n", + " x0, stepLength, stepHeight, timeStep, stepKnots, supportKnots\n", + ")\n", + "\n", + "\n", + "# Solving the 3d walking problem using Feasibility-prone DDP\n", + "ddp = crocoddyl.SolverFDDP(problem)\n", + "\n", + "# Using the meshcat displayer, you could enable gepetto viewer for nicer view\n", + "# display = crocoddyl.GepettoDisplay(talos_legs, 4, 4)\n", + "display = crocoddyl.MeshcatDisplay(talos_legs, 4, 4, False)\n", + "ddp.setCallbacks(\n", + " [\n", + " crocoddyl.CallbackLogger(),\n", + " crocoddyl.CallbackVerbose(),\n", + " crocoddyl.CallbackDisplay(display),\n", + " ]\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Emdebbed meshcat in this cell\n", + "display.robot.viewer.jupyter_cell()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Solve the optimal control problem\n", + "ddp.th_stop = 1e-9\n", + "init_xs = [x0] * (problem.T + 1)\n", + "init_us = []\n", + "maxiter = 1000\n", + "regInit = 0.1\n", + "ddp.solve(init_xs, init_us, maxiter, False, regInit)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With the following commands we can plot \n", + " - the state and control trajectories\n", + " - the DDP performance" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Plotting the solution and the DDP convergence\n", + "log = ddp.getCallbacks()[0]\n", + "crocoddyl.plotOCSolution(log.xs, log.us)\n", + "crocoddyl.plotConvergence(\n", + " log.costs, log.pregs, log.dregs, log.grads, log.stops, log.steps\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally we can visualize the solution." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Visualization of the DDP solution in meshcat\n", + "display.rate = -1\n", + "display.freq = 1\n", + "display.displayFromSolver(ddp)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## VI. Understanding the walking problem\n", + "\n", + "In this problem we pre-defined a 20 and 10 knots for the step and double support phases with $dt=$3.75e-2, repectively. \n", + "\n", + " 1. Could you tell us how much is the foot step and double support duration?\n", + " 2. What happens when do we change $dt$ (e.g. 
2e-2 secs)?\n", + " 3. What happens when do we change the number of step knots (e.g. 10)?\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 } diff --git a/examples/notebooks/cartpole_swing_up.ipynb b/examples/notebooks/cartpole_swing_up.ipynb index 7dbe5cec0..4bc1d70b2 100644 --- a/examples/notebooks/cartpole_swing_up.ipynb +++ b/examples/notebooks/cartpole_swing_up.ipynb @@ -1,641 +1,429 @@ { - "cells" : [ - { - "attachments" : { - "image.png" : { - "image/png" : - "iVBORw0KGgoAAAANSUhEUgAAASgAAACrCAYAAADVVAHbAAAABHNCSVQICAgIfAhk" - "iAAAGbNJREFUeF7tXQuwTdUbXyJy3byi4Ro3Qq5KoohikMF4VHdQGKkmmR7Ie0iR" - "1/gzIRqv8miirrfCyKNkTOOdVyjiJvQQFaNk6Mr//" - "Faz9ux9zrmPc+5+rL33b83ccc8+e631fb9v+" - "921vv2t7ytyI9IEGxEoAALXr18XRYsWLcCdvIUI2IPATfYMw1HCgECxYsXEmTNnw" - "qAqddQEARKUJobQXQxFTJ999pnuolK+ACFAggqQMZ1UJSsrSw4/" - "fPhwJ6fh2ETAggAJig9EgRDYuHGjvO/8+fPit99+K1Af3kQECosACaqwCIag/" - "+HDh8WWLVsMTdevXx8CramiDgiQoHSwguYyrFu3zpCwXr16YsyYMZpLTPGCggAJK" - "iiWdEgPhBYsXrxYTJ06VdSoUUPMmTNHZGdnC6yq2IiA0wiQoJxG2Ofj79mzRxw8e" - "FDcc889kpguXbokNTKvqnyuIsXXGAESlMbG0UE0RUh16tSR4qSlpYmHH36YjnIdj" - "BMCGYowkjwEVi6kinhrV6JECVG6dGmxe/dukZGRIVJSUhhVXkhc2T1/" - "BLiCyh+j0N9RoUIFAwP4pG699VaSU+" - "ifCncAKObONJyFCCSOAKLXBwwYIC5fviw7z507V1StWjXxgdjDtwhwBeVb0wVbcJ" - "BTenq6WLVqlWjYsKFUFp9XrlwZbMWpnQUBrqD4QGiJwIoVK6RcOTk5xnZy6NChYv" - "ny5aJz585aykyh7EeAKyj7MeWINiCwa9cuMXHiRIuvq1mzZiI1NdWG0TmEXxAgQf" - "nFUiGS888//" - "xRLly4V9evXt2itDiyHCIrQq0qCCv0joB8Ap06dkkI1aNDAEA5vD0FazZs3109gS" - "uQYAiQox6DlwMkisH//ftm1XLlyxhAXLlyQv0evqpKdg/38gQAJyh92CpWUe/" - "fuFW+88YbF/" - "3TixAmJwb333hsqLMKuLAkq7E+" - "AhvqvWbPGCC1Q4iGTZ9u2bTWUliI5iQAJykl0OXbCCBw7dkycPHlS1K5d29J31Kh" - "RomXLlgmPxw7+RoAE5W/" - "7BU56OMhLlSolatasaeimMng++uijgdOXCuWNAAkqb3z4rcsIYCvXs2dPi/" - "8JKyo0FVHuskiczkMESFAegs+" - "pYxGYPHlyDBEtW7Ys5lpsT14JIgIkqCBa1ac6qdJWjzzyiEUDOsh9alAbxCZB2QA" - "ih7AHAcQ/VaxY0eIgR1Q5MnrimAtb+BAgQYXP5tpqPGnSpJiDwE8//" - "bSUt1WrVtrKTcGcQ4AE5Ry2HDlBBLBSqlatmtELx1sQE4WEeUWLFk1wNN4eBARIU" - "EGwYgB0gP8JiekyMzMNbTZv3ix/" - "Hz16dAA0pArJIECCSga1EPdxaiXz7bffSlTNAZpIuYLWqFGjECMebtWZsC7c9k9Y" - "+w0bNgj8XLt2TYwfPz7X/kOGDJHbtQceeEA0btw41/" - "vUFwgliD7Ksm3bNvl1pUqV8u3PG4KJAKu6BNOutmuFt2mo6qIajp2geEK8VqxYMZ" - "mqVzW8mUPqlN69e4sqVarEJawiRYrI4qADBw6U3eB/" - "wjgbN24Ubdq0iTcNr4UAAa6gQmBkO1WcP3++" - "ePzxx6XjOq8GQrt48aJA6ACc3zhLB7JBQ99XX31VFkQAyamjLF26dDGGVP4nc06o" - "vObjd8FEgCuoYNrVdq3UCgp18ZI5cgISOnv2rHj99dclIW3fvl3KiLS+" - "yPGE7V2kRqMh98yZM8WUKVPE999/b7suHNA/" - "CHAF5R9baSEptl7JNKya8LN69WrZ/" - "fDhwwIxTsOHD5efUa3Y3L788ksxePBgyzV+" - "CB8CfIsXPptroTESzx04cEDs2LFDZi8wZypQOclr1aqlhawUwjsESFDeYc+" - "ZIwjgDR/inzp06GDgceTIEfk7/" - "U98REhQfAY8RWDnzp1yfnP+J6T8RTPnJPdUSE7uGQIkKM+g58RAAJkKsMUzvxWE/" - "2ns2LE83sJHRJCg+BB4igCCMUeOHGmRAeWl7r77bk/" - "l4uR6IECC0sMOoZQCbwQRG2V2kOPtHhqrt4TykYhRmgQVAwkvuIWACsasXr26MSU" - "COxGlHl00wS2ZOI9eCJCg9LJHqKRRCerM/" - "ic4yElOoXoM8lSWBJUnPPzSSQS2bNki+vXrZ0yBLd/" - "06dNjgjadlIFj640ACUpv+wRWOuV/" - "at26taGjqh7crl27wOpNxRJDgASVGF682yYE9uzZI0dCdgPV4CCHPyq/" - "g8g2icBhfIAACcoHRgqiiMePH5fxT2lpaYZ648aNk5kS2IiAQoAExWfBdgSwfcvv" - "UPHWrVsFiiSYM3QiLQsS3LERARIUnwHbETh27Jg8slKmTBlJNCqmKXoikBfySlWt" - 
"WtX4SuWEQuoVNiJAguIzYCsCKHqQkZEhsrOz5eFfrIbq1q0r1Fk782Tx8o/" - "v27dP3sIATVvN4vvBuMXzvQn1UGDo0KFSkK5du4rTp0/Lf9H+97//" - "xWz3sNKqUaOGJd4JZ/Kic0LpoRml8BIBJqzzEv2AzI3adTg/" - "V69ePbFkyRKpFf49d+6crGv3888/" - "W7ZzWVlZomPHjhbtZ8+eLeAkZyMCZgS4guLzUCgE4E+aNWuWHOPDDz+0jPXXX3/" - "JzyVLljSuIxkdCipEJ6jDttCck9wyED+EFgGuoEJrevsUV/4jOMeVs/" - "vKlSsiNTU1ZhKsptDMx1mQVRPN7DSP6cgLoUSABBVKs9unNAjn/" - "PnzcsD09PS4A4Os0LDa6t+/vyy6YCaojz/+OKYmXtyBeDF0CHCLFzqT26sw/" - "El5NSSeUysjHGVBepXoWKevv/" - "5aPPbYY3kNw+9CigAJKqSGt0vt4sWLy6EWLlwocnJyxKVLl+S/" - "qgZe+fLljalOnTolfze/rVMlqBhBbpdFgjUOt3jBsqdn2sD/" - "hKhwVW0YVYLRzJVZFixYIK9lZmYacqJWHpr5yIvxJX8JPQJcQYX+" - "ESgcANeuXZMDIFBTtZUrV8oVFIpxmsuWIxQBLSUlxbgX/" - "ifERJmPvBhf8pfQI0CCCv0jUDgAsBpCBoK+" - "ffvKoy2bNm0ywgWQ20k19XYPW0EzGSEnlCreWThJ2DuICJCggmhVF3XC2zgV04Sj" - "LVg1oeFf85s6HA5G69mzpyEdSAsEhQBPNiIQDwESVDxUeC0hBN58800xY8YMow/" - "e3EUHba5fvz7mKIuKnzLnJE9oYt4ceAToJA+8iZ1XEGEEffr0kT+" - "5NRx9mTZtmuVrVBCuWLEiE9TlBhqvsy4enwHnEcDxFhxladasmWWytWvXitdee81" - "5ATiDbxHgFs+3pvOP4J988okU1uyTQlQ5/" - "E9Nmzb1jyKU1HUESFCuQx6+Cbdv3y5DCcxN+" - "Z8qVaoUPkCocYERIEEVGCremCwCOMoS7Z86dOiQHK5s2bLJDst+" - "IUCABBUCI3upIrZyWEG1b9/" - "eIgau4eCwijz3UkbOrS8CJCh9bRMIyVR639tuu82iz+" - "rVq2NIKxAKUwlbESBB2QonB4tGAKl84X8y17pDnnIEafL8XTRa/" - "ByNAAkqGhF+thWBxYsXxxxlQSpgRI+zQIKtUAdyMBJUIM2qj1I//" - "PBDDBGhHl6LFi30EZKSaIsACUpb0/" - "hfMJQ3R7ZN1MpTDUGbcJBHB236X1tq4AQCJCgnUOWYEoGjR4/K8uZm/" - "9PFixfld+agTcJFBHJDgASVGzK8XmgEsL0bOXKkMQ5WT6ifB6c5/" - "U+FhjcUA5CgQmFmb5RE4ro6deoYk1+" - "9elXWz2vdurU3AnFW3yFAgvKdyfwhMMIIUP78zjvvNARG0QS06KIJ/" - "tCIUnqBAAnKC9RDMKc6a2deQc2bN09qTgd5CB4Am1QkQdkEJIexIoBcT6jUYk7v+" - "+OPP8qbzG/" - "1iBsRyAsBElRe6PC7pBH44IMPLBk0cSZPFVJggYSkYQ1dRxJU6EzuvMLK/1S/" - "fn1jMlXyfPny5c4LwBkCgwAJKjCm1EeRkydPSmGaNGliCIUzeYiJYvYCfezkB0lI" - "UH6wks9kRIAmqgebyQhFFF544QWfaUJxvUaABOW1BQI4/" - "5QpU4xSVFBPpfft1q1bALWlSk4iQIJyEt0Qjg0yQvzT/fffb2h/" - "4cIF+XuVKlVCiAhVLgwCJKjCoMe+" - "MQioYEzzURZV8hzlqdiIQCIIkKASQYv35ovAsWPH5D3mw8AoiY6YKDYikCgCJKhE" - "EeP9eSKwZs0a0bVrV8s9e/fuFb169cqzH78kAvEQIEHFQ4XXkkZg/vz5whz/" - "BJ8U8kKZryU9ODuGDgESVOhM7pzCSKeClpmZaUyybt06+" - "TvLSzmHe5BHJkEF2bou66aixc3+J7zRQ/" - "5xBmi6bIyATEeCCoghdVADwZgNGza0iILy5n379tVBPMrgQwRIUD40mq4if/" - "HFF+K+++4zxFMBmsyeqavF9JeLBKW/jXwjIYohmMMJVNFOplfxjQm1E5QEpZ1J/" - "CnQmTNnpOAtW7Y0FFAxUeaiCf7UjlJ7hQAJyivkAzZvVlaW1MjsDD9w4ICYOHFiw" - "DSlOm4iQIJyE+0Az7V79+6YYEz4pBj/" - "FGCju6AaCcoFkMMwxapVqyz+" - "J2z54JMqV65cGNSnjg4hQIJyCNgwDat8TebV0v79+yUEGRkZYYJCC11x3Ojll1+" - "OkWXmzJli3LhxMdd1vkCC0tk6PpFt27ZtUtK0tDRDYgRo4kweAzTdNyJKfc2ZM8e" - "SIPDtt9+" - "W8WiVK1d2X6BCzEiCKgR47PofAggnQLS4uRgCiiY89NBDhMgDBBB3hoymOBeJWLR" - "FixaJQYMGyRcWfstqSoLy4AEK2pTIN24ucY6iCdnZ2aJp06ZBU9U3+" - "kybNk3K2qNHD/" - "HMM8+Il156SQwbNsw38itBSVC+" - "M5leAqsKLuZocVW0s0GDBnoJGyJpgH2NGjVkqXm0ESNG+" - "FJ7EpQvzaaP0IqMzP4nFO3Efw7Wv/" - "POTsC+T58+UoCFCxcKv2YzJUF59wz5fma8vRszZkyMM3zChAmie/" - "fuvtfPzwrs3LlT+p3QvvvuO9+qQoLyrem8F/" - "zUqVMy1qlNmzaGMHDKYtt31113eS9giCVQW7rq1auL8ePHC5Wry2+" - "QkKD8ZjGN5D1+/LiUpm7duoZUqmhCu3btNJI0XKIg1glpbhDd/" - "9Zbb0nld+zY4UsQSFC+" - "NJseQiO2BgeBzc5wbPvwV5sHhL2xEYIxR40aJf1OyM2lVrevvPKKDDnwWyNB+" - "c1imsirQglatWplcYbj0DBebbN5g8DatWvFkCFDRM+ePaUACJRFPFRqaqpQGU+" - "9kSy5WYsl1429wo7AyZMnJQQdOnSwQIHX2vgPweYNAhs2bIiZ+" - "Pnnnxf48WPjCsqPVtNA5tOnT8vDweovNURSjljWwNPAQAERgQQVEEO6rcbnn38uH" - "nzwQcu0yhFL/" - "5Pb1khuPj+" - "82SNBJWfb0PZCACD8TziMWq1aNQsOn376aUzRztACpbniqPZcunRp0a1bN6Gyoeo" - "oMglKR6toKFNKSoqUqmTJkuLKlSvy9xYtWlgknT59unjyySct1/" - "hBTwTuuOMO+" - 
"ZYPPsP09HSZngWkpVtz3UmOv76IleExCN0ehbzluXDhgrwBYQRI5YtQgrNnz8ofN" - "Pik0PAZlYTZ9EdgwYIFYuvWrTINC1bE+EGKHDjUmzRpokeqnBsut06dOt2ImI4/" - "xIDPgMbPAP6f6tCKQAg3uR77XWT8K1GihJvTcq5CIgB7IW0HAgDxL97UPfHEE+" - "Lff/8VN910k3jnnXdknM3s2bOFWm0Vckp2dxAB2KxMmTJi/" - "fr1lrAQrIwRfd65c2cHZ09gaB1YkjLoj0BOTo5c8axYsUL+" - "GzlGYRG6bdu2NyI5h/" - "RXhBJKBC5dunSjV69exioW9lu9evUN2Fmn5roPKgHu5K0aIfD3339LaVQK3+" - "gS5xs3bvTteS+" - "NYHZNFBz0RkDt2LFjxXPPPadtOhbXt3iuWYAT2YoAYmbwWhrEVL58eWGOWMbbHxw" - "YjvxV1sOxaqvmwR0MNtU9ZzxXUMF9/hzRDG/oIts8y9jr1q2Tn3V/" - "2B0BxMeD+sFejIPy8QPmlei1a9e2TI0SU0jSz0YE7EaAWzy7EQ3oeGqLB/" - "UiTlSLlkWKFBHwQZkT1wUUBqrlMgJcQbkMuN+" - "nQ65xc1PRx3Xq1PG7apRfQwRIUBoaRWeROnbsaBFPVRAuW7aszmJTNp8iQILyqeG" - "8EhuHS1VDYn4Ebfbv358Ocq8MEvB5SVABN7Bd6sXLxnju3Dk5fLNmzeyahuMQAQs" - "CJCg+EAVCAKlUohuOSaBFv9WLvo+" - "fiUCyCJCgkkUuZP127dpl0Rhv9XD6vV69esJcVThksFBdhxEgQTkMcFCGP3r0qEU" - "VteWrVKlSUFSkHhoiQILS0Ci6iQRn+" - "MGDBy1iIbygYsWKYu7cubqJS3kChAAJKkDGdEqVvXv3xgyNpHVIuVK1atWY73iBC" - "NiFAAnKLiQDPA5inWbMmCGzaKqGctqNGzcOsNZUTQcEeNRFBys4KAMqzW7atCmhG" - "cyn3JGiefv27fIoSyRnkCynDaLC9g4O8lKlSolff/" - "1VJq5TtfJU2anNmzeLpk2byrnRH/0QmgC/1aRJk/RJipYQOrzZTQSYzcBNtD2YS/" - "mIkGM6mfbLL7/IbmlpaUb3ffv2yd/zG9NcMy+SzE72QWZOEB7KVmmTtdHQjL/" - "ohgAJSjeLOCDPs88+KwYOHJjUyFh9YeV08803y/7IA3XkyBGZXB/" - "pfZNpWNWpIzLJ9Gef8CBAggqBra9du5a0luoQ8Lx58+QY77//" - "vtzKITd5sq0w8iQ7J/v5EwESlD/" - "t5prUeEsHv1OHDh3knCAnpIo1b99cE4YThQ4BElToTJ64wkjzC19UsWLFJFlF5yN" - "PfET2CBICqNSE50K5AezQ7Z9//hHNmzcXkqCQxhWloIoXL57U2EjA/" - "tVXXyXd3zwplv+9e/" - "cWffr0SUoWdioYAnio3nvvPePm1NRUkZmZKWrWrBm3qKoqmnD9+vW4E+" - "A6tn94FlTD2zo4x1mkNS5k2l5EYG5+L0DcEL5ly5b/" - "EdRHH30kULZal0YHqnOWQNjAU089JbZs2SInqVChgixlfvnyZTF8+" - "HB5bceOHQnFOMHpjeq08RquI4c539jFQ0fPayh+" - "kUzDyxSEk6iGlXblypWTGUpcvXpV9OjR4z+" - "CGjx4sGjfvn1SAxW2U7ly5WRcDBpWcbk96IWdh/" - "2FwKopPT1dxiNFaqBJv5J5dYO/" - "nCiFjb+" - "ecILn52dCGfQRI0aIVatWyfJFAwYMECkpKRJqjIt4KjjXu3TpIr8fOXIkzeAjBIY" - "MGSKLeOa2ao5WBTY332vHyllu8eAI1eHIQqNGjaJ15mebEMCDA3KKFGuUpBGvITI" - "cP3hzh6rBIJvcVj4gn3bt2kmneW75yFE1BOENCNaEbeFCGDZsWLypeU1DBFBmDC0" - "Roknk3oKozKMuBUHJ5/fAv4TUKIj+fvfdd/" - "PVBpHgEydOlCsfbAnjNUVycI7mVywBS32s2LCFxKorWV9nPDl4zTkEdLATCco5+" - "2oxMh6yP/74Q4wePVpMmTKlwH8NsdJCw5YvXhs0aJDo169fgd/oqTAFrLrY/" - "IFAbn+" - "c3JSeBOUm2h7NhchvPGzxttBDhw4VixYtipEMznOsuE6cOGH5TvmWcDGe3xLzIG8" - "5/" - "F3mhn6dOnWSW0IQpp2vpGOE54XAIECCCowp4ysCIsA2DA0hBNFt8uTJIl46FdxXr" - "Vo1cfz4censVgnqEA8FwsttvN9//10sXbpU/" - "PTTT9FTidtvv11e27ZtW8x3vEAE4iFAgoqHSoCuIXxAZRlAdoHodvr0aTFu3Ljoy" - "/JtzDfffCOjxuEszcjIkPfAea5iZOKNh/" - "zkhw4dihumoFZq5lfRMRPzAhEwIUCCCvjjgNUMXhcjNcqyZctitMXbW7xti25ZWV" - "kyNgrObazAFKngXxAQYl6mTp0qV1fRLV6OcjjHMR58WyBF5JdiIwL5IhApY61Niw" - "QIoqb2jchDrI1MQREkQjIS2wjB5KtSJFAvxg7qGsZBi5CMvCcSU3Xj/" - "PnzeY6Zk5NzIxIVfCNCknnexy/" - "1QADPCGwb+cPmuUBaraCqVKmSL6HyhuQQwKt+" - "rKSw8kGIQLzgOzi4kV6lfv36AiXOzSEJKgBTzY6VFyLEsX1E7BSOS8UbE9exJcQq" - "DDmg2PRHIN6q2CuptTosrM57LVmyRNSqVcsrTAI5rzmmBWcdJ0yYIF588UVDV5AT" - "HOaqIRYKsVM4doQ4KrUNXLBggcwJBac5riMqPTs7W74hBAl2795d4Dwl5sN2UG0N" - "QXg4JYBjNfgPgBMEbHoigJMB2jTP13AmAdQ2JAKOXGLyhxjwGfDuGYgcT/" - "KcHrTKSY6/rLNmzRJ//" - "fWX6wSO84DqTCBif+" - "yUoXz58qJMmTLyR7WzZ88a87murGlChCGYKwMjzQV0j17h4HrdunWl0/" - "yWW26RI+AehB2o/" - "ihFpRqO1aijEl7qx7mTRwBpnuO9QEl+" - "xMR7akVQiYvPHm4hgD8eIBz4kpgPyi3U7ZsHB7VhN1XQwr6RnR1JKx+" - "Us6pydDsQiOcIt2NcjuEsAsghb17BOzubfaOToOzDkiMRAW0RiBebpq2wJsG0CjP" - "wA2CUkQgQAfcQIEG5hzVnIgJEIEEESFAJAsbbiQARcA8BEpR7WHMmIkAEEkSABJU" - "gYLydCBAB9xAgQbmHNWciAkQgQQRIUAkCxtuJABFwDwESlHtYcyYiQAQSRIAElSB" - 
"gvJ0IEAH3ECBBuYc1ZyICRCBBBEhQCQLG24kAEXAPARKUe1hzJiJABBJEgASVIGB" - "hvR0pf1F4gWmZw/oEeKP3/wHnHFgv4b8ucgAAAABJRU5ErkJggg==" - } - }, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "# Cartpole optimal control problem\n", - "\n", - "![image.png](attachment:image.png)\n", - "\n", - "A cartpole is another classical example of control. In this system, " - "an underactuated pole is attached on top of a 1D actuacted cart. The " - "game is to raise the pole to a standing position.\n", - "\n", - "The model is here:\n", - "https://en.wikipedia.org/wiki/Inverted_pendulum\n", - "\n", - "We denote by $m_1$ the cart mass, $m_2$ the pole mass ($m=m_1+m_2$), " - "$l$ the pole length, $\\theta$ the pole angle w.r.t. the vertical " - "axis, $p$ the cart position, and $g=$9.81 the gravity.\n", - "\n", - "The system acceleration can be rewritten as:\n", - "\n", - "$$\\ddot{\\theta} = \\frac{1}{\\mu(\\theta)} \\big( \\frac{\\cos " - "\\theta}{l} f + \\frac{mg}{l} \\sin(\\theta) - m_2 \\cos(\\theta) " - "\\sin(\\theta) \\dot{\\theta}^2\\big),$$\n", - "\n", - "$$\\ddot{p} = \\frac{1}{\\mu(\\theta)} \\big( f + m_2 \\cos(\\theta) " - "\\sin(\\theta) g -m_2 l \\sin(\\theta) \\dot{\\theta} \\big),$$\n", - "\n", - "$\\hspace{12em}$with $$\\mu(\\theta) = m_1+m_2 \\sin(\\theta)^2,$$\n", - "\n", - "where $f$ represents the input command (i.e $f=u$) and $m=m_1+m_2$ is " - "the total mass.\n" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## I. Differential Action Model\n", "\n", - "A Differential Action Model (DAM) describes the action " - "(control/dynamics) in continous-time. In this exercise, we ask you to " - "write the equation of motions for the cartpole.\n", - "\n", - "For more details, see the instructions inside the " - "DifferentialActionModelCartpole class." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "import numpy as np\n", - "from cartpole_utils import animateCartpole\n", - "from IPython.display import HTML\n", - "\n", - "import crocoddyl\n", - "\n", - "\n", - "class " - "DifferentialActionModelCartpole(crocoddyl." 
- "DifferentialActionModelAbstract):\n", - " def __init__(self):\n", - " crocoddyl.DifferentialActionModelAbstract.__init__(\n", - " self, crocoddyl.StateVector(4), 1, 6\n", - " ) # nu = 1; nr = 6\n", - " self.unone = np.zeros(self.nu)\n", - "\n", - " self.m1 = 1.0\n", - " self.m2 = 0.1\n", - " self.l = 0.5\n", - " self.g = 9.81\n", - " self.costWeights = [\n", - " 1.0,\n", - " 1.0,\n", - " 0.1,\n", - " 0.001,\n", - " 0.001,\n", - " 1.0,\n", - " ] # sin, 1-cos, x, xdot, thdot, f\n", - "\n", - " def calc(self, data, x, u=None):\n", - " if u is None:\n", - " u = model.unone\n", - " # Getting the state and control variables\n", - " y, th, ydot, thdot = x[0].item(), x[1].item(), x[2].item(), " - "x[3].item()\n", - " f = u[0].item()\n", - "\n", - " # Shortname for system parameters\n", - " m1, m2, l, g = self.m1, self.m2, self.l, self.g\n", - " s, c = np.sin(th), np.cos(th)\n", - "\n", - " " - "######################################################################" - "#####\n", - " ############ TODO: Write the dynamics equation of your system " - "#############\n", - " " - "######################################################################" - "#####\n", - " # Hint:\n", - " # You don't need to implement integration rules for your " - "dynamic system.\n", - " # Remember that DAM implemented action models in " - "continuous-time.\n", - " m = m1 + m2\n", - " mu = m1 + m2 * s**2\n", - " xddot, thddot = cartpole_dynamics(\n", - " self, data, x, u\n", - " ) # Write the cartpole dynamics here\n", - " data.xout = np.matrix([xddot, thddot]).T\n", - "\n", - " # Computing the cost residual and value\n", - " data.r = np.matrix(self.costWeights * np.array([s, 1 - c, y, " - "ydot, thdot, f])).T\n", - " data.cost = 0.5 * sum(np.asarray(data.r) ** 2).item()\n", - "\n", - " def calcDiff(model, data, x, u=None):\n", - " # Advance user might implement the derivatives in " - "cartpole_analytical_derivatives\n", - " cartpole_analytical_derivatives(model, data, x, u)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["You can get the solution of the cartpole dynamics by " - "uncommenting the following line:"] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : ["# %load solutions/cartpole_dyn.py"] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["You may want to check your computation. Here is how to " - "create the model and run the calc method."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "cartpoleDAM = DifferentialActionModelCartpole()\n", - "cartpoleData = cartpoleDAM.createData()\n", - "x = cartpoleDAM.state.rand()\n", "u = np.zeros(1)\n", - "cartpoleDAM.calc(cartpoleData, x, u)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## II. Write the derivatives with DAMNumDiff\n", "\n", - "In the previous exercise, we didn't define the derivatives of the " - "cartpole system. In crocoddyl, we can compute them without any " - "additional code thanks to the DifferentialActionModelNumDiff class. " - "This class computes the derivatives through numerical " - "differentiation.\n", - "\n", - "In the following cell, you need to create a cartpole DAM that " - "computes the derivates using NumDiff." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Creating the carpole DAM using NumDiff for computing the " - "derivatives.\n", - "# We specify the withGaussApprox=True to have approximation of the\n", - "# Hessian based on the Jacobian of the cost residuals.\n", - "cartpoleND = crocoddyl.DifferentialActionModelNumDiff(cartpoleDAM, " - "True)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "After creating your cartpole DAM with NumDiff. We would like that you " - "answer the follows:\n", - "\n", " - 2 columns of Fx are null. Wich ones? Why?\n", "\n", - " - can you double check the values of Fu?\n" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : ["# %load solutions/cartpole_fxfu.py"] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## III. Integrate the model\n", "\n", - "After creating DAM for the cartpole system. We need to create an " - "Integrated Action Model (IAM). Remenber that an IAM converts the " - "continuos-time action model into a discrete-time action model. For " - "this exercise we'll use a simpletic Euler integrator." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# %load solutions/cartpole_integration.py\n", - "######################################################################" - "#####\n", - "################## TODO: Create an IAM with from the DAM " - "##################\n", - "######################################################################" - "#####\n", - "# Hint:\n", "# Use IntegratedActionModelEuler" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## IV. Write the problem, create the solver\n", "\n", - "First, you need to describe your shooting problem. For that, you have " - "to indicate the number of knots and their time step. For this " - "exercise we want to use 50 knots with $dt=$5e-2.\n", - "\n", "Here is how we create the problem." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Fill the number of knots (T) and the time step (dt)\n", - "x0 = np.array([0.0, 3.14, 0.0, 0.0])\n", "T = 50\n", - "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, " - "cartpoleIAM)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Problem can not solve, just integrate:"] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "us = [np.zeros(cartpoleIAM.differential.nu)] * T\n", - "xs = problem.rollout(us)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "In cartpole_utils, we provite a plotCartpole and a animateCartpole " - "methods. Let's display this rollout!\n", - "\n", "Note that to_jshtml spawns the video control commands." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : ["HTML(animateCartpole(xs).to_jshtml())"] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# %load solutions/cartpole_ddp.py\n", - "# " - "######################################################################" - "###\n", - "# ################# TODO: Create the DDP solver and run it " - "###############\n", - "# " - "######################################################################" - "####" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : ["HTML(animateCartpole(ddp.xs.tolist()).to_jshtml())"] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## Tune the problem, solve it\n", "\n", - "Give some indication about what should be tried for solving the " - "problem.\n", - "\n", "\n", - " - Without a terminal model, we can see some swings but we cannot " - "stabilize. What should we do?\n", - "\n", - " - The most important is to reach the standing position. Can we also " - "nullify the velocity?\n", - "\n", - " - Increasing all the weights is not working. How to slowly increase " - "the penalty?\n", - "\n" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# %load solutions/cartpole_tuning.py\n", - "# " - "######################################################################" - "####\n", - "# ################# TODO: Tune the weights for each cost " - "###################\n", - "# " - "######################################################################" - "####\n", - "terminalCartpole = DifferentialActionModelCartpole()\n", - "terminalCartpoleDAM = " - "crocoddyl.DifferentialActionModelNumDiff(terminalCartpole, True)\n", - "terminalCartpoleIAM = " - "crocoddyl.IntegratedActionModelEuler(terminalCartpoleDAM)\n", - "\n", "terminalCartpole.costWeights[0] = 0 # fix me :)\n", - "terminalCartpole.costWeights[1] = 0 # fix me :)\n", - "terminalCartpole.costWeights[2] = 0 # fix me :)\n", - "terminalCartpole.costWeights[3] = 0 # fix me :)\n", - "terminalCartpole.costWeights[4] = 0 # fix me :)\n", - "terminalCartpole.costWeights[5] = 0 # fix me :)\n", - "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, " - "terminalCartpoleIAM)" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Creating the DDP solver\n", "ddp = crocoddyl.SolverDDP(problem)\n", - "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n", "\n", - "# Solving this problem\n", "done = ddp.solve([], [], 300)\n", - "print(done)" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : ["HTML(animateCartpole(ddp.xs.tolist()).to_jshtml())"] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## Using analytical derivatives\n", "\n", - "You can get the solution of the analytical derivatives by " - "uncommenting the following line:" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# %load solutions/cartpole_analytical_derivatives.py\n", - "def cartpole_analytical_derivatives(model, data, x, u=None):\n", - " pass" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : - ["The analytical derivatives being defined, we do not need to use " - "DAMNumDiff to numerically 
approximate the derivatives."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "timeStep = 5e-2\n", - "cartpoleIAM = crocoddyl.IntegratedActionModelEuler(cartpoleDAM, " - "timeStep)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["You can now run again all the blocks in \"IV. Write the " - "problem, create the solver\" since cartpoleIAM has been " - "redefined to directly used the analytical derivatives."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [] - } - ], - "metadata" : { - "kernelspec" : { - "display_name" : "Python 3", - "language" : "python", - "name" : "python3" - }, - "language_info" : { - "codemirror_mode" : {"name" : "ipython", "version" : 3}, - "file_extension" : ".py", - "mimetype" : "text/x-python", - "name" : "python", - "nbconvert_exporter" : "python", - "pygments_lexer" : "ipython3", - "version" : "3.8.3" + "cells": [ + { + "attachments": { + "image.png": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAASgAAACrCAYAAADVVAHbAAAABHNCSVQICAgIfAhkiAAAGbNJREFUeF7tXQuwTdUbXyJy3byi4Ro3Qq5KoohikMF4VHdQGKkmmR7Ie0iR1/gzIRqv8miirrfCyKNkTOOdVyjiJvQQFaNk6Mr//Faz9ux9zrmPc+5+rL33b83ccc8+e631fb9v+921vv2t7ytyI9IEGxEoAALXr18XRYsWLcCdvIUI2IPATfYMw1HCgECxYsXEmTNnwqAqddQEARKUJobQXQxFTJ999pnuolK+ACFAggqQMZ1UJSsrSw4/fPhwJ6fh2ETAggAJig9EgRDYuHGjvO/8+fPit99+K1Af3kQECosACaqwCIag/+HDh8WWLVsMTdevXx8CramiDgiQoHSwguYyrFu3zpCwXr16YsyYMZpLTPGCggAJKiiWdEgPhBYsXrxYTJ06VdSoUUPMmTNHZGdnC6yq2IiA0wiQoJxG2Ofj79mzRxw8eFDcc889kpguXbokNTKvqnyuIsXXGAESlMbG0UE0RUh16tSR4qSlpYmHH36YjnIdjBMCGYowkjwEVi6kinhrV6JECVG6dGmxe/dukZGRIVJSUhhVXkhc2T1/BLiCyh+j0N9RoUIFAwP4pG699VaSU+ifCncAKObONJyFCCSOAKLXBwwYIC5fviw7z507V1StWjXxgdjDtwhwBeVb0wVbcJBTenq6WLVqlWjYsKFUFp9XrlwZbMWpnQUBrqD4QGiJwIoVK6RcOTk5xnZy6NChYvny5aJz585aykyh7EeAKyj7MeWINiCwa9cuMXHiRIuvq1mzZiI1NdWG0TmEXxAgQfnFUiGS888//xRLly4V9evXt2itDiyHCIrQq0qCCv0joB8Ap06dkkI1aNDAEA5vD0FazZs3109gSuQYAiQox6DlwMkisH//ftm1XLlyxhAXLlyQv0evqpKdg/38gQAJyh92CpWUe/fuFW+88YbF/3TixAmJwb333hsqLMKuLAkq7E+AhvqvWbPGCC1Q4iGTZ9u2bTWUliI5iQAJykl0OXbCCBw7dkycPHlS1K5d29J31KhRomXLlgmPxw7+RoAE5W/7BU56OMhLlSolatasaeimMng++uijgdOXCuWNAAkqb3z4rcsIYCvXs2dPi/8JKyo0FVHuskiczkMESFAegs+pYxGYPHlyDBEtW7Ys5lpsT14JIgIkqCBa1ac6qdJWjzzyiEUDOsh9alAbxCZB2QAih7AHAcQ/VaxY0eIgR1Q5MnrimAtb+BAgQYXP5tpqPGnSpJiDwE8//bSUt1WrVtrKTcGcQ4AE5Ry2HDlBBLBSqlatmtELx1sQE4WEeUWLFk1wNN4eBARIUEGwYgB0gP8JiekyMzMNbTZv3ix/Hz16dAA0pArJIECCSga1EPdxaiXz7bffSlTNAZpIuYLWqFGjECMebtWZsC7c9k9Y+w0bNgj8XLt2TYwfPz7X/kOGDJHbtQceeEA0btw41/vUFwgliD7Ksm3bNvl1pUqV8u3PG4KJAKu6BNOutmuFt2mo6qIajp2geEK8VqxYMZmqVzW8mUPqlN69e4sqVarEJawiRYrI4qADBw6U3eB/wjgbN24Ubdq0iTcNr4UAAa6gQmBkO1WcP3++ePzxx6XjOq8GQrt48aJA6ACc3zhLB7JBQ99XX31VFkQAyamjLF26dDGGVP4nc06ovObjd8FEgCuoYNrVdq3UCgp18ZI5cgISOnv2rHj99dclIW3fvl3KiLS+yPGE7V2kRqMh98yZM8WUKVPE999/b7suHNA/CHAF5R9baSEptl7JNKya8LN69WrZ/fDhwwIxTsOHD5efUa3Y3L788ksxePBgyzV+CB8CfIsXPptroTESzx04cEDs2LFDZi8wZypQOclr1aqlhawUwjsESFDeYc+ZIwjgDR/inzp06GDgceTIEfk7/U98REhQfAY8RWDnzp1yfnP+J6T8RTPnJPdUSE7uGQIkKM+g58RAAJkKsMUzvxWE/2ns2LE83sJHRJCg+BB4igCCMUeOHGmRAeWl7r77bk/l4uR6IECC0sMOoZQCbwQRG2V2kOPtHhqrt4TykYhRmgQVAwkvuIWACsasXr26MSUCOxGlHl00wS2ZOI9eCJCg9LJHqKRRCerM/ic4yElOoXoM8lSWBJUnPPzSSQS2bNki+vXrZ0yBLd/06dNjgjadlIFj640ACUpv+wRWOuV/at26taGjqh7crl27wOpNxRJDgASVGF682yYE9uzZI0dCdgPV4CCHPyq/g8g2icBhfIAACcoHRgqiiMePH5fxT2lpaYZ648aNk5kS2IiAQoAExWfBdgSwfcvvUPHWrVsFiiSYM3QiLQsS3LERARIUnwHbETh27Jg8slKmTBlJNCqmKXoikBfySlWtWtX4SuWEQuoVNiJAguIzYCsCKHqQkZEhsrOz5eFfrIbq1q0r1Fk782Tx8o/v27dP3sIATVvN4vvBu
MXzvQn1UGDo0KFSkK5du4rTp0/Lf9H+97//xWz3sNKqUaOGJd4JZ/Kic0LpoRml8BIBJqzzEv2AzI3adTg/V69ePbFkyRKpFf49d+6crGv3888/W7ZzWVlZomPHjhbtZ8+eLeAkZyMCZgS4guLzUCgE4E+aNWuWHOPDDz+0jPXXX3/JzyVLljSuIxkdCipEJ6jDttCck9wyED+EFgGuoEJrevsUV/4jOMeVs/vKlSsiNTU1ZhKsptDMx1mQVRPN7DSP6cgLoUSABBVKs9unNAjn/PnzcsD09PS4A4Os0LDa6t+/vyy6YCaojz/+OKYmXtyBeDF0CHCLFzqT26sw/El5NSSeUysjHGVBepXoWKevv/5aPPbYY3kNw+9CigAJKqSGt0vt4sWLy6EWLlwocnJyxKVLl+S/qgZe+fLljalOnTolfze/rVMlqBhBbpdFgjUOt3jBsqdn2sD/hKhwVW0YVYLRzJVZFixYIK9lZmYacqJWHpr5yIvxJX8JPQJcQYX+ESgcANeuXZMDIFBTtZUrV8oVFIpxmsuWIxQBLSUlxbgX/ifERJmPvBhf8pfQI0CCCv0jUDgAsBpCBoK+ffvKoy2bNm0ywgWQ20k19XYPW0EzGSEnlCreWThJ2DuICJCggmhVF3XC2zgV04SjLVg1oeFf85s6HA5G69mzpyEdSAsEhQBPNiIQDwESVDxUeC0hBN58800xY8YMow/e3EUHba5fvz7mKIuKnzLnJE9oYt4ceAToJA+8iZ1XEGEEffr0kT+5NRx9mTZtmuVrVBCuWLEiE9TlBhqvsy4enwHnEcDxFhxladasmWWytWvXitdee815ATiDbxHgFs+3pvOP4J988okU1uyTQlQ5/E9Nmzb1jyKU1HUESFCuQx6+Cbdv3y5DCcxN+Z8qVaoUPkCocYERIEEVGCremCwCOMoS7Z86dOiQHK5s2bLJDst+IUCABBUCI3upIrZyWEG1b9/eIgau4eCwijz3UkbOrS8CJCh9bRMIyVR639tuu82iz+rVq2NIKxAKUwlbESBB2QonB4tGAKl84X8y17pDnnIEafL8XTRa/ByNAAkqGhF+thWBxYsXxxxlQSpgRI+zQIKtUAdyMBJUIM2qj1I//PBDDBGhHl6LFi30EZKSaIsACUpb0/hfMJQ3R7ZN1MpTDUGbcJBHB236X1tq4AQCJCgnUOWYEoGjR4/K8uZm/9PFixfld+agTcJFBHJDgASVGzK8XmgEsL0bOXKkMQ5WT6ifB6c5/U+FhjcUA5CgQmFmb5RE4ro6deoYk1+9elXWz2vdurU3AnFW3yFAgvKdyfwhMMIIUP78zjvvNARG0QS06KIJ/tCIUnqBAAnKC9RDMKc6a2deQc2bN09qTgd5CB4Am1QkQdkEJIexIoBcT6jUYk7v++OPP8qbzG/1iBsRyAsBElRe6PC7pBH44IMPLBk0cSZPFVJggYSkYQ1dRxJU6EzuvMLK/1S/fn1jMlXyfPny5c4LwBkCgwAJKjCm1EeRkydPSmGaNGliCIUzeYiJYvYCfezkB0lIUH6wks9kRIAmqgebyQhFFF544QWfaUJxvUaABOW1BQI4/5QpU4xSVFBPpfft1q1bALWlSk4iQIJyEt0Qjg0yQvzT/fffb2h/4cIF+XuVKlVCiAhVLgwCJKjCoMe+MQioYEzzURZV8hzlqdiIQCIIkKASQYv35ovAsWPH5D3mw8AoiY6YKDYikCgCJKhEEeP9eSKwZs0a0bVrV8s9e/fuFb169cqzH78kAvEQIEHFQ4XXkkZg/vz5whz/BJ8U8kKZryU9ODuGDgESVOhM7pzCSKeClpmZaUyybt06+TvLSzmHe5BHJkEF2bou66aixc3+J7zRQ/5xBmi6bIyATEeCCoghdVADwZgNGza0iILy5n379tVBPMrgQwRIUD40mq4if/HFF+K+++4zxFMBmsyeqavF9JeLBKW/jXwjIYohmMMJVNFOplfxjQm1E5QEpZ1J/CnQmTNnpOAtW7Y0FFAxUeaiCf7UjlJ7hQAJyivkAzZvVlaW1MjsDD9w4ICYOHFiwDSlOm4iQIJyE+0Az7V79+6YYEz4pBj/FGCju6AaCcoFkMMwxapVqyz+J2z54JMqV65cGNSnjg4hQIJyCNgwDat8TebV0v79+yUEGRkZYYJCC11x3Ojll1+OkWXmzJli3LhxMdd1vkCC0tk6PpFt27ZtUtK0tDRDYgRo4kweAzTdNyJKfc2ZM8eSIPDtt9+W8WiVK1d2X6BCzEiCKgR47PofAggnQLS4uRgCiiY89NBDhMgDBBB3hoymOBeJWLRFixaJQYMGyRcWfstqSoLy4AEK2pTIN24ucY6iCdnZ2aJp06ZBU9U3+kybNk3K2qNHD/HMM8+Il156SQwbNsw38itBSVC+M5leAqsKLuZocVW0s0GDBnoJGyJpgH2NGjVkqXm0ESNG+FJ7EpQvzaaP0IqMzP4nFO3Efw7Wv/POTsC+T58+UoCFCxcKv2YzJUF59wz5fma8vRszZkyMM3zChAmie/fuvtfPzwrs3LlT+p3QvvvuO9+qQoLyrem8F/zUqVMy1qlNmzaGMHDKYtt31113eS9giCVQW7rq1auL8ePHC5Wry2+QkKD8ZjGN5D1+/LiUpm7duoZUqmhCu3btNJI0XKIg1glpbhDd/9Zbb0nld+zY4UsQSFC+NJseQiO2BgeBzc5wbPvwV5sHhL2xEYIxR40aJf1OyM2lVrevvPKKDDnwWyNB+c1imsirQglatWplcYbj0DBebbN5g8DatWvFkCFDRM+ePaUACJRFPFRqaqpQGU+9kSy5WYsl1429wo7AyZMnJQQdOnSwQIHX2vgPweYNAhs2bIiZ+Pnnnxf48WPjCsqPVtNA5tOnT8vDweovNURSjljWwNPAQAERgQQVEEO6rcbnn38uHnzwQcu0yhFL/5Pb1khuPj+82SNBJWfb0PZCACD8TziMWq1aNQsOn376aUzRztACpbniqPZcunRp0a1bN6GyoeooMglKR6toKFNKSoqUqmTJkuLKlSvy9xYtWlgknT59unjyySct1/hBTwTuuOMO+ZYPPsP09HSZngWkpVtz3UmOv76IleExCN0ehbzluXDhgrwBYQRI5YtQgrNnz8ofNPik0PAZlYTZ9EdgwYIFYuvWrTINC1bE+EGKHDjUmzRpokeqnBsut06dOt2ImI4/xIDPgMbPAP6f6tCKQAg3uR77XWT8K1GihJvTcq5CIgB7IW0HAgDxL97UPfHEE+Lff/8VN910k3jnnXdknM3s2bOFWm0Vckp2dxAB2KxMmTJi/fr1lrAQrIwRfd65c2cHZ09gaB1YkjLoj0BOTo5c8axYsUL+GzlGYRG6bdu2NyI5h/RXhBJKBC5dunSjV69exioW9lu9evUN2Fmn5roPKgHu5K0aIfD3339LaVQK3+gS5xs3bvTteS+NYHZNFBz0RkDt2LFjxXPPPadtOhbXt3iuWYAT2YoAYmbwWhrEVL58eWGOWMbbHxwYjvxV1sOxaqvmwR0MNtU9ZzxXUMF9/hzRDG/oIts8
y9jr1q2Tn3V/2B0BxMeD+sFejIPy8QPmlei1a9e2TI0SU0jSz0YE7EaAWzy7EQ3oeGqLB/UiTlSLlkWKFBHwQZkT1wUUBqrlMgJcQbkMuN+nQ65xc1PRx3Xq1PG7apRfQwRIUBoaRWeROnbsaBFPVRAuW7aszmJTNp8iQILyqeG8EhuHS1VDYn4Ebfbv358Ocq8MEvB5SVABN7Bd6sXLxnju3Dk5fLNmzeyahuMQAQsCJCg+EAVCAKlUohuOSaBFv9WLvo+fiUCyCJCgkkUuZP127dpl0Rhv9XD6vV69esJcVThksFBdhxEgQTkMcFCGP3r0qEUVteWrVKlSUFSkHhoiQILS0Ci6iQRn+MGDBy1iIbygYsWKYu7cubqJS3kChAAJKkDGdEqVvXv3xgyNpHVIuVK1atWY73iBCNiFAAnKLiQDPA5inWbMmCGzaKqGctqNGzcOsNZUTQcEeNRFBys4KAMqzW7atCmhGcyn3JGiefv27fIoSyRnkCynDaLC9g4O8lKlSolff/1VJq5TtfJU2anNmzeLpk2byrnRH/0QmgC/1aRJk/RJipYQOrzZTQSYzcBNtD2YS/mIkGM6mfbLL7/IbmlpaUb3ffv2yd/zG9NcMy+SzE72QWZOEB7KVmmTtdHQjL/ohgAJSjeLOCDPs88+KwYOHJjUyFh9YeV08803y/7IA3XkyBGZXB/pfZNpWNWpIzLJ9Gef8CBAggqBra9du5a0luoQ8Lx58+QY77//vtzKITd5sq0w8iQ7J/v5EwESlD/t5prUeEsHv1OHDh3knCAnpIo1b99cE4YThQ4BElToTJ64wkjzC19UsWLFJFlF5yNPfET2CBICqNSE50K5AezQ7Z9//hHNmzcXkqCQxhWloIoXL57U2EjA/tVXXyXd3zwplv+9e/cWffr0SUoWdioYAnio3nvvPePm1NRUkZmZKWrWrBm3qKoqmnD9+vW4E+A6tn94FlTD2zo4x1mkNS5k2l5EYG5+L0DcEL5ly5b/EdRHH30kULZal0YHqnOWQNjAU089JbZs2SInqVChgixlfvnyZTF8+HB5bceOHQnFOMHpjeq08RquI4c539jFQ0fPayh+kUzDyxSEk6iGlXblypWTGUpcvXpV9OjR4z+CGjx4sGjfvn1SAxW2U7ly5WRcDBpWcbk96IWdh/2FwKopPT1dxiNFaqBJv5J5dYO/nCiFjb+ecILn52dCGfQRI0aIVatWyfJFAwYMECkpKRJqjIt4KjjXu3TpIr8fOXIkzeAjBIYMGSKLeOa2ao5WBTY332vHyllu8eAI1eHIQqNGjaJ15mebEMCDA3KKFGuUpBGvITIcP3hzh6rBIJvcVj4gn3bt2kmneW75yFE1BOENCNaEbeFCGDZsWLypeU1DBFBmDC0Roknk3oKozKMuBUHJ5/fAv4TUKIj+fvfdd/PVBpHgEydOlCsfbAnjNUVycI7mVywBS32s2LCFxKorWV9nPDl4zTkEdLATCco5+2oxMh6yP/74Q4wePVpMmTKlwH8NsdJCw5YvXhs0aJDo169fgd/oqTAFrLrY/IFAbn+c3JSeBOUm2h7NhchvPGzxttBDhw4VixYtipEMznOsuE6cOGH5TvmWcDGe3xLzIG85/F3mhn6dOnWSW0IQpp2vpGOE54XAIECCCowp4ysCIsA2DA0hBNFt8uTJIl46FdxXrVo1cfz4censVgnqEA8FwsttvN9//10sXbpU/PTTT9FTidtvv11e27ZtW8x3vEAE4iFAgoqHSoCuIXxAZRlAdoHodvr0aTFu3Ljoy/JtzDfffCOjxuEszcjIkPfAea5iZOKNh/zkhw4dihumoFZq5lfRMRPzAhEwIUCCCvjjgNUMXhcjNcqyZctitMXbW7xti25ZWVkyNgrObazAFKngXxAQYl6mTp0qV1fRLV6OcjjHMR58WyBF5JdiIwL5IhApY61NiwQIoqb2jchDrI1MQREkQjIS2wjB5KtSJFAvxg7qGsZBi5CMvCcSU3Xj/PnzeY6Zk5NzIxIVfCNCknnexy/1QADPCGwb+cPmuUBaraCqVKmSL6HyhuQQwKt+rKSw8kGIQLzgOzi4kV6lfv36AiXOzSEJKgBTzY6VFyLEsX1E7BSOS8UbE9exJcQqDDmg2PRHIN6q2CuptTosrM57LVmyRNSqVcsrTAI5rzmmBWcdJ0yYIF588UVDV5ATHOaqIRYKsVM4doQ4KrUNXLBggcwJBac5riMqPTs7W74hBAl2795d4Dwl5sN2UG0NQXg4JYBjNfgPgBMEbHoigJMB2jTP13AmAdQ2JAKOXGLyhxjwGfDuGYgcT/KcHrTKSY6/rLNmzRJ//fWX6wSO84DqTCBif+yUoXz58qJMmTLyR7WzZ88a87murGlChCGYKwMjzQV0j17h4HrdunWl0/yWW26RI+AehB2o/ihFpRqO1aijEl7qx7mTRwBpnuO9QEl+xMR7akVQiYvPHm4hgD8eIBz4kpgPyi3U7ZsHB7VhN1XQwr6RnR1JKx+Us6pydDsQiOcIt2NcjuEsAsghb17BOzubfaOToOzDkiMRAW0RiBebpq2wJsG0CjPwA2CUkQgQAfcQIEG5hzVnIgJEIEEESFAJAsbbiQARcA8BEpR7WHMmIkAEEkSABJUgYLydCBAB9xAgQbmHNWciAkQgQQRIUAkCxtuJABFwDwESlHtYcyYiQAQSRIAElSBgvJ0IEAH3ECBBuYc1ZyICRCBBBEhQCQLG24kAEXAPARKUe1hzJiJABBJEgASVIGBhvR0pf1F4gWmZw/oEeKP3/wHnHFgv4b8ucgAAAABJRU5ErkJggg==" } + }, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Cartpole optimal control problem\n", + "\n", + "![image.png](attachment:image.png)\n", + "\n", + "A cartpole is another classical example of control. In this system, an underactuated pole is attached on top of a 1D actuacted cart. The game is to raise the pole to a standing position.\n", + "\n", + "The model is here:\n", + "https://en.wikipedia.org/wiki/Inverted_pendulum\n", + "\n", + "We denote by $m_1$ the cart mass, $m_2$ the pole mass ($m=m_1+m_2$), $l$ the pole length, $\\theta$ the pole angle w.r.t. 
the vertical axis, $p$ the cart position, and $g=$9.81 the gravity.\n", + "\n", + "The system acceleration can be rewritten as:\n", + "\n", + "$$\\ddot{\\theta} = \\frac{1}{\\mu(\\theta)} \\big( \\frac{\\cos \\theta}{l} f + \\frac{mg}{l} \\sin(\\theta) - m_2 \\cos(\\theta) \\sin(\\theta) \\dot{\\theta}^2\\big),$$\n", + "\n", + "$$\\ddot{p} = \\frac{1}{\\mu(\\theta)} \\big( f + m_2 \\cos(\\theta) \\sin(\\theta) g -m_2 l \\sin(\\theta) \\dot{\\theta} \\big),$$\n", + "\n", + "$\\hspace{12em}$with $$\\mu(\\theta) = m_1+m_2 \\sin(\\theta)^2,$$\n", + "\n", + "where $f$ represents the input command (i.e $f=u$) and $m=m_1+m_2$ is the total mass.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## I. Differential Action Model\n", + "\n", + "A Differential Action Model (DAM) describes the action (control/dynamics) in continous-time. In this exercise, we ask you to write the equation of motions for the cartpole.\n", + "\n", + "For more details, see the instructions inside the DifferentialActionModelCartpole class." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from cartpole_utils import animateCartpole\n", + "from IPython.display import HTML\n", + "\n", + "import crocoddyl\n", + "\n", + "\n", + "class DifferentialActionModelCartpole(crocoddyl.DifferentialActionModelAbstract):\n", + " def __init__(self):\n", + " crocoddyl.DifferentialActionModelAbstract.__init__(\n", + " self, crocoddyl.StateVector(4), 1, 6\n", + " ) # nu = 1; nr = 6\n", + " self.unone = np.zeros(self.nu)\n", + "\n", + " self.m1 = 1.0\n", + " self.m2 = 0.1\n", + " self.l = 0.5\n", + " self.g = 9.81\n", + " self.costWeights = [\n", + " 1.0,\n", + " 1.0,\n", + " 0.1,\n", + " 0.001,\n", + " 0.001,\n", + " 1.0,\n", + " ] # sin, 1-cos, x, xdot, thdot, f\n", + "\n", + " def calc(self, data, x, u=None):\n", + " if u is None:\n", + " u = model.unone\n", + " # Getting the state and control variables\n", + " y, th, ydot, thdot = x[0].item(), x[1].item(), x[2].item(), x[3].item()\n", + " f = u[0].item()\n", + "\n", + " # Shortname for system parameters\n", + " m1, m2, l, g = self.m1, self.m2, self.l, self.g\n", + " s, c = np.sin(th), np.cos(th)\n", + "\n", + " ###########################################################################\n", + " ############ TODO: Write the dynamics equation of your system #############\n", + " ###########################################################################\n", + " # Hint:\n", + " # You don't need to implement integration rules for your dynamic system.\n", + " # Remember that DAM implemented action models in continuous-time.\n", + " m = m1 + m2\n", + " mu = m1 + m2 * s**2\n", + " xddot, thddot = cartpole_dynamics(\n", + " self, data, x, u\n", + " ) # Write the cartpole dynamics here\n", + " data.xout = np.matrix([xddot, thddot]).T\n", + "\n", + " # Computing the cost residual and value\n", + " data.r = np.matrix(self.costWeights * np.array([s, 1 - c, y, ydot, thdot, f])).T\n", + " data.cost = 0.5 * sum(np.asarray(data.r) ** 2).item()\n", + "\n", + " def calcDiff(model, data, x, u=None):\n", + " # Advance user might implement the derivatives in cartpole_analytical_derivatives\n", + " cartpole_analytical_derivatives(model, data, x, u)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can get the solution of the cartpole dynamics by uncommenting the following line:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + 
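The calc method above delegates to a cartpole_dynamics helper that the exercise asks you to write. A minimal sketch of such a helper, simply transcribing the two acceleration equations from the markdown (the official answer lives in solutions/cartpole_dyn.py and may differ in details), could look like this:

import numpy as np

def cartpole_dynamics(model, data, x, u):
    # Transcription of the acceleration equations above; 'data' is unused here
    # but kept so the signature matches the call made inside calc().
    th, thdot = x[1].item(), x[3].item()
    f = u[0].item()
    m1, m2, l, g = model.m1, model.m2, model.l, model.g
    s, c = np.sin(th), np.cos(th)
    m = m1 + m2
    mu = m1 + m2 * s**2
    thddot = (c * f / l + m * g * s / l - m2 * c * s * thdot**2) / mu
    xddot = (f + m2 * c * s * g - m2 * l * s * thdot**2) / mu
    return xddot, thddot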
"outputs": [], + "source": [ + "# %load solutions/cartpole_dyn.py" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You may want to check your computation. Here is how to create the model and run the calc method." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "cartpoleDAM = DifferentialActionModelCartpole()\n", + "cartpoleData = cartpoleDAM.createData()\n", + "x = cartpoleDAM.state.rand()\n", + "u = np.zeros(1)\n", + "cartpoleDAM.calc(cartpoleData, x, u)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## II. Write the derivatives with DAMNumDiff\n", + "\n", + "In the previous exercise, we didn't define the derivatives of the cartpole system. In crocoddyl, we can compute them without any additional code thanks to the DifferentialActionModelNumDiff class. This class computes the derivatives through numerical differentiation.\n", + "\n", + "In the following cell, you need to create a cartpole DAM that computes the derivates using NumDiff." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Creating the carpole DAM using NumDiff for computing the derivatives.\n", + "# We specify the withGaussApprox=True to have approximation of the\n", + "# Hessian based on the Jacobian of the cost residuals.\n", + "cartpoleND = crocoddyl.DifferentialActionModelNumDiff(cartpoleDAM, True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After creating your cartpole DAM with NumDiff. We would like that you answer the follows:\n", + "\n", + " - 2 columns of Fx are null. Wich ones? Why?\n", + "\n", + " - can you double check the values of Fu?\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %load solutions/cartpole_fxfu.py" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## III. Integrate the model\n", + "\n", + "After creating DAM for the cartpole system. We need to create an Integrated Action Model (IAM). Remenber that an IAM converts the continuos-time action model into a discrete-time action model. For this exercise we'll use a simpletic Euler integrator." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %load solutions/cartpole_integration.py\n", + "###########################################################################\n", + "################## TODO: Create an IAM with from the DAM ##################\n", + "###########################################################################\n", + "# Hint:\n", + "# Use IntegratedActionModelEuler" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## IV. Write the problem, create the solver\n", + "\n", + "First, you need to describe your shooting problem. For that, you have to indicate the number of knots and their time step. For this exercise we want to use 50 knots with $dt=$5e-2.\n", + "\n", + "Here is how we create the problem." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Fill the number of knots (T) and the time step (dt)\n", + "x0 = np.array([0.0, 3.14, 0.0, 0.0])\n", + "T = 50\n", + "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, cartpoleIAM)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Problem can not solve, just integrate:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "us = [np.zeros(cartpoleIAM.differential.nu)] * T\n", + "xs = problem.rollout(us)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In cartpole_utils, we provite a plotCartpole and a animateCartpole methods. Let's display this rollout!\n", + "\n", + "Note that to_jshtml spawns the video control commands." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "HTML(animateCartpole(xs).to_jshtml())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %load solutions/cartpole_ddp.py\n", + "# #########################################################################\n", + "# ################# TODO: Create the DDP solver and run it ###############\n", + "# ##########################################################################" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "HTML(animateCartpole(ddp.xs.tolist()).to_jshtml())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Tune the problem, solve it\n", + "\n", + "Give some indication about what should be tried for solving the problem.\n", + "\n", + "\n", + " - Without a terminal model, we can see some swings but we cannot stabilize. What should we do?\n", + "\n", + " - The most important is to reach the standing position. Can we also nullify the velocity?\n", + "\n", + " - Increasing all the weights is not working. 
How to slowly increase the penalty?\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %load solutions/cartpole_tuning.py\n", + "# ##########################################################################\n", + "# ################# TODO: Tune the weights for each cost ###################\n", + "# ##########################################################################\n", + "terminalCartpole = DifferentialActionModelCartpole()\n", + "terminalCartpoleDAM = crocoddyl.DifferentialActionModelNumDiff(terminalCartpole, True)\n", + "terminalCartpoleIAM = crocoddyl.IntegratedActionModelEuler(terminalCartpoleDAM)\n", + "\n", + "terminalCartpole.costWeights[0] = 0 # fix me :)\n", + "terminalCartpole.costWeights[1] = 0 # fix me :)\n", + "terminalCartpole.costWeights[2] = 0 # fix me :)\n", + "terminalCartpole.costWeights[3] = 0 # fix me :)\n", + "terminalCartpole.costWeights[4] = 0 # fix me :)\n", + "terminalCartpole.costWeights[5] = 0 # fix me :)\n", + "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, terminalCartpoleIAM)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Creating the DDP solver\n", + "ddp = crocoddyl.SolverDDP(problem)\n", + "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n", + "\n", + "# Solving this problem\n", + "done = ddp.solve([], [], 300)\n", + "print(done)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "HTML(animateCartpole(ddp.xs.tolist()).to_jshtml())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Using analytical derivatives\n", + "\n", + "You can get the solution of the analytical derivatives by uncommenting the following line:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# %load solutions/cartpole_analytical_derivatives.py\n", + "def cartpole_analytical_derivatives(model, data, x, u=None):\n", + " pass" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The analytical derivatives being defined, we do not need to use DAMNumDiff to numerically approximate the derivatives." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "timeStep = 5e-2\n", + "cartpoleIAM = crocoddyl.IntegratedActionModelEuler(cartpoleDAM, timeStep)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can now run again all the blocks in \"IV. Write the problem, create the solver\" since cartpoleIAM has been redefined to directly used the analytical derivatives." 
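Once cartpole_analytical_derivatives is actually filled in, it is worth comparing it against the NumDiff model before relying on it inside the solver. A small sanity-check sketch (assuming the cartpoleDAM and cartpoleND objects created earlier, and the usual Fx/Fu fields of the action data) could be:

import numpy as np

# Compare analytical vs. numerical Jacobians at a random state and control.
x = cartpoleDAM.state.rand()
u = np.random.rand(1)

dataA = cartpoleDAM.createData()
cartpoleDAM.calc(dataA, x, u)
cartpoleDAM.calcDiff(dataA, x, u)

dataN = cartpoleND.createData()
cartpoleND.calc(dataN, x, u)
cartpoleND.calcDiff(dataN, x, u)

print("Fx close:", np.allclose(dataA.Fx, dataN.Fx, atol=1e-4))
print("Fu close:", np.allclose(dataA.Fu, dataN.Fu, atol=1e-4))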
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" }, - "nbformat" : 4, - "nbformat_minor" : 2 + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 } diff --git a/examples/notebooks/cartpole_swing_up_sol.ipynb b/examples/notebooks/cartpole_swing_up_sol.ipynb index 231ac976b..0ee4f243a 100644 --- a/examples/notebooks/cartpole_swing_up_sol.ipynb +++ b/examples/notebooks/cartpole_swing_up_sol.ipynb @@ -1,587 +1,409 @@ { - "cells" : [ - { - "attachments" : { - "image.png" : { - "image/png" : - "iVBORw0KGgoAAAANSUhEUgAAASgAAACrCAYAAADVVAHbAAAABHNCSVQICAgIfAhk" - "iAAAGbNJREFUeF7tXQuwTdUbXyJy3byi4Ro3Qq5KoohikMF4VHdQGKkmmR7Ie0iR" - "1/gzIRqv8miirrfCyKNkTOOdVyjiJvQQFaNk6Mr//" - "Faz9ux9zrmPc+5+rL33b83ccc8+e631fb9v+" - "921vv2t7ytyI9IEGxEoAALXr18XRYsWLcCdvIUI2IPATfYMw1HCgECxYsXEmTNnw" - "qAqddQEARKUJobQXQxFTJ999pnuolK+ACFAggqQMZ1UJSsrSw4/" - "fPhwJ6fh2ETAggAJig9EgRDYuHGjvO/8+fPit99+K1Af3kQECosACaqwCIag/" - "+HDh8WWLVsMTdevXx8CramiDgiQoHSwguYyrFu3zpCwXr16YsyYMZpLTPGCggAJK" - "iiWdEgPhBYsXrxYTJ06VdSoUUPMmTNHZGdnC6yq2IiA0wiQoJxG2Ofj79mzRxw8e" - "FDcc889kpguXbokNTKvqnyuIsXXGAESlMbG0UE0RUh16tSR4qSlpYmHH36YjnIdj" - "BMCGYowkjwEVi6kinhrV6JECVG6dGmxe/dukZGRIVJSUhhVXkhc2T1/" - "BLiCyh+j0N9RoUIFAwP4pG699VaSU+" - "ifCncAKObONJyFCCSOAKLXBwwYIC5fviw7z507V1StWjXxgdjDtwhwBeVb0wVbcJ" - "BTenq6WLVqlWjYsKFUFp9XrlwZbMWpnQUBrqD4QGiJwIoVK6RcOTk5xnZy6NChYv" - "ny5aJz585aykyh7EeAKyj7MeWINiCwa9cuMXHiRIuvq1mzZiI1NdWG0TmEXxAgQf" - "nFUiGS888//" - "xRLly4V9evXt2itDiyHCIrQq0qCCv0joB8Ap06dkkI1aNDAEA5vD0FazZs3109gS" - "uQYAiQox6DlwMkisH//ftm1XLlyxhAXLlyQv0evqpKdg/38gQAJyh92CpWUe/" - "fuFW+88YbF/" - "3TixAmJwb333hsqLMKuLAkq7E+" - "AhvqvWbPGCC1Q4iGTZ9u2bTWUliI5iQAJykl0OXbCCBw7dkycPHlS1K5d29J31Kh" - "RomXLlgmPxw7+RoAE5W/" - "7BU56OMhLlSolatasaeimMng++uijgdOXCuWNAAkqb3z4rcsIYCvXs2dPi/" - "8JKyo0FVHuskiczkMESFAegs+" - "pYxGYPHlyDBEtW7Ys5lpsT14JIgIkqCBa1ac6qdJWjzzyiEUDOsh9alAbxCZB2QA" - "ih7AHAcQ/VaxY0eIgR1Q5MnrimAtb+BAgQYXP5tpqPGnSpJiDwE8//" - "bSUt1WrVtrKTcGcQ4AE5Ry2HDlBBLBSqlatmtELx1sQE4WEeUWLFk1wNN4eBARIU" - "EGwYgB0gP8JiekyMzMNbTZv3ix/" - "Hz16dAA0pArJIECCSga1EPdxaiXz7bffSlTNAZpIuYLWqFGjECMebtWZsC7c9k9Y" - "+w0bNgj8XLt2TYwfPz7X/kOGDJHbtQceeEA0btw41/" - "vUFwgliD7Ksm3bNvl1pUqV8u3PG4KJAKu6BNOutmuFt2mo6qIajp2geEK8VqxYMZ" - "mqVzW8mUPqlN69e4sqVarEJawiRYrI4qADBw6U3eB/" - "wjgbN24Ubdq0iTcNr4UAAa6gQmBkO1WcP3++" - "ePzxx6XjOq8GQrt48aJA6ACc3zhLB7JBQ99XX31VFkQAyamjLF26dDGGVP4nc06o" - "vObjd8FEgCuoYNrVdq3UCgp18ZI5cgISOnv2rHj99dclIW3fvl3KiLS+" - "yPGE7V2kRqMh98yZM8WUKVPE999/b7suHNA/" - "CHAF5R9baSEptl7JNKya8LN69WrZ/" - "fDhwwIxTsOHD5efUa3Y3L788ksxePBgyzV+" - "CB8CfIsXPptroTESzx04cEDs2LFDZi8wZypQOclr1aqlhawUwjsESFDeYc+" - "ZIwjgDR/inzp06GDgceTIEfk7/" - "U98REhQfAY8RWDnzp1yfnP+J6T8RTPnJPdUSE7uGQIkKM+g58RAAJkKsMUzvxWE/" - "2ns2LE83sJHRJCg+BB4igCCMUeOHGmRAeWl7r77bk/" - "l4uR6IECC0sMOoZQCbwQRG2V2kOPtHhqrt4TykYhRmgQVAwkvuIWACsasXr26MSU" - "COxGlHl00wS2ZOI9eCJCg9LJHqKRRCerM/" - "ic4yElOoXoM8lSWBJUnPPzSSQS2bNki+vXrZ0yBLd/" - "06dNjgjadlIFj640ACUpv+wRWOuV/" - "at26taGjqh7crl27wOpNxRJDgASVGF682yYE9uzZI0dCdgPV4CCHPyq/" - "g8g2icBhfIAACcoHRgqiiMePH5fxT2lpaYZ648aNk5kS2IiAQoAExWfBdgSwfcvv" - 
"UPHWrVsFiiSYM3QiLQsS3LERARIUnwHbETh27Jg8slKmTBlJNCqmKXoikBfySlWt" - "WtX4SuWEQuoVNiJAguIzYCsCKHqQkZEhsrOz5eFfrIbq1q0r1Fk782Tx8o/" - "v27dP3sIATVvN4vvBuMXzvQn1UGDo0KFSkK5du4rTp0/Lf9H+97//" - "xWz3sNKqUaOGJd4JZ/Kic0LpoRml8BIBJqzzEv2AzI3adTg/" - "V69ePbFkyRKpFf49d+6crGv3888/" - "W7ZzWVlZomPHjhbtZ8+eLeAkZyMCZgS4guLzUCgE4E+aNWuWHOPDDz+0jPXXX3/" - "JzyVLljSuIxkdCipEJ6jDttCck9wyED+EFgGuoEJrevsUV/4jOMeVs/" - "vKlSsiNTU1ZhKsptDMx1mQVRPN7DSP6cgLoUSABBVKs9unNAjn/" - "PnzcsD09PS4A4Os0LDa6t+/vyy6YCaojz/+OKYmXtyBeDF0CHCLFzqT26sw/" - "El5NSSeUysjHGVBepXoWKevv/" - "5aPPbYY3kNw+9CigAJKqSGt0vt4sWLy6EWLlwocnJyxKVLl+S/" - "qgZe+fLljalOnTolfze/rVMlqBhBbpdFgjUOt3jBsqdn2sD/" - "hKhwVW0YVYLRzJVZFixYIK9lZmYacqJWHpr5yIvxJX8JPQJcQYX+" - "ESgcANeuXZMDIFBTtZUrV8oVFIpxmsuWIxQBLSUlxbgX/" - "ifERJmPvBhf8pfQI0CCCv0jUDgAsBpCBoK+" - "ffvKoy2bNm0ywgWQ20k19XYPW0EzGSEnlCreWThJ2DuICJCggmhVF3XC2zgV04Sj" - "LVg1oeFf85s6HA5G69mzpyEdSAsEhQBPNiIQDwESVDxUeC0hBN58800xY8YMow/" - "e3EUHba5fvz7mKIuKnzLnJE9oYt4ceAToJA+8iZ1XEGEEffr0kT+" - "5NRx9mTZtmuVrVBCuWLEiE9TlBhqvsy4enwHnEcDxFhxladasmWWytWvXitdee81" - "5ATiDbxHgFs+3pvOP4J988okU1uyTQlQ5/" - "E9Nmzb1jyKU1HUESFCuQx6+Cbdv3y5DCcxN+" - "Z8qVaoUPkCocYERIEEVGCremCwCOMoS7Z86dOiQHK5s2bLJDst+" - "IUCABBUCI3upIrZyWEG1b9/" - "eIgau4eCwijz3UkbOrS8CJCh9bRMIyVR639tuu82iz+" - "rVq2NIKxAKUwlbESBB2QonB4tGAKl84X8y17pDnnIEafL8XTRa/" - "ByNAAkqGhF+thWBxYsXxxxlQSpgRI+zQIKtUAdyMBJUIM2qj1I//" - "PBDDBGhHl6LFi30EZKSaIsACUpb0/" - "hfMJQ3R7ZN1MpTDUGbcJBHB236X1tq4AQCJCgnUOWYEoGjR4/K8uZm/" - "9PFixfld+agTcJFBHJDgASVGzK8XmgEsL0bOXKkMQ5WT6ifB6c5/" - "U+FhjcUA5CgQmFmb5RE4ro6deoYk1+" - "9elXWz2vdurU3AnFW3yFAgvKdyfwhMMIIUP78zjvvNARG0QS06KIJ/" - "tCIUnqBAAnKC9RDMKc6a2deQc2bN09qTgd5CB4Am1QkQdkEJIexIoBcT6jUYk7v+" - "+OPP8qbzG/" - "1iBsRyAsBElRe6PC7pBH44IMPLBk0cSZPFVJggYSkYQ1dRxJU6EzuvMLK/1S/" - "fn1jMlXyfPny5c4LwBkCgwAJKjCm1EeRkydPSmGaNGliCIUzeYiJYvYCfezkB0lI" - "UH6wks9kRIAmqgebyQhFFF544QWfaUJxvUaABOW1BQI4/" - "5QpU4xSVFBPpfft1q1bALWlSk4iQIJyEt0Qjg0yQvzT/fffb2h/" - "4cIF+XuVKlVCiAhVLgwCJKjCoMe+" - "MQioYEzzURZV8hzlqdiIQCIIkKASQYv35ovAsWPH5D3mw8AoiY6YKDYikCgCJKhE" - "EeP9eSKwZs0a0bVrV8s9e/fuFb169cqzH78kAvEQIEHFQ4XXkkZg/vz5whz/" - "BJ8U8kKZryU9ODuGDgESVOhM7pzCSKeClpmZaUyybt06+" - "TvLSzmHe5BHJkEF2bou66aixc3+J7zRQ/" - "5xBmi6bIyATEeCCoghdVADwZgNGza0iILy5n379tVBPMrgQwRIUD40mq4if/" - "HFF+K+++4zxFMBmsyeqavF9JeLBKW/jXwjIYohmMMJVNFOplfxjQm1E5QEpZ1J/" - "CnQmTNnpOAtW7Y0FFAxUeaiCf7UjlJ7hQAJyivkAzZvVlaW1MjsDD9w4ICYOHFiw" - "DSlOm4iQIJyE+0Az7V79+6YYEz4pBj/" - "FGCju6AaCcoFkMMwxapVqyz+" - "J2z54JMqV65cGNSnjg4hQIJyCNgwDat8TebV0v79+yUEGRkZYYJCC11x3Ojll1+" - "OkWXmzJli3LhxMdd1vkCC0tk6PpFt27ZtUtK0tDRDYgRo4kweAzTdNyJKfc2ZM8e" - "SIPDtt9+" - "W8WiVK1d2X6BCzEiCKgR47PofAggnQLS4uRgCiiY89NBDhMgDBBB3hoymOBeJWLR" - "FixaJQYMGyRcWfstqSoLy4AEK2pTIN24ucY6iCdnZ2aJp06ZBU9U3+" - "kybNk3K2qNHD/" - "HMM8+Il156SQwbNsw38itBSVC+" - "M5leAqsKLuZocVW0s0GDBnoJGyJpgH2NGjVkqXm0ESNG+" - "FJ7EpQvzaaP0IqMzP4nFO3Efw7Wv/" - "POTsC+T58+UoCFCxcKv2YzJUF59wz5fma8vRszZkyMM3zChAmie/" - "fuvtfPzwrs3LlT+p3QvvvuO9+qQoLyrem8F/" - "zUqVMy1qlNmzaGMHDKYtt31113eS9giCVQW7rq1auL8ePHC5Wry2+" - "QkKD8ZjGN5D1+/LiUpm7duoZUqmhCu3btNJI0XKIg1glpbhDd/" - "9Zbb0nld+zY4UsQSFC+" - "NJseQiO2BgeBzc5wbPvwV5sHhL2xEYIxR40aJf1OyM2lVrevvPKKDDnwWyNB+" - "c1imsirQglatWplcYbj0DBebbN5g8DatWvFkCFDRM+ePaUACJRFPFRqaqpQGU+" - "9kSy5WYsl1429wo7AyZMnJQQdOnSwQIHX2vgPweYNAhs2bIiZ+" - "Pnnnxf48WPjCsqPVtNA5tOnT8vDweovNURSjljWwNPAQAERgQQVEEO6rcbnn38uH" - "nzwQcu0yhFL/" - "5Pb1khuPj+" - "82SNBJWfb0PZCACD8TziMWq1aNQsOn376aUzRztACpbniqPZcunRp0a1bN6Gyoeo" - "oMglKR6toKFNKSoqUqmTJkuLKlSvy9xYtWlgknT59unjyySct1/" - 
"hBTwTuuOMO+" - "ZYPPsP09HSZngWkpVtz3UmOv76IleExCN0ehbzluXDhgrwBYQRI5YtQgrNnz8ofN" - "Pik0PAZlYTZ9EdgwYIFYuvWrTINC1bE+EGKHDjUmzRpokeqnBsut06dOt2ImI4/" - "xIDPgMbPAP6f6tCKQAg3uR77XWT8K1GihJvTcq5CIgB7IW0HAgDxL97UPfHEE+" - "Lff/8VN910k3jnnXdknM3s2bOFWm0Vckp2dxAB2KxMmTJi/" - "fr1lrAQrIwRfd65c2cHZ09gaB1YkjLoj0BOTo5c8axYsUL+" - "GzlGYRG6bdu2NyI5h/" - "RXhBJKBC5dunSjV69exioW9lu9evUN2Fmn5roPKgHu5K0aIfD3339LaVQK3+" - "gS5xs3bvTteS+" - "NYHZNFBz0RkDt2LFjxXPPPadtOhbXt3iuWYAT2YoAYmbwWhrEVL58eWGOWMbbHxw" - "YjvxV1sOxaqvmwR0MNtU9ZzxXUMF9/hzRDG/oIts8y9jr1q2Tn3V/" - "2B0BxMeD+sFejIPy8QPmlei1a9e2TI0SU0jSz0YE7EaAWzy7EQ3oeGqLB/" - "UiTlSLlkWKFBHwQZkT1wUUBqrlMgJcQbkMuN+" - "nQ65xc1PRx3Xq1PG7apRfQwRIUBoaRWeROnbsaBFPVRAuW7aszmJTNp8iQILyqeG" - "8EhuHS1VDYn4Ebfbv358Ocq8MEvB5SVABN7Bd6sXLxnju3Dk5fLNmzeyahuMQAQs" - "CJCg+EAVCAKlUohuOSaBFv9WLvo+" - "fiUCyCJCgkkUuZP127dpl0Rhv9XD6vV69esJcVThksFBdhxEgQTkMcFCGP3r0qEU" - "VteWrVKlSUFSkHhoiQILS0Ci6iQRn+" - "MGDBy1iIbygYsWKYu7cubqJS3kChAAJKkDGdEqVvXv3xgyNpHVIuVK1atWY73iBC" - "NiFAAnKLiQDPA5inWbMmCGzaKqGctqNGzcOsNZUTQcEeNRFBys4KAMqzW7atCmhG" - "cyn3JGiefv27fIoSyRnkCynDaLC9g4O8lKlSolff/" - "1VJq5TtfJU2anNmzeLpk2byrnRH/0QmgC/1aRJk/RJipYQOrzZTQSYzcBNtD2YS/" - "mIkGM6mfbLL7/IbmlpaUb3ffv2yd/zG9NcMy+SzE72QWZOEB7KVmmTtdHQjL/" - "ohgAJSjeLOCDPs88+KwYOHJjUyFh9YeV08803y/7IA3XkyBGZXB/" - "pfZNpWNWpIzLJ9Gef8CBAggqBra9du5a0luoQ8Lx58+QY77//" - "vtzKITd5sq0w8iQ7J/v5EwESlD/" - "t5prUeEsHv1OHDh3knCAnpIo1b99cE4YThQ4BElToTJ64wkjzC19UsWLFJFlF5yN" - "PfET2CBICqNSE50K5AezQ7Z9//hHNmzcXkqCQxhWloIoXL57U2EjA/" - "tVXXyXd3zwplv+9e/" - "cWffr0SUoWdioYAnio3nvvPePm1NRUkZmZKWrWrBm3qKoqmnD9+vW4E+" - "A6tn94FlTD2zo4x1mkNS5k2l5EYG5+L0DcEL5ly5b/" - "EdRHH30kULZal0YHqnOWQNjAU089JbZs2SInqVChgixlfvnyZTF8+" - "HB5bceOHQnFOMHpjeq08RquI4c539jFQ0fPayh+" - "kUzDyxSEk6iGlXblypWTGUpcvXpV9OjR4z+" - "CGjx4sGjfvn1SAxW2U7ly5WRcDBpWcbk96IWdh/" - "2FwKopPT1dxiNFaqBJv5J5dYO/" - "nCiFjb+" - "ecILn52dCGfQRI0aIVatWyfJFAwYMECkpKRJqjIt4KjjXu3TpIr8fOXIkzeAjBIY" - "MGSKLeOa2ao5WBTY332vHyllu8eAI1eHIQqNGjaJ15mebEMCDA3KKFGuUpBGvITI" - "cP3hzh6rBIJvcVj4gn3bt2kmneW75yFE1BOENCNaEbeFCGDZsWLypeU1DBFBmDC0" - "Roknk3oKozKMuBUHJ5/fAv4TUKIj+fvfdd/" - "PVBpHgEydOlCsfbAnjNUVycI7mVywBS32s2LCFxKorWV9nPDl4zTkEdLATCco5+" - "2oxMh6yP/74Q4wePVpMmTKlwH8NsdJCw5YvXhs0aJDo169fgd/oqTAFrLrY/" - "IFAbn+" - "c3JSeBOUm2h7NhchvPGzxttBDhw4VixYtipEMznOsuE6cOGH5TvmWcDGe3xLzIG8" - "5/" - "F3mhn6dOnWSW0IQpp2vpGOE54XAIECCCowp4ysCIsA2DA0hBNFt8uTJIl46FdxXr" - "Vo1cfz4censVgnqEA8FwsttvN9//10sXbpU/" - "PTTT9FTidtvv11e27ZtW8x3vEAE4iFAgoqHSoCuIXxAZRlAdoHodvr0aTFu3Ljoy" - "/JtzDfffCOjxuEszcjIkPfAea5iZOKNh/" - "zkhw4dihumoFZq5lfRMRPzAhEwIUCCCvjjgNUMXhcjNcqyZctitMXbW7xti25ZWV" - "kyNgrObazAFKngXxAQYl6mTp0qV1fRLV6OcjjHMR58WyBF5JdiIwL5IhApY61Niw" - "QIoqb2jchDrI1MQREkQjIS2wjB5KtSJFAvxg7qGsZBi5CMvCcSU3Xj/" - "PnzeY6Zk5NzIxIVfCNCknnexy/" - "1QADPCGwb+cPmuUBaraCqVKmSL6HyhuQQwKt+" - "rKSw8kGIQLzgOzi4kV6lfv36AiXOzSEJKgBTzY6VFyLEsX1E7BSOS8UbE9exJcQq" - "DDmg2PRHIN6q2CuptTosrM57LVmyRNSqVcsrTAI5rzmmBWcdJ0yYIF588UVDV5AT" - "HOaqIRYKsVM4doQ4KrUNXLBggcwJBac5riMqPTs7W74hBAl2795d4Dwl5sN2UG0N" - "QXg4JYBjNfgPgBMEbHoigJMB2jTP13AmAdQ2JAKOXGLyhxjwGfDuGYgcT/" - "KcHrTKSY6/rLNmzRJ//" - "fWX6wSO84DqTCBif+" - "yUoXz58qJMmTLyR7WzZ88a87murGlChCGYKwMjzQV0j17h4HrdunWl0/" - "yWW26RI+AehB2o/" - "ihFpRqO1aijEl7qx7mTRwBpnuO9QEl+" - "xMR7akVQiYvPHm4hgD8eIBz4kpgPyi3U7ZsHB7VhN1XQwr6RnR1JKx+" - "Us6pydDsQiOcIt2NcjuEsAsghb17BOzubfaOToOzDkiMRAW0RiBebpq2wJsG0CjP" - "wA2CUkQgQAfcQIEG5hzVnIgJEIEEESFAJAsbbiQARcA8BEpR7WHMmIkAEEkSABJU" - "gYLydCBAB9xAgQbmHNWciAkQgQQRIUAkCxtuJABFwDwESlHtYcyYiQAQSRIAElSB" - 
"gvJ0IEAH3ECBBuYc1ZyICRCBBBEhQCQLG24kAEXAPARKUe1hzJiJABBJEgASVIGB" - "hvR0pf1F4gWmZw/oEeKP3/wHnHFgv4b8ucgAAAABJRU5ErkJggg==" - } - }, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "# Cartpole optimal control problem\n", - "\n", - "![image.png](attachment:image.png)\n", - "\n", - "A cartpole is another classical example of control. In this system, " - "an underactuated pole is attached on top of a 1D actuacted cart. The " - "game is to raise the pole to a standing position.\n", - "\n", - "The model is here:\n", - "https://en.wikipedia.org/wiki/Inverted_pendulum\n", - "\n", - "We denote by $m_1$ the cart mass, $m_2$ the pole mass, $l$ the pole " - "length, $\\theta$ the pole angle w.r.t. the vertical axis, $x$ the " - "cart position, and $g=$9.81 the gravity.\n", - "\n", - "The system acceleration can be rewritten as:\n", - "\n", - "$$\\ddot{\\theta} = \\frac{1}{\\mu(\\theta)} \\big( \\frac{\\cos " - "\\theta}{l} f + \\frac{mg}{l} \\sin(\\theta) - m_2 \\cos(\\theta) " - "\\sin(\\theta) \\dot{\\theta}^2\\big),$$\n", - "\n", - "$$\\ddot{x} = \\frac{1}{\\mu(\\theta)} \\big( f + m_2 \\cos(\\theta) " - "\\sin(\\theta) g -m_2 l \\sin(\\theta) \\dot{\\theta} \\big),$$\n", - "\n", - "$\\hspace{12em}$with $$\\mu(\\theta) = m_1+m_2 \\sin(\\theta)^2,$$\n", - "\n", - "where $f$ represents the input command.\n" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## I. Differential Action Model\n", "\n", - "A Differential Action Model (DAM) describes the action " - "(control/dynamics) in continous-time. In this exercise, we ask you to " - "write the equation of motions for the cartpole.\n", - "\n", - "For more details, see the instructions inside the " - "DifferentialActionModelCartpole class." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "import crocoddyl\n", - "import pinocchio\n", - "import numpy as np\n", - "\n", - "\n", - "class " - "DifferentialActionModelCartpole(crocoddyl." 
- "DifferentialActionModelAbstract):\n", - " def __init__(self):\n", - " crocoddyl.DifferentialActionModelAbstract.__init__(\n", - " self, crocoddyl.StateVector(4), 1, 6\n", - " ) # nu = 1; nr = 6\n", - " self.unone = np.zeros(self.nu)\n", - "\n", - " self.m1 = 1.0\n", - " self.m2 = 0.1\n", - " self.l = 0.5\n", - " self.g = 9.81\n", - " self.costWeights = [\n", - " 1.0,\n", - " 1.0,\n", - " 0.1,\n", - " 0.001,\n", - " 0.001,\n", - " 1.0,\n", - " ] # sin, 1-cos, x, xdot, thdot, f\n", - "\n", - " def calc(self, data, x, u=None):\n", - " if u is None:\n", - " u = model.unone\n", - " # Getting the state and control variables\n", - " y, th, ydot, thdot = x[0], x[1], x[2], x[3]\n", - " f = u[0]\n", - "\n", - " # Shortname for system parameters\n", - " m1, m2, l, g = self.m1, self.m2, self.l, self.g\n", - " s, c = np.sin(th), np.cos(th)\n", - "\n", - " # Defining the equation of motions\n", - " m = m1 + m2\n", - " mu = m1 + m2 * s**2\n", - " xddot = (f + m2 * c * s * g - m2 * l * s * thdot**2) / mu\n", - " thddot = (c * f / l + m * g * s / l - m2 * c * s * thdot**2) " - "/ mu\n", - " data.xout = np.matrix([xddot, thddot]).T\n", - "\n", - " # Computing the cost residual and value\n", - " data.r = np.matrix(self.costWeights * np.array([s, 1 - c, y, " - "ydot, thdot, f])).T\n", - " data.cost = 0.5 * sum(np.asarray(data.r) ** 2)\n", - "\n", - " def calcDiff(self, data, x, u=None):\n", - " # Advance user might implement the derivatives\n", - " pass" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["You may want to check your computation. Here is how to " - "create the model and run the calc method."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "cartpoleDAM = DifferentialActionModelCartpole()\n", - "cartpoleData = cartpoleDAM.createData()\n", - "x = cartpoleDAM.state.rand()\n", "u = np.zeros(1)\n", - "cartpoleDAM.calc(cartpoleData, x, u)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## II. Write the derivatives with DAMNumDiff\n", "\n", - "In the previous exercise, we didn't define the derivatives of the " - "cartpole system. In crocoddyl, we can compute them without any " - "additional code thanks to the DifferentialActionModelNumDiff class. " - "This class computes the derivatives through numerical " - "differentiation.\n", - "\n", - "In the following cell, you need to create a cartpole DAM that " - "computes the derivates using NumDiff." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Creating the cartpole DAM\n", - "cartpoleND = crocoddyl.DifferentialActionModelNumDiff(cartpoleDAM, " - "True)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "After creating your cartpole DAM with NumDiff. We would like that you " - "answer the follows:\n", - "\n", " - 2 columns of Fx are null. Wich ones? Why?\n", "\n", - " - can you double check the values of Fu?\n" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## III. Integrate the model\n", "\n", - "After creating DAM for the cartpole system. We need to create an " - "Integrated Action Model (IAM). Remenber that an IAM converts the " - "continuos-time action model into a discrete-time action model. For " - "this exercise we'll use a simpletic Euler integrator." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "timeStep = 5e-2\n", - "cartpoleIAM = crocoddyl.IntegratedActionModelEuler(cartpoleND, " - "timeStep)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## IV. Write the problem, create the solver\n", "\n", - "First, you need to describe your shooting problem. For that, you have " - "to indicate the number of knots and their time step. For this " - "exercise we want to use 50 knots with $dt=$5e-2.\n", - "\n", "Here is how we create the problem." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Fill the number of knots (T) and the time step (dt)\n", - "x0 = np.matrix([0.0, 3.14, 0.0, 0.0]).T\n", "T = 50\n", - "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, " - "cartpoleIAM)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Problem cannot solve, just integrate:"] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "us = [pinocchio.utils.zero(cartpoleIAM.differential.nu)] * T\n", - "xs = problem.rollout(us)" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["In cartpole_utils, we provite a plotCartpole and a " - "animateCartpole methods."] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "%%capture\n", "%matplotlib inline\n", - "from cartpole_utils import animateCartpole\n", "\n", - "anim = animateCartpole(xs)\n", "\n", - "# If you encounter problems probably you need to install " - "ffmpeg/libav-tools:\n", - "# sudo apt-get install ffmpeg\n", "# or\n", - "# sudo apt-get install libav-tools" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "And let's display this rollout!\n", "\n", - "Note that to_jshtml spawns the video control commands." - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "from IPython.display import HTML\n", "\n", - "# HTML(anim.to_jshtml())\n", "HTML(anim.to_html5_video())" - ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["Now we want to create the solver (SolverDDP class) and run " - "it. Display the result. **Do you like it?**"] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Creating the DDP solver\n", "ddp = crocoddyl.SolverDDP(problem)\n", - "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n", "\n", - "# Solving this problem\n", "done = ddp.solve([], [], 1000)\n", - "print done\n", "print ddp.us" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {"scrolled" : true}, - "outputs" : [], - "source" : [ - "%%capture\n", "%matplotlib inline\n", "\n", "# Create animation\n", - "anim = animateCartpole(ddp.xs)" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : - [ "# HTML(anim.to_jshtml())\n", "HTML(anim.to_html5_video())" ] - }, - { - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## Tune the problem, solve it\n", "\n", - "Give some indication about what should be tried for solving the " - "problem.\n", - "\n", "\n", - " - Without a terminal model, we can see some swings but we cannot " - "stabilize. 
What should we do?\n", - "\n", - " - The most important is to reach the standing position. Can we also " - "nullify the velocity?\n", - "\n", - " - Increasing all the weights is not working. How to slowly increase " - "the penalty?\n", - "\n" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "######################################################################" - "#####\n", - "################## TODO: Tune the weights for each cost " - "###################\n", - "######################################################################" - "#####\n", - "terminalCartpole = DifferentialActionModelCartpole()\n", - "terminalCartpoleDAM = " - "crocoddyl.DifferentialActionModelNumDiff(terminalCartpole, True)\n", - "terminalCartpoleIAM = " - "crocoddyl.IntegratedActionModelEuler(terminalCartpoleDAM)\n", - "\n", "terminalCartpole.costWeights[0] = 100\n", - "terminalCartpole.costWeights[1] = 100\n", - "terminalCartpole.costWeights[2] = 1.0\n", - "terminalCartpole.costWeights[3] = 0.1\n", - "terminalCartpole.costWeights[4] = 0.01\n", - "terminalCartpole.costWeights[5] = 0.0001\n", - "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, " - "terminalCartpoleIAM)" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Creating the DDP solver\n", "ddp = crocoddyl.SolverDDP(problem)\n", - "ddp.setCallbacks([ crocoddyl.CallbackVerbose() ])\n", "\n", - "# Solving this problem\n", "done = ddp.solve([], [], 300)\n", - "print done" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {}, - "outputs" : [], - "source" : [ - "%%capture\n", "%matplotlib inline\n", "\n", "# Create animation\n", - "anim = animateCartpole(ddp.xs)" - ] - }, - { - "cell_type" : "code", - "execution_count" : null, - "metadata" : {"scrolled" : true}, - "outputs" : [], - "source" : - [ "# HTML(anim.to_jshtml())\n", "HTML(anim.to_html5_video())" ] - } - ], - "metadata" : { - "kernelspec" : { - "display_name" : "Python 2", - "language" : "python", - "name" : "python2" - }, - "language_info" : { - "codemirror_mode" : {"name" : "ipython", "version" : 2}, - "file_extension" : ".py", - "mimetype" : "text/x-python", - "name" : "python", - "nbconvert_exporter" : "python", - "pygments_lexer" : "ipython2", - "version" : "2.7.17" + "cells": [ + { + "attachments": { + "image.png": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAASgAAACrCAYAAADVVAHbAAAABHNCSVQICAgIfAhkiAAAGbNJREFUeF7tXQuwTdUbXyJy3byi4Ro3Qq5KoohikMF4VHdQGKkmmR7Ie0iR1/gzIRqv8miirrfCyKNkTOOdVyjiJvQQFaNk6Mr//Faz9ux9zrmPc+5+rL33b83ccc8+e631fb9v+921vv2t7ytyI9IEGxEoAALXr18XRYsWLcCdvIUI2IPATfYMw1HCgECxYsXEmTNnwqAqddQEARKUJobQXQxFTJ999pnuolK+ACFAggqQMZ1UJSsrSw4/fPhwJ6fh2ETAggAJig9EgRDYuHGjvO/8+fPit99+K1Af3kQECosACaqwCIag/+HDh8WWLVsMTdevXx8CramiDgiQoHSwguYyrFu3zpCwXr16YsyYMZpLTPGCggAJKiiWdEgPhBYsXrxYTJ06VdSoUUPMmTNHZGdnC6yq2IiA0wiQoJxG2Ofj79mzRxw8eFDcc889kpguXbokNTKvqnyuIsXXGAESlMbG0UE0RUh16tSR4qSlpYmHH36YjnIdjBMCGYowkjwEVi6kinhrV6JECVG6dGmxe/dukZGRIVJSUhhVXkhc2T1/BLiCyh+j0N9RoUIFAwP4pG699VaSU+ifCncAKObONJyFCCSOAKLXBwwYIC5fviw7z507V1StWjXxgdjDtwhwBeVb0wVbcJBTenq6WLVqlWjYsKFUFp9XrlwZbMWpnQUBrqD4QGiJwIoVK6RcOTk5xnZy6NChYvny5aJz585aykyh7EeAKyj7MeWINiCwa9cuMXHiRIuvq1mzZiI1NdWG0TmEXxAgQfnFUiGS888//xRLly4V9evXt2itDiyHCIrQq0qCCv0joB8Ap06dkkI1aNDAEA5vD0FazZs3109gSuQYAiQox6DlwMkisH//ftm1XLlyxhAXLlyQv0evqpKdg/38gQAJyh92CpWUe/fuFW+88YbF/3TixAmJwb333hsqLMKuLAkq7E+AhvqvWbPGCC1Q4iGTZ9u2bTWUliI5iQAJykl0OXbCCBw7dkycPHlS1K5d29J31KhRomXLlgmPxw7+RoAE5W/7BU56OMhLlSolatasaeimMng++uijgdOXCuWNAAkqb3z4rcsIYCvXs2dPi/8JKyo0FVHuskiczkMESFAegs+pYxGYPHlyDBEtW7Ys5lpsT14JIgIkqCBa1ac6qdJWjzzyiEUDOsh9alAbxCZB2QAih7AHAcQ/VaxY0eIgR1Q5MnrimAtb+BAgQYXP5tpqPGnSpJiDwE8//bSUt1WrVtrKTcGcQ4AE5Ry2HDlBBLBSqlatmtELx1sQE4WEeUWLFk1wNN4eBARIUEGwYgB0gP8JiekyMzMNbTZv3ix/Hz16dAA0pArJIECCSga1EPdxaiXz7bffSlTNAZpIuYLWqFGjECMebtWZsC7c9k9Y+w0bNgj8XLt2TYwfPz7X/kOGDJHbtQceeEA0btw41/vUFwgliD7Ksm3bNvl1pUqV8u3PG4KJAKu6BNOutmuFt2mo6qIajp2geEK8VqxYMZmqVzW8mUPqlN69e4sqVarEJawiRYrI4qADBw6U3eB/wjgbN24Ubdq0iTcNr4UAAa6gQmBkO1WcP3++ePzxx6XjOq8GQrt48aJA6ACc3zhLB7JBQ99XX31VFkQAyamjLF26dDGGVP4nc06ovObjd8FEgCuoYNrVdq3UCgp18ZI5cgISOnv2rHj99dclIW3fvl3KiLS+yPGE7V2kRqMh98yZM8WUKVPE999/b7suHNA/CHAF5R9baSEptl7JNKya8LN69WrZ/fDhwwIxTsOHD5efUa3Y3L788ksxePBgyzV+CB8CfIsXPptroTESzx04cEDs2LFDZi8wZypQOclr1aqlhawUwjsESFDeYc+ZIwjgDR/inzp06GDgceTIEfk7/U98REhQfAY8RWDnzp1yfnP+J6T8RTPnJPdUSE7uGQIkKM+g58RAAJkKsMUzvxWE/2ns2LE83sJHRJCg+BB4igCCMUeOHGmRAeWl7r77bk/l4uR6IECC0sMOoZQCbwQRG2V2kOPtHhqrt4TykYhRmgQVAwkvuIWACsasXr26MSUCOxGlHl00wS2ZOI9eCJCg9LJHqKRRCerM/ic4yElOoXoM8lSWBJUnPPzSSQS2bNki+vXrZ0yBLd/06dNjgjadlIFj640ACUpv+wRWOuV/at26taGjqh7crl27wOpNxRJDgASVGF682yYE9uzZI0dCdgPV4CCHPyq/g8g2icBhfIAACcoHRgqiiMePH5fxT2lpaYZ648aNk5kS2IiAQoAExWfBdgSwfcvvUPHWrVsFiiSYM3QiLQsS3LERARIUnwHbETh27Jg8slKmTBlJNCqmKXoikBfySlWtWtX4SuWEQuoVNiJAguIzYCsCKHqQkZEhsrOz5eFfrIbq1q0r1Fk782Tx8o/v27dP3sIATVvN4vvBuMXzvQn1UGDo0KFSkK5du4rTp0/Lf9H+97//xWz3sNKqUaOGJd4JZ/Kic0LpoRml8BIBJqzzEv2AzI3adTg/V69ePbFkyRKpFf49d+6crGv3888/W7ZzWVlZomPHjhbtZ8+eLeAkZyMCZgS4guLzUCgE4E+aNWuWHOPDDz+0jPXXX3/JzyVLljSuIxkdCipEJ6jDttCck9wyED+EFgGuoEJrevsUV/4jOMeVs/vKlSsiNTU1ZhKsptDMx1mQVRPN7DSP6cgLoUSABBVKs9unNAjn/PnzcsD09PS4A4Os0LDa6t+/vyy6YCaojz/+OKYmXtyBeDF0CHCLFzqT26sw/El5NSSeUysjHGVBepXoWKevv/5aPPbYY3kNw+9CigAJKqSGt0vt4sWLy6EWLlwocnJyxKVLl+S/qgZe+fLljalOnTolfze/rVMlqBhBbpdFgjUOt3jBsqdn2sD/hKhwVW0YVYLRzJVZFixYIK9lZmYacqJWHpr5yIvxJX8JPQJcQYX+ESgcANeuXZMDIFBTtZUrV8oVFIpxmsuWIxQBLSUlxbgX/ifERJmPvBhf8pfQI0CCCv0jUDgAsBpCBoK+ffvKoy2bNm0ywgWQ20k19XYPW0EzGSEnlCreWThJ2DuICJCggmhVF3XC2zgV04SjLVg1oeFf85s6HA5G69mzpyEdSAsEhQBPNiIQDwESVDxUeC0hBN58800xY8YMow/e3EUHba5fvz7mKIuKnzLnJE9oYt4ceAToJA+8iZ1XEGEEffr0kT+5NRx9mTZtmuVrVBCuWLEiE9TlBhqvsy4enwHnEcDxFhxladasmWWytWvXitdee815ATiDbxHgFs+3pvOP4J988okU1uyTQlQ5/E9Nmzb1jyKU1HUESFCuQx6+Cbdv3y5DCcxN+Z8qVaoUPkCocYERIEEVGCremCwCOMoS7Z86dOiQHK5s2bLJDst+IUCABBUCI3upIrZyWEG1b9/eIgau4eCwijz3UkbOrS8CJCh9bRMIyVR639tuu82iz+rVq2NIKxAKUwlbESBB2QonB4tGAKl84X8y17pDnn
IEafL8XTRa/ByNAAkqGhF+thWBxYsXxxxlQSpgRI+zQIKtUAdyMBJUIM2qj1I//PBDDBGhHl6LFi30EZKSaIsACUpb0/hfMJQ3R7ZN1MpTDUGbcJBHB236X1tq4AQCJCgnUOWYEoGjR4/K8uZm/9PFixfld+agTcJFBHJDgASVGzK8XmgEsL0bOXKkMQ5WT6ifB6c5/U+FhjcUA5CgQmFmb5RE4ro6deoYk1+9elXWz2vdurU3AnFW3yFAgvKdyfwhMMIIUP78zjvvNARG0QS06KIJ/tCIUnqBAAnKC9RDMKc6a2deQc2bN09qTgd5CB4Am1QkQdkEJIexIoBcT6jUYk7v++OPP8qbzG/1iBsRyAsBElRe6PC7pBH44IMPLBk0cSZPFVJggYSkYQ1dRxJU6EzuvMLK/1S/fn1jMlXyfPny5c4LwBkCgwAJKjCm1EeRkydPSmGaNGliCIUzeYiJYvYCfezkB0lIUH6wks9kRIAmqgebyQhFFF544QWfaUJxvUaABOW1BQI4/5QpU4xSVFBPpfft1q1bALWlSk4iQIJyEt0Qjg0yQvzT/fffb2h/4cIF+XuVKlVCiAhVLgwCJKjCoMe+MQioYEzzURZV8hzlqdiIQCIIkKASQYv35ovAsWPH5D3mw8AoiY6YKDYikCgCJKhEEeP9eSKwZs0a0bVrV8s9e/fuFb169cqzH78kAvEQIEHFQ4XXkkZg/vz5whz/BJ8U8kKZryU9ODuGDgESVOhM7pzCSKeClpmZaUyybt06+TvLSzmHe5BHJkEF2bou66aixc3+J7zRQ/5xBmi6bIyATEeCCoghdVADwZgNGza0iILy5n379tVBPMrgQwRIUD40mq4if/HFF+K+++4zxFMBmsyeqavF9JeLBKW/jXwjIYohmMMJVNFOplfxjQm1E5QEpZ1J/CnQmTNnpOAtW7Y0FFAxUeaiCf7UjlJ7hQAJyivkAzZvVlaW1MjsDD9w4ICYOHFiwDSlOm4iQIJyE+0Az7V79+6YYEz4pBj/FGCju6AaCcoFkMMwxapVqyz+J2z54JMqV65cGNSnjg4hQIJyCNgwDat8TebV0v79+yUEGRkZYYJCC11x3Ojll1+OkWXmzJli3LhxMdd1vkCC0tk6PpFt27ZtUtK0tDRDYgRo4kweAzTdNyJKfc2ZM8eSIPDtt9+W8WiVK1d2X6BCzEiCKgR47PofAggnQLS4uRgCiiY89NBDhMgDBBB3hoymOBeJWLRFixaJQYMGyRcWfstqSoLy4AEK2pTIN24ucY6iCdnZ2aJp06ZBU9U3+kybNk3K2qNHD/HMM8+Il156SQwbNsw38itBSVC+M5leAqsKLuZocVW0s0GDBnoJGyJpgH2NGjVkqXm0ESNG+FJ7EpQvzaaP0IqMzP4nFO3Efw7Wv/POTsC+T58+UoCFCxcKv2YzJUF59wz5fma8vRszZkyMM3zChAmie/fuvtfPzwrs3LlT+p3QvvvuO9+qQoLyrem8F/zUqVMy1qlNmzaGMHDKYtt31113eS9giCVQW7rq1auL8ePHC5Wry2+QkKD8ZjGN5D1+/LiUpm7duoZUqmhCu3btNJI0XKIg1glpbhDd/9Zbb0nld+zY4UsQSFC+NJseQiO2BgeBzc5wbPvwV5sHhL2xEYIxR40aJf1OyM2lVrevvPKKDDnwWyNB+c1imsirQglatWplcYbj0DBebbN5g8DatWvFkCFDRM+ePaUACJRFPFRqaqpQGU+9kSy5WYsl1429wo7AyZMnJQQdOnSwQIHX2vgPweYNAhs2bIiZ+Pnnnxf48WPjCsqPVtNA5tOnT8vDweovNURSjljWwNPAQAERgQQVEEO6rcbnn38uHnzwQcu0yhFL/5Pb1khuPj+82SNBJWfb0PZCACD8TziMWq1aNQsOn376aUzRztACpbniqPZcunRp0a1bN6GyoeooMglKR6toKFNKSoqUqmTJkuLKlSvy9xYtWlgknT59unjyySct1/hBTwTuuOMO+ZYPPsP09HSZngWkpVtz3UmOv76IleExCN0ehbzluXDhgrwBYQRI5YtQgrNnz8ofNPik0PAZlYTZ9EdgwYIFYuvWrTINC1bE+EGKHDjUmzRpokeqnBsut06dOt2ImI4/xIDPgMbPAP6f6tCKQAg3uR77XWT8K1GihJvTcq5CIgB7IW0HAgDxL97UPfHEE+Lff/8VN910k3jnnXdknM3s2bOFWm0Vckp2dxAB2KxMmTJi/fr1lrAQrIwRfd65c2cHZ09gaB1YkjLoj0BOTo5c8axYsUL+GzlGYRG6bdu2NyI5h/RXhBJKBC5dunSjV69exioW9lu9evUN2Fmn5roPKgHu5K0aIfD3339LaVQK3+gS5xs3bvTteS+NYHZNFBz0RkDt2LFjxXPPPadtOhbXt3iuWYAT2YoAYmbwWhrEVL58eWGOWMbbHxwYjvxV1sOxaqvmwR0MNtU9ZzxXUMF9/hzRDG/oIts8y9jr1q2Tn3V/2B0BxMeD+sFejIPy8QPmlei1a9e2TI0SU0jSz0YE7EaAWzy7EQ3oeGqLB/UiTlSLlkWKFBHwQZkT1wUUBqrlMgJcQbkMuN+nQ65xc1PRx3Xq1PG7apRfQwRIUBoaRWeROnbsaBFPVRAuW7aszmJTNp8iQILyqeG8EhuHS1VDYn4Ebfbv358Ocq8MEvB5SVABN7Bd6sXLxnju3Dk5fLNmzeyahuMQAQsCJCg+EAVCAKlUohuOSaBFv9WLvo+fiUCyCJCgkkUuZP127dpl0Rhv9XD6vV69esJcVThksFBdhxEgQTkMcFCGP3r0qEUVteWrVKlSUFSkHhoiQILS0Ci6iQRn+MGDBy1iIbygYsWKYu7cubqJS3kChAAJKkDGdEqVvXv3xgyNpHVIuVK1atWY73iBCNiFAAnKLiQDPA5inWbMmCGzaKqGctqNGzcOsNZUTQcEeNRFBys4KAMqzW7atCmhGcyn3JGiefv27fIoSyRnkCynDaLC9g4O8lKlSolff/1VJq5TtfJU2anNmzeLpk2byrnRH/0QmgC/1aRJk/RJipYQOrzZTQSYzcBNtD2YS/mIkGM6mfbLL7/IbmlpaUb3ffv2yd/zG9NcMy+SzE72QWZOEB7KVmmTtdHQjL/ohgAJSjeLOCDPs88+KwYOHJjUyFh9YeV08803y/7IA3XkyBGZXB/pfZNpWNWpIzLJ9Gef8CBAggqBra9du5a0luoQ8Lx58+QY77//vtzKITd5sq0w8iQ7J/v5EwESlD/t5prUeEsHv1OHDh3knCAnpIo1b99cE4YThQ4BElToTJ64wkjzC19UsWLFJFlF5yNPfET2CBICqNSE50K5AezQ7Z9//hHNmzcXkqCQxhWloIoXL57U2EjA/tVXXyXd3zwplv+9e/cWffr0SUoWdioYAnio3nvvPePm1NRUkZmZKWrWrBm3qKoqmnD9+vW4E+A6tn94FlTD2zo4x1mkNS5k2l5EYG5+L0DcEL5ly5b/EdRHH30kULZal0YHqnOWQNjAU089JbZs2SInqVChgixlfvnyZTF8+
HB5bceOHQnFOMHpjeq08RquI4c539jFQ0fPayh+kUzDyxSEk6iGlXblypWTGUpcvXpV9OjR4z+CGjx4sGjfvn1SAxW2U7ly5WRcDBpWcbk96IWdh/2FwKopPT1dxiNFaqBJv5J5dYO/nCiFjb+ecILn52dCGfQRI0aIVatWyfJFAwYMECkpKRJqjIt4KjjXu3TpIr8fOXIkzeAjBIYMGSKLeOa2ao5WBTY332vHyllu8eAI1eHIQqNGjaJ15mebEMCDA3KKFGuUpBGvITIcP3hzh6rBIJvcVj4gn3bt2kmneW75yFE1BOENCNaEbeFCGDZsWLypeU1DBFBmDC0Roknk3oKozKMuBUHJ5/fAv4TUKIj+fvfdd/PVBpHgEydOlCsfbAnjNUVycI7mVywBS32s2LCFxKorWV9nPDl4zTkEdLATCco5+2oxMh6yP/74Q4wePVpMmTKlwH8NsdJCw5YvXhs0aJDo169fgd/oqTAFrLrY/IFAbn+c3JSeBOUm2h7NhchvPGzxttBDhw4VixYtipEMznOsuE6cOGH5TvmWcDGe3xLzIG85/F3mhn6dOnWSW0IQpp2vpGOE54XAIECCCowp4ysCIsA2DA0hBNFt8uTJIl46FdxXrVo1cfz4censVgnqEA8FwsttvN9//10sXbpU/PTTT9FTidtvv11e27ZtW8x3vEAE4iFAgoqHSoCuIXxAZRlAdoHodvr0aTFu3Ljoy/JtzDfffCOjxuEszcjIkPfAea5iZOKNh/zkhw4dihumoFZq5lfRMRPzAhEwIUCCCvjjgNUMXhcjNcqyZctitMXbW7xti25ZWVkyNgrObazAFKngXxAQYl6mTp0qV1fRLV6OcjjHMR58WyBF5JdiIwL5IhApY61NiwQIoqb2jchDrI1MQREkQjIS2wjB5KtSJFAvxg7qGsZBi5CMvCcSU3Xj/PnzeY6Zk5NzIxIVfCNCknnexy/1QADPCGwb+cPmuUBaraCqVKmSL6HyhuQQwKt+rKSw8kGIQLzgOzi4kV6lfv36AiXOzSEJKgBTzY6VFyLEsX1E7BSOS8UbE9exJcQqDDmg2PRHIN6q2CuptTosrM57LVmyRNSqVcsrTAI5rzmmBWcdJ0yYIF588UVDV5ATHOaqIRYKsVM4doQ4KrUNXLBggcwJBac5riMqPTs7W74hBAl2795d4Dwl5sN2UG0NQXg4JYBjNfgPgBMEbHoigJMB2jTP13AmAdQ2JAKOXGLyhxjwGfDuGYgcT/KcHrTKSY6/rLNmzRJ//fWX6wSO84DqTCBif+yUoXz58qJMmTLyR7WzZ88a87murGlChCGYKwMjzQV0j17h4HrdunWl0/yWW26RI+AehB2o/ihFpRqO1aijEl7qx7mTRwBpnuO9QEl+xMR7akVQiYvPHm4hgD8eIBz4kpgPyi3U7ZsHB7VhN1XQwr6RnR1JKx+Us6pydDsQiOcIt2NcjuEsAsghb17BOzubfaOToOzDkiMRAW0RiBebpq2wJsG0CjPwA2CUkQgQAfcQIEG5hzVnIgJEIEEESFAJAsbbiQARcA8BEpR7WHMmIkAEEkSABJUgYLydCBAB9xAgQbmHNWciAkQgQQRIUAkCxtuJABFwDwESlHtYcyYiQAQSRIAElSBgvJ0IEAH3ECBBuYc1ZyICRCBBBEhQCQLG24kAEXAPARKUe1hzJiJABBJEgASVIGBhvR0pf1F4gWmZw/oEeKP3/wHnHFgv4b8ucgAAAABJRU5ErkJggg==" } + }, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Cartpole optimal control problem\n", + "\n", + "![image.png](attachment:image.png)\n", + "\n", + "A cartpole is another classical example of control. In this system, an underactuated pole is attached on top of a 1D actuacted cart. The game is to raise the pole to a standing position.\n", + "\n", + "The model is here:\n", + "https://en.wikipedia.org/wiki/Inverted_pendulum\n", + "\n", + "We denote by $m_1$ the cart mass, $m_2$ the pole mass, $l$ the pole length, $\\theta$ the pole angle w.r.t. the vertical axis, $x$ the cart position, and $g=$9.81 the gravity.\n", + "\n", + "The system acceleration can be rewritten as:\n", + "\n", + "$$\\ddot{\\theta} = \\frac{1}{\\mu(\\theta)} \\big( \\frac{\\cos \\theta}{l} f + \\frac{mg}{l} \\sin(\\theta) - m_2 \\cos(\\theta) \\sin(\\theta) \\dot{\\theta}^2\\big),$$\n", + "\n", + "$$\\ddot{x} = \\frac{1}{\\mu(\\theta)} \\big( f + m_2 \\cos(\\theta) \\sin(\\theta) g -m_2 l \\sin(\\theta) \\dot{\\theta} \\big),$$\n", + "\n", + "$\\hspace{12em}$with $$\\mu(\\theta) = m_1+m_2 \\sin(\\theta)^2,$$\n", + "\n", + "where $f$ represents the input command.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## I. Differential Action Model\n", + "\n", + "A Differential Action Model (DAM) describes the action (control/dynamics) in continous-time. In this exercise, we ask you to write the equation of motions for the cartpole.\n", + "\n", + "For more details, see the instructions inside the DifferentialActionModelCartpole class." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import crocoddyl\n",
+ "import pinocchio\n",
+ "import numpy as np\n",
+ "\n",
+ "\n",
+ "class DifferentialActionModelCartpole(crocoddyl.DifferentialActionModelAbstract):\n",
+ "    def __init__(self):\n",
+ "        crocoddyl.DifferentialActionModelAbstract.__init__(\n",
+ "            self, crocoddyl.StateVector(4), 1, 6\n",
+ "        )  # nu = 1; nr = 6\n",
+ "        self.unone = np.zeros(self.nu)\n",
+ "\n",
+ "        self.m1 = 1.0\n",
+ "        self.m2 = 0.1\n",
+ "        self.l = 0.5\n",
+ "        self.g = 9.81\n",
+ "        self.costWeights = [\n",
+ "            1.0,\n",
+ "            1.0,\n",
+ "            0.1,\n",
+ "            0.001,\n",
+ "            0.001,\n",
+ "            1.0,\n",
+ "        ]  # sin, 1-cos, x, xdot, thdot, f\n",
+ "\n",
+ "    def calc(self, data, x, u=None):\n",
+ "        if u is None:\n",
+ "            u = self.unone\n",
+ "        # Getting the state and control variables\n",
+ "        y, th, ydot, thdot = x[0], x[1], x[2], x[3]\n",
+ "        f = u[0]\n",
+ "\n",
+ "        # Short names for the system parameters\n",
+ "        m1, m2, l, g = self.m1, self.m2, self.l, self.g\n",
+ "        s, c = np.sin(th), np.cos(th)\n",
+ "\n",
+ "        # Defining the equations of motion\n",
+ "        m = m1 + m2\n",
+ "        mu = m1 + m2 * s**2\n",
+ "        xddot = (f + m2 * c * s * g - m2 * l * s * thdot**2) / mu\n",
+ "        thddot = (c * f / l + m * g * s / l - m2 * c * s * thdot**2) / mu\n",
+ "        data.xout = np.matrix([xddot, thddot]).T\n",
+ "\n",
+ "        # Computing the cost residual and value\n",
+ "        data.r = np.matrix(self.costWeights * np.array([s, 1 - c, y, ydot, thdot, f])).T\n",
+ "        data.cost = 0.5 * sum(np.asarray(data.r) ** 2)\n",
+ "\n",
+ "    def calcDiff(self, data, x, u=None):\n",
+ "        # Advanced users might implement the analytical derivatives here\n",
+ "        pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You may want to check your computation. Here is how to create the model and run the calc method."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cartpoleDAM = DifferentialActionModelCartpole()\n",
+ "cartpoleData = cartpoleDAM.createData()\n",
+ "x = cartpoleDAM.state.rand()\n",
+ "u = np.zeros(1)\n",
+ "cartpoleDAM.calc(cartpoleData, x, u)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## II. Write the derivatives with DAMNumDiff\n",
+ "\n",
+ "In the previous exercise, we didn't define the derivatives of the cartpole system. In crocoddyl, we can compute them without any additional code thanks to the DifferentialActionModelNumDiff class. This class computes the derivatives through numerical differentiation.\n",
+ "\n",
+ "In the following cell, you need to create a cartpole DAM that computes the derivatives using NumDiff."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Creating the cartpole DAM\n",
+ "cartpoleND = crocoddyl.DifferentialActionModelNumDiff(cartpoleDAM, True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After creating your cartpole DAM with NumDiff, we would like you to answer the following:\n",
+ "\n",
+ " - Two columns of Fx are null. Which ones? Why?\n",
+ "\n",
+ " - Can you double-check the values of Fu?\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## III. Integrate the model\n",
+ "\n",
+ "After creating a DAM for the cartpole system, we need to create an Integrated Action Model (IAM). Remember that an IAM converts the continuous-time action model into a discrete-time action model.
For this exercise, we'll use a symplectic Euler integrator."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "timeStep = 5e-2\n",
+ "cartpoleIAM = crocoddyl.IntegratedActionModelEuler(cartpoleND, timeStep)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## IV. Write the problem, create the solver\n",
+ "\n",
+ "First, you need to describe your shooting problem. For that, you have to indicate the number of knots and their time step. For this exercise, we want to use 50 knots with $dt=$5e-2.\n",
+ "\n",
+ "Here is how we create the problem."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Fill the number of knots (T) and the time step (dt)\n",
+ "x0 = np.matrix([0.0, 3.14, 0.0, 0.0]).T\n",
+ "T = 50\n",
+ "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, cartpoleIAM)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We do not solve the problem yet; we just roll out (integrate) a zero control sequence:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "us = [pinocchio.utils.zero(cartpoleIAM.differential.nu)] * T\n",
+ "xs = problem.rollout(us)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In cartpole_utils, we provide plotCartpole and animateCartpole methods."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "%matplotlib inline\n",
+ "from cartpole_utils import animateCartpole\n",
+ "\n",
+ "anim = animateCartpole(xs)\n",
+ "\n",
+ "# If you encounter problems, you probably need to install ffmpeg/libav-tools:\n",
+ "# sudo apt-get install ffmpeg\n",
+ "# or\n",
+ "# sudo apt-get install libav-tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And let's display this rollout!\n",
+ "\n",
+ "Note that to_jshtml generates the video playback controls."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.display import HTML\n",
+ "\n",
+ "# HTML(anim.to_jshtml())\n",
+ "HTML(anim.to_html5_video())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we want to create the solver (SolverDDP class) and run it. Display the result. **Do you like it?**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Creating the DDP solver\n",
+ "ddp = crocoddyl.SolverDDP(problem)\n",
+ "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n",
+ "\n",
+ "# Solving this problem\n",
+ "done = ddp.solve([], [], 1000)\n",
+ "print(done)\n",
+ "print(ddp.us)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "%matplotlib inline\n",
+ "\n",
+ "# Create animation\n",
+ "anim = animateCartpole(ddp.xs)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# HTML(anim.to_jshtml())\n",
+ "HTML(anim.to_html5_video())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Tune the problem, solve it\n",
+ "\n",
+ "Here are some hints about what should be tried to solve the problem.\n",
+ "\n",
+ "\n",
+ " - Without a terminal model, we can see some swings but we cannot stabilize.
What should we do?\n",
+ "\n",
+ " - The most important goal is to reach the standing position. Can we also drive the velocity to zero?\n",
+ "\n",
+ " - Increasing all the weights does not work. How can we slowly increase the penalty?\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "###########################################################################\n",
+ "################## TODO: Tune the weights for each cost ###################\n",
+ "###########################################################################\n",
+ "terminalCartpole = DifferentialActionModelCartpole()\n",
+ "terminalCartpoleDAM = crocoddyl.DifferentialActionModelNumDiff(terminalCartpole, True)\n",
+ "terminalCartpoleIAM = crocoddyl.IntegratedActionModelEuler(terminalCartpoleDAM)\n",
+ "\n",
+ "terminalCartpole.costWeights[0] = 100\n",
+ "terminalCartpole.costWeights[1] = 100\n",
+ "terminalCartpole.costWeights[2] = 1.0\n",
+ "terminalCartpole.costWeights[3] = 0.1\n",
+ "terminalCartpole.costWeights[4] = 0.01\n",
+ "terminalCartpole.costWeights[5] = 0.0001\n",
+ "problem = crocoddyl.ShootingProblem(x0, [cartpoleIAM] * T, terminalCartpoleIAM)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Creating the DDP solver\n",
+ "ddp = crocoddyl.SolverDDP(problem)\n",
+ "ddp.setCallbacks([crocoddyl.CallbackVerbose()])\n",
+ "\n",
+ "# Solving this problem\n",
+ "done = ddp.solve([], [], 300)\n",
+ "print(done)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "%matplotlib inline\n",
+ "\n",
+ "# Create animation\n",
+ "anim = animateCartpole(ddp.xs)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# HTML(anim.to_jshtml())\n",
+ "HTML(anim.to_html5_video())"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 2",
+ "language": "python",
+ "name": "python2"
 },
- "nbformat" : 4,
- "nbformat_minor" : 2
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.17"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
 }
diff --git a/examples/notebooks/introduction_to_crocoddyl.ipynb b/examples/notebooks/introduction_to_crocoddyl.ipynb
index 3f4d2a227..ad9dcbaba 100644
--- a/examples/notebooks/introduction_to_crocoddyl.ipynb
+++ b/examples/notebooks/introduction_to_crocoddyl.ipynb
@@ -1,698 +1,618 @@
 {
- "cells" : [
- {
- "attachments" : {},
- "cell_type" : "markdown",
- "metadata" : {},
- "source" : [
- "# Crocoddyl: Contact RObot COntrol by Differential DYnamic "
- "programming Library\n",
- "\n",
- "\n",
- "## I. Welcome to crocoddyl\n",
- "Crocoddyl is an **optimal control library for robot control under "
- "contact sequence**. Its solver is based on an efficient Differential "
- "Dynamic Programming (DDP) algorithm. Crocoddyl computes optimal "
- "trajectories along with optimal feedback gains. It uses Pinocchio for "
- "fast computation of robot dynamics and its analytical derivatives. "
\n", - "\n", - "Crocoddyl is focused on multi-contact optimal control problem (MCOP) " - "which as the form:\n", - "\n", - "$$\\mathbf{X}^*,\\mathbf{U}^*=\n", - "\\begin{Bmatrix} \\mathbf{x}^*_0,\\cdots,\\mathbf{x}^*_N \\\\\n", - "\t\t\t\t \\mathbf{u}^*_0,\\cdots,\\mathbf{u}^*_N\n", - "\\end{Bmatrix} =\n", - "\\arg\\min_{\\mathbf{X},\\mathbf{U}} \\sum_{k=1}^N " - "\\int_{t_k}^{t_k+\\Delta t} l(\\mathbf{x},\\mathbf{u})dt$$\n", - "subject to\n", - "$$ \\mathbf{\\dot{x}} = \\mathbf{f}(\\mathbf{x},\\mathbf{u}),$$\n", - "$$ \\mathbf{x}\\in\\mathcal{X}, \\mathbf{u}\\in\\mathcal{U}, " - "\\boldsymbol{\\lambda}\\in\\mathcal{K}.$$\n", - "where\n", - " - the state $\\mathbf{x}=(\\mathbf{q},\\mathbf{v})$ lies in a " - "manifold, e.g. Lie manifold $\\mathbf{q}\\in SE(3)\\times " - "\\mathbb{R}^{n_j}$, $n_j$ being the number of degrees of freedom of " - "the robot.\n", - " - the system has underactuacted dynamics, i.e. " - "$\\mathbf{u}=(\\mathbf{0},\\boldsymbol{\\tau})$,\n", - " - $\\mathcal{X}$, $\\mathcal{U}$ are the state and control " - "admissible sets, and\n", - " - $\\mathcal{K}$ represents the contact constraints.\n", - " \n", - " Note that " - "$\\boldsymbol{\\lambda}=\\mathbf{g}(\\mathbf{x},\\mathbf{u})$ denotes " - "the contact force, and is dependent on the state and control.\n", - " \n", - "Let's start by understanding the concept behind crocoddyl design." - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "# II. Action models\n", - "\n", - "In crocoddyl, an action model combines dynamics and cost models. Each " - "node, in our optimal control problem, is described through an action " - "model. In order to describe a problem, we need to provide ways of " - "computing the dynamics, the cost functions and their derivatives. 
All " - "these are described inside the action model.\n", - "\n", - "To understand the mathematical aspects behind an action model, let's " - "first get a locally linearize version of our optimal control problem " - "as:\n", - "\n", - "$$\\mathbf{X}^*(\\mathbf{x}_0),\\mathbf{U}^*(\\mathbf{x}_0)\n", - "=\n", - "\\arg\\max_{\\mathbf{X},\\mathbf{U}} = cost_T(\\delta\\mathbf{x}_N) + " - "\\sum_{k=1}^N cost_t(\\delta\\mathbf{x}_k, \\delta\\mathbf{u}_k)$$\n", - "subject to\n", - "$$dynamics(\\delta\\mathbf{x}_{k+1},\\delta\\mathbf{x}_k," - "\\delta\\mathbf{u}_k)=\\mathbf{0},$$\n", - "\n", - "where\n", - "$$cost_T(\\delta\\mathbf{x}) = \\frac{1}{2}\n", - "\\begin{bmatrix} \n", - " 1 \\\\ \\delta\\mathbf{x}\n", - "\\end{bmatrix}^\\top\n", - "\\begin{bmatrix}\n", - "0 & \\mathbf{l_x}^\\top \\\\\n", - "\\mathbf{l_x} & \\mathbf{l_{xx}}\n", - "\\end{bmatrix}\n", - "\\begin{bmatrix}\n", - " 1 \\\\ \\delta\\mathbf{x}\n", - "\\end{bmatrix}\n", - "$$\n", - "\n", - "$$cost_t(\\delta\\mathbf{x},\\delta\\mathbf{u}) = \\frac{1}{2}\n", - "\\begin{bmatrix} \n", - " 1 \\\\ \\delta\\mathbf{x} \\\\ \\delta\\mathbf{u}\n", - "\\end{bmatrix}^\\top\n", - "\\begin{bmatrix}\n", - "0 & \\mathbf{l_x}^\\top & \\mathbf{l_u}^\\top\\\\\n", - "\\mathbf{l_x} & \\mathbf{l_{xx}} & \\mathbf{l_{ux}}^\\top\\\\\n", - "\\mathbf{l_u} & \\mathbf{l_{ux}} & \\mathbf{l_{uu}}\n", - "\\end{bmatrix}\n", - "\\begin{bmatrix}\n", - " 1 \\\\ \\delta\\mathbf{x} \\\\ \\delta\\mathbf{u}\n", - "\\end{bmatrix}\n", - "$$\n", - "\n", - "$$\n", - "dynamics(\\delta\\mathbf{x}_{k+1},\\delta\\mathbf{x}_k," - "\\delta\\mathbf{u}_k) = \\delta\\mathbf{x}_{k+1} - " - "(\\mathbf{f_x}\\delta\\mathbf{x}_k + " - "\\mathbf{f_u}\\delta\\mathbf{u}_k)\n", - "$$\n", - "\n", - "where an action model defines a time interval of this problem:\n", - " - $actions = dynamics + cost$\n", - "\n", - "### Important notes:\n", - " - An action model describes the dynamics and cost functions for a " - "node in our optimal control problem.\n", - " - Action models lie in the discrete time space.\n", - " - For debugging and prototyping, we have also implemented numerical " - "differentiation (NumDiff) abstractions. These computations depend " - "only on the definition of the dynamics equation and cost functions. " - "However to asses efficiency, crocoddyl uses **analytical " - "derivatives** computed from Pinocchio.\n", - "\n", - "\n", - "## II.a Differential and Integrated Action Models\n", - "Optimal control solvers require the time-discrete model of the cost " - "and the dynamics. However, it's often convenient to implement them in " - "continuous time (e.g. to combine with abstract integration rules). In " - "crocoddyl, this continuous-time action models are called " - "\"Differential Action Model (DAM)\". 
And together with predefined " - "\"Integrated Action Models (IAM)\", it possible to retrieve the " - "time-discrete action model.\n", - "\n", - "At the moment, we have:\n", - " - a simpletic Euler and\n", - " - a Runge-Kutte 4 integration rules.\n", - "\n", - "An optimal control problem can be written from a set of DAMs as:\n", - "$$\\mathbf{X}^*(\\mathbf{x}_0),\\mathbf{U}^*(\\mathbf{x}_0)\n", - "=\n", - "\\arg\\max_{\\mathbf{X},\\mathbf{U}} = cost_T(\\delta\\mathbf{x}_N) + " - "\\sum_{k=1}^N \\int_{t_k}^{t_k+\\Delta t} " - "cost_t(\\delta\\mathbf{x}_k, \\delta\\mathbf{u}_k) dt$$\n", - "subject to\n", - "$$dynamics(\\delta\\mathbf{x}_{k+1},\\delta\\mathbf{x}_k," - "\\delta\\mathbf{u}_k)=\\mathbf{0},$$\n", - "\n", - "where\n", - "$$cost_T(\\delta\\mathbf{x}) = \\frac{1}{2}\n", - "\\begin{bmatrix} \n", - " 1 \\\\ \\delta\\mathbf{x}\n", - "\\end{bmatrix}^\\top\n", - "\\begin{bmatrix}\n", - "0 & \\mathbf{l_x}^\\top \\\\\n", - "\\mathbf{l_x} & \\mathbf{l_{xx}}\n", - "\\end{bmatrix}\n", - "\\begin{bmatrix}\n", - " 1 \\\\ \\delta\\mathbf{x}\n", - "\\end{bmatrix}\n", - "$$\n", - "\n", - "$$cost_t(\\delta\\mathbf{x},\\delta\\mathbf{u}) = \\frac{1}{2}\n", - "\\begin{bmatrix} \n", - " 1 \\\\ \\delta\\mathbf{x} \\\\ \\delta\\mathbf{u}\n", - "\\end{bmatrix}^\\top\n", - "\\begin{bmatrix}\n", - "0 & \\mathbf{l_x}^\\top & \\mathbf{l_u}^\\top\\\\\n", - "\\mathbf{l_x} & \\mathbf{l_{xx}} & \\mathbf{l_{ux}}^\\top\\\\\n", - "\\mathbf{l_u} & \\mathbf{l_{ux}} & \\mathbf{l_{uu}}\n", - "\\end{bmatrix}\n", - "\\begin{bmatrix}\n", - " 1 \\\\ \\delta\\mathbf{x} \\\\ \\delta\\mathbf{u}\n", - "\\end{bmatrix}\n", - "$$\n", - "\n", - "$$\n", - "dynamics(\\delta\\mathbf{\\dot{x}},\\delta\\mathbf{x},\\delta\\mathbf{" - "u}) = \\delta\\mathbf{\\dot{x}} - (\\mathbf{f_x}\\delta\\mathbf{x} + " - "\\mathbf{f_u}\\delta\\mathbf{u})\n", - "$$" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "### Building a differential action model for robot forward dynamics\n", - "#### Loading the robot\n", "\n", - "Crocoddyl offers several robot models for benchmarking our optimal " - "control solvers (e.g. manipulators, humanoids, quadrupeds, etc). The " - "collection of Talos models can be downloaded in Ubuntu with the APT " - "package *robotpkg-talos-data*.\n", - "\n", "Let's load a single Talos arm (left one):" - ] - }, - { - "cell_type" : "code", - "execution_count" : 2, - "metadata" : {}, - "outputs" : [], - "source" : [ - "import crocoddyl\n", "import numpy as np\n", - "import example_robot_data\n", "\n", - "talos_arm = example_robot_data.load(\"talos_arm\")\n", - "robot_model = talos_arm.model # getting the Pinocchio model\n", "\n", - "# Defining a initial state\n", - "q0 = np.array([0.173046, 1.0, -0.52366, 0.0, 0.0, 0.1, -0.005])\n", - "x0 = np.concatenate([q0, np.zeros(talos_arm.model.nv)])" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "### calc and calcDiff\n", - "Optimal control solvers often need to compute a quadratic " - "approximation of the action model (as previously described); this " - "provides a search direction (computeDirection). Then it's needed to " - "try the step along this direction (tryStep).\n", - "\n", - "Typically calc and calcDiff do the precomputations that are required " - "before computeDirection and tryStep respectively (inside the solver). 
" - "These functions update the information of:\n", - " - **calc**: update the next state and its cost value\n", - " $$\\delta\\mathbf{\\dot{x}}_{k+1} = " - "\\mathbf{f}(\\delta\\mathbf{x}_k,\\mathbf{u}_k)$$\n", - " - **calcDiff**: update the derivatives of the dynamics and cost " - "(quadratic approximation)\n", - " $$\\mathbf{f_x}, \\mathbf{f_u} \\hspace{1em} (dynamics)$$\n", - " $$\\mathbf{l_x}, \\mathbf{l_u}, \\mathbf{l_{xx}}, \\mathbf{l_{ux}}, " - "\\mathbf{l_{uu}} \\hspace{1em} (cost)$$\n", - " \n", - " **Crocoddyl put all information inside data**, so avoiding dynamic " - "reallocation." - ] - }, - { - "cell_type" : "code", - "execution_count" : 3, - "metadata" : {}, - "outputs" : [], - "source" : [ - "import pinocchio\n", - "\n", - "\n", - "class " - "DifferentialFwdDynamics(crocoddyl.DifferentialActionModelAbstract):\n", - " def __init__(self, state, costModel):\n", - " crocoddyl.DifferentialActionModelAbstract.__init__(\n", - " self, state, state.nv, costModel.nr\n", - " )\n", - " self.costs = costModel\n", - " self.enable_force = True\n", - " self.armature = np.zeros(0)\n", - "\n", - " def calc(self, data, x, u=None):\n", - " if u is None:\n", - " u = self.unone\n", - " q, v = x[: self.state.nq], x[-self.state.nv :]\n", - " # Computing the dynamics using ABA or manually for armature " - "case\n", - " if self.enable_force:\n", - " data.xout = pinocchio.aba(self.state.pinocchio, " - "data.pinocchio, q, v, u)\n", - " else:\n", - " pinocchio.computeAllTerms(self.state.pinocchio, " - "data.pinocchio, q, v)\n", - " data.M = data.pinocchio.M\n", - " if self.armature.size == self.state.nv:\n", - " data.M[range(self.state.nv), range(self.state.nv)] += " - "self.armature\n", - " data.Minv = np.linalg.inv(data.M)\n", - " data.xout = data.Minv * (u - data.pinocchio.nle)\n", - " # Computing the cost value and residuals\n", - " pinocchio.forwardKinematics(self.state.pinocchio, " - "data.pinocchio, q, v)\n", - " pinocchio.updateFramePlacements(self.state.pinocchio, " - "data.pinocchio)\n", - " self.costs.calc(data.costs, x, u)\n", - " data.cost = data.costs.cost\n", - "\n", - " def calcDiff(self, data, x, u=None):\n", - " q, v = x[: self.state.nq], x[-self.state.nv :]\n", - " if u is None:\n", - " u = self.unone\n", - " if True:\n", - " self.calc(data, x, u)\n", - " # Computing the dynamics derivatives\n", - " if self.enable_force:\n", - " pinocchio.computeABADerivatives(\n", - " self.state.pinocchio, data.pinocchio, q, v, u\n", - " )\n", - " data.Fx = np.hstack([data.pinocchio.ddq_dq, " - "data.pinocchio.ddq_dv])\n", - " data.Fu = data.pinocchio.Minv\n", - " else:\n", - " pinocchio.computeRNEADerivatives(\n", - " self.state.pinocchio, data.pinocchio, q, v, " - "data.xout\n", - " )\n", - " data.Fx = -np.hstack(\n", - " [data.Minv * data.pinocchio.dtau_dq, data.Minv * " - "data.pinocchio.dtau_dv]\n", - " )\n", - " data.Fu = data.Minv\n", - " # Computing the cost derivatives\n", - " self.costs.calcDiff(data.costs, x, u)\n", - "\n", - " def set_armature(self, armature):\n", - " if armature.size is not self.state.nv:\n", - " print(\"The armature dimension is wrong, we cannot set " - "it.\")\n", - " else:\n", - " self.enable_force = False\n", - " self.armature = armature.T\n", - "\n", - " def createData(self):\n", - " data = " - "crocoddyl.DifferentialActionModelAbstract.createData(self)\n", - " data.pinocchio = pinocchio.Data(self.state.pinocchio)\n", - " data.multibody = " - "crocoddyl.DataCollectorMultibody(data.pinocchio)\n", - " data.costs = self.costs.createData(data.multibody)\n", - " 
data.costs.shareMemory(\n", - " data\n", - " ) # this allows us to share the memory of cost-terms of " - "action model\n", - " return data" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "## II.b State and its integrate and difference rules\n", - "General speaking, the system's state can lie in a manifold $M$ where " - "the state rate of change lies in its tangent space $T_\\mathbf{x}M$. " - "There are few operators that needs to be defined for different " - "rutines inside our solvers:\n", - " - $\\mathbf{x}_{k+1} = " - "integrate(\\mathbf{x}_k,\\delta\\mathbf{x}_k) = \\mathbf{x}_k \\oplus " - "\\delta\\mathbf{x}_k$\n", - " - $\\delta\\mathbf{x}_k = " - "difference(\\mathbf{x}_{k+1},\\mathbf{x}_k) = \\mathbf{x}_{k+1} " - "\\ominus \\mathbf{x}_k$\n", - "\n", - "where $\\mathbf{x}\\in M$ and $\\delta\\mathbf{x}\\in T_\\mathbf{x} " - "M$.\n", - " \n", "\n", - "And we also need to defined the Jacobians of these operators with " - "respect to the first and second arguments:\n", - " - $\\frac{\\partial \\mathbf{x}\\oplus\\delta\\mathbf{x}}{\\partial " - "\\mathbf{x}}, \\frac{\\partial " - "\\mathbf{x}\\oplus\\delta\\mathbf{x}}{\\partial\\delta\\mathbf{x}} " - "=Jintegrante(\\mathbf{x},\\delta\\mathbf{x})$\n", - " - $\\frac{\\partial\\mathbf{x}_2\\ominus\\mathbf{x}_2}{\\partial " - "\\mathbf{x}_1}, \\frac{\\partial " - "\\mathbf{x}_2\\ominus\\mathbf{x}_1}{\\partial\\mathbf{x}_1} " - "=Jdifference(\\mathbf{x}_2,\\mathbf{x}_1)$\n", - "\n", - "For instance, a state that lies in the Euclidean space will the " - "typical operators:\n", - " - $integrate(\\mathbf{x},\\delta\\mathbf{x}) = \\mathbf{x} + " - "\\delta\\mathbf{x}$\n", - " - $difference(\\mathbf{x}_2,\\mathbf{x}_1) = \\mathbf{x}_2 - " - "\\mathbf{x}_1$\n", - " - $Jintegrate(\\cdot,\\cdot) = Jdifference(\\cdot,\\cdot) = " - "\\mathbf{I}$\n", - " \n", "\n", - "These defines inare encapsulate inside the State class. **For " - "Pinocchio models, we have implemented the StatePinocchio class which " - "can be used for any robot model**." - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "# III. Solving optimal control problems with DDP\n", "\n", - "## III.a ABA dynamics for reaching a goal with Talos arm\n", "\n", - "Our optimal control solver interacts with a defined ShootingProblem. " - "A shooting problem represents a stack of action models in which an " - "action model defines a specific node along the OC problem.\n", - "\n", - "First we need to create an action model from DifferentialFwdDynamics. " - "We use it for building terminal and running action models. In this " - "example, we employ an simpletic Euler integration rule.\n", - "\n", - "Next we define the set of cost functions for this problem. For this " - "particular example, we formulate three running-cost functions:\n", - "\n", " goal-tracking cost, 𝑙𝑜𝑔(𝑓𝑋𝑑𝑜𝑜𝑋𝑓)\n", "\n", - "state and control regularization; and ‖𝐱−𝐱𝑟𝑒𝑓‖,‖𝐮‖\n", "\n", - "one terminal-cost:\n", "\n", " goal cost. ‖𝐮𝑇‖\n", "\n", - "First, let's create the common cost functions." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : 9, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# Create the cost functions\n", - "target = np.array([0.4, 0.0, 0.4])\n", - "state = crocoddyl.StateMultibody(robot_model)\n", - "frameTranslationResidual = crocoddyl.ResidualModelFrameTranslation(\n", - " state, robot_model.getFrameId(\"gripper_left_joint\"), target\n", - ")\n", - "goalTrackingCost = crocoddyl.CostModelResidual(state, " - "frameTranslationResidual)\n", - "xRegCost = crocoddyl.CostModelResidual(state, " - "crocoddyl.ResidualModelState(state))\n", - "uRegCost = crocoddyl.CostModelResidual(state, " - "crocoddyl.ResidualModelControl(state))\n", - "\n", - "# Create cost model per each action model\n", - "runningCostModel = crocoddyl.CostModelSum(state)\n", - "terminalCostModel = crocoddyl.CostModelSum(state)\n", - "\n", - "# Then let's added the running and terminal cost functions\n", - "runningCostModel.addCost(\"gripperPose\", goalTrackingCost, 1e2)\n", - "runningCostModel.addCost(\"stateReg\", xRegCost, 1e-4)\n", - "runningCostModel.addCost(\"ctrlReg\", uRegCost, 1e-7)\n", - "terminalCostModel.addCost(\"gripperPose\", goalTrackingCost, 1e5)\n", - "terminalCostModel.addCost(\"stateReg\", xRegCost, 1e-4)\n", - "terminalCostModel.addCost(\"ctrlReg\", uRegCost, 1e-7)\n", - "\n", - "# Running and terminal action models\n", - "DT = 1e-3\n", - "actuationModel = crocoddyl.ActuationModelFull(state)\n", - "runningModel = crocoddyl.IntegratedActionModelEuler(\n", - " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", - " state, actuationModel, runningCostModel\n", - " ),\n", - " DT,\n", - ")\n", - "terminalModel = crocoddyl.IntegratedActionModelEuler(\n", - " crocoddyl.DifferentialActionModelFreeFwdDynamics(\n", - " state, actuationModel, terminalCostModel\n", - " ),\n", - " 0.0,\n", - ")" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : ["We create a trajectory with 250 knots"] - }, - { - "cell_type" : "code", - "execution_count" : 10, - "metadata" : {}, - "outputs" : [], - "source" : [ - "# For this optimal control problem, we define 250 knots (or running " - "action\n", - "# models) plus a terminal knot\n", "T = 250\n", - "problem = crocoddyl.ShootingProblem(x0, [runningModel] * T, " - "terminalModel)" - ] - }, - { - "attachments" : {}, - "cell_type" : "markdown", - "metadata" : {}, - "source" : [ - "Onces we have defined our shooting problem, we create a DDP solver " - "object and pass some callback functions for analysing its " - "performance.\n", - "\n", "Please note that:\n", - "- CallbackDDPLogger: store the solution information.\n", - "- CallbackDDPVerbose(level): printing message during the iterates.\n", - "- CallbackDisplay(robot,rate): display the state trajectory using " - "Gepetto viewer." 
- ] - }, - { - "cell_type" : "code", - "execution_count" : 12, - "metadata" : {}, - "outputs" : [ { - "name" : "stdout", - "output_type" : "stream", - "text" : [ - "You can open the visualizer by visiting the following URL:\n", - "http://127.0.0.1:7001/static/\n" - ] - } ], - "source" : [ - "# Creating the DDP solver for this OC problem, defining a logger\n", - "ddp = crocoddyl.SolverDDP(problem)\n", - "log = crocoddyl.CallbackLogger()\n", "\n", - "# Using the meshcat displayer, you could enable gepetto viewer for " - "nicer view\n", - "# display = crocoddyl.GepettoDisplay(talos_arm, 4, 4)\n", - "display = crocoddyl.MeshcatDisplay(talos_arm, 4, 4, False)\n", - "ddp.setCallbacks([log, crocoddyl.CallbackVerbose(), " - "crocoddyl.CallbackDisplay(display)])" - ] - }, - { - "cell_type" : "code", - "execution_count" : 13, - "metadata" : {}, - "outputs" : [ { - "data" : { - "text/html" : [ - "\n", - "