Adds IMU sensor (isaac-sim#619)
# Description

Add an `Imu` sensor with configuration class `ImuCfg` and data class `ImuData`.
Compared to the Isaac Sim implementation of the IMU sensor, this sensor
directly accesses the PhysX view buffers for speed.

This PR also moves two camera utility functions to the general math utilities
(`omni.isaac.lab.utils.math`) and renames one of them.

Fixes isaac-sim#440 
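
For example, a minimal scene-config sketch (the robot prim path and scene layout here are placeholders, not part of this PR):

```python
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.sensors import ImuCfg
from omni.isaac.lab.utils import configclass


@configclass
class MySceneCfg(InteractiveSceneCfg):
    """Scene with an IMU rigidly attached to the robot base link."""

    # ... robot and terrain entries omitted ...

    # the IMU is attached to an existing rigid-body prim
    imu = ImuCfg(prim_path="{ENV_REGEX_NS}/Robot/base")


# once the scene is stepped, the sensor buffers can be read as, e.g.:
#   scene["imu"].data.quat_w     -> (num_envs, 4) orientation (w, x, y, z)
#   scene["imu"].data.ang_vel_b  -> (num_envs, 3) angular velocity, sensor frame
#   scene["imu"].data.lin_acc_b  -> (num_envs, 3) linear acceleration, sensor frame
```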

## Type of change

- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./isaaclab.sh --format`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [x] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there
pascal-roth authored Oct 18, 2024
1 parent 91f760e commit be52603
Showing 18 changed files with 1,447 additions and 160 deletions.
16 changes: 16 additions & 0 deletions docs/source/api/lab/omni.isaac.lab.sensors.rst
@@ -31,6 +31,8 @@
    RayCasterCfg
    RayCasterCamera
    RayCasterCameraCfg
+    Imu
+    ImuCfg

Sensor Base
-----------
@@ -150,3 +152,17 @@ Ray-Cast Camera
    :inherited-members:
    :show-inheritance:
    :exclude-members: __init__, class_type
+
+Inertial Measurement Unit
+-------------------------
+
+.. autoclass:: Imu
+    :members:
+    :inherited-members:
+    :show-inheritance:
+
+.. autoclass:: ImuCfg
+    :members:
+    :inherited-members:
+    :show-inheritance:
+    :exclude-members: __init__, class_type
2 changes: 1 addition & 1 deletion source/extensions/omni.isaac.lab/config/extension.toml
@@ -1,7 +1,7 @@
[package]

# Note: Semantic Versioning is used: https://semver.org/
version = "0.25.2"
version = "0.26.0"

# Description
title = "Isaac Lab framework for Robot Learning"
15 changes: 14 additions & 1 deletion source/extensions/omni.isaac.lab/docs/CHANGELOG.rst
@@ -1,8 +1,21 @@
Changelog
---------

+0.26.0 (2024-10-16)
+~~~~~~~~~~~~~~~~~~~
+
+Added
+^^^^^
+
+* Added the :class:`omni.isaac.lab.sensors.Imu` sensor, which directly accesses the PhysX view buffers.
+  The sensor comes with a configuration class :class:`omni.isaac.lab.sensors.ImuCfg` and a data class
+  :class:`omni.isaac.lab.sensors.ImuData`.
+* Moved and renamed :meth:`omni.isaac.lab.sensors.camera.utils.convert_orientation_convention` to
+  :meth:`omni.isaac.lab.utils.math.convert_camera_frame_orientation_convention`.
+* Moved :meth:`omni.isaac.lab.sensors.camera.utils.create_rotation_matrix_from_view` to
+  :meth:`omni.isaac.lab.utils.math.create_rotation_matrix_from_view`.

0.25.2 (2024-10-16)
-~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~

Added
^^^^^
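
For downstream code, the move and rename amount to an import change; a sketch:

```python
# before (0.25.x)
from omni.isaac.lab.sensors.camera.utils import convert_orientation_convention

# after (0.26.0)
from omni.isaac.lab.utils.math import convert_camera_frame_orientation_convention
```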
@@ -17,7 +17,7 @@
import omni.isaac.lab.utils.math as math_utils
from omni.isaac.lab.assets import Articulation, RigidObject
from omni.isaac.lab.managers import SceneEntityCfg
-from omni.isaac.lab.sensors import Camera, RayCaster, RayCasterCamera, TiledCamera
+from omni.isaac.lab.sensors import Camera, Imu, RayCaster, RayCasterCamera, TiledCamera

if TYPE_CHECKING:
    from omni.isaac.lab.envs import ManagerBasedEnv, ManagerBasedRLEnv
@@ -182,6 +182,48 @@ def body_incoming_wrench(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg) -> torch.Tensor:
    return link_incoming_forces.view(env.num_envs, -1)


+def imu_orientation(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("imu")) -> torch.Tensor:
+    """Imu sensor orientation w.r.t. the env.scene.origin.
+
+    Args:
+        env: The environment.
+        asset_cfg: The SceneEntity associated with an Imu sensor.
+
+    Returns:
+        Orientation quaternion (wxyz); the shape of the tensor is (num_env, 4).
+    """
+    asset: Imu = env.scene[asset_cfg.name]
+    return asset.data.quat_w
+
+
+def imu_ang_vel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("imu")) -> torch.Tensor:
+    """Imu sensor angular velocity w.r.t. env.scene.origin expressed in the sensor frame.
+
+    Args:
+        env: The environment.
+        asset_cfg: The SceneEntity associated with an Imu sensor.
+
+    Returns:
+        Angular velocity (rad/s); the shape of the tensor is (num_env, 3).
+    """
+    asset: Imu = env.scene[asset_cfg.name]
+    return asset.data.ang_vel_b
+
+
+def imu_lin_acc(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("imu")) -> torch.Tensor:
+    """Imu sensor linear acceleration w.r.t. env.scene.origin expressed in the sensor frame.
+
+    Args:
+        env: The environment.
+        asset_cfg: The SceneEntity associated with an Imu sensor.
+
+    Returns:
+        Linear acceleration (m/s^2); the shape of the tensor is (num_env, 3).
+    """
+    asset: Imu = env.scene[asset_cfg.name]
+    return asset.data.lin_acc_b

def image(
    env: ManagerBasedEnv,
    sensor_cfg: SceneEntityCfg = SceneEntityCfg("tiled_camera"),
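
The new terms plug into the observation manager like any other function; a minimal sketch (the group and term names are placeholders):

```python
import omni.isaac.lab.envs.mdp as mdp
from omni.isaac.lab.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.lab.managers import ObservationTermCfg as ObsTerm
from omni.isaac.lab.managers import SceneEntityCfg
from omni.isaac.lab.utils import configclass


@configclass
class PolicyCfg(ObsGroup):
    """Observations built from an Imu sensor named "imu" in the scene."""

    imu_quat = ObsTerm(func=mdp.imu_orientation, params={"asset_cfg": SceneEntityCfg("imu")})
    imu_gyro = ObsTerm(func=mdp.imu_ang_vel)
    imu_accel = ObsTerm(func=mdp.imu_lin_acc)
```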
@@ -30,12 +30,15 @@
+---------------------+---------------------------+---------------------------------------------------------------+
| Frame Transformer   | /World/robot/base         | Leaf exists and is a physics body (Articulation / Rigid Body) |
+---------------------+---------------------------+---------------------------------------------------------------+
+| Imu                 | /World/robot/base         | Leaf exists and is a physics body (Rigid Body)                |
++---------------------+---------------------------+---------------------------------------------------------------+
"""

from .camera import * # noqa: F401, F403
from .contact_sensor import * # noqa: F401, F403
from .frame_transformer import * # noqa: F401
+from .imu import *  # noqa: F401, F403
from .ray_caster import * # noqa: F401, F403
from .sensor_base import SensorBase # noqa: F401
from .sensor_base_cfg import SensorBaseCfg # noqa: F401
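
As the table notes, the Imu prim path must resolve to a rigid-body leaf prim. A hedged configuration sketch (assuming `ImuCfg` follows the same `OffsetCfg` pattern as the other sensors; the paths are placeholders):

```python
from omni.isaac.lab.sensors import ImuCfg

# attach the IMU to an existing rigid-body prim; the offset is applied in the
# body frame (pos in meters, rot as a (w, x, y, z) quaternion)
imu_cfg = ImuCfg(
    prim_path="/World/robot/base",
    offset=ImuCfg.OffsetCfg(pos=(0.0, 0.0, 0.1), rot=(1.0, 0.0, 0.0, 0.0)),
)
```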
@@ -13,6 +13,7 @@
from typing import TYPE_CHECKING, Any, Literal

import carb
+import omni.isaac.core.utils.stage as stage_utils
import omni.kit.commands
import omni.usd
from omni.isaac.core.prims import XFormPrimView
@@ -21,11 +22,14 @@
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.utils import to_camel_case
from omni.isaac.lab.utils.array import convert_to_torch
-from omni.isaac.lab.utils.math import quat_from_matrix
+from omni.isaac.lab.utils.math import (
+    convert_camera_frame_orientation_convention,
+    create_rotation_matrix_from_view,
+    quat_from_matrix,
+)

from ..sensor_base import SensorBase
from .camera_data import CameraData
-from .utils import convert_orientation_convention, create_rotation_matrix_from_view

if TYPE_CHECKING:
    from .camera_cfg import CameraCfg
@@ -116,7 +120,9 @@ def __init__(self, cfg: CameraCfg):
        if self.cfg.spawn is not None:
            # compute the rotation offset
            rot = torch.tensor(self.cfg.offset.rot, dtype=torch.float32).unsqueeze(0)
-            rot_offset = convert_orientation_convention(rot, origin=self.cfg.offset.convention, target="opengl")
+            rot_offset = convert_camera_frame_orientation_convention(
+                rot, origin=self.cfg.offset.convention, target="opengl"
+            )
            rot_offset = rot_offset.squeeze(0).numpy()
            # ensure vertical aperture is set, otherwise replace with default for squared pixels
            if self.cfg.spawn.vertical_aperture is None:
@@ -289,7 +295,7 @@ def set_world_poses(
            - :obj:`"ros"` - forward axis: +Z - up axis -Y - Offset is applied in the ROS convention
            - :obj:`"world"` - forward axis: +X - up axis +Z - Offset is applied in the World Frame convention

-        See :meth:`omni.isaac.lab.sensors.camera.utils.convert_orientation_convention` for more details
+        See :meth:`omni.isaac.lab.utils.math.convert_camera_frame_orientation_convention` for more details
        on the conventions.

        Args:
@@ -318,7 +324,7 @@
            orientations = torch.from_numpy(orientations).to(device=self._device)
        elif not isinstance(orientations, torch.Tensor):
            orientations = torch.tensor(orientations, device=self._device)
-        orientations = convert_orientation_convention(orientations, origin=convention, target="opengl")
+        orientations = convert_camera_frame_orientation_convention(orientations, origin=convention, target="opengl")
        # set the pose
        self._view.set_world_poses(positions, orientations, env_ids)

@@ -339,8 +345,10 @@ def set_world_poses_from_view(
        # resolve env_ids
        if env_ids is None:
            env_ids = self._ALL_INDICES
+        # get up axis of current stage
+        up_axis = stage_utils.get_stage_up_axis()
        # set camera poses using the view
-        orientations = quat_from_matrix(create_rotation_matrix_from_view(eyes, targets, device=self._device))
+        orientations = quat_from_matrix(create_rotation_matrix_from_view(eyes, targets, up_axis, device=self._device))
        self._view.set_world_poses(eyes, orientations, env_ids)

"""
@@ -596,7 +604,9 @@ def _update_poses(self, env_ids: Sequence[int]):
        # get the poses from the view
        poses, quat = self._view.get_world_poses(env_ids)
        self._data.pos_w[env_ids] = poses
-        self._data.quat_w_world[env_ids] = convert_orientation_convention(quat, origin="opengl", target="world")
+        self._data.quat_w_world[env_ids] = convert_camera_frame_orientation_convention(
+            quat, origin="opengl", target="world"
+        )

    def _create_annotator_data(self):
        """Create the buffers to store the annotator data.
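
With this change, `set_world_poses_from_view` derives the up axis from the current stage instead of assuming one. A usage sketch (assuming `camera` is an initialized `Camera` and the tensors live on its device):

```python
import torch

# aim each camera at the world origin from a fixed vantage point; the view
# rotation is now built with the stage's up axis (Y-up or Z-up)
eyes = torch.tensor([[2.0, 2.0, 2.0]])
targets = torch.zeros_like(eyes)
camera.set_world_poses_from_view(eyes, targets)
```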
@@ -8,7 +8,7 @@
from tensordict import TensorDict
from typing import Any

-from .utils import convert_orientation_convention
+from omni.isaac.lab.utils.math import convert_camera_frame_orientation_convention


@dataclass
@@ -77,7 +77,7 @@ def quat_w_ros(self) -> torch.Tensor:
        Shape is (N, 4) where N is the number of sensors.
        """
-        return convert_orientation_convention(self.quat_w_world, origin="world", target="ros")
+        return convert_camera_frame_orientation_convention(self.quat_w_world, origin="world", target="ros")

    @property
    def quat_w_opengl(self) -> torch.Tensor:
@@ -89,4 +89,4 @@ def quat_w_opengl(self) -> torch.Tensor:
        Shape is (N, 4) where N is the number of sensors.
        """
-        return convert_orientation_convention(self.quat_w_world, origin="world", target="opengl")
+        return convert_camera_frame_orientation_convention(self.quat_w_world, origin="world", target="opengl")
@@ -8,16 +8,11 @@
# needed to import for allowing type-hinting: torch.device | str | None
from __future__ import annotations

-import math
import numpy as np
import torch
-import torch.nn.functional as F
from collections.abc import Sequence
-from typing import Literal

-import omni.isaac.core.utils.stage as stage_utils
import warp as wp
-from pxr import UsdGeom

import omni.isaac.lab.utils.math as math_utils
from omni.isaac.lab.utils.array import TensorData, convert_to_torch
@@ -262,143 +257,6 @@ def create_pointcloud_from_rgbd(
    return points_xyz, points_rgb


-def convert_orientation_convention(
-    orientation: torch.Tensor,
-    origin: Literal["opengl", "ros", "world"] = "opengl",
-    target: Literal["opengl", "ros", "world"] = "ros",
-) -> torch.Tensor:
-    r"""Converts a quaternion representing a rotation from one convention to another.
-
-    In USD, the camera follows the ``"opengl"`` convention. Thus, it is always in **Y up** convention.
-    This means that the camera is looking down the -Z axis with the +Y axis pointing up, and the +X axis pointing right.
-    However, in ROS, the camera is looking down the +Z axis with the +Y axis pointing down, and the +X axis pointing right.
-    Thus, the camera needs to be rotated by :math:`180^{\circ}` around the X axis to follow the ROS convention.
-
-    .. math::
-
-        T_{ROS} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} T_{USD}
-
-    On the other hand, the typical world coordinate system is with +X pointing forward, +Y pointing left,
-    and +Z pointing up. The camera can also be set in this convention by rotating the camera by :math:`90^{\circ}`
-    around the X axis and :math:`-90^{\circ}` around the Y axis.
-
-    .. math::
-
-        T_{WORLD} = \begin{bmatrix} 0 & 0 & -1 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} T_{USD}
-
-    Thus, based on their application, cameras follow different conventions for their orientation. This function
-    converts a quaternion from one convention to another.
-
-    Possible conventions are:
-
-    - :obj:`"opengl"` - forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention
-    - :obj:`"ros"` - forward axis: +Z - up axis -Y - Offset is applied in the ROS convention
-    - :obj:`"world"` - forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
-
-    Args:
-        orientation: Quaternion of form ``(w, x, y, z)`` with shape (..., 4) in the source convention.
-        origin: Convention to convert from. Defaults to "opengl".
-        target: Convention to convert to. Defaults to "ros".
-
-    Returns:
-        Quaternion of form ``(w, x, y, z)`` with shape (..., 4) in the target convention.
-    """
-    if target == origin:
-        return orientation.clone()
-
-    # -- unify input type
-    if origin == "ros":
-        # convert from ros to opengl convention
-        rotm = math_utils.matrix_from_quat(orientation)
-        rotm[:, :, 2] = -rotm[:, :, 2]
-        rotm[:, :, 1] = -rotm[:, :, 1]
-        # convert to opengl convention
-        quat_gl = math_utils.quat_from_matrix(rotm)
-    elif origin == "world":
-        # convert from world (x forward and z up) to opengl convention
-        rotm = math_utils.matrix_from_quat(orientation)
-        rotm = torch.matmul(
-            rotm,
-            math_utils.matrix_from_euler(
-                torch.tensor([math.pi / 2, -math.pi / 2, 0], device=orientation.device), "XYZ"
-            ),
-        )
-        # convert to isaac-sim convention
-        quat_gl = math_utils.quat_from_matrix(rotm)
-    else:
-        quat_gl = orientation
-
-    # -- convert to target convention
-    if target == "ros":
-        # convert from opengl to ros convention
-        rotm = math_utils.matrix_from_quat(quat_gl)
-        rotm[:, :, 2] = -rotm[:, :, 2]
-        rotm[:, :, 1] = -rotm[:, :, 1]
-        return math_utils.quat_from_matrix(rotm)
-    elif target == "world":
-        # convert from opengl to world (x forward and z up) convention
-        rotm = math_utils.matrix_from_quat(quat_gl)
-        rotm = torch.matmul(
-            rotm,
-            math_utils.matrix_from_euler(
-                torch.tensor([math.pi / 2, -math.pi / 2, 0], device=orientation.device), "XYZ"
-            ).T,
-        )
-        return math_utils.quat_from_matrix(rotm)
-    else:
-        return quat_gl.clone()


-# @torch.jit.script
-def create_rotation_matrix_from_view(
-    eyes: torch.Tensor,
-    targets: torch.Tensor,
-    device: str = "cpu",
-) -> torch.Tensor:
-    """Compute a rotation matrix from camera positions and look-at targets.
-
-    This function takes a vector ``eyes``, which specifies the location of the camera in
-    world coordinates, and a vector ``targets``, which indicates the position of the object.
-    The output is a rotation matrix representing the transformation
-    from world coordinates -> view coordinates.
-
-    The inputs ``eyes`` and ``targets`` can each be a
-
-    - 3 element tuple/list
-    - torch tensor of shape (1, 3)
-    - torch tensor of shape (N, 3)
-
-    The vectors are broadcast against each other so they all have shape (N, 3).
-
-    Args:
-        eyes: Position of the camera in world coordinates.
-        targets: Position of the object in world coordinates.
-
-    Returns:
-        R: (N, 3, 3) batched rotation matrices.
-
-    Reference:
-        Based on PyTorch3D (https://github.com/facebookresearch/pytorch3d/blob/eaf0709d6af0025fe94d1ee7cec454bc3054826a/pytorch3d/renderer/cameras.py#L1635-L1685)
-    """
-    up_axis_token = stage_utils.get_stage_up_axis()
-    if up_axis_token == UsdGeom.Tokens.y:
-        up_axis = torch.tensor((0, 1, 0), device=device, dtype=torch.float32).repeat(eyes.shape[0], 1)
-    elif up_axis_token == UsdGeom.Tokens.z:
-        up_axis = torch.tensor((0, 0, 1), device=device, dtype=torch.float32).repeat(eyes.shape[0], 1)
-    else:
-        raise ValueError(f"Invalid up axis: {up_axis_token}")
-
-    # get rotation matrix in opengl format (-Z forward, +Y up)
-    z_axis = -F.normalize(targets - eyes, eps=1e-5)
-    x_axis = F.normalize(torch.cross(up_axis, z_axis, dim=1), eps=1e-5)
-    y_axis = F.normalize(torch.cross(z_axis, x_axis, dim=1), eps=1e-5)
-    is_close = torch.isclose(x_axis, torch.tensor(0.0), atol=5e-3).all(dim=1, keepdim=True)
-    if is_close.any():
-        replacement = F.normalize(torch.cross(y_axis, z_axis, dim=1), eps=1e-5)
-        x_axis = torch.where(is_close, replacement, x_axis)
-    R = torch.cat((x_axis[:, None, :], y_axis[:, None, :], z_axis[:, None, :]), dim=1)
-    return R.transpose(1, 2)


def save_images_to_file(images: torch.Tensor, file_path: str):
"""Save images to file.
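
Both helpers remain available from their new location; a small sketch of the renamed conversion (using an identity quaternion as input):

```python
import torch

from omni.isaac.lab.utils.math import convert_camera_frame_orientation_convention

# identity orientation expressed in the "world" convention, (w, x, y, z)
quat_world = torch.tensor([[1.0, 0.0, 0.0, 0.0]])
# re-express it in the ROS camera convention
quat_ros = convert_camera_frame_orientation_convention(quat_world, origin="world", target="ros")
print(quat_ros.shape)  # torch.Size([1, 4])
```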