diff --git a/.gitmodules b/.gitmodules
deleted file mode 100644
index 1350b67..0000000
--- a/.gitmodules
+++ /dev/null
@@ -1,3 +0,0 @@
-[submodule "deep-contact-estimator"]
- path = morpho_symm/datasets/contact_dataset/deep-contact-estimator
- url = https://github.com/UMich-CURLY/deep-contact-estimator/
diff --git a/README.md b/README.md
index e683888..6acaa01 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,113 @@
-# Morphological Symmetries (MorphoSymm) in Locomoting Dynamical Systems
+# Morphological Symmetries - _MorphoSymm_
+The Morphological Symmetries (MorphoSymm) repository offers a comprehensive toolkit for the detection, analysis, and exploitation of morphological symmetries in the modeling and control of robotic systems. These symmetries are a common characteristic of both biological and artificial locomotion systems, including legged, swimming, and flying animals, robots, and virtual characters. Essentially, a morphological symmetry is a transformation that connects two equivalent states under the governing dynamics, meaning two states that will evolve identically when subjected to equivalent applied forces. For example, consider these two motion trajectories from the mini-cheetah quadruped robot, which are related by a symmetry transformation.
-Welcome to the Morphological Symmetries (MorphoSymm) repository! Here, you will find a comprehensive set of tools for the identification, study, and exploitation of morphological symmetries in locomoting dynamical systems. These symmetries are commonly found in a wide range of systems, including legged, swimming, and flying animals, robotic systems, and animated characters. As such, this repository aims to provide valuable resources for researchers and practitioners in the fields of Robotics, Computer Graphics, and Computational Biology.
+
+
+
-This repository holds the code for the paper: [On discrete symmetries of robotic systems: A data-driven and group-theoretic analysis](https://scholar.google.it/scholar?q=on+discrete+symmetries+of+robotic+systems:+a+data-driven+and+group-theoretic+analysis&hl=en&as_sdt=0&as_vis=1&oi=scholart).
-Accepted to *Robotics Science and Systems 2023 (RSS 2023)*. For reproducing the experiments of the paper, please see the master branch.
+These symmetries carry significant implications. Notably, they offer a valuable geometric bias, since by modeling and controlling the dynamics of one state, we can effectively identify and control the dynamics of all its symmetric counterparts (see [our paper](https://danfoa.github.io/MorphoSymm/) for details).
-#### Contents:
+
+
+
+
+
+## Group and representation theory
+
+To exploit these symmetries, we employ the language of group and representation theory, enabling us to represent these transformations via linear algebra. The set of morphological symmetries of a robotic system forms the system's [discrete (or finite) symmetry group](https://en.wikipedia.org/wiki/Discrete_group). For example, the mini-cheetah robot's symmetry group $\mathbb{G}$ contains $8$ distinct transformations, generated by composing the generators $\{g_s, g_t, g_f\}$ together with the identity $e$:
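The claim that three commuting reflections generate $8$ distinct group elements can be reproduced with a toy diagonal representation. This is a minimal numpy sketch; the matrices below are illustrative stand-ins, not the robot's actual representations:

```python
from itertools import product

import numpy as np

# Toy faithful representation of C2 x C2 x C2: each generator flips one axis sign.
g_s = np.diag([-1.0, 1.0, 1.0])
g_t = np.diag([1.0, -1.0, 1.0])
g_f = np.diag([1.0, 1.0, -1.0])

# Generate the full group by composing the generators in all combinations
# (exponent 0 yields the identity e).
elements = []
for a, b, c in product([0, 1], repeat=3):
    g = (np.linalg.matrix_power(g_s, a)
         @ np.linalg.matrix_power(g_t, b)
         @ np.linalg.matrix_power(g_f, c))
    elements.append(g)
```

All $2^3 = 8$ products are distinct, which is exactly the order of the mini-cheetah's symmetry group.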
+
+
+
+
+
+
+
+
+
+
+
+
+As depicted above, each symmetry transformation, represented by a group element $g \in \mathbb{G}$, influences the system's configuration and any proprioceptive and exteroceptive observations associated with the system's dynamic evolution. These observations include aspects such as joint torques, center of mass momentum, ground reaction forces, and terrain elevation/orientation. To numerically transform these observations, the appropriate group representation is required. This representation maps each symmetry transformation to an invertible matrix in the space in which the observation resides. For example, to transform the robot's joint space generalized velocity coordinates $\mathbf{v}_ {js} \in \mathcal{T}_ {\mathbf{q}}\mathcal{Q}_ {js} \subset \mathbb{R}^{12}$, the group representation $\rho_ {\mathcal{T}_ {\mathbf{q}}\mathcal{Q}_ {js}}: \mathbb{G} \rightarrow \mathbb{GL}(\mathbb{R}^{12})$ is needed. This defines the symmetry transformations in the $12$-dimensional space in which $\mathbf{v}_ {js}$ evolves. In MorphoSymm, when you load a robotic system, you automatically receive its symmetry group and all the relevant group representations required to transform observations used in robotics. For our example, this appears as follows:
+
+```python
+from morpho_symm.utils.robot_utils import load_symmetric_system
+
+robot, G = load_symmetric_system(robot_name='mini_cheetah')
+# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ Qjs, dq_js ∈ TqQjs
+_, v_js = robot.get_joint_space_state()
+
+# Get the group representation on the space of joint space generalized velocity coordinates
+rep_TqQ_js = G.representations['TqQ_js']
+for g in G.elements:
+ # Transform the observations
+ g_v_js = rep_TqQ_js(g) @ v_js # rep_TqQ_js(g) ∈ R^12x12
+```
+
+For more details, follow the [getting started guide](#getting-started).
+
+## Uses of morphological symmetries in robotics
+MorphoSymm supports three main use cases of morphological symmetries. For details and tutorials, see the [getting started guide](#getting-started).
+
+
+
+
+
Data Augmentation
+
+
+
+Any data recording of proprioceptive and exteroceptive observations can be augmented using the appropriate group representations. This means that if you collect data from a symmetric robotic system, you can use the system's symmetries to analytically augment/multiply the recorded data.
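As a minimal sketch of this idea, a recorded batch of joint space velocities can be doubled by appending the symmetric counterpart of every sample. The reflection representation `rep_TqQ_gs` below is a hypothetical stand-in for a toy 4-DoF system (left/right joint pairs swapped), not one loaded from MorphoSymm:

```python
import numpy as np

# Hypothetical joint space velocity representation of the reflection g_s for a
# toy 4-DoF system: it swaps the left/right joint pairs.
rep_TqQ_gs = np.array([
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

def augment(batch, rep_g):
    """Append the symmetric counterpart g·x of every sample x in the batch."""
    return np.concatenate([batch, batch @ rep_g.T], axis=0)

batch = np.random.rand(16, 4)           # 16 recorded v_js samples
augmented = augment(batch, rep_TqQ_gs)  # 32 samples: originals + their reflections
```

For a group with $|\mathbb{G}|$ elements, applying every representation matrix multiplies the dataset size by $|\mathbb{G}|$.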
+
+
+
+
+
+
+
Equivariant/Invariant Function Approximation
+
+
+
+Given the geometric nature of the data processed in robotics, most applications of machine learning to supervised, unsupervised, and reinforcement learning in robotics rely on the approximation of equivariant/invariant functions. To approximate these functions with deep learning, we should use equivariant/invariant neural networks that respect the symmetries of the learning problem. These networks feature enhanced sample efficiency and generalization, as demonstrated in the results of our paper. This repository relies on PyTorch and the `escnn` library to offer equivariant and invariant neural network architectures.
+
+
+
+
+
+
+
Dynamics Harmonics Analysis
+
+
+
+Through the application of abstract harmonic analysis, the symmetries of a robotic system enable us to decompose any motion into a superposition of simpler symmetric synergistic motions. Each of these motions evolves in a lower-dimensional space, known as an isotypic subspace. By projecting entire motion trajectories into these spaces, we can characterize each motion as a superposition of lower-dimensional and synergistic motions. For further details on the use of abstract harmonic analysis in robotics, see our paper *Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modeling*.
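For the simplest case, the reflection group $\mathbb{C}_2 = \{e, g\}$, the isotypic subspaces are just the symmetric and antisymmetric components of the observation space, and the projection of a trajectory can be sketched in plain numpy. The representation `rho_g` below is a hypothetical stand-in:

```python
import numpy as np

# For C2 = {e, g} acting on R^n through an orthogonal representation rho, the
# isotypic projectors are P± = (I ± rho(g)) / 2 (symmetric / antisymmetric part).
rho_g = np.diag([1.0, -1.0, -1.0, 1.0])  # hypothetical reflection action on R^4
P_sym = 0.5 * (np.eye(4) + rho_g)
P_anti = 0.5 * (np.eye(4) - rho_g)

traj = np.random.rand(100, 4)   # a recorded 100-step trajectory of observations
traj_sym = traj @ P_sym.T       # component evolving in the invariant subspace
traj_anti = traj @ P_anti.T     # component evolving in the anti-invariant subspace
```

The two projected trajectories are each lower-dimensional (they live in complementary subspaces) and superpose exactly back into the original motion.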
+
+
+
+
+
+
+
+
+
+
+
+
+## Contents:
- [Installation](#installation)
- [Library of symmetric dynamical systems](#library-of-symmetric-dynamical-systems)
-- [Tutorial](#tutorial)
+- [Getting Started](#getting-started)
- [Loading symmetric dynamical systems](#loading-symmetric-dynamical-systems)
- - [Exploiting Morphological Symmetries](#exploiting-morphological-symmetries)
- - [Equivariant Neural Networks](#equivariant-neural-networks)
-- [Citation](#citation)
+ - [Available group representations](#available-group-representations)
+ - [Data-Augmentation](#data-augmentation)
+ - [Equivariant/Invariant Neural Networks](#equivariantinvariant-neural-networks)
+- [How to cite us?](#how-to-cite-us)
- [Contributing](#contributing)
## Installation:
@@ -23,163 +117,311 @@ git clone https://github.com/Danfoa/MorphoSymm.git
cd MorphoSymm
pip install -e .
```
+
## Library of symmetric dynamical systems
The following is a non-exhaustive and expanding list of dynamical systems with Discrete Morphological Symmetries. Each example can be
reproduced in a 3D interactive environment by running:
```python
python morpho_symm/robot_symmetry_visualization.py robot=<robot_name> gui=True
```
-This script functions as an introductory tutorial showing how we define the representations of Discrete Morphological Symmetries in order to perform symmetry transformations on the robot state, and proprioceptive and exteroceptive measurements.
-### $\mathcal{G}=\mathcal{C}_2$: Reflection Symmetry
-| Cassie | Atlas | Bolt | Baxter |
-|:----------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------:|:----------------------------------------------------------------------------:|
-| ![cassie](docs/static/animations/cassie-C2-symmetries_anim_static.gif) | ![atlas](https://user-images.githubusercontent.com/8356912/200183197-94242c57-bd9d-41cb-8a0b-509dceef5cb9.gif) | ![bolt](docs/static/animations/bolt-C2-symmetries_anim_static.gif) | ![baxter](docs/static/animations/baxter-C2-symmetries_anim_static.gif) |
-| [Solo](https://open-dynamic-robot-initiative.github.io/) | **A1** | **HyQ** | Mini-Cheetah |
-| ![solo](docs/static/animations/solo-C2-symmetries_anim_static.gif) | ![a1](docs/static/animations/a1-C2-symmetries_anim_static.gif) | ![hyq](docs/static/animations/hyq-C2-symmetries_anim_static.gif) | ![mini-cheetah](docs/static/animations/mini_cheetah-C2-symmetries_anim_static.gif) |
-| **Anymal-C** | **Anymal-B** | **B1** | Go1 |
-| ![anymal_c](docs/static/animations/anymal_c-C2-symmetries_anim_static.gif) | ![anymal_b](docs/static/animations/anymal_b-C2-symmetries_anim_static.gif) | ![b1](docs/static/animations/b1-C2-symmetries_anim_static.gif) | ![go1](docs/static/animations/go1-C2-symmetries_anim_static.gif) |
-| **UR-3** | **UR5** | **UR10** | KUKA-iiwa |
-| ![ur3](docs/static/animations/ur3-C2-symmetries_anim_static.gif) | ![ur5](docs/static/animations/ur5-C2-symmetries_anim_static.gif) | ![ur10](docs/static/animations/ur10-C2-symmetries_anim_static.gif) | ![iiwa](docs/static/animations/iiwa-C2-symmetries_anim_static.gif) |
-
-### $\mathcal{G}=\mathcal{C}_n$: Symmetric Systems with Cyclic Group Symmetries
-| [Trifinger](https://sites.google.com/view/trifinger/home-page)-C3 |
-|:-------------------------------------------------------------------------------:|
-| ![trifinger-edu](docs/static/animations/trifinger_edu-C3-symmetries_anim_static.gif) |
-
-### $\mathcal{G}=\mathcal{K}_4$: Klein-Four Symmetry
-| [Solo](https://open-dynamic-robot-initiative.github.io/)| HyQ | Mini-Cheetah | Anymal-C | Anymal-B |
-|:---------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:------------------------------------------------------------------------:|
-| ![Solo-K4](docs/static/animations/solo-Klein4-symmetries_anim_static.gif) | ![hyq](docs/static/animations/hyq-Klein4-symmetries_anim_static.gif) | ![Mini-Cheetah-K4](docs/static/animations/mini_cheetah-Klein4-symmetries_anim_static.gif) | ![anymal_c](docs/static/animations/anymal_c-Klein4-symmetries_anim_static.gif) | ![anymal_c](docs/static/animations/anymal_b-Klein4-symmetries_anim_static.gif) |
-
-### $\mathcal{G}=\mathcal{C}_2\times\mathcal{C}_2\times\mathcal{C}_2$: Regular cube symmetry
-| [Solo](https://open-dynamic-robot-initiative.github.io/) | Mini-Cheetah |
-|:--------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|
-| ![solo-c2xc2xc2](docs/static/animations/solo-C2xC2xC2-symmetries_anim_static.gif) | ![bolt](docs/static/animations/mini_cheetah-C2xC2xC2-symmetries_anim_static.gif) |
+This script functions as an introductory tutorial showing how we define the representations of Discrete Morphological Symmetries in order to perform symmetry transformations on the robot state, and proprioceptive and exteroceptive observations.
+### $\mathbb{G}=\mathbb{C}_2$: Reflection Symmetry
+
+
+
+
Cassie
+
Atlas
+
KUKA-iiwa
+
Mini-Cheetah
+
Bolt
+
Baxter
+
Solo
+
A1
+
HyQ
+
Anymal-C
+
Anymal-B
+
Go1
+
B1
+
UR-3
+
UR-5
+
UR-10
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+### $\mathbb{G}=\mathbb{C}_n$: Cyclic group symmetries
+
+
### Addition of new dynamical systems to the library.
-This repository aims at becoming a central tool in the exploitation of Morphological Symmetries in Robotics, Computer Graphics and Computational Biology.
-Therefore, here we summarize the present and future efforts to enlarge the library of dynamical systems used in each of these fields.
+If you want a new robot to be added to the library, please open an issue indicating where to find the URDF of the system.
+___________________________________________
+## Getting Started
-## Tutorial
### Loading symmetric dynamical systems
Each symmetric dynamical system has a configuration file in the folder `morpho_symm/cfg/supervised/robot`. To load one
of these systems, simply use the function `load_symmetric_system` as follows:
+
```python
from morpho_symm.utils.robot_utils import load_symmetric_system
-from hydra import compose, initialize
-
robot_name = 'solo' # or any of the robots in the library (see `/morpho_symm/cfg/robot`)
-initialize(config_path="morpho_symm/cfg/supervised/robot", version_base='1.3')
-robot_cfg = compose(config_name=f"{robot_name}.yaml")
-# Load robot instance and its symmetry group
-robot, G = load_symmetric_system(robot_cfg=robot_cfg)
+
+robot, G = load_symmetric_system(robot_name=robot_name)
```
The function returns:
- `robot` an instance of the class [`PinBulletWrapper`](https://github.com/Danfoa/MorphoSymm/blob/devel/morpho_symm/robots/PinBulletWrapper.py) (utility class bridging [`pybullet`](https://pybullet.org/wordpress/) and [`pinocchio`](https://github.com/stack-of-tasks/pinocchio)).
- `G`: the symmetry group of the system, an instance of [`escnn.group.Group`](https://quva-lab.github.io/escnn/api/escnn.group.html#group)
-#### Getting and resetting the state of the system
-
-The system state is defined as $`(\mathbf{q}, \mathbf{v}) \;|\; \mathbf{q} \in \mathrm{Q}, \; \mathbf{v} \in T_{q}\mathrm{Q}`$, being $\mathrm{Q}$ the space of generalized position coordinates, and $T_{q}\mathrm{Q}$ the space of generalized velocity coordinates. Recall from the [paper convention](https://arxiv.org/abs/2302.10433) that the state configuration can be separated into base configuration and joint-space configuration $`\mathrm{Q} := \mathrm{E}_d \times \mathrm{Q}_J`$, such that $`
-\mathbf{q} :=
-\begin{bsmallmatrix}
-\mathbf{X}_B \\ \mathbf{q}_{js}
-\end{bsmallmatrix}
-\begin{smallmatrix}
-\in \; \mathbb{E}_d \\ \in \; \mathrm{Q}_J
-\end{smallmatrix}
-`$. Where, $\mathrm{E}\_d$ is the Euclidean space in which the system evolves, and $`\mathrm{Q}_J`$ is the joint-space position coordinates. To access these quantities in code we do:
+
+### Getting and resetting the state of the system
+
+The system state is defined as $(\mathbf{q}, \mathbf{v}) \\; | \\; \mathbf{q} \in \mathcal{Q}, \mathbf{v} \in \mathcal{T}_ {q}\mathcal{Q}$, where $\mathcal{Q}$ is the space of generalized position coordinates and $\mathcal{T}_ {q}\mathcal{Q}$ is the space of generalized velocity coordinates. Recall from the [paper convention](https://arxiv.org/abs/2302.10433) that the state configuration can be separated into base configuration and joint space configuration $\mathcal{Q} := \mathbb{E}_ d \times \mathcal{Q}_ {js}$, where $\mathbb{E}_ d$ is the Euclidean space in which the system evolves and $\mathcal{Q}_ {js}$ is the space of joint space position coordinates. This enables us to express every system state as $\mathbf{q} := \[\mathbf{X}_ B, \mathbf{q}_ {js}\]^ T$, with $\mathbf{X}_ B \in \mathbb{E}_ d$ and $\mathbf{q}_ {js} \in \mathcal{Q}_ {js}$. To access these quantities in code we do:
```python
# Get the state of the system
q, v = robot.get_state() # q ∈ Q, v ∈ TqQ
# Get the robot's base configuration XB ∈ Ed as a homogenous transformation matrix.
XB = robot.get_base_configuration()
-# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ QJ, dq_js ∈ TqQJ
+# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ Qjs, dq_js ∈ TqQjs
q_js, v_js = robot.get_joint_space_state()
```
-### Exploiting Morphological Symmetries
-
- > _This section shows how to get the group representations required for doing *data augmentation* and for the construction of *equivariant neural networks*_.
+### Available group representations
-The system's symmetry group instance `G` contains the group representations required to transform most proprioceptive and exteroceptive measurements (e.g., joint positions/velocities/accelerations, joint forces/torques, contact locations & forces, linear and angular velocities, terrain heightmaps, depthmaps, etc). These are:
-
-
- - $`\rho_{\mathbb{E}_d}: \mathcal{G} \rightarrow \mathbb{E}(d)`$: Representation mapping symmetry actions to elements of the Euclidean group $\mathbb{E}(d)$. Essentially homogenous transformation matrices describing a rotation/reflection and translation of space (Euclidean isometry) in $d$ dimensions.
- - $`\rho_{\mathrm{Q}_J}: \mathcal{G} \rightarrow \mathcal{GL}(\mathrm{Q}_J)`$ and $`\rho_{T_q\mathrm{Q}_J}: \mathcal{G} \rightarrow \mathcal{GL}(T_q\mathrm{Q}_J)`$: Representations mapping symmetry actions to transformation matrices of joint-space position $`\mathrm{Q}_J`$ and velocity $`T_{q}\mathrm{Q}_J`$ coordinates.
- - $`\rho_{\mathrm{O}_d}: \mathcal{G} \rightarrow \mathcal{\mathrm{O}_d}`$: Representation mapping symmetry actions to elements of the Orthogonal group $\mathrm{O}(d)$. Essentially rotation and reflection matrices in $d$ dimensions.
- - $`\rho_{reg}: \mathcal{G} \rightarrow \mathbb{R}^{|\mathcal{G}|}`$: The group regular representation.
- - $`\hat{\rho}_{i}: \mathcal{G} \rightarrow \mathcal{GL}(|\hat{\rho}_{i}|)`$: Each of the group irreducible representations.
-
-In practice, products and additions of these representations are enough to obtain the representations of any proprioceptive and exteroceptive measurement. For instance, we can use these representations to transform elements of:
-
-- $\mathrm{E}\_d$: The Euclidean space (of $d$ dimensions) in which the system evolves.
+The system's symmetry group instance `G` contains the group representations required to transform most proprioceptive and exteroceptive observations (e.g., joint positions/velocities/accelerations, joint forces/torques, contact locations & forces, linear and angular velocities, terrain heightmaps, depthmaps, etc). These are:
+
+
+ - $\rho_ {\mathbb{E}_d}: \mathbb{G} \rightarrow \mathbb{E}_d$: Representation mapping symmetry actions to elements of the Euclidean group $\mathbb{E}_d$. Essentially, **homogeneous transformation matrices** describing a rotation/reflection and translation of space (Euclidean isometry) in $d$ dimensions.
+
+ - $\rho_ {\mathcal{Q}_ {js}}: \mathbb{G} \rightarrow \mathcal{GL}(\mathcal{Q}_ {js})$ and $\rho_ {\mathcal{T}_ {q}\mathcal{Q}_ {js}}: \mathbb{G} \rightarrow \mathcal{GL}(\mathcal{T}_ {q}\mathcal{Q}_ {js})$: Representations mapping symmetry actions to transformation matrices of joint space position $\mathcal{Q}_ {js}$ and velocity $T_{q}\mathcal{Q}_ {js}$ coordinates.
+
+ - $\rho_ {\mathbb{R}^d}: \mathbb{G} \rightarrow \mathcal{GL}(\mathbb{R}^d)$: Representation mapping symmetry actions to invertible transformations in $d$ dimensions. In practice, **rotation** and **reflection** matrices.
+
+ - $\rho_ {reg}: \mathbb{G} \rightarrow \mathbb{R}^{|\mathbb{G}|}$: The group regular representation.
+
+ - $\hat{\rho}_ {i}: \mathbb{G} \rightarrow \mathcal{GL}(|\hat{\rho}_ {i}|)$: Each of the group irreducible representations.
+
+In practice, direct products and direct sums (block-diagonal stacking) of these representations are enough to obtain the representation of any proprioceptive or exteroceptive observation. That is, these representations are sufficient to perform [data-augmentation](#data-augmentation) and to construct [equivariant/invariant neural networks](#equivariantinvariant-neural-networks).
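As an illustration, the representation acting on a stacked observation is the block-diagonal direct sum of the representations of its parts. A minimal numpy sketch, with hypothetical representation matrices for a single group element $g$:

```python
import numpy as np

def direct_sum(*reps):
    """Block-diagonal (direct sum) stacking of representation matrices."""
    n = sum(r.shape[0] for r in reps)
    out = np.zeros((n, n))
    i = 0
    for r in reps:
        k = r.shape[0]
        out[i:i + k, i:i + k] = r
        i += k
    return out

# Hypothetical matrices rep_Qjs(g) and rep_Rd(g) for one group element g:
rep_Qjs_g = np.array([[0.0, 1.0], [1.0, 0.0]])  # swaps two symmetric joints
rep_Rd_g = np.diag([1.0, -1.0, 1.0])            # reflects the y axis

# Representation acting on the stacked observation [q_js, f] ∈ R^5:
rep_obs_g = direct_sum(rep_Qjs_g, rep_Rd_g)
```

Each block of `rep_obs_g` transforms its own part of the stacked vector independently; `escnn` provides equivalent utilities for composing representations, so in practice you rarely build these matrices by hand.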
+
+### Data-Augmentation
+
+Here, we provide some basic examples of augmenting observations frequently used in robotics, such as the configuration of a rigid body, joint space positions, velocities, and accelerations, contact forces, joint torques, etc. To gain intuition, we focus on the simplest symmetry group, the reflection group $\mathbb{C}_ 2 = \\{e, g_s\\}$, characteristic of the symmetries of a bipedal system:
+
+
+
+
+
+
+
+
+
+
+
+Any observation from this robot has a symmetric equivalent observation. In this tutorial we will show how to obtain these symmetric observations. You can also check the script [robot_symmetry_visualization.py](https://github.com/Danfoa/MorphoSymm/blob/devel/morpho_symm/robot_symmetry_visualization.py), where we use data-augmentation to generate the animation displayed above. This script works for any robot in the library.
+
+#### Observations evolving in the Euclidean group of $d$ dimensions $\mathbb{E}_d$.
- The representation $`\rho_{\mathbb{E}_d}`$ can be used to transform:
- - The system base configuration $\mathbf{X}\_B$. If you want to obtain the set of symmetric base configurations $`
- \{{g \cdot \mathbf{X}_B:= \rho_{\mathbb{E}_d}(g) \; \mathbf{X}_B \; \rho_{\mathbb{E}_d}(g)^{-1} \;|\; \forall\; g \in \mathcal{G}}\}`$ [(1)](https://arxiv.org/abs/2302.1043), you can do it with:
+The homogeneous transformation matrix describing the configuration of any rigid body, including the system's base configuration $\mathbf{X}_ B$, is an observation evolving in $\mathbb{E}_ d$. To obtain the set of symmetric base configurations, i.e. the observation's group orbit $\mathbb{G} \cdot \mathbf{X}_ B = \\{g \cdot \mathbf{X}_ B := \rho_ {\mathbb{E}_ d}(g) \\; \mathbf{X}_ B \\; \rho_ {\mathbb{E}_ d}(g)^{-1} | \forall \\; g \in \mathbb{G}\\}$ [(1)](https://danfoa.github.io/MorphoSymm/), you can do the following:
```python
- rep_Ed = G.representations['Ed'] # rep_Ed(g) is a homogenous transformation matrix ∈ R^(d+1)x(d+1)
- # The orbit of the base configuration XB is a map from group elements g ∈ G to base configurations g·XB ∈ Ed
- orbit_X_B = {g: rep_Ed(g) @ XB @ rep_Ed(g).T for g in G.elements()}
+rep_Ed = G.representations['Ed'] # rep_Ed(g) ∈ R^(d+1)x(d+1) is a homogenous transformation matrix
+# The orbit of the base configuration XB is a map from group elements g ∈ G to base configurations g·XB ∈ Ed
+orbit_X_B = {g: rep_Ed(g) @ XB @ np.linalg.inv(rep_Ed(g)) for g in G.elements}
```
--
- - Points in $\mathbb{R}^d$. These can represent contact locations, object/body positions, etc. To obtain the set of symmetric points you can do it with:
+
+Another example of an observation transformed by $\rho_ {\mathbb{E}_ d}$ are **points** in $\mathbb{R}^d$. These can represent contact locations, object/body positions, etc. To obtain the point orbit, $\mathbb{G} \cdot \mathbf{r} = \\{g \cdot \mathbf{r} := \rho_ {\mathbb{E}_ d}(g) \\; \mathbf{r} | \forall \\; g \in \mathbb{G} \\}$, you can do:
```python
- r = np.random.rand(3) # Example point in Ed, assuming d=3.
- r_hom = np.concatenate((r, np.ones(1))) # Use homogenous coordinates to represent a point in Ed
- # The orbit of the point is a map from group elements g ∈ G to the set of symmetric points g·r ∈ R^d
- orbit_r = {g: (rep_Ed(g) @ r_hom)[:3] for g in G.elements}
+r = np.random.rand(3) # Example point in Ed, assuming d=3.
+r_hom = np.concatenate((r, np.ones(1))) # Use homogenous coordinates to represent a point in Ed
+# The orbit of the point is a map from group elements g ∈ G to the set of symmetric points g·r ∈ R^d
+orbit_r = {g: (rep_Ed(g) @ r_hom)[:3] for g in G.elements}
```
-- $`\mathrm{Q}_J`$ and $`T_{q}\mathrm{Q}_J`$: The spaces of joint-space position $`\mathrm{Q}_J`$ and velocity $`T_{q}\mathrm{Q}_J`$ generalized coordinates.
+#### Observations evolving in Joint Space (e.g. joint positions, velocity, torques).
- To transform joint-space states $`
- (\mathbf{q}_{js}, \mathbf{v}_{js}) \;|\; \mathbf{q}_{js} \in \mathrm{Q}_J, \;\mathbf{v}_{js} \in T_{q}\mathrm{Q}_J
- `$ we use the representations $`\rho_{\mathrm{Q}_J}`$ and $`\rho_{T_q\mathrm{Q}_J}`$. For instance, for a given joint-space configuration $` (\mathbf{q}_{js}, \mathbf{v}_{js})`$, the set of symmetric joint-space configurations (orbit) is given by $`
- \{
- (\rho_{\mathrm{Q}_J}(g) \; \mathbf{q}_{js}, \;\rho_{T_q\mathrm{Q}_J}(g) \;\mathbf{v}_{js}) \; | \; \forall \; g \in \mathcal{G}
- \}
- `$. Equivalently, in code we can do:
+Joint space observations typically evolve in the spaces of generalized position $\mathcal{Q}_ {js}$ and velocity $\mathcal{T}_ {q}\mathcal{Q}_ {js}$ coordinates. These include joint positions, velocities, accelerations, and generalized forces/torques. In this example we transform the joint space state, composed of the joint position and velocity coordinates $(\mathbf{q}_ {js}, \mathbf{v}_ {js}) \\; | \\; \mathbf{q}_ {js} \in \mathcal{Q}_ {js}, \mathbf{v}_ {js} \in \mathcal{T}_ {q}\mathcal{Q}_ {js}$, using the representations $\rho_ {\mathcal{Q}_ {js}}$ and $\rho_ {\mathcal{T}_ {q}\mathcal{Q}_ {js}}$. To obtain the orbit of the joint space state in code you can do:
```python
- rep_QJ = G.representations['Q_js']
- rep_TqJ = G.representations['TqQ_js']
- # Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ QJ, dq_js ∈ TqQJ
- q_js, v_js = robot.get_joint_space_state()
- # The joint-space state orbit is a map from group elements g ∈ G to joint-space states (g·q_js, g·v_js)
- orbit_js_state = {g: (rep_QJ(g) @ q_js, rep_TqJ(g) @ v_js) for g in G.elements}
+rep_Qjs = G.representations['Q_js']
+rep_TqQjs = G.representations['TqQ_js']
+# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ Qjs, dq_js ∈ TqQjs
+q_js, v_js = robot.get_joint_space_state()
+# The joint space state orbit is a map from group elements g ∈ G to joint space states (g·q_js, g·v_js)
+orbit_js_state = {g: (rep_Qjs(g) @ q_js, rep_TqQjs(g) @ v_js) for g in G.elements}
```
-- Vectors, Pseudo-vectors in $\mathrm{E}\_d$.
- Vector measurements can represent linear velocities, forces, linear accelerations, etc. While [pseudo-vectors](https://en.wikipedia.org/wiki/Pseudovector#:~:text=In%20physics%20and%20mathematics%2C%20a,of%20the%20space%20is%20changed) (or axial-vectors) can represent angular velocities, angular accelerations, etc. To obtain symmetric measurements we transform vectors with $`
- \rho_{\mathrm{O}_d}`$. Likewise, to obtain symmetric pseudo-vectors we use $`\rho_{\mathrm{O}_{d,pseudo}}(g) := |\rho_{\mathrm{O}_d}(g)| \rho_{\mathrm{O}_d}(g) \; | \; g \in \mathcal{G}`$. Equivalently, in code we can do:
+#### Observations evolving in $\mathbb{R}^d$ (e.g. vectors, pseudo-vectors).
+
+Observations evolving in $\mathbb{R}^d$ include contact and ground reaction forces, linear and angular velocities of rigid bodies, distance vectors to target positions/locations, etc. To transform vectors we use the representation $\rho_ {\mathbb{R}^d}$, while to transform [pseudo-vectors](https://en.wikipedia.org/wiki/Pseudovector) (or axial-vectors, which can represent angular velocities, angular accelerations, etc.) we use the representation $\rho_ {\mathbb{R}^d_ {pseudo}}$. To obtain the orbit of these observations you can do:
```python
- rep_Rd = G.representations['Rd'] # rep_Rd(g) is an orthogonal matrix ∈ R^dxd
- rep_Rd_pseudo = G.representations['Od_pseudo']
-
- v = np.random.rand(3) # Example vector in Ed, assuming d=3. E.g. linear velocity of the base frame.
- w = np.random.rand(3) # Example pseudo-vector in Ed, assuming d=3. E.g. angular velocity of the base frame.
- # The orbit of the vector is a map from group elements g ∈ G to the set of symmetric vectors g·v ∈ R^d
- orbit_v = {g: rep_Rd(g) @ v for g in G.elements}
- # The orbit of the pseudo-vector is a map from group elements g ∈ G to the set of symmetric pseudo-vectors g·w ∈ R^d
- orbit_w = {g: rep_Rd_pseudo(g) @ w for g in G.elements}
+rep_Rd = G.representations['Rd'] # rep_Rd(g) is an orthogonal matrix ∈ R^dxd
+rep_Rd_pseudo = G.representations['Rd_pseudo']
+
+v = np.random.rand(3) # Example vector in R^3, e.g. linear velocity of the base frame.
+w = np.random.rand(3) # Example pseudo-vector in R^3, e.g. angular velocity of the base frame.
+# The orbit of the vector is a map from group elements g ∈ G to the set of symmetric vectors g·v ∈ R^d
+orbit_v = {g: rep_Rd(g) @ v for g in G.elements}
+# The orbit of the pseudo-vector is a map from group elements g ∈ G to the set of symmetric pseudo-vectors g·w ∈ R^d
+orbit_w = {g: rep_Rd_pseudo(g) @ w for g in G.elements}
```
-> As an example you can check the script [robot_symmetry_visualization.py](https://github.com/Danfoa/MorphoSymm/blob/devel/morpho_symm/robot_symmetry_visualization.py), where we use the symmetry representations to generate the animations of all robot in the library (with the same script).
+### Equivariant/Invariant Neural Networks
+
+In robotics, any data-driven application of supervised, unsupervised, and reinforcement learning involves approximating a function. This function maps a set of proprioceptive/exteroceptive observations (e.g., system state, contact state, contact points/forces) to an output set of similar observations (e.g., control position/torque setpoints, linear/angular velocity vector, energy). If the robotic system possesses a symmetry group $\mathbb{G}$, these observations will also feature a symmetry group, making the target function either [$\mathbb{G}$-equivariant or $\mathbb{G}$-invariant](https://youtu.be/03MbWVlbefM?t=1393). In this brief tutorial we will show you how to construct a $\mathbb{G}$-equivariant and a $\mathbb{G}$-invariant neural network to approximate these functions from data.
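Numerically, $\mathbb{G}$-equivariance of a function $f$ means $f(\rho_{in}(g)\mathbf{x}) = \rho_{out}(g)f(\mathbf{x})$ for all $g \in \mathbb{G}$. A minimal numpy sketch, using toy $\mathbb{C}_2$ representations (hypothetical stand-ins, not loaded from MorphoSymm) and a linear map symmetrized over the group:

```python
import numpy as np

# Toy C2 representations on the input (R^4) and output (R^2) spaces.
rho_in_g = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2))  # swap two leg pairs
rho_out_g = np.array([[1.0, 0.0], [0.0, -1.0]])                    # flip one output axis

# A linear map W is G-equivariant iff W rho_in(g) = rho_out(g) W for all g.
# Build one by symmetrizing a random map over the group {e, g}:
W0 = np.random.rand(2, 4)
W = 0.5 * (W0 + np.linalg.inv(rho_out_g) @ W0 @ rho_in_g)

x = np.random.rand(4)
# Equivariance check: transforming the input then applying f equals
# applying f then transforming the output.
assert np.allclose(W @ (rho_in_g @ x), rho_out_g @ (W @ x))
```

Equivariant network layers generalize this idea: their weights are constrained (or parametrized) so the equivariance identity holds by construction, which is what the EMLP architecture below handles for you.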
-### Equivariant Neural Networks
- > _In this section we briefly show how to construct G-equivariant multi-layer perceptron E-MLP architectures. Future tutorials will cover G-equivariant CNNs and GNNs._
+#### $\mathbb{G}$-equivariant neural network
+Let's consider the example from [(1)](https://arxiv.org/abs/2302.10433) of approximating the Center of Mass (CoM) momentum from the joint space state observations. That is, we want to use a neural network to approximate the function $\mathbf{y} = f(\mathbf{x}) = f(\mathbf{q}_ {js}, \mathbf{v}_ {js})$ for a robot evolving in 3 dimensions, say the robot `solo`. We define $\mathbf{y} := [\mathbf{l}, \mathbf{k}]^T \in \mathbb{R}^6$ as the CoM momentum, with linear $\mathbf{l} \in \mathbb{R}^3$ and angular $\mathbf{k} \in \mathbb{R}^3$ components, and $\mathbf{x} = (\mathbf{q}_ {js}, \mathbf{v}_ {js}) \\; | \\; \mathbf{q}_ {js} \in \mathcal{Q}_ {js}, \mathbf{v}_ {js} \in \mathcal{T}_ {q}\mathcal{Q}_ {js}$ as the joint space position and velocity generalized coordinates.
-Let's consider the example from [(1)](https://arxiv.org/abs/2302.1043) of approximating the Center of Mass (CoM) momentum from the joint-space state measurements. That is we want to use a neural network to approximate the function $`
-\mathbf{y} = f(\mathbf{x}) = f(\mathbf{q}_{js}, \mathbf{v}_{js})
-`$ for a robot evolving in 3 dimensions, say the robot `solo`. Defining $`\mathbf{y} := [\mathbf{l}, \mathbf{k}]^T \subseteq \mathbb{R}^6`$ as the CoM momentum linear $`\mathbf{l} \in \mathbb{R}^3`$ and angular $`\mathbf{k} \in \mathbb{R}^3`$ momentum, and $`
-\mathbf{x} = (\mathbf{q}_{js}, \mathbf{v}_{js}) \;|\; \mathbf{q}_{js} \in \mathrm{Q}_J, \;\mathbf{v}_{js} \in T_{q}\mathrm{Q}_J
-`$ as the joint-space position and velocity generalized coordinates.
+
+
+
+
+
+
+
+
+
+
-For this example, you can build an equivariant MLP as follows:
+To construct a $\mathbb{G}$-equivariant architecture you need to:
+1. Identify the representations of the observations in your input and output spaces.
+2. Define the input and output [`FieldType`](https://quva-lab.github.io/escnn/api/escnn.nn.html#field-type) instances using the representations of each geometric object.
+3. Instantiate an [EMLP](https://github.com/Danfoa/MorphoSymm/blob/fc42a19654a9385b1037d1a2678aa95829a47a06/morpho_symm/nn/EMLP.py#L14) architecture with the input and output `FieldType` instances. This class handles the appropriate parametrization of the hidden-layer group representations and activation functions.
```python
import numpy as np
@@ -191,54 +433,41 @@ from morpho_symm.nn.EMLP import EMLP
from morpho_symm.utils.robot_utils import load_symmetric_system
# Load robot instance and its symmetry group
-initialize(config_path="morpho_symm/cfg/supervised/robot", version_base='1.3')
-robot_name = 'solo' # or any of the robots in the library (see `/morpho_symm/cfg/robot`)
-robot, G = load_symmetric_system(robot_name=robot_name)
+robot, G = load_symmetric_system(robot_name='solo-k4')
# We use ESCNN to handle the group/representation-theoretic concepts and for the construction of equivariant neural networks.
gspace = escnn.gspaces.no_base_space(G)
# Get the relevant group representations.
-rep_QJ = G.representations["Q_js"] # Used to transform joint-space position coordinates q_js ∈ Q_js
-rep_TqQJ = G.representations["TqQ_js"] # Used to transform joint-space velocity coordinates v_js ∈ TqQ_js
-rep_R3 = G.representations["Od"] # Used to transform the linear momentum l ∈ R3
-rep_R3_pseudo = G.representations["Od_pseudo"] # Used to transform the angular momentum k ∈ R3
-rep_E3 = G.representations["Ed"] # Homogenous transformation matrix
-
-x = np.random.rand(3) #
-g = G.sample()
-g_x = rep_EG(g) @ np.concatenate([x, np.ones(1)]) # Transform x with the representation of the Euclidean group
-
-e, gs, gr, gt = G.elements
-A = rep_R3(gs) # 3x3 matrix.
+rep_Qjs = G.representations["Q_js"] # Used to transform joint space position coordinates q_js ∈ Q_js
+rep_TqQjs = G.representations["TqQ_js"] # Used to transform joint space velocity coordinates v_js ∈ TqQ_js
+rep_R3 = G.representations["Rd"] # Used to transform the linear momentum l ∈ R3
+rep_R3_pseudo = G.representations["Rd_pseudo"] # Used to transform the angular momentum k ∈ R3
+
# Define the input and output FieldTypes using the representations of each geometric object.
-# Representation of x := [q, v, base_vel, base_ang_vel] ∈ Q_js x TqQ_js x R3 x R3 => ρ_X_js(g) := ρ_Q_js(g) ⊕ ρ_TqQ_js(g) | g ∈ G
-in_field_type = FieldType(gspace, [rep_QJ, rep_TqQJ, rep_R3, rep_R3_pseudo])
-# Representation of y := [l, k] ∈ R3 x R3 => ρ_Y_js(g) := ρ_O3(g) ⊕ ρ_O3pseudo(g) | g ∈ G
+# Representation of x := [q_js, v_js] ∈ Q_js x TqQ_js => ρ_X_js(g) := ρ_Q_js(g) ⊕ ρ_TqQ_js(g) | g ∈ G
+in_field_type = FieldType(gspace, [rep_Qjs, rep_TqQjs])
+# Representation of y := [l, k] ∈ R3 x R3 => ρ_Y_js(g) := ρ_R3(g) ⊕ ρ_R3pseudo(g) | g ∈ G
out_field_type = FieldType(gspace, [rep_R3, rep_R3_pseudo])
-out_field_type = FieldType(gspace, [G.trivial_representation])
# Construct the equivariant MLP
model = EMLP(in_type=in_field_type,
out_type=out_field_type,
- num_layers=5, # Input layer + 3 hidden layers + output/head layer
- num_hidden_units=128, # Number of hidden units per layer
+ num_layers=5, # Input layer + 3 hidden layers + output/head layer
+ num_hidden_units=128, # Number of hidden units per layer
             activation=escnn.nn.ReLU,  # Activations must be `EquivariantModule` instances
- with_bias=True # Use bias in the linear layers
+ with_bias=True # Use bias in the linear layers
)
print(f"Here is your equivariant MLP \n {model}")
```
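Under the hood, every linear layer of the model above is constrained to commute with the group action. The following NumPy sketch illustrates that constraint with small assumed permutation representations standing in for `rep_Qjs`/`rep_R3`; it shows the group-averaging projection trick for exposition, which is not literally how `escnn` parametrizes its layers:

```python
import numpy as np

# Project an arbitrary weight matrix W onto the space of G-equivariant linear
# maps: W_eq = 1/|G| * sum_g ρ_Y(g) W ρ_X(g)^-1.
# Toy C2 group acting by permutation on R^4 inputs and R^2 outputs (assumed reps).
rho_X = [np.eye(4), np.kron(np.array([[0., 1.], [1., 0.]]), np.eye(2))]
rho_Y = [np.eye(2), np.array([[0., 1.], [1., 0.]])]

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))
W_eq = sum(ry @ W @ np.linalg.inv(rx) for rx, ry in zip(rho_X, rho_Y)) / len(rho_X)

x = rng.standard_normal(4)
for rx, ry in zip(rho_X, rho_Y):
    # Equivariance constraint: W_eq ρ_X(g) x == ρ_Y(g) W_eq x, for all g ∈ G.
    assert np.allclose(W_eq @ (rx @ x), ry @ (W_eq @ x))
```

Composing such constrained linear maps with equivariant nonlinearities yields an end-to-end $\mathbb{G}$-equivariant network, which is exactly what the `EMLP` class assembles from the input/output `FieldType`s.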
-____________________________________
-In summary, to construct a G-equivariant architecture you need to:
-1. Identify the types of geometric objects in your input and output spaces.
-2. Identify the representations of each geometric object.
-3. Define the input and output `FieldType` instances using the representations of each geometric object.
## How to cite us?
-If you find this repository or the [paper](https://scholar.google.it/scholar?q=on+discrete+symmetries+of+robotic+systems:+a+data-driven+and+group-theoretic+analysis&hl=en&as_sdt=0&as_vis=1&oi=scholart) useful, please cite us as:
+If you find this repository or any of our papers relevant, please cite us as:
+
+### [On discrete symmetries of robotics systems: A group-theoretic and data-driven analysis](https://danfoa.github.io/MorphoSymm/)
```
@INPROCEEDINGS{Ordonez-Apraez-RSS-23,
- AUTHOR = {Daniel F Ordonez-Apraez AND Martin, Mario AND Antonio Agudo AND Francesc Moreno},
+ AUTHOR = {Daniel F Ordo{\~n}ez-Apraez AND Martin, Mario AND Antonio Agudo AND Francesc Moreno},
TITLE = {{On discrete symmetries of robotics systems: A group-theoretic and data-driven analysis}},
BOOKTITLE = {Proceedings of Robotics: Science and Systems},
YEAR = {2023},
@@ -247,6 +476,16 @@ If you find this repository or the [paper](https://scholar.google.it/scholar?q=o
DOI = {10.15607/RSS.2023.XIX.053}
}
```
+### [Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modeling](https://danfoa.github.io/DynamicsHarmonicsAnalysis/)
+
+```
+@article{ordonez2023dynamics,
+ title={Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modelling},
+ author={Ordo{\~n}ez-Apraez, Daniel and Kostic, Vladimir and Turrisi, Giulio and Novelli, Pietro and Mastalli, Carlos and Semini, Claudio and Pontil, Massimiliano},
+ journal={arXiv preprint arXiv:2312.07457},
+ year={2023}
+}
+```
## Contributing
@@ -260,26 +499,13 @@ It utilizes the [robot_descriptions.py](https://github.com/robot-descriptions/ro
third-party robotics packages. This package provides a convenient interface for loading URDF files into GUI
visualization tools, robot dynamics packages (such as Pinocchio), and physics simulators. To add a new robotic system to our library
1. The system URDF must be contributed to robot_descriptions.py.
-2. The corresponding robot configuration file should be added to `cfg/supervised/robot/` describing the system' symmetry group and joint-space representation generators, should also be added.
+2. A robot configuration file describing the system's symmetry group and joint space representation generators should be added to `cfg/supervised/robot/`.
In summary, we support:
- [x] Loading of URDF files in `pybullet` and `pinocchio` through `robot_descriptions.py`
- [x] Visualization of robot Discrete Morphological Symmetries in `pybullet`. Other physics simulators and visualization tools will come soon.
-- [x] Utility functions to define symmetry representations of proprioceptive and exteroceptive measurements.
-- [x] Construction of equivariant neural networks processing proprioceptive and exteroceptive measurements, using the `escnn` library.
-
-#### Computer Graphics
-
-The field of computer graphics does not widely employs URDF descriptions for the definition of dynamical systems. Although covering different description standards is within the goal of this repository,
-for now, our main objective is:
-
-- [ ] Integration of [STAR](https://star.is.tue.mpg.de/) model in the library, to automatically process sequence of data and obtain symmetric sequences.
- By defining the sagittal symmetry of all the model parameters. This will enable the use of DMSs in all applications of human motion prediction, shape reconstruction, etc.
- If you are interested in contributing to this effort, please contact us.
-- [ ] Integration of Motion Capture (MoCap) data formats. Including `.fbx`, `.bvh`, and `.c3d`.
-
-#### Computational Biology
+- [x] Utility functions to define symmetry representations of proprioceptive and exteroceptive observations.
+- [x] Construction of equivariant neural networks processing proprioceptive and exteroceptive observations, using the `escnn` library.
+- [x] Use abstract harmonic analysis to decompose motion trajectories and recorded proprioceptive and exteroceptive observations into isotypic components.
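The isotypic decomposition in the last point can be illustrated with a small assumed example: for a finite group, projectors built from the group representation split any recorded signal into components living in the isotypic subspaces. A hedged NumPy sketch (toy C2 leg-swap action, not the `morpho_symm` utilities):

```python
import numpy as np

# Toy C2 action on R^4 (swap the two "leg" coordinate blocks), and the two
# isotypic projectors P_± = (I ± ρ(g)) / 2 onto its trivial/sign components.
rho_g = np.kron(np.array([[0., 1.], [1., 0.]]), np.eye(2))
P_sym = 0.5 * (np.eye(4) + rho_g)    # projector onto trivial-irrep component
P_anti = 0.5 * (np.eye(4) - rho_g)   # projector onto sign-irrep component

traj = np.random.default_rng(1).standard_normal((100, 4))  # fake recording
traj_sym = traj @ P_sym.T
traj_anti = traj @ P_anti.T

assert np.allclose(traj_sym + traj_anti, traj)       # components sum to the signal
assert np.allclose(traj_sym @ rho_g.T, traj_sym)     # invariant component
assert np.allclose(traj_anti @ rho_g.T, -traj_anti)  # sign-flipping component
```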
-For computational biology and bio-mechanics, we believe the most relevant format to provide support for is:
-- [ ] Coordinate 3D files `.c3d` format.
diff --git a/docs/index.html b/docs/index.html
index e526b1c..6bfbc76 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -19,10 +19,10 @@
+ content="On discrete symmetries of robotic systems: A data-driven and group-theoretic analysis, Robotics Science and Systems RSS-2023. This project studies Morphological Symmetries in Robotic Systems; these are discrete (or finite) symmetry groups of the state space of robotic and dynamical systems. These symmetries are useful for data augmentation and equivariant function approximation. Morphological Symmetries (MorphoSymm, Morpho Symm, Morphosymm)">
+ content="On discrete symmetries of robotic systems: A data-driven and group-theoretic analysis, Robotics Science and Systems RSS-2023. This project studies Morphological Symmetries in Robotic Systems; these are discrete (or finite) symmetry groups of the state space of robotic and dynamical systems. These symmetries are useful for data augmentation and equivariant function approximation. Morphological Symmetries (MorphoSymm, Morpho Symm, Morphosymm)"/>
@@ -32,7 +32,7 @@
+ content="On discrete symmetries of robotic systems: A data-driven and group-theoretic analysis, Robotics Science and Systems RSS-2023. This project studies Morphological Symmetries in Robotic Systems; these are discrete (or finite) symmetry groups of the state space of robotic and dynamical systems. These symmetries are useful for data augmentation and equivariant function approximation. Morphological Symmetries (MorphoSymm, Morpho Symm, Morphosymm)">
@@ -40,7 +40,7 @@
+ content="discrete morphological symmetries, robotics, computer graphics, symmetries, equivariance, morphosymm, MorphoSymm, morpho symm, dynamics harmonics analysis, harmonic analysis, locomotion, control, equivariant neural networks, invariant neural networks, group theory">
@@ -200,8 +200,9 @@
Symmetries of robotic systems
@@ -213,7 +214,7 @@
Symmetries of robotic systems
@@ -257,8 +258,10 @@
Cayley Diagrams of Morphological Symmetries
@@ -267,8 +270,9 @@
@@ -277,8 +281,8 @@
diff --git a/docs/static/images/atlas/caley_diagram.svg b/docs/static/images/atlas/caley_diagram.svg
new file mode 100644
index 0000000..2277241
--- /dev/null
+++ b/docs/static/images/atlas/caley_diagram.svg
@@ -0,0 +1,132 @@
+
+
+
+
diff --git a/docs/static/images/data_augmentation.svg b/docs/static/images/data_augmentation.svg
new file mode 100644
index 0000000..2267189
--- /dev/null
+++ b/docs/static/images/data_augmentation.svg
@@ -0,0 +1,586 @@
+
+
+
+
diff --git a/docs/static/images/mini_cheetah/caley_diagram.svg b/docs/static/images/mini_cheetah/caley_diagram.svg
index 4e6bf2a..9d93adf 100644
--- a/docs/static/images/mini_cheetah/caley_diagram.svg
+++ b/docs/static/images/mini_cheetah/caley_diagram.svg
@@ -2,9 +2,9 @@
+ inkscape:current-layer="layer7"
+ fit-margin-top="0"
+ fit-margin-left="0"
+ fit-margin-right="0"
+ fit-margin-bottom="0" />
+
+
+
+ inkscape:label="Layer 3"
+ transform="translate(-19.21255,0.04444756)" />
+ inkscape:label="Layer 2"
+ transform="translate(-19.21255,0.04444756)">
+ style="display:none"
+ transform="translate(-19.21255,0.04444756)">
-
-
-
-
-
-
+ inkscape:label="Solo"
+ transform="translate(-19.21255,0.04444756)"
+ style="display:none" />
+ style="display:inline"
+ transform="translate(-19.21255,0.04444756)">
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+ inkscape:label="Atlas"
+ transform="translate(-19.21255,0.04444756)"
+ style="display:none" />
diff --git a/docs/static/images/mini_cheetah/dha.svg b/docs/static/images/mini_cheetah/dha.svg
new file mode 100644
index 0000000..1a7cb50
--- /dev/null
+++ b/docs/static/images/mini_cheetah/dha.svg
@@ -0,0 +1,12282 @@
+
+
+
+
diff --git a/docs/static/images/solo/caley_diagram.svg b/docs/static/images/solo/caley_diagram.svg
new file mode 100644
index 0000000..d7bef02
--- /dev/null
+++ b/docs/static/images/solo/caley_diagram.svg
@@ -0,0 +1,943 @@
+
+
+
+
diff --git a/docs/static/images/solo/equivariant_function.svg b/docs/static/images/solo/equivariant_function.svg
new file mode 100644
index 0000000..33423de
--- /dev/null
+++ b/docs/static/images/solo/equivariant_function.svg
@@ -0,0 +1,550 @@
+
+
diff --git a/docs/static/images/solo/equivariant_nn.svg b/docs/static/images/solo/equivariant_nn.svg
new file mode 100644
index 0000000..00a24fe
--- /dev/null
+++ b/docs/static/images/solo/equivariant_nn.svg
@@ -0,0 +1,5455 @@
+
+
+
+
diff --git a/morpho_symm/cfg/config_visualization.yaml b/morpho_symm/cfg/config_visualization.yaml
index 8528d28..ccb22a9 100644
--- a/morpho_symm/cfg/config_visualization.yaml
+++ b/morpho_symm/cfg/config_visualization.yaml
@@ -25,12 +25,3 @@ hydra:
formatters:
simple:
format: '[%(levelname)s][%(name)s] %(message)s'
-# handlers:
-# console:
-# class: logging.StreamHandler
-# formatter: simple
-# stream: ext://sys.stdout
-# root:
-# handlers: [console]
-
-# disable_existing_loggers: false
diff --git a/morpho_symm/cfg/robot/mini_cheetah.yaml b/morpho_symm/cfg/robot/mini_cheetah.yaml
index 038b918..5ec51ea 100644
--- a/morpho_symm/cfg/robot/mini_cheetah.yaml
+++ b/morpho_symm/cfg/robot/mini_cheetah.yaml
@@ -1,5 +1,6 @@
defaults:
- base_robot
+ - _self_
name: mini_cheetah
angle_sweep: 0.7
diff --git a/morpho_symm/data/contact_dataset/__init__.py b/morpho_symm/data/contact_dataset/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/morpho_symm/data/contact_dataset/umich_contact_dataset.py b/morpho_symm/data/contact_dataset/umich_contact_dataset.py
index cf6a17e..20ee894 100644
--- a/morpho_symm/data/contact_dataset/umich_contact_dataset.py
+++ b/morpho_symm/data/contact_dataset/umich_contact_dataset.py
@@ -70,7 +70,7 @@ def __init__(self, data_name, label_name, window_size, robot_cfg,
# ----
self.device = device
- self.contact_state_freq = self.get_class_frequency()
+ self.contact_state_freq = self.get_class_frequency()
self.rep_in, self.rep_out = self.get_in_out_symmetry_groups_reps(robot_cfg)
self.augment = augment
@@ -316,6 +316,7 @@ def compute_accuracy(self, dataloader, model):
def mat2numpy_split(train_val_data_path: pathlib.Path, test_data_path: pathlib.Path, save_path: pathlib.Path,
train_ratio=0.7, val_ratio=0.15):
"""Load data from .mat file, concatenate into numpy array, and save as train/val/test.
+
Inputs:
- data_pth: path to mat data folder
- save_pth: path to numpy saving directory.
diff --git a/morpho_symm/data/mini_cheetah/__init__.py b/morpho_symm/data/mini_cheetah/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/air_jumping_gait.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/air_jumping_gait.mat
new file mode 100644
index 0000000..04fe3df
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/air_jumping_gait.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/air_walking_gait.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/air_walking_gait.mat
new file mode 100644
index 0000000..05ab80c
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/air_walking_gait.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/asphalt_road.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/asphalt_road.mat
new file mode 100644
index 0000000..65d8850
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/asphalt_road.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_difficult_slippery.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_difficult_slippery.mat
new file mode 100644
index 0000000..c45a35b
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_difficult_slippery.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_galloping.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_galloping.mat
new file mode 100644
index 0000000..4504a00
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_galloping.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_left_circle.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_left_circle.mat
new file mode 100644
index 0000000..f76d7fe
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_left_circle.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_pronking.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_pronking.mat
new file mode 100644
index 0000000..630b812
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_pronking.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_right_circle.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_right_circle.mat
new file mode 100644
index 0000000..f681b69
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/concrete_right_circle.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/forest.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/forest.mat
new file mode 100644
index 0000000..c64714c
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/forest.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/grass.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/grass.mat
new file mode 100644
index 0000000..873c63e
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/grass.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/middle_pebble.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/middle_pebble.mat
new file mode 100644
index 0000000..be9d91b
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/middle_pebble.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/old_asphalt_road.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/old_asphalt_road.mat
new file mode 100644
index 0000000..f5aae45
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/old_asphalt_road.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/rock_road.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/rock_road.mat
new file mode 100644
index 0000000..8cb7d74
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/rock_road.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/sidewalk.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/sidewalk.mat
new file mode 100644
index 0000000..8942640
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/sidewalk.mat differ
diff --git a/morpho_symm/data/mini_cheetah/umich_dataset/mat/small_pebble.mat b/morpho_symm/data/mini_cheetah/umich_dataset/mat/small_pebble.mat
new file mode 100644
index 0000000..624f29a
Binary files /dev/null and b/morpho_symm/data/mini_cheetah/umich_dataset/mat/small_pebble.mat differ
diff --git a/morpho_symm/nn/EquivariantModules.py b/morpho_symm/nn/EquivariantModules.py
index 4bcbae4..9054718 100644
--- a/morpho_symm/nn/EquivariantModules.py
+++ b/morpho_symm/nn/EquivariantModules.py
@@ -6,7 +6,7 @@
from escnn.nn import EquivariantModule, FieldType, GeometricTensor
from torch.nn import Parameter
-from morpho_symm.utils.rep_theory_utils import isotypic_decomp_representation
+from morpho_symm.utils.abstract_harmonics_analysis import isotypic_decomp_representation
class IsotypicBasis(EquivariantModule):
diff --git a/morpho_symm/robot_harmonic_decomposition.py b/morpho_symm/robot_harmonic_decomposition.py
index 932bb59..5050164 100644
--- a/morpho_symm/robot_harmonic_decomposition.py
+++ b/morpho_symm/robot_harmonic_decomposition.py
@@ -3,21 +3,21 @@
import hydra
import numpy as np
-from escnn.group import Group, Representation
+from escnn.group import directsum
from omegaconf import DictConfig
from pynput import keyboard
-from scipy.spatial.transform import Rotation
-from tqdm import tqdm
-import morpho_symm
-from morpho_symm.data.DynamicsRecording import DynamicsRecording
-from morpho_symm.robot_symmetry_visualization_dynamic import load_mini_cheetah_trajs
+from morpho_symm.utils.abstract_harmonics_analysis import decom_signal_into_isotypic_components
from morpho_symm.utils.algebra_utils import matrix_to_quat_xyzw
from utils.pybullet_visual_utils import change_robot_appearance, spawn_robot_instances
from utils.robot_utils import load_symmetric_system
+import morpho_symm
+from morpho_symm.data.DynamicsRecording import DynamicsRecording
+from morpho_symm.robot_symmetry_visualization_dynamic import load_mini_cheetah_trajs
from morpho_symm.robots.PinBulletWrapper import PinBulletWrapper
from morpho_symm.utils.pybullet_visual_utils import configure_bullet_simulation
+from morpho_symm.utils.rep_theory_utils import irreps_stats
def on_key_press(key):
@@ -32,6 +32,7 @@ def on_key_press(key):
# Ignore special keys without a char attribute
pass
+
# Set up the listener and the list to store new characters
new_command = []
num_pressed = []
@@ -39,190 +40,188 @@ def on_key_press(key):
listener.start()
-def generate_dof_motions(robot: PinBulletWrapper, angle_sweep=0.5):
- """TODO: In construction."""
- if robot.robot_name == 'mini_cheetah':
- recordings_path = Path(morpho_symm.__file__).parent / 'data/contact_dataset/training_splitted/mat_test'
- recordings = load_mini_cheetah_trajs(recordings_path)
- recording_name = 'forest'
- recording = recordings[recording_name]
- # timestamps = recording['control_time'].flatten()
- q_js_t = recording['q'] # Joint space positions
- # v_js_t = recording['qd'] # Joint space velocities
- base_ori_t = recording['imu_rpy']
- # ground_reaction_forces = recording['F']
- # base_acc = recording['imu_acc']
- # base_ang_vel = recording['imu_omega']
- # feet_contact_states = recording['contacts']
-
- q = []
- q0, _ = robot.pin2sim(robot._q_zero, np.zeros(robot.nv))
- for q_js, base_ori in zip(q_js_t, base_ori_t):
- # Define the recording base configuration.
- q_rec = np.concatenate([q0[:7], q_js])
- v_rec = np.zeros(robot.nv)
- # Just for Mit Cheetah.
- q_t, _ = robot.sim2pin(q_rec - q0, v_rec)
- q.append(q_t)
- return np.asarray(q)
- else:
- n_dof = robot.nq - 7
- q0, _ = robot.get_init_config(False)
- phases = np.random.uniform(0, 2 * np.pi, n_dof)[..., None]
- amplitudes = np.random.uniform(0, angle_sweep, n_dof)[..., None]
- max_period, min_period = 6, 3
- periods = np.random.randint(min_period, max_period, n_dof)[..., None]
- q_period = np.lcm.reduce(periods).item()
- t = np.linspace(0, q_period, q_period * 10)[None, ...]
- q_js = q0[7:, None] + 0 * np.sin(2 * np.pi * (1 / periods) * t + phases) * amplitudes
- q_base = q0[:7, None] * np.ones_like(t)
- q = np.concatenate([q_base, q_js], axis=0).T
- return q
-
-
-
-@hydra.main(config_path='cfg', config_name='config_visualization', version_base='1.1')
+def get_motion_trajectory(robot: PinBulletWrapper, recording_name=None, angle_sweep=0.5):
+ # Load a trajectory of motion and measurements from the mini-cheetah robot
+ recordings_path = Path(
+ morpho_symm.__file__).parent / f'data/{robot.robot_name}/raysim_recordings/flat/forward_minus_0_4/n_trajs=1-frames=7771-train.pkl'
+ dyn_recordings = DynamicsRecording.load_from_file(recordings_path)
+ return dyn_recordings
+ #
+ # if robot.robot_name == 'mini_cheetah':
+ # recordings_path = Path(morpho_symm.__file__).parent / 'data/contact_dataset/training_splitted/mat_test'
+ # recordings = load_mini_cheetah_trajs(recordings_path)
+ # recording_name = 'forest'
+ # recording = recordings[recording_name]
+ # # timestamps = recording['control_time'].flatten()
+ # q_js_t = recording['q'] # Joint space positions
+ # # v_js_t = recording['qd'] # Joint space velocities
+ # base_ori_t = recording['imu_rpy']
+ # # ground_reaction_forces = recording['F']
+ # # base_acc = recording['imu_acc']
+ # # base_ang_vel = recording['imu_omega']
+ # # feet_contact_states = recording['contacts']
+ #
+ # q = []
+ # q0, _ = robot.pin2sim(robot._q_zero, np.zeros(robot.nv))
+ # for q_js, base_ori in zip(q_js_t, base_ori_t):
+ # # Define the recording base configuration.
+ # q_rec = np.concatenate([q0[:7], q_js])
+ # v_rec = np.zeros(robot.nv)
+ # # Just for Mit Cheetah.
+ # q_t, _ = robot.sim2pin(q_rec - q0, v_rec)
+ # q.append(q_t)
+ # return np.asarray(q)
+ # else:
+ # n_dof = robot.nq - 7
+ # q0, _ = robot.get_init_config(False)
+ # phases = np.random.uniform(0, 2 * np.pi, n_dof)[..., None]
+ # amplitudes = np.random.uniform(0, angle_sweep, n_dof)[..., None]
+ # max_period, min_period = 6, 3
+ # periods = np.random.randint(min_period, max_period, n_dof)[..., None]
+ # q_period = np.lcm.reduce(periods).item()
+ # t = np.linspace(0, q_period, q_period * 10)[None, ...]
+ # q_js = q0[7:, None] + 0 * np.sin(2 * np.pi * (1 / periods) * t + phases) * amplitudes
+ # q_base = q0[:7, None] * np.ones_like(t)
+ # q = np.concatenate([q_base, q_js], axis=0).T
+ # return q
+
+
+def fix_SO3_reprojection(R_iso):
+ x = R_iso[0, :]
+ y = R_iso[1, :]
+ z = R_iso[2, :]
+ x_norm, y_norm, z_norm = np.linalg.norm(x), np.linalg.norm(y), np.linalg.norm(z)
+ if not np.isclose(x_norm, 1.0):
+ x = np.array([1, 0, 0])
+ if not np.isclose(y_norm, 1.0):
+ y = np.array([0, 1, 0])
+ if not np.isclose(z_norm, 1.0):
+ z = np.array([0, 0, 1])
+ R_iso = np.stack([x, y, z], axis=0)
+ return R_iso
+
+
+@hydra.main(config_path='cfg', config_name='config_visualization', version_base='1.2')
def main(cfg: DictConfig):
- """Visualize the effect of DMSs transformations in 3D animation.
-
- This script visualizes the DMSs transformations on robot state and on proprioceptive and exteroceptive measurements.
- """
- cfg.robot.seed = cfg.robot.seed if cfg.robot.seed >= 0 else np.random.randint(0, 1000)
- np.random.seed(cfg.robot.seed)
- # Get robot instance, along with representations of the symmetry group on the Euclidean space (in which the robot
- # base B evolves in) and Joint Space (in which the internal configuration of the robot evolves in).
robot, G = load_symmetric_system(robot_cfg=cfg.robot, debug=cfg.debug)
- assert isinstance(G, Group)
- assert np.all(rep_name in G.representations for rep_name in ['Ed', 'Q_js', 'TqQ_js']), \
- f"Group {G} should have representations for Ed, Q_js and TqQ_js, found: {list(G.representations.keys())}"
rep_Q_js = G.representations['Q_js']
- rep_QJ_iso = Representation(G, name="Q_js_iso", irreps=rep_Q_js.irreps, change_of_basis=np.eye(rep_Q_js.size))
- rep_TqJ = G.representations['TqQ_js']
- rep_Ed = G.representations['Ed']
+ rep_TqQ_js = G.representations['TqQ_js']
+ rep_SO3_flat = G.representations['SO3_flat']
+ rep_R3 = G.representations['R3']
+ rep_R3_pseudo = G.representations['R3_pseudo']
+ # Define the representation for the entire state configuration.
+ rep_state = directsum([rep_Q_js, rep_TqQ_js, rep_SO3_flat, rep_R3, rep_R3_pseudo])
# Load main robot in pybullet.
pb = configure_bullet_simulation(gui=cfg.gui, debug=cfg.debug)
- robot.configure_bullet_simulation(pb)
- n_dof = robot.nq - 7
+ q0, dq0 = robot.get_init_config(random=False)
+ robot.configure_bullet_simulation(pb, base_pos=q0[:3], base_ori=q0[3:7])
if cfg.robot.tint_bodies: change_robot_appearance(pb, robot)
- q0, dq0 = robot.get_init_config(random=True, angle_sweep=cfg.robot.angle_sweep, fix_base=cfg.robot.fix_base)
- orientation_0 = matrix_to_quat_xyzw(Rotation.from_euler('xyz', [0, 0, np.pi / 2]).as_matrix())
- base_pos = q0[:3]
- q0[3:7] = orientation_0
- robot.reset_state(q0, dq0)
-
- # Determine the number of isotypic components of the Joint-Space (JS) vector space.
- # This is equivalent to the number of unique irreps of the JS representation.
- iso_comp = {} # TODO: Make a class for a Component.
- mask = []
- for re_irrep_id in rep_Q_js.irreps:
- mask.extend([re_irrep_id] * G.irrep(*re_irrep_id).size)
- for re_irrep_id in rep_Q_js.irreps:
- re_irrep = G.irrep(*re_irrep_id)
- dims = np.zeros(n_dof)
- dims[[i for i, x in enumerate(mask) if x == re_irrep_id]] = 1
- iso_comp[re_irrep] = dims
- # print(f"Re irrep: {re_irrep} - Trivial: {re_irrep.is_trivial()} - Mult: {multiplicities[idx]}")
-
- # For each isotypic component we spawn a robot instance in order to visualize the effect of the decomposition
- n_components = len(iso_comp)
- base_positions = np.asarray([base_pos] * n_components)
- base_positions[:, 0] = -1.0
- base_positions[:, 1] = np.linspace(0, 2.5 * robot.hip_height * n_components, n_components)
- base_positions[:, 1] -= np.max(base_positions[:, 1]) / 2
- # Base positions. Quaternion from (roll=0, pitch=0, yaw=90)
-
- iso_robots = spawn_robot_instances(
- robot, bullet_client=pb, base_positions=base_positions, base_orientations=[orientation_0] * n_components,
- tint=cfg.robot.tint_bodies, alpha=1.0,
- )
- # Load a trajectory of motion and measurements from the mini-cheetah robot
- recordings_path = Path(
- morpho_symm.__file__).parent / 'data/mini_cheetah/raysim_recordings/flat/forward_minus_0_4/n_trajs=1-frames=7771-train.pkl'
- dyn_recordings = DynamicsRecording.load_from_file(recordings_path)
+ # For each isotypic component we spawn a robot instance to visualize the effect of the decomposition
+ unique_irrep_ids, counts, indices = irreps_stats(rep_state.irreps)
+ n_iso_comp = len(unique_irrep_ids) # Number of isotypic subspaces equal to the unique irreps.
+ base_positions = np.asarray([q0[:3]] * n_iso_comp)
+ base_positions[:, 0] = -1.0 # Offset Iso Conf robots to the back.
+ base_positions[:, 1] = np.linspace(0, 2.5 * robot.hip_height * n_iso_comp, n_iso_comp)
+ base_positions[:, 1] -= np.max(base_positions[:, 1]) / 2
+ iso_robots = spawn_robot_instances(robot,
+ bullet_client=pb,
+ base_positions=base_positions,
+ base_orientations=[q0[3:7]] * n_iso_comp,
+ tint=cfg.robot.tint_bodies,
+ alpha=1.0)
# Load and prepare data for visualization
- q_js_t = dyn_recordings.recordings['joint_pos'] # Joint space positions
- v_js_t = dyn_recordings.recordings['joint_vel'] # Joint space velocities
- base_ori_t = dyn_recordings.recordings['base_ori'] # Base orientation
- feet_pos = dyn_recordings.recordings['feet_pos'] # Feet positions [x,y,z] w.r.t base frame
- # ground_reaction_forces = recording['F']
- # feet_contact_states = recording['contacts']
- # Prepare representations acting on proprioceptive and exteroceptive measurements.
- rep_kin_three = dyn_recordings.obs_representations['gait'] # Contact state is a 4D vector.
- rep_grf = dyn_recordings.obs_representations['ref_feet_pos'] #
-
- q0, _ = robot.pin2sim(robot._q0, np.zeros(robot.nv))
- traj_q = generate_dof_motions(robot, angle_sweep=cfg.robot.angle_sweep * 2)
- traj_q_js = traj_q[:, 7:]
- # Add offset if needed
-
- # Go from basis of JS spawned by the generalized coordinates to the basis where isotypic components are separated.
- # rep_QJ = Q @ rep_QJ_iso @ Q_inv
- Q, Q_inv = rep_Q_js.change_of_basis, rep_Q_js.change_of_basis_inv
- # Separate JS trajectory into isotypic components.
-
- qj2iso, iso2qj = 'qj2iso', 'iso2qj'
- mode = qj2iso
- print(f"Mode changed to {mode}")
- g_idx = 0
-
- traj_q_iso = (Q_inv @ traj_q_js.T).T # if mode == qj2iso else traj_q_js
-
- t, dt = 0, 1
+ dyn_recordings = get_motion_trajectory(robot, recording_name=cfg.recording_name)
+ q_js_t = dyn_recordings.recordings['joint_pos'][0]
+ v_js_t = dyn_recordings.recordings['joint_vel'][0]
+ R_flat_t = dyn_recordings.recordings['base_ori_R_flat'][0]
+ base_vel_t = dyn_recordings.recordings['base_vel'][0]
+ base_ang_vel_t = dyn_recordings.recordings['base_ang_vel'][0]
+ dt = dyn_recordings.dynamics_parameters['dt']
+ traj_length = q_js_t.shape[0]
+ signal_time = np.arange(0, traj_length * dt, dt)
+ # Define state trajectory.
+ state_traj = np.concatenate([q_js_t, v_js_t, R_flat_t, base_vel_t, base_ang_vel_t], axis=1)
+
+ def get_obs_from_state(state: np.ndarray, state_rep):
+ """Auxiliary function to extract the different observations of the state."""
+ assert state.shape[0] == state_rep.size, f"Expected state of size {state_rep.size} but got {state.shape[0]}"
+ q_js_end = robot.nq - 7
+ v_js_end = q_js_end + (robot.nv - 6)
+ q_js = state[:q_js_end]
+ v_js = state[q_js_end:v_js_end]
+ R_end = v_js_end + 9
+ R_flat = state[v_js_end:R_end]
+ R = R_flat.reshape(3, 3)
+ base_vel = state[R_end:R_end + 3]
+ base_ang_vel = state[R_end + 3:]
+ return q_js, v_js, R, base_vel, base_ang_vel
+
+ comp_iso_basis, comp_canonical_basis = decom_signal_into_isotypic_components(state_traj, rep_state)
+ iso_robots = {irrep_id: iso_robot for irrep_id, iso_robot in zip(comp_canonical_basis.keys(), iso_robots)}
+ iso_robots_pos = {irrep_id: pos for irrep_id, pos in zip(comp_canonical_basis.keys(), base_positions)}
+ idx = 0
+ fps = 30 # Control the animation update time.
+ t_last_update = 0
+ g, g_prev = G.identity, G.identity
while True:
- if t >= len(traj_q_js):
- t = 0
- # for q_js, q_iso in tqdm(zip(traj_q_js, traj_q_iso), total=len(traj_q_js), desc="playback"):
- q_js, q_iso = traj_q_js[t], traj_q_iso[t]
- g = G.elements[g_idx]
-
- # Apply selected symmetry action
- q, dq = robot.get_state()
- g_q_js = np.real(rep_Q_js(g) @ q_js)
- g_q = np.concatenate((q[:7], g_q_js)).astype(float)
- robot.reset_state(g_q, dq)
-
- components_q_js = []
- for iso_robot, (re_irrep, dims) in zip(iso_robots, iso_comp.items()):
- q, dq = iso_robot.get_state()
- # Get point in isotypic component and describe it in the basis of generalized coordinates.
- q_iso_masked = q_iso * dims
- # Transform back to generalized coordinates.
- q_js_comp = np.real(Q @ q_iso_masked)
- components_q_js.append(q_js_comp)
- # Apply selected symmetry action
- g_q_js_comp = np.real(rep_Q_js(g) @ q_js_comp)
- # Set the robot to desired state.
- g_q = np.concatenate((q[:7], g_q_js_comp))
- iso_robot.reset_state(g_q, dq)
-
- # Get real robot generalized positions.
- q_iso_rec = sum(components_q_js)
- if mode == qj2iso:
- rec_error = q_js - q_iso_rec
- assert np.allclose(np.abs(rec_error), 0), f"Reconstruction error {rec_error}"
- elif mode == iso2qj:
- q_js = q_iso_rec
- else:
- raise NotImplementedError()
-
- t += dt
- time.sleep(0.05)
-
+ if idx >= traj_length:
+ idx = 0
+ # t = signal_time[idx]
+ # if t - t_last_update < 1 / fps: # Dont update visualization too often.
+ # idx += dt
+ # continue
+ # t_last_update = t
+
+ state = state_traj[idx]
+ q_js, v_js, base_SO3, base_vel, base_ang_vel = get_obs_from_state(state, rep_state)
+ base_ori = matrix_to_quat_xyzw(base_SO3)
+ q = np.concatenate((q0[:3], base_ori, q_js))
+ v = np.concatenate((base_vel, base_ang_vel, v_js))
+ robot.reset_state(q, v)
+
+ # if g != g_prev: # Apply symmetry transformation if required.
+ # state = rep_state(g) @ state
+ # for irrep_id, state_comp in comp_canonical_basis.items():
+ # comp_canonical_basis[irrep_id] = rep_state(g) @ state_comp
+ # g_prev = g
+
+ for irrep_id, state_comp in comp_canonical_basis.items():
+ iso_robot = iso_robots[irrep_id]
+ q_js_iso, v_js_iso, R_iso, base_vel_iso, base_ang_vel_iso = get_obs_from_state(state_comp[idx], rep_state)
+ R_iso_reformatted = np.eye(3) # fix_SO3_reprojection(R_iso)
+ base_ori_iso = matrix_to_quat_xyzw(R_iso_reformatted)
+ q_iso = np.concatenate((iso_robots_pos[irrep_id], base_ori_iso, q_js_iso))
+ v_iso = np.concatenate((base_vel_iso, base_ang_vel_iso, v_js_iso))
+ iso_robot.reset_state(q_iso, v_iso)
+
+ idx += 1
+ # ========================================================================
# Process new keyboard commands.
if new_command:
keys = new_command.copy()
new_command.clear()
- if keys == ['t']:
+ if keys == ['p']:
dt = 1 if dt == 0 else 0
- if keys == ['m']:
- mode = qj2iso if mode == iso2qj else iso2qj
- print(f"Mode changed to {mode}")
if num_pressed:
if num_pressed[0] < G.order():
g_idx = num_pressed[0]
- print(f"Group element selected {G.elements[g_idx]}")
+ g = G.elements[g_idx]
+ print(f"Group element selected {g}")
else:
print(f"Group element {num_pressed[0]} is larger than group order...ignoring")
num_pressed.clear()
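The state layout that the new `get_obs_from_state` helper above slices can be sketched as follows. This is a minimal numpy-only sketch, assuming a mini-cheetah-like robot with 12 actuated joints (`nq=19`, `nv=18`); the concrete numbers are illustrative, not taken from the patch:

```python
import numpy as np

# Hypothetical dimensions for a floating-base quadruped with 12 actuated joints.
nq, nv = 19, 18
q_js = np.arange(nq - 7, dtype=float)      # 12 joint positions
v_js = np.arange(nv - 6, dtype=float)      # 12 joint velocities
R_flat = np.eye(3).reshape(-1)             # flattened base orientation (9,)
base_vel = np.zeros(3)                     # base linear velocity
base_ang_vel = np.zeros(3)                 # base angular velocity
# Same concatenation order as state_traj in the patch.
state = np.concatenate([q_js, v_js, R_flat, base_vel, base_ang_vel])
assert state.shape == (12 + 12 + 9 + 3 + 3,)

# Recover the observations by slicing, as get_obs_from_state does.
q_js_end = nq - 7
v_js_end = q_js_end + (nv - 6)
R_end = v_js_end + 9
assert np.allclose(state[:q_js_end], q_js)
assert np.allclose(state[v_js_end:R_end].reshape(3, 3), np.eye(3))
```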
diff --git a/morpho_symm/robot_harmonic_decomposition_mini_cheetah.py b/morpho_symm/robot_harmonic_decomposition_mini_cheetah.py
index 686ded9..3dc29b3 100644
--- a/morpho_symm/robot_harmonic_decomposition_mini_cheetah.py
+++ b/morpho_symm/robot_harmonic_decomposition_mini_cheetah.py
@@ -80,7 +80,7 @@ def generate_dof_motions(robot: PinBulletWrapper, angle_sweep=0.5, recording_nam
-@hydra.main(config_path='cfg', config_name='config_visualization', version_base='1.1')
+@hydra.main(config_path='cfg', config_name='config_visualization', version_base='1.3')
def main(cfg: DictConfig):
"""Visualize the effect of DMSs transformations in 3D animation.
@@ -166,7 +166,7 @@ def main(cfg: DictConfig):
t_idx = 0
time_shift = 1
fps = 30
- make_gif = True
+ make_gif = False
timescale = 1
last_capture_time = time[0]
frames = []
@@ -213,8 +213,6 @@ def main(cfg: DictConfig):
else:
raise NotImplementedError()
-
-
if make_gif and capture_frame:
init_roll_pitch_yaw = ([0], [-30], [90])
roll, pitch, yaw = init_roll_pitch_yaw
@@ -252,8 +250,8 @@ def main(cfg: DictConfig):
if t_idx == len(traj_q_js) - 1:
t_idx = 0
- if t > 4:
- break
+ # if t > 4:
+ # break
pb.disconnect()
if len(frames) > 0:
diff --git a/morpho_symm/robot_symmetry_visualization_dynamic.py b/morpho_symm/robot_symmetry_visualization_dynamic.py
index 4d737c7..bd1734a 100644
--- a/morpho_symm/robot_symmetry_visualization_dynamic.py
+++ b/morpho_symm/robot_symmetry_visualization_dynamic.py
@@ -154,7 +154,7 @@ def update_ground_reaction_forces(pb, feet_pos, forces, contact_state, vectors=N
return vectors
-@hydra.main(config_path='cfg', config_name='config_visualization', version_base='1.1')
+@hydra.main(config_path='cfg', config_name='config_visualization', version_base='1.3')
def main(cfg: DictConfig):
"""Visualize the effect of DMSs transformations in 3D animation.
@@ -197,7 +197,7 @@ def main(cfg: DictConfig):
v_js_t = dyn_recordings.recordings['joint_vel'] # Joint space velocities
base_ori_t = dyn_recordings.recordings['base_ori'][0] # Base orientation
feet_pos = dyn_recordings.recordings['feet_pos'] # Feet positions [x,y,z] w.r.t base frame
 # ground_reaction_forces = recording['F']
# feet_contact_states = recording['contacts']
# Prepare representations acting on proprioceptive and exteroceptive measurements.
rep_kin_three = dyn_recordings.obs_representations['gait'] # Contact state is a 4D vector.
@@ -260,7 +260,7 @@ def main(cfg: DictConfig):
# Display contact terrain
# g_contact_state = np.rint(np.real(rep_kin_three(g) @ contact_state)).astype(np.uint8)
 # g_prev_contact_state = np.rint(np.real(rep_kin_three(g) @ prev_contact_state)).astype(np.uint8)
# robot_terrain[robot_idx] = update_contacts(pb,
# g_rf_w,
# g_prev_contact_state,
diff --git a/morpho_symm/utils/abstract_harmonics_analysis.py b/morpho_symm/utils/abstract_harmonics_analysis.py
new file mode 100644
index 0000000..914516b
--- /dev/null
+++ b/morpho_symm/utils/abstract_harmonics_analysis.py
@@ -0,0 +1,153 @@
+import functools
+from collections import OrderedDict
+
+import numpy as np
+from escnn.group import Representation, directsum
+
+from morpho_symm.utils.algebra_utils import permutation_matrix
+
+
+def decom_signal_into_isotypic_components(signal: np.ndarray, rep: Representation):
+ """Decompose a signal into its isotypic components.
+
+ This function takes a signal and a group representation and returns the signal decomposed into its isotypic
+ components, in the isotypic basis and the original basis.
+
+ Args:
+ signal: (... , rep.size) array of signals to be decomposed.
+ rep: (Representation) Representation of the vector space in which the signal lives.
+
+ Returns:
+ iso_comp_signals: (OrderedDict) Dictionary of isotypic components of the signal in the isotypic basis.
+ iso_comp_signals_orig_basis: (OrderedDict) Dictionary of isotypic components of the signal in the original
+ basis.
+ """
+ rep_iso = isotypic_decomp_representation(rep)
+ Q_iso2orig = rep_iso.change_of_basis # Change of basis from isotypic basis to original basis
+ Q_orig2iso = rep_iso.change_of_basis_inv # Change of basis from original basis to isotypic basis
+ assert signal.shape[-1] == rep.size, f"Expected signal shape to be (..., {rep.size}) got {signal.shape}"
+
+ signal_iso = np.einsum('...ij,...j->...i', Q_orig2iso, signal)
+
+ isotypic_representations = rep_iso.attributes['isotypic_reps']
+
+ # Compute the dimensions of each isotypic subspace
+ cum_dim = 0
+ iso_comp_dims = {}
+ for irrep_id, iso_rep in isotypic_representations.items():
+ iso_space_dims = range(cum_dim, cum_dim + iso_rep.size)
+ iso_comp_dims[irrep_id] = iso_space_dims
+ cum_dim += iso_rep.size
+
+ # Separate the signal into isotypic components, by masking the signal outside of each isotypic subspace
+ iso_comp_signals = OrderedDict()
+ for irrep_id, _ in isotypic_representations.items():
+ iso_dims = iso_comp_dims[irrep_id]
+ iso_comp_signals[irrep_id] = signal_iso[..., iso_dims]
+
+ iso_comp_signals_orig_basis = OrderedDict()
+ # Compute the signals of each isotypic component in the original basis
+ for irrep_id, _ in isotypic_representations.items():
+ iso_dims = iso_comp_dims[irrep_id]
+ Q_isocomp2orig = Q_iso2orig[:, iso_dims] # Change of basis from isotypic component basis to original basis
+ iso_comp_signals_orig_basis[irrep_id] = np.einsum('...ij,...j->...i',
+ Q_isocomp2orig,
+ iso_comp_signals[irrep_id])
+
+ # Check that the sum of the isotypic components is equal to the original signal
+ rec_signal = np.sum([iso_comp_signals_orig_basis[irrep_id] for irrep_id in isotypic_representations.keys()], axis=0)
+ assert np.allclose(rec_signal, signal), \
+ f"Reconstructed signal is not equal to original signal. Error: {np.linalg.norm(rec_signal - signal)}"
+
+ return iso_comp_signals, iso_comp_signals_orig_basis
+
+
+def isotypic_decomp_representation(rep: Representation) -> Representation:
+ """Returns a representation in a "symmetry enabled basis" (a.k.a Isotypic Basis).
+
+ Takes a representation with an arbitrary basis (i.e., arbitrary change of basis and an arbitrary order of
+ irreducible representations in the escnn Representation) and returns a new representation in which the basis
+ is changed to a "symmetry enabled basis" (a.k.a Isotypic Basis). That is a representation in which the
+ vector space is decomposed into a direct sum of Isotypic Subspaces. Each Isotypic Subspace is a subspace of the
+ original vector space with a subspace representation composed of multiplicities of a single irreducible
+ representation. In other words, each Isotypic Subspace is a subspace with a subgroup of symmetries of the original
+ vector space's symmetry group.
+
+ Args:
+ rep (Representation): Input representation in any arbitrary basis.
+
+ Returns: A `Representation` with a change of basis exposing an Isotypic Basis (a.k.a symmetry enabled basis).
+ The instance of the representation contains an additional attribute `isotypic_subspaces` which is an
+ `OrderedDict` of representations per each isotypic subspace. The keys are the active irreps' ids associated
+ with each Isotypic subspace.
+ """
+ symm_group = rep.group
+ potential_irreps = rep.group.irreps()
+ isotypic_subspaces_indices = {irrep.id: [] for irrep in potential_irreps}
+
+ for pot_irrep in potential_irreps:
+ cur_dim = 0
+ for rep_irrep_id in rep.irreps:
+ rep_irrep = symm_group.irrep(*rep_irrep_id)
+ if rep_irrep == pot_irrep:
+ isotypic_subspaces_indices[rep_irrep_id].append(list(range(cur_dim, cur_dim + rep_irrep.size)))
+ cur_dim += rep_irrep.size
+
+ # Remove inactive Isotypic Spaces
+ for irrep in potential_irreps:
+ if len(isotypic_subspaces_indices[irrep.id]) == 0:
+ del isotypic_subspaces_indices[irrep.id]
+
+ # Each Isotypic Space will be indexed by the irrep it is associated with.
+ active_isotypic_reps = {}
+ for irrep_id, indices in isotypic_subspaces_indices.items():
+ irrep = symm_group.irrep(*irrep_id)
+ multiplicities = len(indices)
+ active_isotypic_reps[irrep_id] = Representation(group=rep.group,
+ irreps=[irrep_id] * multiplicities,
+ name=f'IsoSubspace {irrep_id}',
+ change_of_basis=np.identity(irrep.size * multiplicities),
+ supported_nonlinearities=irrep.supported_nonlinearities)
+
+ # Impose canonical order on the Isotypic Subspaces.
+ # If the trivial representation is active it will be the first Isotypic Subspace.
+ # Then sort by dimension of the space from smallest to largest.
+ ordered_isotypic_reps = OrderedDict(sorted(active_isotypic_reps.items(), key=lambda item: item[1].size))
+ if symm_group.trivial_representation.id in ordered_isotypic_reps.keys():
+ ordered_isotypic_reps.move_to_end(symm_group.trivial_representation.id, last=False)
+
+ # Required permutation to change the order of the irreps. So we obtain irreps of the same type consecutively.
+ oneline_permutation = []
+ for irrep_id, iso_rep in ordered_isotypic_reps.items():
+ idx = isotypic_subspaces_indices[irrep_id]
+ oneline_permutation.extend(idx)
+ oneline_permutation = np.concatenate(oneline_permutation)
+ P_in2iso = permutation_matrix(oneline_permutation)
+
+ Q_iso = rep.change_of_basis @ P_in2iso.T
+ rep_iso_basis = directsum(list(ordered_isotypic_reps.values()),
+ name=rep.name + '-Iso',
+ change_of_basis=Q_iso)
+
+ iso_supported_nonlinearities = [iso_rep.supported_nonlinearities for iso_rep in ordered_isotypic_reps.values()]
+ rep_iso_basis.supported_nonlinearities = functools.reduce(set.intersection, iso_supported_nonlinearities)
+ rep_iso_basis.attributes['isotypic_reps'] = ordered_isotypic_reps
+
+ return rep_iso_basis
+
+
+def isotypic_basis(representation: Representation, multiplicity: int = 1, prefix=''):
+ """Return the isotypic subspace representations (replicated `multiplicity` times) and their dimension ranges."""
+ rep_iso = isotypic_decomp_representation(representation)
+
+ iso_reps = OrderedDict()
+ iso_range = OrderedDict()
+
+ start_dim = 0
+ for iso_irrep_id, reg_rep_iso in rep_iso.attributes['isotypic_reps'].items():
+ iso_reps[iso_irrep_id] = directsum([reg_rep_iso] * multiplicity, name=f"{prefix}_IsoSpace{iso_irrep_id}")
+ iso_range[iso_irrep_id] = range(start_dim, start_dim + iso_reps[iso_irrep_id].size)
+ start_dim += iso_reps[iso_irrep_id].size
+
+ assert rep_iso.size * multiplicity == sum([iso_rep.size for iso_rep in iso_reps.values()])
+
+ return iso_reps, iso_range # Dict[key:id_space -> value: rep_iso_space]
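As a self-contained illustration of what `decom_signal_into_isotypic_components` computes, consider the smallest nontrivial case: the C2 group acting on R^2 by swapping coordinates. This numpy-only sketch (no escnn) shows the two isotypic components (trivial/symmetric and sign/antisymmetric), their reconstruction of the signal, and their equivariance:

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])                 # rep of the non-identity element
Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # isotypic change of basis
# In the isotypic basis the action is diagonal: diag(1, -1).
assert np.allclose(Q.T @ P @ Q, np.diag([1.0, -1.0]))

x = np.array([3.0, 1.0])        # a "signal" sample in R^2
x_iso = Q.T @ x                 # coordinates in the isotypic basis
# Project each component back to the original basis (mask the other one),
# mirroring the per-component Q_isocomp2orig projection in the patch.
x_sym = Q[:, [0]] @ x_iso[[0]]
x_anti = Q[:, [1]] @ x_iso[[1]]
assert np.allclose(x_sym + x_anti, x)   # components sum back to the signal
assert np.allclose(P @ x_sym, x_sym)    # trivial component is invariant
assert np.allclose(P @ x_anti, -x_anti) # sign component flips under the swap
```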
diff --git a/morpho_symm/utils/algebra_utils.py b/morpho_symm/utils/algebra_utils.py
index 891b922..6ad228a 100644
--- a/morpho_symm/utils/algebra_utils.py
+++ b/morpho_symm/utils/algebra_utils.py
@@ -69,9 +69,6 @@ def append_dictionaries(dict1, dict2, recursive=True):
return result
-
-
-
def slugify(value, allow_unicode=False):
"""Taken from github.com/django/django/blob/master/django/utils/text.py.
diff --git a/morpho_symm/utils/rep_theory_utils.py b/morpho_symm/utils/rep_theory_utils.py
index dbe4f71..54129c9 100644
--- a/morpho_symm/utils/rep_theory_utils.py
+++ b/morpho_symm/utils/rep_theory_utils.py
@@ -1,12 +1,10 @@
-import functools
import itertools
-from collections import OrderedDict
from typing import Callable, Dict, List, Union
import networkx as nx
import numpy as np
-from escnn.group import CyclicGroup, DihedralGroup, DirectProductGroup, Group, GroupElement, Representation
-from escnn.group.representation import Representation, directsum
+from escnn.group import CyclicGroup, DihedralGroup, DirectProductGroup, Group, GroupElement
+from escnn.group.representation import Representation
from networkx import Graph
from scipy.linalg import block_diag
@@ -110,97 +108,6 @@ def irreps_stats(irreps_ids):
return unique_ids, counts, indices
-def isotypic_decomp_representation(rep: Representation) -> Representation:
- """Returns a representation in a "symmetry enabled basis" (a.k.a Isotypic Basis).
-
- Takes a representation with an arbitrary basis (i.e., arbitrary change of basis and an arbitrary order of
- irreducible representations in the escnn Representation) and returns a new representation in which the basis
- is changed to a "symmetry enabled basis" (a.k.a Isotypic Basis). That is a representation in which the
- vector space is decomposed into a direct sum of Isotypic Subspaces. Each Isotypic Subspace is a subspace of the
- original vector space with a subspace representation composed of multiplicities of a single irreducible
- representation. In oder words, each Isotypic Subspace is a subspace with a subgroup of symmetries of the original
- vector space's symmetry group.
-
- Args:
- rep (Representation): Input representation in any arbitrary basis.
-
- Returns: A `Representation` with a change of basis exposing an Isotypic Basis (a.k.a symmetry enabled basis).
- The instance of the representation contains an additional attribute `isotypic_subspaces` which is an
- `OrderedDict` of representations per each isotypic subspace. The keys are the active irreps' ids associated
- with each Isotypic subspace.
- """
- symm_group = rep.group
- potential_irreps = rep.group.irreps()
- isotypic_subspaces_indices = {irrep.id: [] for irrep in potential_irreps}
-
- for pot_irrep in potential_irreps:
- cur_dim = 0
- for rep_irrep_id in rep.irreps:
- rep_irrep = symm_group.irrep(*rep_irrep_id)
- if rep_irrep == pot_irrep:
- isotypic_subspaces_indices[rep_irrep_id].append(list(range(cur_dim, cur_dim + rep_irrep.size)))
- cur_dim += rep_irrep.size
-
- # Remove inactive Isotypic Spaces
- for irrep in potential_irreps:
- if len(isotypic_subspaces_indices[irrep.id]) == 0:
- del isotypic_subspaces_indices[irrep.id]
-
- # Each Isotypic Space will be indexed by the irrep it is associated with.
- active_isotypic_reps = {}
- for irrep_id, indices in isotypic_subspaces_indices.items():
- irrep = symm_group.irrep(*irrep_id)
- multiplicities = len(indices)
- active_isotypic_reps[irrep_id] = Representation(group=rep.group,
- irreps=[irrep_id] * multiplicities,
- name=f'IsoSubspace {irrep_id}',
- change_of_basis=np.identity(irrep.size * multiplicities),
- supported_nonlinearities=irrep.supported_nonlinearities)
-
- # Impose canonical order on the Isotypic Subspaces.
- # If the trivial representation is active it will be the first Isotypic Subspace.
- # Then sort by dimension of the space from smallest to largest.
- ordered_isotypic_reps = OrderedDict(sorted(active_isotypic_reps.items(), key=lambda item: item[1].size))
- if symm_group.trivial_representation.id in ordered_isotypic_reps.keys():
- ordered_isotypic_reps.move_to_end(symm_group.trivial_representation.id, last=False)
-
- # Required permutation to change the order of the irreps. So we obtain irreps of the same type consecutively.
- oneline_permutation = []
- for irrep_id, iso_rep in ordered_isotypic_reps.items():
- idx = isotypic_subspaces_indices[irrep_id]
- oneline_permutation.extend(idx)
- oneline_permutation = np.concatenate(oneline_permutation)
- P_in2iso = permutation_matrix(oneline_permutation)
-
- Q_iso = rep.change_of_basis @ P_in2iso.T
- rep_iso_basis = directsum(list(ordered_isotypic_reps.values()),
- name=rep.name + '-Iso',
- change_of_basis=Q_iso)
-
- iso_supported_nonlinearities = [iso_rep.supported_nonlinearities for iso_rep in ordered_isotypic_reps.values()]
- rep_iso_basis.supported_nonlinearities = functools.reduce(set.intersection, iso_supported_nonlinearities)
- rep_iso_basis.attributes['isotypic_reps'] = ordered_isotypic_reps
-
- return rep_iso_basis
-
-
-def isotypic_basis(representation: Representation, multiplicity: int = 1, prefix=''):
- rep_iso = isotypic_decomp_representation(representation)
-
- iso_reps = OrderedDict()
- iso_range = OrderedDict()
-
- start_dim = 0
- for iso_irrep_id, reg_rep_iso in rep_iso.attributes['isotypic_reps'].items():
- iso_reps[iso_irrep_id] = directsum([reg_rep_iso] * multiplicity, name=f"{prefix}_IsoSpace{iso_irrep_id}")
- iso_range[iso_irrep_id] = range(start_dim, start_dim + iso_reps[iso_irrep_id].size)
- start_dim += iso_reps[iso_irrep_id].size
-
- assert rep_iso.size * multiplicity == sum([iso_rep.size for iso_rep in iso_reps.values()])
-
- return iso_reps, iso_range # Dict[key:id_space -> value: rep_iso_space]
-
-
def escnn_representation_form_mapping(
G: Group, representation: Union[Dict[GroupElement, np.ndarray], Callable[[GroupElement], np.ndarray]]
):
diff --git a/morpho_symm/utils/robot_utils.py b/morpho_symm/utils/robot_utils.py
index cd33655..bfca389 100644
--- a/morpho_symm/utils/robot_utils.py
+++ b/morpho_symm/utils/robot_utils.py
@@ -17,7 +17,7 @@
import morpho_symm
from morpho_symm.robots.PinBulletWrapper import PinBulletWrapper
from morpho_symm.utils.algebra_utils import gen_permutation_matrix
-from morpho_symm.utils.rep_theory_utils import group_rep_from_gens
+from morpho_symm.utils.rep_theory_utils import escnn_representation_form_mapping, group_rep_from_gens
from morpho_symm.utils.pybullet_visual_utils import (
change_robot_appearance,
configure_bullet_simulation,
@@ -60,7 +60,9 @@ def get_escnn_group(cfg: DictConfig):
def load_symmetric_system(
- robot_cfg: Optional[DictConfig] = None, robot_name: Optional[str] = None, debug=False
+ robot_cfg: Optional[DictConfig] = None,
+ robot_name: Optional[str] = None,
+ debug=False
) -> [PinBulletWrapper, escnn.group.Group]:
"""Utility function to get the symmetry group and representations of a robotic system defined in config.
@@ -112,58 +114,56 @@ def load_symmetric_system(
# Select the field for the representations.
rep_field = float if robot_cfg.rep_fields.lower() != 'complex' else complex
- # Get the dimensions of the spaces Q_js and TqQ_js
- dimQ_js, dimTqQ_js = robot.nq - 7, robot.nv - 6
-
- # Joint-Space Q_js (generalized position coordinates)
- rep_Q_js = {G.identity: np.eye(dimQ_js, dtype=rep_field)}
- # Check a representation for each generator is provided
- assert len(robot_cfg.permutation_Q_js) == len(robot_cfg.reflection_Q_js) >= len(G.generators), \
- f"Not enough representation provided for the joint-space `Q_js`. " \
- f"Found {len(robot_cfg.permutation_TqQ_js)} but symmetry group {G} has {len(G.generators)} generators."
-
- # Generate ESCNN representation of generators
- for g_gen, perm, refx in zip(G.generators, robot_cfg.permutation_Q_js, robot_cfg.reflection_Q_js):
- assert len(perm) == dimQ_js == len(refx), \
- f"Dimension of joint-space position coordinates dim(Q_js)={robot.n_js} != dim(rep_Q_JS): {len(refx)}"
- refx = np.array(refx, dtype=rep_field)
- rep_Q_js[g_gen] = gen_permutation_matrix(oneline_notation=perm, reflections=refx)
- # Generate the entire group
- rep_Q_js = group_rep_from_gens(G, rep_Q_js)
-
- # Joint-Space Tangent bundle TqQ_js (generalized velocity coordinates)
- rep_TqQ_js = {G.identity: np.eye(dimTqQ_js, dtype=rep_field)}
- if dimQ_js == dimTqQ_js: # If position and velocity coordinates have the same dimensions
- rep_TqQ_js = rep_Q_js
- else:
- # Check a representation for each generator is provided
- assert robot_cfg.permutation_TqQ_js is not None and robot_cfg.reflection_TqQ_js is not None, \
- f"No representations provided for the joint-space tangent bundle rep_TqQ_js of {robot_name}"
- assert len(robot_cfg.permutation_TqQ_js) == len(robot_cfg.reflection_TqQ_js) >= len(G.generators), \
- f"Not enough representation provided for the joint-space tangent bundle `TqQ_js`. " \
- f"Found {len(robot_cfg.permutation_TqQ_js)} but symmetry group {G} has {len(G.generators)} generators."
-
+ reps_in_cfg = [k.split('permutation_')[1] for k in robot_cfg if "permutation" in k]
+ for rep_name in reps_in_cfg:
+ perm_list = list(robot_cfg[f'permutation_{rep_name}'])
+ rep_dim = len(perm_list[0])
+ reflex_list = list(robot_cfg[f'reflection_{rep_name}'])
+ assert len(perm_list) == len(reflex_list), \
+ f"Found different number of permutations and reflections for {rep_name}"
+ assert len(perm_list) >= len(G.generators), \
+ f"Found {len(perm_list)} element reps for {rep_name}, Expected {len(G.generators)} generators for {G}"
# Generate ESCNN representation of generators
- for g_gen, perm, refx in zip(G.generators, robot_cfg.permutation_TqQ_js, robot_cfg.reflection_TqQ_js):
- assert len(perm) == dimTqQ_js == len(refx), \
- f"Dimension of joint-space position coordinates dim(Q_js)={robot.n_js} != dim(rep_Q_JS): {len(refx)}"
+ gen_rep = {}
+ for h, perm, refx in zip(G.generators, perm_list, reflex_list):
+ assert len(perm) == len(refx) == rep_dim
refx = np.array(refx, dtype=rep_field)
- rep_TqQ_js[g_gen] = gen_permutation_matrix(oneline_notation=perm, reflections=refx)
+ gen_rep[h] = gen_permutation_matrix(oneline_notation=perm, reflections=refx)
# Generate the entire group
- rep_TqQ_js = group_rep_from_gens(G, rep_TqQ_js)
- rep_TqQ_js.name = 'TqQ_js'
+ rep = group_rep_from_gens(G, rep_H=gen_rep)
+ rep.name = rep_name
+ G.representations.update({rep_name: rep})
- rep_Q_js.name = 'Q_js'
+ rep_Q_js = G.representations['Q_js']
+ rep_TqQ_js = G.representations.get('TqQ_js', None)
+ rep_TqQ_js = rep_Q_js if rep_TqQ_js is None else rep_TqQ_js
+ dimQ_js, dimTqQ_js = robot.nq - 7, robot.nv - 6
+ assert dimQ_js == rep_Q_js.size
+ assert dimTqQ_js == rep_TqQ_js.size
# Create the representation of isometries on the Euclidean Space in d dimensions.
- generate_euclidean_space_representations(G) # This adds `O3` and `E3` representations to the group.
+ rep_R3, rep_E3, rep_R3pseudo, rep_E3pseudo = generate_euclidean_space_representations(G)
+
+ # Define the representation of the rotation matrix R that transforms the base orientation.
+ rep_rot_flat = {}
+ for h in G.elements:
+ rep_rot_flat[h] = np.kron(rep_R3(h), rep_R3(~h).T)
+ rep_rot_flat = escnn_representation_form_mapping(G, rep_rot_flat)
+ rep_rot_flat.name = "SO3_flat"
# Add representations to the group.
- G.representations.update(Q_js=rep_Q_js, TqQ_js=rep_TqQ_js)
+ G.representations.update(Q_js=rep_Q_js,
+ TqQ_js=rep_TqQ_js,
+ R3=rep_R3,
+ E3=rep_E3,
+ R3_pseudo=rep_R3pseudo,
+ E3_pseudo=rep_E3pseudo,
+ SO3_flat=rep_rot_flat)
+
return robot, G
-def generate_euclidean_space_representations(G: Group) -> Representation:
+def generate_euclidean_space_representations(G: Group) -> tuple[Representation, ...]:
"""Generate the E3 representation of the group G.
This representation is used to transform all members of the Euclidean Space in 3D.
@@ -200,26 +200,23 @@ def generate_euclidean_space_representations(G: Group) -> Representation:
else:
raise NotImplementedError(f"Group {G} not implemented yet.")
+ # Representation of unitary/orthogonal transformations in d dimensions.
+ rep_R3.name = "R3"
+
# We include some utility symmetry representations for different geometric objects.
# We define a Ed as a (d+1)x(d+1) matrix representing a homogenous transformation matrix in d dimensions.
rep_E3 = rep_R3 + G.trivial_representation
rep_E3.name = "E3"
- # Representation of unitary/orthogonal transformations in d dimensions.
- rep_R3.name = "R3"
# Build a representation of orthogonal transformations of pseudo-vectors.
# That is if det(rep_O3(h)) == -1 [improper rotation] then we have to change the sign of the pseudo-vector.
# See: https://en.wikipedia.org/wiki/Pseudovector
- psuedo_gens = {h: -1 * rep_R3(h) if np.linalg.det(rep_R3(h)) < 0 else rep_R3(h) for h in G.generators}
- psuedo_gens[G.identity] = np.eye(3)
- rep_R3pseudo = group_rep_from_gens(G, psuedo_gens)
+ pseudo_gens = {h: -1 * rep_R3(h) if np.linalg.det(rep_R3(h)) < 0 else rep_R3(h) for h in G.generators}
+ pseudo_gens[G.identity] = np.eye(3)
+ rep_R3pseudo = group_rep_from_gens(G, pseudo_gens)
rep_R3pseudo.name = "R3_pseudo"
- # Representation of quaternionic transformations in d dimensions.
- # TODO: Add quaternion representation
- # quat_gens = {h: Rotation.from_matrix(rep_O3(h)).as_quat() for h in G.generators}
- # quat_gens[G.identity] = np.eye(4)
- # TODO: Add quaternion pseudo representation
- # rep_O3pseudo = group_rep_from_gens(G, psuedo_gens)
+ rep_E3pseudo = rep_R3pseudo + G.trivial_representation
+ rep_E3pseudo.name = "E3_pseudo"
- G.representations.update(Rd=rep_R3, Ed=rep_E3, Rd_pseudo=rep_R3pseudo)
+ return rep_R3, rep_E3, rep_R3pseudo, rep_E3pseudo
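The `SO3_flat` representation built in `load_symmetric_system` relies on the identity vec_row(A R A^T) = (A ⊗ (A^{-1})^T) vec_row(R), which for orthogonal A equals A ⊗ A. A quick numpy sanity check of that identity (the matrices here are illustrative, not taken from any robot config):

```python
import numpy as np

A = np.diag([1.0, -1.0, 1.0])              # a reflection (orthogonal, det = -1)
rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))            # any 3x3 matrix stands in for the base orientation
# Mirrors rep_rot_flat[h] = np.kron(rep_R3(h), rep_R3(~h).T) from the patch.
rep_flat = np.kron(A, np.linalg.inv(A).T)
g_R_flat = rep_flat @ R.reshape(-1)        # act on the row-major flattened R
assert np.allclose(g_R_flat.reshape(3, 3), A @ R @ A.T)
```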