This repository has been archived by the owner on Dec 19, 2024. It is now read-only.

decouple Keypoint parsing logic from visualization #165

Open
zook-unity opened this issue Apr 7, 2021 · 4 comments
Labels
enhancement New feature or request

Comments

@zook-unity

Why you need this feature:

Keypoints are a data type that the Perception SDK produces, but there is no easy-to-use Python parsing support for them. This limits my ability to train models on data generated by the Perception SDK, as I need to write and maintain the parsing logic myself. For example, to use keypoint data from @salehe-ee's PeopleSansPeople project, I have to write my own code to parse that data into a format suitable for model training.

Keypoint parsing is currently embedded in the visualization plotting logic (e.g. https://github.com/Unity-Technologies/datasetinsights/blob/master/datasetinsights/stats/visualization/keypoints_plot.py).

Describe the solution you'd like:

A Python class for storing keypoint data, with methods to parse keypoint data produced by the Perception SDK (at least preview-0.8). Ideally, it would also be able to produce COCO-formatted JSON output from the Python class.
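A minimal sketch of what such a class could look like. The Perception field names here (`instance_id`, `label_id`, `index`, `state`) are assumptions for illustration, not the finalized schema; COCO's flat `[x1, y1, v1, x2, y2, v2, ...]` keypoint layout and visibility flags are standard:

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Keypoint:
    index: int   # joint index within the keypoint template (assumed field name)
    x: float
    y: float
    state: int   # visibility: 0 = absent, 1 = labeled but occluded, 2 = visible


@dataclass
class KeypointAnnotation:
    instance_id: int
    label_id: int
    keypoints: List[Keypoint]

    @staticmethod
    def from_perception(data: Dict[str, Any]) -> "KeypointAnnotation":
        # Parse one entry of a (hypothetical) Perception keypoint payload.
        return KeypointAnnotation(
            instance_id=data["instance_id"],
            label_id=data["label_id"],
            keypoints=[
                Keypoint(k["index"], k["x"], k["y"], k["state"])
                for k in data["keypoints"]
            ],
        )

    def to_coco(self) -> List[float]:
        # COCO stores keypoints as a flat [x1, y1, v1, x2, y2, v2, ...] list,
        # ordered by joint index.
        flat: List[float] = []
        for k in sorted(self.keypoints, key=lambda k: k.index):
            flat.extend([k.x, k.y, k.state])
        return flat
```

Parsing and COCO export then live on the data object itself, independent of any plotting code.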


@zook-unity added the enhancement label on Apr 7, 2021
@salehe-ee
Contributor

This issue is related to the following one:
#159

We plan on consolidating support for parsing and visualization when we have a finalized schema and logic for annotation JSONs from the Perception SDK.

In the meantime, we have a decoupled solution for parsing and visualizing keypoint and 2D bounding box annotations from [email protected]:
https://github.cds.internal.unity3d.com/salehe-erfanianebadi/perception2coco

@zook-unity
Author

> We plan on consolidating support for parsing and visualization when we have a finalized schema and logic for annotation JSONs from the Perception SDK.

Is this the schema design you are referring to?
https://github.com/Unity-Technologies/perception-service/blob/main/protos/randomization_schema.proto

If not, which design is pending?

@salehe-ee
Contributor

This is the JSON schema I was referring to:
https://github.com/Unity-Technologies/com.unity.perception/blob/55707425050d34256266415e0c07592ac0761be6/com.unity.perception/Documentation~/Schema/Synthetic_Dataset_Schema.md

It may be subject to significant changes in the future based on user feedback.

@zook-unity
Author

zook-unity commented Apr 9, 2021

Ah, okay. It feels like we can define the desired Python objects and then handle the decoupling by isolating the methods that parse a Perception payload into those objects. The COCO-format output can then stay the same, since it would depend only on the Python objects.

For example:

from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class BBox2D:
    x: float
    y: float
    width: float
    height: float

    @property
    def area(self) -> float:
        return self.width * self.height

    def to_coco(self) -> List[float]:
        # COCO bounding boxes are [x, y, width, height].
        return [self.x, self.y, self.width, self.height]

    @staticmethod
    def from_perception(data: Dict[str, Any]) -> "BBox2D":
        # `data` is one bounding box entry from a parsed Perception annotation payload.
        return BBox2D(data["x"], data["y"], data["width"], data["height"])
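Building on that, the COCO-formatted JSON output could be produced entirely from such objects. A sketch of that last step (the `to_coco_annotation` helper is hypothetical, and `BBox2D` is redefined here so the snippet is self-contained; the `bbox`/`area`/`iscrowd` fields are standard COCO object-detection annotation fields):

```python
import json
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class BBox2D:
    x: float
    y: float
    width: float
    height: float


def to_coco_annotation(box: BBox2D, ann_id: int, image_id: int,
                       category_id: int) -> Dict[str, Any]:
    # One entry of a COCO "annotations" array for object detection.
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "bbox": [box.x, box.y, box.width, box.height],
        "area": box.width * box.height,
        "iscrowd": 0,
    }


# Serialize a single annotation entry to JSON.
entry = to_coco_annotation(BBox2D(10, 20, 30, 40), ann_id=1, image_id=5,
                           category_id=2)
print(json.dumps(entry))
```

The COCO writer never touches the raw Perception payload, so the parsing methods can evolve with the schema without affecting the export side.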
