Merge pull request #53 from EstherSuchitha/main
Two Papers Added
Showing 6 changed files with 175 additions and 3 deletions.
---
layout: project-page-new
title: "Identification and Learning-Based Control of an End-Effector for Targeted Throwing"
authors:
  - name: Haasith Venkata Sai Pasala
    sup: 1
  - name: Nagamanikandan Govindan
    sup: 1
  - name: Samarth Brahmbhatt
    sup: 1
affiliations:
  - name: Robotics Research Center, IIIT Hyderabad, India
    link: https://robotics.iiit.ac.in
    sup: 1
permalink: /publications/2024/Haasith_Identification/
abstract: "Drones and mobile manipulators that can throw objects in addition to picking and placing them can be highly useful in industrial automation, warehouse environments, search and rescue operations, and sports training. However, predominant end-effectors primarily cater to grasping functions, neglecting the throwing aspect. Currently, throwing is achieved by fast whole-arm motion (Zeng et al., 2020), an approach that raises concerns regarding safety and energy efficiency. Additionally, targeted throwing poses several challenges due to the uncertainties in model parameters and unmodelled dynamics. This letter presents a new end-effector mechanism that can grasp and then place or throw an object using stored elastic energy. The instantaneous release of this stored energy propels the grasped object into projectile motion, facilitating its placement in a desired target location, which can lie beyond the reachable workspace of the robot arm. We describe the mechanical design of the end-effector, its simulation model, a system identification method to fit model parameters, and a data-driven residual learning framework. The residual model predicts control input residuals arising from model uncertainties, improving targeted throwing accuracy even with unseen objects. Experiments conducted with our robot-arm-mounted end-effector show the efficacy of our end-effector mechanism and associated algorithms for targeted throwing."
#project_page: https://constrained-grasp-diffusion.github.io/
paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10681644
#code: https://github.com/vishal-2000/EDMP
#supplement: https://clipgraphs.github.io/static/pdfs/Supplementary.pdf
#video: https://www.youtube.com/watch?v=ITo8rMInatk&feature=youtu.be
#iframe: https://www.youtube.com/embed/ITo8rMInatk
#demo: https://anyloc.github.io/#interactive_demo
---
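The author and affiliation superscripts in these front-matter files are easy to get out of sync when adding a new paper. A minimal sketch of a consistency check, assuming the schema shown above (the `check_sups` helper and the inline `front_matter` dict are hypothetical illustrations, not part of the repository):

```python
# Mirror of the project-page front-matter schema used above,
# written as a plain Python dict for illustration.
front_matter = {
    "layout": "project-page-new",
    "authors": [
        {"name": "Haasith Venkata Sai Pasala", "sup": 1},
        {"name": "Nagamanikandan Govindan", "sup": 1},
        {"name": "Samarth Brahmbhatt", "sup": 1},
    ],
    "affiliations": [
        {"name": "Robotics Research Center, IIIT Hyderabad, India",
         "link": "https://robotics.iiit.ac.in", "sup": 1},
    ],
}

def check_sups(fm):
    """Return author `sup` values that have no matching affiliation."""
    defined = {a["sup"] for a in fm["affiliations"]}
    used = {a["sup"] for a in fm["authors"]}
    return used - defined

missing = check_sups(front_matter)
print(sorted(missing))  # an empty list means every superscript resolves
```

In a real setup this would run over the parsed front matter of each file (e.g. via a YAML loader) before the site is built, so a dangling superscript fails fast instead of rendering as an orphan number on the page.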
---
layout: project-page-new
title: "Revisit Anything: Visual Place Recognition via Image Segment Retrieval"
authors:
  - name: Kartik Garg*
    sup: 1
  - name: Sai Shubodh Puligilla*
    sup: 2
  - name: Shishir Kolathaya
    sup: 1
  - name: Madhava Krishna
    sup: 2
  - name: Sourav Garg
    sup: 3
affiliations:
  - name: Indian Institute of Science (IISc), Bengaluru, India
    link: https://iisc.ac.in/
    sup: 1
  - name: Robotics Research Center, IIIT Hyderabad, India
    link: https://robotics.iiit.ac.in
    sup: 2
  - name: University of Adelaide, Australia
    link: https://www.adelaide.edu.au
    sup: 3
permalink: /publications/2024/Kartik_Revisit/
abstract: "Accurately recognizing a revisited place is crucial for embodied agents to localize and navigate. This requires visual representations to be distinct, despite strong variations in camera viewpoint and scene appearance. Existing visual place recognition pipelines encode the whole image and search for matches. This poses a fundamental challenge in matching two images of the same place captured from different camera viewpoints: the similarity of what overlaps can be dominated by the dissimilarity of what does not overlap. We address this by encoding and searching for image segments instead of the whole images. We propose to use open-set image segmentation to decompose an image into ‘meaningful’ entities (i.e., things and stuff). This enables us to create a novel image representation as a collection of multiple overlapping subgraphs connecting a segment with its neighboring segments, dubbed SuperSegment. Furthermore, to efficiently encode these SuperSegments into compact vector representations, we propose a novel factorized representation of feature aggregation. We show that retrieving these partial representations leads to significantly higher recognition recall than the typical whole-image-based retrieval. Our segments-based approach, dubbed SegVLAD, sets a new state-of-the-art in place recognition on a diverse selection of benchmark datasets, while being applicable to both generic and task-specialized image encoders. Finally, we demonstrate the potential of our method to “revisit anything” by evaluating it on an object instance retrieval task, which bridges two disparate areas of research, visual place recognition and object-goal navigation, through their common aim of recognizing goal objects specific to a place."
project_page: https://revisit-anything.github.io/
paper: https://arxiv.org/pdf/2409.18049
code: https://github.com/AnyLoc/revisit-anything
#supplement: https://clipgraphs.github.io/static/pdfs/Supplementary.pdf
#video: https://www.youtube.com/watch?v=ITo8rMInatk&feature=youtu.be
#iframe: https://www.youtube.com/embed/ITo8rMInatk
#demo: https://anyloc.github.io/#interactive_demo
---
---
layout: project-page-new
title: "Leveraging Cycle-Consistent Anchor Points for Self-Supervised RGB-D Registration"
authors:
  - name: Siddharth Tourani∗§
    sup: 1
  - name: Jayaram Reddy†
    sup: 2
  - name: Sarvesh Thakur†
    sup: 2
  - name: K Madhava Krishna†
    sup: 2
  - name: Muhammad Haris Khan§
    sup: 3
  - name: N Dinesh Reddy‡
    sup: 4
affiliations:
  - name: Computer Vision and Learning Lab, University of Heidelberg
    link: https://hci.iwr.uni-heidelberg.de/vislearn/
    sup: 1
  - name: Robotics Research Center, IIIT Hyderabad, India
    link: https://robotics.iiit.ac.in
    sup: 2
  - name: MBZUAI
    link: https://mbzuai.ac.ae/
    sup: 3
  - name: Amazon
    link: https://www.aboutamazon.com/
    sup: 4
permalink: /publications/2024/Siddharth_Leveraging/
abstract: "With the rise in consumer depth cameras, a wealth of unlabeled RGB-D data has become available. This prompts the question of how to utilize this data for geometric reasoning about scenes. While many RGB-D registration methods rely on geometric and feature-based similarity, we take a different approach. We use cycle-consistent keypoints as salient points to enforce spatial coherence constraints during matching, improving correspondence accuracy. Additionally, we introduce a novel pose block that combines a GRU recurrent unit with transformation synchronization, blending historical and multi-view data. Our approach surpasses previous self-supervised registration methods on ScanNet and 3DMatch, even outperforming some older supervised methods. We also integrate our components into existing methods, showing their effectiveness."
#project_page: https://dataplan-hrc.github.io/
paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10610738
#code: https://github.com/dataplan-hrc/DaTAPlan
#supplement: https://dataplan-hrc.github.io/assets/AnticipateNCollab_SupplementaryMaterial-1.pdf
#video: https://www.youtube.com/watch?v=QW5VCDIgXus
#iframe: https://www.youtube.com/embed/QW5VCDIgXus
#demo: https://anyloc.github.io/#interactive_demo
---
---
layout: project-page-new
title: "Design and Analysis of a Modular Flapping Wing Robot with a Swappable Powertrain Module"
authors:
  - name: Snehit Gupta∗
    sup: 1
  - name: K P Rithwik
    sup: 1
  - name: Kurva Prashanth
    sup: 1
  - name: Avijit Ashe
    sup: 1
  - name: Harikumar Kandath¶
    sup: 1
affiliations:
  - name: Robotics Research Center, IIIT Hyderabad, India
    link: https://robotics.iiit.ac.in
    sup: 1
permalink: /publications/2024/Snehit_Design/
abstract: "Flapping-wing robots (FWR) have domain-specific applications, where the lack of a fast-rotating propeller makes them safer when operating in complex environments in human proximity. However, most existing research on flapping-wing robots focuses on improving range/endurance or increasing payload capacity. This paper proposes a modular powertrain-based flapping-wing robot as a versatile solution to a mission-specific priority switch between payload and range for the same FWR. As the flapping frequency and stroke amplitude directly influence the flight characteristics of the FWR, we exploit this relation when designing our swappable powertrain with different motor-gearbox combinations and 4-bar crank lengths to obtain the desired frequency and amplitude. We calculate initial estimates for the default configuration and simulate it using Ptera Software. We then fabricate two powertrain modules: a default configuration with a higher flapping frequency for payload purposes, and an extended-range configuration with a tandem propeller for higher flight velocity for longer range and endurance. To verify the results, we compare the flight test data of both powertrain configurations on the same FWR platform."
#project_page: https://dataplan-hrc.github.io/
paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10633033
#code: https://github.com/dataplan-hrc/DaTAPlan
#supplement: https://dataplan-hrc.github.io/assets/AnticipateNCollab_SupplementaryMaterial-1.pdf
#video: https://youtu.be/vqJ58n77Bsc
#iframe: https://www.youtube.com/embed/QW5VCDIgXus
#demo: https://anyloc.github.io/#interactive_demo
---