From 3b8ab7fa1c8d231c12d04e9033f9587d5cd9680b Mon Sep 17 00:00:00 2001
From: Amit Parekh <7276308+amitkparekh@users.noreply.github.com>
Date: Mon, 8 Jul 2024 13:53:27 +0100
Subject: [PATCH] chore: add citation file

See if it works
---
 CITATION.cff | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
 create mode 100644 CITATION.cff

diff --git a/CITATION.cff b/CITATION.cff
new file mode 100644
index 0000000..a485819
--- /dev/null
+++ b/CITATION.cff
@@ -0,0 +1,32 @@
+cff-version: 1.2.0
+title: &title "Investigating the Role of Instruction Variety and Task Difficulty in Robotic Manipulation Tasks"
+message: "If you use this software, please cite the software and the paper"
+authors: &authors
+  - given-names: Amit
+    family-names: Parekh
+    email: amit.parekh@hw.ac.uk
+    affiliation: Heriot-Watt University
+  - given-names: Nikolas
+    family-names: Vitsakis
+    email: nv2006@hw.ac.uk
+    affiliation: Heriot-Watt University
+  - given-names: Alessandro
+    family-names: Suglia
+    email: a.suglia@hw.ac.uk
+    affiliation: Heriot-Watt University
+  - given-names: Ioannis
+    family-names: Konstas
+    email: i.konstas@hw.ac.uk
+    affiliation: Heriot-Watt University
+date-released: 2024-07-04
+references:
+  - type: article
+    authors: *authors
+    title: *title
+    year: 2024
+    journal: arXiv
+    url: https://arxiv.org/abs/2407.03967
+
+abstract: >-
+  Evaluating the generalisation capabilities of multimodal models based solely on their performance on out-of-distribution data fails to capture their true robustness. This work introduces a comprehensive evaluation framework that systematically examines the role of instructions and inputs in the generalisation abilities of such models, considering architectural design, input perturbations across language and vision modalities, and increased task complexity. The proposed framework uncovers the resilience of multimodal models to extreme instruction perturbations and their vulnerability to observational changes, raising concerns about overfitting to spurious correlations. By employing this evaluation framework on current Transformer-based multimodal models for robotic manipulation tasks, we uncover limitations and suggest future advancements should focus on architectural and training innovations that better integrate multimodal inputs, enhancing a model's generalisation prowess by prioritising sensitivity to input content over incidental correlations.
+license: MIT
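The added CITATION.cff can be checked before merging: full validation is best done with the real `cffconvert` tool (`cffconvert --validate`), but as a minimal stdlib-only sketch, the hypothetical helper below only confirms that the four top-level keys required by CFF 1.2.0 (`cff-version`, `message`, `authors`, `title`) are present — it is not a schema validator.

```python
# Minimal sanity check for a CITATION.cff file. NOT a full CFF validator
# (use `cffconvert --validate` for that); this sketch only verifies that
# the four fields required by CFF 1.2.0 appear as top-level YAML keys.

REQUIRED_KEYS = {"cff-version", "message", "authors", "title"}


def missing_cff_keys(cff_text: str) -> set:
    """Return the required top-level keys absent from the given CFF text."""
    present = set()
    for line in cff_text.splitlines():
        # Top-level YAML keys start at column 0 and are followed by a colon.
        if line and not line[0].isspace() and ":" in line:
            present.add(line.split(":", 1)[0].strip())
    return REQUIRED_KEYS - present


example = """\
cff-version: 1.2.0
title: "Investigating the Role of Instruction Variety and Task Difficulty in Robotic Manipulation Tasks"
message: "If you use this software, please cite the software and the paper"
authors:
  - given-names: Amit
    family-names: Parekh
"""

print(missing_cff_keys(example))          # empty set: all required keys present
print(missing_cff_keys("title: Demo\n"))  # reports the three missing keys
```

Because indented lines are skipped, nested fields such as `given-names` are not mistaken for top-level keys; anchored/aliased values like `&title "..."` also pass, since only the key before the first colon is inspected.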