
Explore until Confident: Efficient Exploration for Embodied Question Answering

Allen Z. Ren, Jaden Clark, Anushri Dixit, Masha Itkina, Anirudha Majumdar, Dorsa Sadigh

Princeton University, Stanford University, Toyota Research Institute

Project webpage

Installation

Set up the conda environment (Linux, Python 3.9):

conda env create -f environment.yml
conda activate explore-eqa
pip install -e .

Install the latest version of Habitat-Sim (headless, without Bullet physics) with:

conda install habitat-sim headless -c conda-forge -c aihabitat
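
As a quick sanity check (not part of the setup itself), the headless build should import cleanly:

import habitat_sim  # should succeed after the conda install above
print("habitat-sim imported from", habitat_sim.__file__)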

Set up Prismatic VLM with the submodule:

cd prismatic-vlms && pip install -e .
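
For reference, a minimal sketch of loading a Prismatic VLM directly, assuming the load helper exposed by the prismatic-vlms package; the model ID shown is illustrative, and the experiment scripts below load the VLM specified in their config files automatically:

from prismatic import load  # assumed entry point of the prismatic-vlms package

# Illustrative only: run_vlm_exp.py handles model loading via its config.
hf_token = "<your Hugging Face user access token>"
vlm = load("prism-dinosiglip+7b", hf_token=hf_token)  # model ID here is an example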

Download the train split (hm3d-train-habitat-v0.2.tar) of the HM3D dataset here. You will be asked to request access first.
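
Once extracted, you can sanity-check the directory you will later use as scene_data_path; this assumes the usual HM3D Habitat layout of one subfolder per scene containing a .basis.glb mesh:

from pathlib import Path

scene_data_path = "/path/to/hm3d-train"  # placeholder: wherever you extracted the tar
# Assumes the standard HM3D Habitat layout: one subfolder per scene with a .basis.glb file.
num_scenes = sum(1 for _ in Path(scene_data_path).glob("*/*.basis.glb"))
print(f"Found {num_scenes} scenes")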

(Optional) For running CLIP-based exploration:

cd CLIP && pip install -e .
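
A quick way to confirm the CLIP submodule is importable (the model name is the standard OpenAI CLIP identifier; weights are downloaded on first use):

import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # standard OpenAI CLIP loader
print("CLIP loaded on", device)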

Dataset

We release the HM-EQA dataset, which includes 500 questions about 267 scenes from the HM3D dataset. The questions are available in data/.
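
For reference, a minimal sketch of iterating over the questions; the file name and fields below are assumptions, so check the actual files in data/ for the exact layout:

import csv
from pathlib import Path

# File name and columns are illustrative assumptions; inspect data/ for the actual format.
with open(Path("data") / "questions.csv", newline="") as f:
    questions = list(csv.DictReader(f))
print(f"Loaded {len(questions)} questions")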

Usage

First, specify scene_data_path in the config files with the path to the downloaded HM3D train split, and set hf_token to your Hugging Face user access token. Running one of the scripts below for the first time will download the VLM model, which assumes access to a GPU with sufficient VRAM for the chosen VLM.
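
As a minimal pre-flight check (assuming scene_data_path and hf_token are top-level keys in the YAML configs), you can verify both fields are set before launching:

import yaml  # requires PyYAML

with open("cfg/vlm_exp.yaml") as f:
    cfg = yaml.safe_load(f)
# Field names follow the description above; adjust if they are nested differently in the config.
assert cfg.get("scene_data_path"), "Set scene_data_path to the HM3D train split directory"
assert cfg.get("hf_token"), "Set hf_token to your Hugging Face user access token"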

Run our method (VLM-semantic exploration) in Habitat-Sim:

python run_vlm_exp.py -cf cfg/vlm_exp.yaml

Run CLIP-based exploration in Habitat-Sim:

python run_clip_exp.py -cf cfg/clip_exp.yaml

Load a scene (with the question from our dataset) in Habitat-Sim:

python test_scene.py -cf cfg/test_scene.yaml

Scripts

We also share a few scripts that might be helpful:

Acknowledgement

The CLIP-based exploration uses the CLIP multi-scale relevancy extractor from Semantic Abstraction.