CoWs on Pasture:
Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
Paper | Code & Data
For robots to be generally useful, they must be able to find arbitrary objects described by people even without expensive navigation training on in-domain data. We explore these capabilities in a unified setting: language-driven zero-shot object navigation (L-ZSON). Inspired by the recent success of open-vocabulary models for image classification, we investigate a straightforward framework, CLIP on Wheels (CoW), to adapt open-vocabulary models to this task without fine-tuning. To better evaluate L-ZSON, we introduce the Pasture benchmark, which considers finding uncommon objects, objects described by spatial and appearance attributes, and hidden objects described relative to visible objects. We conduct an in-depth empirical study by directly deploying 21 CoW baselines across Habitat, RoboTHOR, and Pasture. In total, we evaluate over 90k navigation episodes and find that (1) CoW baselines often struggle to leverage language descriptions, but are proficient at finding uncommon objects; and (2) a simple CoW, with CLIP-based object localization, classical exploration, and no additional training, matches the navigation efficiency of a state-of-the-art ZSON method trained for 500M steps on Habitat MP3D data. This same CoW provides a 15.6 percentage point improvement in success over a state-of-the-art RoboTHOR ZSON model.
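Navigation efficiency in this setting is typically reported as Success weighted by Path Length (SPL), alongside raw success rate. The sketch below is a minimal Python illustration of SPL, assuming per-episode lists of outcomes and path lengths; it is not code from the CoW repository.

```python
# Success weighted by Path Length (SPL): the standard efficiency metric for
# object navigation. Illustrative sketch; not from the CoW codebase.
from typing import Sequence

def spl(successes: Sequence[bool],
        shortest_lengths: Sequence[float],
        agent_lengths: Sequence[float]) -> float:
    """Mean over episodes of S_i * l_i / max(p_i, l_i), where S_i indicates
    success, l_i is the shortest-path length to the goal, and p_i is the
    length of the path the agent actually traveled."""
    score = 0.0
    for s, l, p in zip(successes, shortest_lengths, agent_lengths):
        if s:
            score += l / max(p, l)
    return score / len(successes)
```

For example, `spl([True, False], [2.0, 3.0], [4.0, 3.0])` returns 0.25: the successful episode took twice the shortest path (0.5 credit) and the failed episode contributes 0.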
CoW Overview
Here we give an overview of CoW, a simple L-ZSON baseline that does not require any navigation training.
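As a concrete illustration of the CLIP-based object localization that a CoW pairs with classical (e.g., frontier-based) exploration, the sketch below scores a grid of image crops against the language goal with off-the-shelf CLIP. This is an assumption-laden sketch, not the paper's implementation: the grid size and similarity threshold are illustrative choices, and the paper studies several localizers, including gradient-based CLIP relevance and open-vocabulary detectors.

```python
# Minimal sketch of CLIP patch-based goal localization in the spirit of CoW:
# split the egocentric RGB frame into a grid of crops, embed each crop with
# CLIP, and compare against the embedded goal description. Grid size and
# threshold are illustrative, not values from the paper.
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def localize(image: Image.Image, goal_text: str, grid: int = 3, threshold: float = 0.3):
    """Return the (row, col) of the grid cell most similar to the goal text,
    or None if no cell clears the similarity threshold."""
    w, h = image.size
    crops, cells = [], []
    for r in range(grid):
        for c in range(grid):
            box = (c * w // grid, r * h // grid, (c + 1) * w // grid, (r + 1) * h // grid)
            crops.append(preprocess(image.crop(box)))
            cells.append((r, c))
    with torch.no_grad():
        image_feats = model.encode_image(torch.stack(crops).to(device))
        text_feats = model.encode_text(clip.tokenize([goal_text]).to(device))
        image_feats /= image_feats.norm(dim=-1, keepdim=True)
        text_feats /= text_feats.norm(dim=-1, keepdim=True)
        sims = (image_feats @ text_feats.T).squeeze(1)  # cosine similarity per crop
    best = sims.argmax().item()
    return cells[best] if sims[best].item() > threshold else None
```

In the full agent, a positive detection is projected into the agent's top-down map (using depth and pose) and becomes a navigation target; otherwise the agent continues exploring unvisited frontiers.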
Trajectory Visualization
Team
Bibtex
@article{gadre2022cow,
    title={CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation},
    author={Gadre, Samir Yitzhak and Wortsman, Mitchell and Ilharco, Gabriel and Schmidt, Ludwig and Song, Shuran},
    journal={CVPR},
    year={2023}
}
Acknowledgements
We would like to thank Jessie Chapman, Cheng Chi, Huy Ha, Zeyi Liu, Sachit Menon, and Sarah Pratt for valuable feedback. We would also like to thank Luca Weihs for technical help with AllenAct and Cheng Chi for help speeding up code. This work was supported in part by NSF CMMI-2037101, NSF IIS-2132519, and an Amazon Research Award. SYG is supported by an NSF Graduate Research Fellowship. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.
Contact
If you have any questions, please contact Samir.