arXiv:2408.15980

In-Context Imitation Learning via Next-Token Prediction

Published on Aug 28, 2024 · Submitted by mlfu7 on Aug 29, 2024
Abstract

We explore how to enhance next-token prediction models to perform in-context imitation learning on a real robot, where the robot executes new tasks by interpreting contextual information provided during the input phase, without updating its underlying policy parameters. We propose In-Context Robot Transformer (ICRT), a causal transformer that performs autoregressive prediction on sensorimotor trajectories without relying on any linguistic data or reward function. This formulation enables flexible and training-free execution of new tasks at test time, achieved by prompting the model with sensorimotor trajectories of the new task, composed of image observation, action, and state tuples collected through human teleoperation. Experiments with a Franka Emika robot demonstrate that ICRT can adapt to new tasks specified by prompts, even in environment configurations that differ from both the prompt and the training data. In a multitask environment setup, ICRT significantly outperforms current state-of-the-art next-token prediction models in robotics at generalizing to unseen tasks. Code, checkpoints, and data are available at https://icrt.dev/.

Community

mlfu7 (paper author and submitter)

TL;DR: We approach in-context, multi-task imitation learning on a physical robot as a next-token prediction problem. We train a causal transformer on concatenated robot trajectories. At test time, the model can execute a new task, even in a different environment configuration, without fine-tuning: it is simply prompted with raw robot trajectories of the new task collected via human teleoperation.

Website: https://icrt.dev/
Code, checkpoints, dataset: https://github.com/Max-Fu/icrt
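
For readers wondering what "prompting with raw robot trajectories" looks like in practice, here is a minimal sketch of the test-time loop. This is not the released code: the `ICRTPolicy` wrapper, the `predict_next_action` interface, and the (image, state, action) tuple layout are assumptions made for illustration; see the repository above for the actual interfaces.

```python
import torch


class ICRTPolicy:
    """Sketch of test-time in-context imitation with a trained causal transformer."""

    def __init__(self, model):
        self.model = model    # causal transformer over sensorimotor tokens (hypothetical interface)
        self.context = []     # running sequence of (image, state, action) tuples

    def add_prompt(self, demo_trajectories):
        # Condition on teleoperated demonstrations of the new task.
        # No gradient updates: the demonstrations simply become the prefix
        # of the token sequence the transformer attends over.
        for trajectory in demo_trajectories:
            self.context.extend(trajectory)

    @torch.no_grad()
    def act(self, image, state):
        # Append the current observation and let the model autoregressively
        # predict the next action, conditioned on the prompt and the rollout so far.
        action = self.model.predict_next_action(self.context, image, state)
        self.context.append((image, state, action))
        return action
```

A rollout would then collect a few teleoperated demonstrations of the new task, call `add_prompt(demos)` once, and call `act(image, state)` inside the robot's control loop; the policy parameters are never updated.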


Congrats @mlfu7! I opened https://github.com/Max-Fu/icrt/issues/1 for some small improvements.



