Transformer model for motion prediction via redundancy reduction


KIT-MRT/red-motion

 
 


RedMotion: Motion Prediction via Redundancy Reduction [arXiv]

TL;DR: Transformer model for motion prediction that incorporates two types of redundancy reduction.

Overview

Model architecture

RedMotion model. Our model consists of two encoders. The trajectory encoder generates an embedding for the past trajectory of the current agent. The road environment encoder generates sets of local and global road environment embeddings as context. All embeddings are fused via cross-attention to yield trajectory proposals per agent.
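The fusion step described above can be sketched as scaled dot-product cross-attention, where the trajectory embedding acts as the query and the road environment embeddings act as keys and values. This is an illustrative single-head NumPy sketch, not the repository's implementation; the shapes and embedding dimension are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, d_k):
    # queries: (n_q, d), context: (n_ctx, d)
    # Each query attends over all context embeddings.
    scores = queries @ context.T / np.sqrt(d_k)   # (n_q, n_ctx)
    weights = softmax(scores, axis=-1)            # rows sum to 1
    return weights @ context                      # (n_q, d)

rng = np.random.default_rng(0)
d = 16                                   # hypothetical embedding dimension
traj_emb = rng.normal(size=(1, d))       # past-trajectory embedding (one agent)
env_emb = rng.normal(size=(32, d))       # local + global road environment embeddings
fused = cross_attention(traj_emb, env_emb, d)
print(fused.shape)  # (1, 16)
```

In the full model, the fused representation would then be decoded into multiple trajectory proposals per agent; here the sketch only shows how trajectory and environment context are combined.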

Getting started

This Colab notebook shows how to create a dataset, run inference and visualize the predicted trajectories.

Prepare the Waymo Open Motion Dataset

Register and download the dataset (version 1.0) from here. Clone this repo and use its prerender script as described in its readme.

Acknowledgements

The local attention (Beltagy et al., 2020) and cross-attention (Chen et al., 2021) implementations are from lucidrains' vit_pytorch library.
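For intuition, local attention restricts each token to a sliding window of neighbours instead of the full sequence, which keeps attention cost linear in sequence length. A minimal sketch of such a window mask (window size and sequence length are illustrative, not taken from the library):

```python
import numpy as np

def local_attention_mask(seq_len, window):
    # True where attention is allowed: each position attends only to
    # positions within +/- `window` of itself (a sliding window).
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = local_attention_mask(seq_len=8, window=2)
# Positions outside the window would have their attention scores
# set to -inf before the softmax, so they receive zero weight.
print(mask.astype(int))
```

In practice this mask is applied to the attention score matrix before the softmax, zeroing out contributions from tokens outside the window.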
