
March 16, 2023

Driving Research Forward: The Waymo Open Dataset Updates and 2023 Challenges

  • Technology

Waymo’s recent technology advancements and rapid expansion across Phoenix, San Francisco, and Los Angeles wouldn’t be possible without the underlying innovative research that helps drive our progress forward. As a pioneer in the AV industry, we have continuously contributed to the research community through publishing and expanding the Waymo Open Dataset — one of the largest and most diverse autonomous driving datasets ever released. Today, we are excited to announce the latest expansion of the dataset with even more sensor data and labels. We are also launching our 2023 Waymo Open Dataset Challenges, the fourth annual edition, and we invite all researchers to participate!

Since launching the Waymo Open Dataset in 2019, over 36,000 researchers around the world have used it to explore new approaches and make important progress in computer vision, behavior prediction, and broader machine learning topics. They have published over 1,300 academic papers, including research on point-based 3D object detection, end-to-end modeling for trajectory prediction, and bird’s-eye-view representation from multiple camera images, as well as work in fields beyond autonomous driving. We have also utilized the dataset in collaboration with Google Brain to create a new benchmark for Causal Agents — agents whose presence influences human drivers’ behavior in any way — aiding the research community in building more reliable models for motion forecasting.

Our 2023 Waymo Open Dataset Challenges include:
  • 2D Video Panoptic Segmentation Challenge: Given a panoramic video of camera images, produce a set of panoptic segmentation labels for each pixel in each image, where each object in the scene is tracked across cameras and over time.
  • Pose Estimation Challenge: Given one or more lidar range images and the associated camera images, predict 3D keypoints for pedestrians and cyclists in the scene.
  • Motion Prediction Challenge: Predict the future positions of multiple agents in the scene given 1 second of past history. We have made lidar sensor data available as a model input. A simple baseline for this task is sketched after this list.
  • Sim Agents Challenge: Produce sim agent models that control agents in the scene and are evaluated on how human-like their behavior is. Sim agents are a fundamental but relatively underexplored topic in autonomous driving, and this is the first such competition available to the research community.
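To make the Motion Prediction task concrete, below is a minimal constant-velocity baseline sketch. The 10 Hz sampling rate, array shapes, and 8-second prediction horizon are illustrative assumptions, not the challenge’s official submission format.

```python
import numpy as np

def constant_velocity_forecast(past_xy: np.ndarray,
                               num_future_steps: int = 80,
                               dt: float = 0.1) -> np.ndarray:
    """Extrapolate each agent's last observed velocity into the future.

    past_xy: array of shape [num_agents, num_past_steps, 2] holding the
        observed (x, y) positions over the 1 second of history.
    Returns an array of shape [num_agents, num_future_steps, 2].
    """
    # Velocity estimated from the last two observed positions.
    velocity = (past_xy[:, -1, :] - past_xy[:, -2, :]) / dt          # [A, 2]

    # Future time offsets: dt, 2*dt, ..., num_future_steps*dt.
    steps = np.arange(1, num_future_steps + 1, dtype=np.float64) * dt  # [T]
    offsets = velocity[:, None, :] * steps[None, :, None]              # [A, T, 2]

    # Positions extrapolated from the last observed location.
    return past_xy[:, -1:, :] + offsets

# Example: two agents with 11 observed positions each (1 s at an assumed 10 Hz).
past = np.cumsum(np.random.randn(2, 11, 2) * 0.1, axis=1)
future = constant_velocity_forecast(past)
print(future.shape)  # (2, 80, 2)
```

Such a baseline is, of course, only a starting point; the challenge rewards models that capture interactions between agents and the map.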
First-place winners in each of the four Challenges will receive $10,000 in Google Cloud credits. Additionally, teams with the best performing or noteworthy submissions may be invited to present their work at the Workshop on Autonomous Driving at CVPR in June 2023. The 2023 Waymo Open Dataset Challenges close at 11:59 PM Pacific on May 23, 2023, but the leaderboards will remain open for future submissions. The Challenges’ rules and eligibility criteria are available here.

In addition to the 2023 Challenges, we’re releasing new versions of the Perception and Motion datasets, as well as introducing a new, modular dataset format.

Perception Dataset:
  • We have added the road graph definition for all perception segments, enabling research on road graph reconstruction. With this release, both the Perception and Motion datasets have corresponding road graph and lidar data available, which enables research on end-to-end learning for prediction.
  • Given the large number and size of features now included in the dataset, we’re adopting a new modular data format and an accompanying Python library, which enable users to download and use individual features independently. This new format makes it easier to access and explore our data efficiently (a data-loading sketch follows this list).
  • We have added a new feature for 2D video panoptic segmentation: per-image masks indicating how many cameras cover each pixel. This can be used to compute the weighted Segmentation and Tracking Quality metric (a sketch of the per-pixel weighting also follows this list).
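As one illustration of how a modular, per-component layout can be consumed, the sketch below reads two hypothetical component files and joins them on shared keys. The columnar (Parquet) storage, file paths, and key column names are assumptions for illustration; the accompanying Python library is the supported way to access the data.

```python
import pandas as pd  # pyarrow or dask also work for columnar files

# Hypothetical component files; a modular format stores each feature
# group separately so you only download what you need.
CAMERA_BOX_FILE = "training/camera_box/segment-000.parquet"      # assumed path
CAMERA_IMAGE_FILE = "training/camera_image/segment-000.parquet"  # assumed path

# Assumed shared key columns linking components of the same frame and camera.
KEYS = ["key.segment_context_name", "key.frame_timestamp_micros", "key.camera_name"]

boxes = pd.read_parquet(CAMERA_BOX_FILE)
images = pd.read_parquet(CAMERA_IMAGE_FILE)

# Join the two components so each image row carries its box labels.
frames = boxes.merge(images, on=KEYS, how="inner")
print(frames.columns.tolist()[:10])
```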
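As a rough sketch of how the new coverage masks could feed into the weighted Segmentation and Tracking Quality metric, each pixel can be down-weighted by the number of cameras that see it, so regions visible from multiple cameras are not counted multiple times. The function below only derives those per-pixel weights, not the full metric, and the mask encoding is an assumption.

```python
import numpy as np

def per_pixel_weights(num_cameras_covering: np.ndarray) -> np.ndarray:
    """Weight each pixel by the inverse of its camera coverage.

    num_cameras_covering: integer array [height, width]; values >= 1 for
        pixels seen by at least one camera, 0 where no camera covers them.
    Returns a float array of the same shape with weights in [0, 1].
    """
    coverage = num_cameras_covering.astype(np.float64)
    weights = np.zeros_like(coverage)
    seen = coverage > 0
    # A pixel seen by k cameras contributes 1/k, so each scene region counts once.
    weights[seen] = 1.0 / coverage[seen]
    return weights

# Example: a 2x3 coverage mask where the middle column overlaps two cameras.
mask = np.array([[1, 2, 1],
                 [1, 2, 0]])
print(per_pixel_weights(mask))
```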
Motion Dataset:
  • We have added lidar sensor data for all segments. Compared to our Perception dataset, this drastically increases the number of lidar segments available, enabling further long-tail, semi-supervised, and unsupervised perception research. Furthermore, lidar can provide a rich signal about the behavior and intent of road users, opening opportunities for end-to-end perception and behavior modeling.
  • We have also added driveways to the dataset.
Making rich datasets openly available has a compounding effect on innovation, and we are committed to continuing to evolve the Waymo Open Dataset. We hope these newest expansions help tackle a wider set of research questions across autonomous driving and machine learning.

Good luck to all our 2023 Challenge participants! And let us know if you have ideas to make our dataset even more impactful in the future.