Waymo Open Dataset Updates and Challenges 2023

Waymo introduces a dataset extension brimming with sensor data and labels.

Waymo regularly makes driving recordings available for research, on the reasoning that working on autonomous driving in isolation does not get very far. The data is, however, not provided to competitors. Waymo has also launched its fourth Waymo Open Dataset Challenges for 2023, to which researchers are invited.

Since the Waymo Open Dataset was launched in 2019, over 36,000 researchers worldwide have used it for basic AI research and have made important advances in computer vision, behavior prediction, and other machine learning topics. The research community has published over 1,300 papers based on it, on topics such as point-based 3D object detection, end-to-end modeling for trajectory prediction, and bird's-eye-view rendering composited from multiple camera images, with applications reaching beyond autonomous driving alone.

Together with Google Brain, Waymo used the dataset to create a new benchmark for “causal agents”, meaning agents whose behavior actually influences the output of motion prediction models.

The Waymo Open Dataset Challenges 2023 include:

2D Video Panoptic Segmentation Challenge: segment and track objects across camera video of a scene.
Pose Estimation Challenge: estimate human poses (3D key points) from lidar data.
Motion Prediction Challenge: predict the future positions of multiple agents given the last second of observed data (see the sketch after this list).
Sim Agents Challenge: build simulated agent models. This is the first challenge dedicated to simulated agents.
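
To make the motion prediction task concrete, here is a minimal sketch of a constant-velocity baseline in Python. Given the last second of an agent's observed positions, it extrapolates future positions; the function name, the 10 Hz sampling rate, and the 8-second horizon are illustrative assumptions, not part of the official challenge tooling.

```python
import numpy as np

def constant_velocity_forecast(history_xy, history_hz=10, horizon_s=8.0):
    """Extrapolate an agent's future positions from its recent track.

    history_xy: (T, 2) array of observed (x, y) positions, most recent last,
                sampled at history_hz (the challenge provides 1 s of history).
    Returns an (N, 2) array of predicted positions over horizon_s seconds.
    """
    dt = 1.0 / history_hz
    # Estimate velocity from the last two observed positions.
    velocity = (history_xy[-1] - history_xy[-2]) / dt
    # Roll the last position forward at constant velocity.
    steps = int(horizon_s * history_hz)
    offsets = np.arange(1, steps + 1)[:, None] * dt  # (N, 1) time offsets
    return history_xy[-1] + offsets * velocity

# Example: an agent moving at 5 m/s along x, observed for 1 second at 10 Hz.
history = np.stack([np.linspace(0.0, 4.5, 10), np.zeros(10)], axis=1)
future = constant_velocity_forecast(history)
print(future[:3])  # first three predicted positions
```

Real challenge entries replace this extrapolation with learned models that account for interactions between agents, but the input/output shape of the task is the same.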

First-place finishers in each of the four challenges will receive $10,000 in Google Cloud credits. Additionally, the most successful teams are invited to present their work at the autonomous driving workshop at CVPR in June 2023.

The 2023 Waymo Open Dataset Challenges will end on May 23, 2023 at 11:59 p.m. Pacific Time.

Alongside the 2023 challenges, Waymo is also releasing new versions of the Perception and Motion datasets, as well as a new, modular dataset structure. The Perception dataset now includes, among other things, labels for 2D video panoptic segmentation, and the Motion dataset now provides lidar data for all segments.
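
A practical consequence of the modular structure is selective loading: rather than decoding monolithic per-frame records, you read only the components you need. Below is a minimal sketch using pyarrow, assuming the components are stored as Parquet files; the directory paths are hypothetical placeholders.

```python
import pyarrow.parquet as pq

# Hypothetical local paths; in the modular layout each component
# (camera images, lidar returns, labels, ...) lives in its own set
# of Parquet files instead of one monolithic per-frame record.
CAMERA_IMAGE_DIR = "waymo/perception/v2/training/camera_image"
LIDAR_DIR = "waymo/perception/v2/training/lidar"

# Load only the component needed for the task at hand.
camera_table = pq.read_table(CAMERA_IMAGE_DIR)
print(camera_table.schema)   # inspect the component's columns

lidar_table = pq.read_table(LIDAR_DIR)
print(lidar_table.num_rows)  # row count without touching camera data
```

Components that share key columns (such as segment and timestamp identifiers) can then be joined as needed.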
