Spring Dataset and Benchmark

Spring: L. Mehl, J. Schmalfuss, A. Jahedi, Y. Nalivayko, A. Bruhn — University of Stuttgart

RobustSpring: J. Schmalfuss, V. Oei, L. Mehl, M. Bartsch, S. Agnihotri, M. Keuper, A. Bruhn


Welcome to the Spring and RobustSpring datasets and evaluation benchmarks for stereo, optical flow, and scene flow estimation, including robustness evaluation under 20 realistic image corruptions!

Download the Spring dataset · Download the RobustSpring dataset · Download a Spring sample
Spring paper · RobustSpring paper · Spring supplementary material · Spring paper (arXiv)
Video · Code (Blender) · Code (website)

Teaser image
The Spring dataset consists of high-resolution left and right stereo images (1920×1080 px). It also contains complete scene flow ground truth at 4× super-resolution (3840×2160 px). For stereo/depth estimation, left-to-right and right-to-left disparities are given for every frame (see second row). For scene flow estimation, the dataset provides disparity change for the left and right view as well as in forward and backward direction (see third row). For optical flow estimation, the Spring dataset contains left and right, forward and backward optical flow (see last row).
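To make this per-frame data layout more concrete, the following minimal Python sketch allocates placeholder arrays for the ground-truth modalities of one stereo frame pair. The dictionary keys, array shapes, and overall layout are illustrative assumptions only and do not reflect the official file format or devkit.

import numpy as np

# Resolutions from the description above: images at 1920x1080,
# ground truth at 4x super-resolution (3840x2160).
IMG_H, IMG_W = 1080, 1920
GT_H, GT_W = 2160, 3840

def empty_frame_ground_truth():
    """Placeholder arrays for the ground-truth modalities of one stereo frame pair.
    Key names are hypothetical; see the official Spring devkit for the real format."""
    gt = {}
    for cam in ("left", "right"):
        # Stereo/depth: one disparity map per camera (left-to-right and right-to-left).
        gt[f"disparity_{cam}"] = np.zeros((GT_H, GT_W), dtype=np.float32)
        for direction in ("forward", "backward"):
            # Optical flow: forward and backward flow per camera, with (u, v) channels.
            gt[f"flow_{direction}_{cam}"] = np.zeros((GT_H, GT_W, 2), dtype=np.float32)
            # Scene flow: disparity change per camera and per direction.
            gt[f"disparity_change_{direction}_{cam}"] = np.zeros((GT_H, GT_W), dtype=np.float32)
    return gt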
RobustSpring overview
RobustSpring is a novel image corruption benchmark for optical flow, scene flow and stereo. It evaluates 20 image corruptions including blurs, color changes, noise, quality degradations, and weather, applied to the stereo video data from Spring. To enable comprehensive robustness evaluations on all three tasks, RobustSpring's image corruptions are applied consistently over time, across both stereo views, and in accordance with scene depth where applicable.
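As a rough illustration of this consistency idea, the sketch below applies one corruption with parameters fixed per sequence, so the identical corruption hits every frame and both stereo views. It uses additive Gaussian noise as a stand-in and an assumed severity-to-noise mapping; it does not reproduce the actual RobustSpring corruptions or their depth-aware rendering.

import numpy as np

def corrupt_sequence_consistently(frames_left, frames_right, severity=3, seed=0):
    """Apply one corruption with sequence-fixed parameters to all frames and both views.

    Additive Gaussian noise is used as a stand-in corruption; the severity-to-sigma
    mapping and the fixed noise field are illustrative assumptions, not the
    RobustSpring implementation.
    """
    rng = np.random.default_rng(seed)
    sigma = 0.02 * severity
    # One noise field per sequence keeps the corruption identical over time and stereo.
    noise = rng.normal(0.0, sigma, size=frames_left[0].shape).astype(np.float32)

    def corrupt(img):
        return np.clip(img.astype(np.float32) / 255.0 + noise, 0.0, 1.0)

    return [corrupt(f) for f in frames_left], [corrupt(f) for f in frames_right]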

News

Paper

If you make use of our dataset or benchmark results, please cite our Spring and RobustSpring papers:

@InProceedings{Mehl2023_Spring,
    author    = {Lukas Mehl and Jenny Schmalfuss and Azin Jahedi and Yaroslava Nalivayko and Andr\'es Bruhn},
    title     = {Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene Flow, Optical Flow and Stereo},
    booktitle = {Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2023}
}
@misc{Schmalfuss2025_RobustSpring,
    author        = {Jenny Schmalfuss and Victor Oei and Lukas Mehl and Madlen Bartsch and Shashank Agnihotri and Margret Keuper and Andr\'es Bruhn},
    title         = {RobustSpring: Benchmarking Robustness to Image Corruptions for Optical Flow, Scene Flow and Stereo},
    eprint        = {2505.09368},
    archivePrefix = {arXiv},
    year          = {2025}
}

Further benchmarks

Many benchmarks have pushed forward research in the domains of motion estimation and stereo. The most notable examples are the Middlebury optical flow and stereo benchmarks, the KITTI 2012 optical flow and stereo benchmark, the Sintel optical flow benchmark, KITTI 2015 as the first benchmark for scene flow, optical flow and stereo, the ETH3D stereo benchmark, the HD1K benchmark, and the VIPER optical flow benchmark. As a valuable addition to existing benchmarks, the Robust Vision Challenge ranks algorithms according to their cross-benchmark generalization.