Welcome to the Spring dataset and evaluation benchmark for stereo, optical flow and scene flow estimation!
Note: The following training files have been updated: train_flow_FW_right.zip, train_disp2_FW_right.zip and train_maps.zip. All other files and the test split data are unaffected. We thank Sander Gielisse for notifying us!

If you make use of our dataset or benchmark results, please cite our paper:
@InProceedings{Mehl2023_Spring,
author = {Lukas Mehl and Jenny Schmalfuss and Azin Jahedi and Yaroslava Nalivayko and Andr\'es Bruhn},
title = {Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene Flow, Optical Flow and Stereo},
booktitle = {Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023}
}
Many benchmarks have pushed forward research in the domains of motion estimation and stereo. The most notable examples are the Middlebury optical flow and stereo benchmark, the KITTI 2012 optical flow and stereo benchmark, the Sintel optical flow benchmark, KITTI 2015 as the first benchmark for scene flow, optical flow and stereo, the ETH3D stereo benchmark and the VIPER optical flow benchmark. As a great addition to existing benchmarks, the Robust Vision Challenge ranks algorithms according to their cross-benchmark generalization. Unfortunately, the HD1K benchmark appears to be offline at the time of writing.