The SceneNet RGB-D generator randomly samples and positions objects, then runs a physics simulation to produce a physically plausible scene description. A camera trajectory is next generated by a two-body simulation, with one body for the camera position and one for the look-at point, using OpenGL z-buffer collision detection to keep the camera clear of scene geometry. Finally, the renderer (a modified version of the Opposite Renderer, built on the OptiX framework) renders the trajectory, outputting RGB, depth, and instance masks.
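Below is a minimal sketch of the two-body trajectory idea described above: the camera position and the look-at point are treated as two independent bodies given random accelerations each step. Function and parameter names are illustrative, not the generator's actual API, and the z-buffer collision test used by the real generator is omitted.

```python
import numpy as np

def simulate_two_body_trajectory(n_steps=300, dt=1.0 / 25.0, drag=0.95,
                                 accel_scale=0.1, seed=0):
    """Hypothetical two-body random trajectory: returns (eye, look-at) pairs."""
    rng = np.random.default_rng(seed)
    # State: position and velocity for the camera body and the look-at body.
    pos = rng.uniform(-1.0, 1.0, size=(2, 3))
    vel = np.zeros((2, 3))
    poses = []
    for _ in range(n_steps):
        # Random acceleration plus velocity damping keeps the motion smooth.
        vel = drag * vel + accel_scale * rng.normal(size=(2, 3)) * dt
        pos = pos + vel * dt
        # The real generator would reject steps that collide with scene
        # geometry (via an OpenGL z-buffer check); that test is omitted here.
        poses.append((pos[0].copy(), pos[1].copy()))
    return poses
```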
The SceneNet RGB-D generator provides perfect ground-truth camera poses and depth data, enabling investigation of geometric computer vision problems such as optical flow, camera pose estimation, and 3D scene labelling. Random sampling permits virtually unlimited scene configurations.
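As one illustration of what perfect poses and depth make possible, the sketch below derives ground-truth optical flow by back-projecting the first frame's depth into 3D and reprojecting it into the second frame. The intrinsics `K`, the depth map, and the world-from-camera poses `T1`, `T2` are assumed inputs; this is a generic reprojection recipe, not part of the dataset's tooling.

```python
import numpy as np

def flow_from_depth_and_poses(depth1, K, T1, T2):
    """Ground-truth flow from frame 1 to frame 2 given depth and 4x4 poses."""
    h, w = depth1.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel of frame 1 to a 3D point in camera 1's frame.
    z = depth1
    x = (u - K[0, 2]) / K[0, 0] * z
    y = (v - K[1, 2]) / K[1, 1] * z
    pts_cam1 = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # (h, w, 4)
    # Relative transform: camera 1 -> world -> camera 2.
    T_21 = np.linalg.inv(T2) @ T1
    pts_cam2 = pts_cam1 @ T_21.T
    # Project into frame 2; the pixel displacement is the optical flow.
    u2 = K[0, 0] * pts_cam2[..., 0] / pts_cam2[..., 2] + K[0, 2]
    v2 = K[1, 1] * pts_cam2[..., 1] / pts_cam2[..., 2] + K[1, 2]
    return np.stack([u2 - u, v2 - v], axis=-1)
```

This recipe ignores occlusion (a point visible in frame 1 may be hidden in frame 2), which a full ground-truth pipeline would handle with a depth consistency check.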
John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison. SceneNet RGB-D: Can 5M Synthetic Images Beat Generic ImageNet Pre-training on Indoor Segmentation? In Proceedings of the International Conference on Computer Vision (ICCV), 2017.
The SceneNet RGB-D generator is available through the link on the right and is free to use for non-commercial purposes. The full terms and conditions governing its use are detailed here.
A dataset generated using the SceneNet RGB-D generator is available through a link on the right.
Contact us
Dyson Robotics Lab at Imperial
William Penney Building
Imperial College London
South Kensington Campus
London
SW7 2AZ
Telephone: +44 (0)20 7594-7756
Email: iosifina.pournara@imperial.ac.uk