Visual paths

We define a “visual path” as the video sequence captured by a moving person while executing a journey along a particular physical path.

A sample path (Corridor 1, C1), illustrating multiple passes through the same space. Each pass yields a sequence that is either stored in a database or submitted as a query against previous journeys. In the assistive context, the user at point A could be a blind or partially sighted person, who would benefit from solutions to the association problem: relating a query journey to previous “journey experiences” along roughly the same path, crowdsourced from N users who may be sighted.
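
To make the association problem concrete, the sketch below shows one way a query pass could be matched against stored passes: each frame is assumed to have already been reduced to a fixed-length descriptor, and each stored frame to carry a ground-truth position along the path; a query frame is then localised by nearest-neighbour search over the database frames. This is only a minimal sketch, not the full pipeline from the publications below, and the function and variable names are illustrative.

    import numpy as np

    def localise_query(query_desc, db_desc, db_positions):
        """For each query-frame descriptor, return the path position (in metres)
        of its nearest-neighbour frame among all stored passes.

        query_desc   : (Q, D) array of query-frame descriptors
        db_desc      : (N, D) array of descriptors pooled over all stored passes
        db_positions : (N,)   array of positions along the path for those frames
        """
        estimates = np.empty(len(query_desc))
        for i, q in enumerate(query_desc):
            # Euclidean distance from this query frame to every database frame
            dists = np.linalg.norm(db_desc - q, axis=1)
            estimates[i] = db_positions[np.argmin(dists)]
        return estimates

    # Toy example with random vectors standing in for real frame descriptors.
    rng = np.random.default_rng(0)
    db_desc = rng.normal(size=(500, 128))        # 500 frames from previous passes
    db_positions = np.linspace(0.0, 50.0, 500)   # positions along a 50 m corridor
    query_desc = db_desc[100:110] + 0.05 * rng.normal(size=(10, 128))
    print(localise_query(query_desc, db_desc, db_positions))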

RSM dataset details

    Table 1: A summary of the dataset with thumbnails. Also available as an online spreadsheet.
  • 60 videos
  • 6 corridors
  • 3.05 km
  • 10 total passes/corridor
    • 5 passes/corridor with Nexus 4 @ 1920 x 1080 or 1280 x 720, 24-30 fps
    • 5 passes/corridor with Google Glass @ 1280 x 720, 30 fps
  • 90,302 frames with positional ground-truth
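
Because every frame carries positional ground truth, localisation estimates can be scored directly against it. The sketch below assumes positions are expressed in metres along the corridor; the 1.5 m threshold and the function name are illustrative choices, not part of the dataset specification.

    import numpy as np

    def localisation_error(estimated, ground_truth, threshold_m=1.5):
        """Summarise localisation accuracy along a corridor.

        estimated, ground_truth : arrays of positions in metres along the path.
        Returns the mean absolute error and the fraction of frames localised
        within `threshold_m` metres of the ground truth.
        """
        err = np.abs(np.asarray(estimated) - np.asarray(ground_truth))
        return err.mean(), float((err <= threshold_m).mean())

    # Toy usage with made-up estimates for a handful of frames.
    gt = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    est = np.array([0.2, 1.1, 3.5, 2.9, 4.3])
    mae, hit_rate = localisation_error(est, gt)
    print(f"MAE = {mae:.2f} m, within 1.5 m: {hit_rate:.0%}")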

Table 1 summarizes the acquisition. The length of the sequences varies within some corridors, owing to a combination of different walking speeds and frame rates. Lighting also varied, due to a mix of daylight and night-time acquisitions and to prominent windows that act as strong light sources in parts of some corridors. Some videos also differ from one pass to another because of activities (such as cleaning or the shifting of furniture) and the occasional appearance of people.

Synthetic RSM dataset details

Synthetic corridor generated using the Unity engine.

  • 7 passes

Browse and download

Code

The code for localisation from visual paths is available on Bitbucket: https://bitbucket.org/josemrivera/localisation-from-visual-paths

If you use our code, please cite our respective publications (see below).

Publications

  • J. Rivera-Rubio, I. Alexiou, and A. A. Bharath, “Appearance-based indoor localization: a comparison of patch descriptor performance,” Pattern Recognition Letters, vol. 66, pp. 109-117, 2015. doi:10.1016/j.patrec.2015.03.003
    [Download PDF]
  • J. Rivera-Rubio, I. Alexiou, L. Dickens, R. Secoli, E. Lupu, and A. A. Bharath, “Associating locations from wearable cameras,” in Proceedings of the British Machine Vision Conference (BMVC), 2014, 13 pages.
    [Download PDF]

Acknowledgements

The dataset hosting is supported by the EPSRC V&L Net Pump-Priming Grant 2013-1 awarded to Dr Riccardo Secoli, Jose Rivera-Rubio and Anil A. Bharath.

Contact us

Prof Anil Anthony Bharath
a.bharath@imperial.ac.uk

Telephone
+44 (0)20 7594 5463

Address
Room 4.12
Department of Bioengineering
Royal School of Mines Building
Imperial College London