The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.

Head of Group

Dr Stamatia (Matina) Giannarou

411 Bessemer Building
South Kensington Campus

+44 (0) 20 7594 8904

What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and Artificial Intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy and benefiting society and the economy.

Meet the team

Citation

BibTeX format

@inproceedings{Huang:2022:10.1007/978-3-031-16449-1_2,
author = {Huang, B and Zheng, J-Q and Nguyen, A and Xu, C and Gkouzionis, I and Vyas, K and Tuch, D and Giannarou, S and Elson, DS},
doi = {10.1007/978-3-031-16449-1_2},
pages = {13--22},
publisher = {SPRINGER INTERNATIONAL PUBLISHING AG},
title = {Self-supervised depth estimation in laparoscopic image using 3D geometric consistency},
url = {http://dx.doi.org/10.1007/978-3-031-16449-1_2},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging system. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, it is rarely possible to apply supervised depth estimation to surgical applications. As an alternative, self-supervised methods have been introduced to train depth estimators using only synchronized stereo image pairs. However, most recent work focused on the left-right consistency in 2D and ignored valuable inherent 3D information on the object in real world coordinates, meaning that the left-right 3D geometric structural consistency is not fully utilized. To overcome this limitation, we present M3Depth, a self-supervised depth estimator to leverage 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes the influence of border regions unseen in at least one of the stereo images via masking, to enhance the correspondences between left and right images in overlapping areas. Extensive experiments show that our method outperforms previous self-supervised approaches on both a public dataset and a newly acquired dataset by a large margin, indicating a good generalization across different samples and laparoscopes.
AU - Huang,B
AU - Zheng,J-Q
AU - Nguyen,A
AU - Xu,C
AU - Gkouzionis,I
AU - Vyas,K
AU - Tuch,D
AU - Giannarou,S
AU - Elson,DS
DO - 10.1007/978-3-031-16449-1_2
EP - 22
PB - SPRINGER INTERNATIONAL PUBLISHING AG
PY - 2022///
SN - 0302-9743
SP - 13
TI - Self-supervised depth estimation in laparoscopic image using 3D geometric consistency
UR - http://dx.doi.org/10.1007/978-3-031-16449-1_2
UR - https://link.springer.com/chapter/10.1007/978-3-031-16449-1_2
ER -
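The abstract above names two ingredients of the M3Depth approach: enforcing left-right consistency on 3D point clouds in world coordinates rather than on 2D disparities, and masking out border regions visible in only one of the stereo images. The following is a minimal NumPy sketch of those two ideas only, not the paper's implementation; the function names, the pinhole intrinsics, and the assumption that the right-view prediction has already been warped into the left frame are all illustrative.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) into a 3D point cloud (H, W, 3)
    using pinhole camera intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def border_mask(disp_left, width):
    """True where a left-image pixel's horizontal match (u - disparity)
    still falls inside the right image; border pixels seen in only
    one view are excluded."""
    u = np.arange(width)[None, :]
    matched = u - disp_left
    return (matched >= 0) & (matched < width)

def lr_3d_consistency_loss(depth_left, depth_right_warped, disp_left,
                           fx, fy, cx, cy):
    """Mean Euclidean distance between the left point cloud and the
    point cloud of the right-view prediction warped into the left frame,
    restricted to pixels visible in both views."""
    mask = border_mask(disp_left, depth_left.shape[1])
    pts_l = backproject(depth_left, fx, fy, cx, cy)
    pts_r = backproject(depth_right_warped, fx, fy, cx, cy)
    dist = np.linalg.norm(pts_l - pts_r, axis=-1)
    return float(dist[mask].mean())
```

In the paper this kind of consistency term would be a differentiable loss inside a stereo training loop (with only monocular input at inference time); NumPy is used here purely to keep the geometry readable.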

Contact Us

General enquiries

Facility enquiries


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ
Map location