The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.

Head of Group

Dr Stamatia (Matina) Giannarou

411 Bessemer Building
South Kensington Campus

+44 (0) 20 7594 8904

What we do

Surgery is undergoing rapid change driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and Artificial Intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.

Meet the team


Search results

  • Journal article
Huang B, Nguyen A, Wang S, Wang Z, Mayer E, Tuch D, Vyas K, Giannarou S, Elson DS et al., 2022,

    Simultaneous depth estimation and surgical tool segmentation in laparoscopic images

    , IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 335-338, ISSN: 2576-3202

    Surgical instrument segmentation and depth estimation are crucial steps to improve autonomy in robotic surgery. Most recent works treat these problems separately, making the deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate these findings. The results showed that the end-to-end network successfully improved the state-of-the-art for both tasks while reducing the complexity during their deployment.
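The multi-task loss described in this abstract combines an unsupervised photometric term for depth with a supervised term computed only on the partially labelled ("semi-ground-truth") tool masks. A minimal NumPy sketch of that idea is shown below; the function name, weighting scheme, and label convention (negative label = unlabelled pixel) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multitask_loss(warped, target, seg_logits, seg_labels,
                   w_depth=1.0, w_seg=0.5):
    """Hypothetical multi-task loss: an unsupervised photometric term for
    depth plus cross-entropy on the subset of labelled pixels for tool
    segmentation (labels < 0 mark unlabelled pixels)."""
    # Photometric term: mean absolute error between the view synthesised
    # from the predicted depth and the real target frame.
    photo = np.mean(np.abs(warped - target))

    # Segmentation term: pixel-wise cross-entropy over labelled pixels only,
    # mimicking training with semi-ground-truth masks.
    mask = seg_labels >= 0
    logits = seg_logits[mask]                 # shape (n_labelled, n_classes)
    labels = seg_labels[mask]
    z = logits - logits.max(axis=1, keepdims=True)          # stabilise
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(labels.size), labels].mean()

    return w_depth * photo + w_seg * ce
```

Weighting the two branches (here `w_depth`, `w_seg`) is the usual knob for balancing tasks that are trained end to end through a shared encoder.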

  • Journal article
Maier-Hein L, Eisenmann M, Sarikaya D, Maerz K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Noetzel D, Kenngott HG, Kikinis R, Muendermann L, Navab N, Onogur S, Ross T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ueckert F, Mueller-Stich BP, Jannin P, Speidel S et al., 2022,

    Surgical data science-from concepts toward clinical translation

    , Medical Image Analysis, Vol: 76, ISSN: 1361-8415
  • Book chapter
Davids J, Lam K, Nimer A, Giannarou S, Ashrafian H et al., 2022,

    AIM in Medical Education

    , Artificial Intelligence in Medicine, Pages: 319-340

Artificial intelligence (AI) is making a global impact on professions ranging from commerce to healthcare. This section looks at how it is beginning, and will continue, to impact areas such as medical education. The multifaceted yet socrato-didactic methods of education need to evolve to cater for the twenty-first-century medical educator and trainee. Advances in machine learning and artificial intelligence are paving the way to new discoveries in medical education delivery. Methods: This chapter begins by introducing the broad concepts of AI that are relevant to medical education, then addresses some of the emerging technologies employed to directly cater for aspects of medical education methodology and innovations to streamline education delivery, education assessments, and education policy. It then builds on this to further explore new artificial intelligence concepts for medical education delivery, educational assessments, and clinical education research discovery in a PRISMA-guided systematic review and meta-analysis. Results: The meta-analysis showed improvement from using either AI alone or AI with conventional education methods, compared to conventional methods alone. A significant pooled weighted mean difference estimate (ES = 4.789; CI 1.9-7.67; p = 0.001; I² = 93%) suggests a 479% learner improvement across domains of accuracy, sensitivity to performing educational tasks, and specificity. A significant amount of bias between studies was identified, and a model to reduce bias is proposed. Conclusion: AI in medical education shows considerable promise in improving learners' outcomes; this chapter rounds off its discussion with the role of AI in simulation methodologies and performance assessments for medical education, highlighting areas where it could augment how we deliver training.

  • Book chapter
Tukra S, Lidströmer N, Ashrafian H, Giannarou S et al., 2022,

    AI in Surgical Robotics

    , Artificial Intelligence in Medicine, Pages: 835-854

The future of surgery is tightly knit with the evolution of artificial intelligence (AI) and its thorough involvement in surgical robotics. Robotics long ago became an integral part of the manufacturing industry; healthcare, though, adds several more layers of complication. In this chapter we elaborate on a broad range of issues to be dealt with when a robotic system enters the surgical theatre and interacts with human surgeons - from overcoming the limitations of minimally invasive surgery to the enhancement of performance in open surgery. We present the latest from the field of cognitive surgical robotics, focusing on proprioception, intraoperative decision-making, and, ultimately, autonomy. More specifically, we discuss how AI has advanced surgical tool tracking, haptic feedback and tissue interaction sensing, advanced intraoperative visualisation, and robot-assisted task execution, before arriving at the crucial development of context-aware decision support.

  • Conference paper
Huang B, Tuch D, Vyas K, Giannarou S, Elson D et al., 2022,

    Self-supervised monocular laparoscopic images depth estimation leveraging interactive closest point in 3D to enable image-guided radioguided surgery

    , European Molecular Imaging Meeting
  • Conference paper
Xu C, Roddan A, Davids J, Weld A, Xu H, Giannarou S et al., 2022,

    Deep Regression with Spatial-Frequency Feature Coupling and Image Synthesis for Robot-Assisted Endomicroscopy

    , 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer International Publishing AG, Pages: 157-166, ISSN: 0302-9743
  • Conference paper
    Tukra S, Giannarou S, 2022,

    Stereo Depth Estimation via Self-supervised Contrastive Representation Learning

    , 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer International Publishing AG, Pages: 604-614, ISSN: 0302-9743
  • Journal article
Cartucho J, Wang C, Huang B, Elson DS, Darzi A, Giannarou S et al., 2021,

    An enhanced marker pattern that achieves improved accuracy in surgical tool tracking

    , Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, Vol: 10, Pages: 1-9, ISSN: 2168-1163

    In computer assisted interventions (CAI), surgical tool tracking is crucial for applications such as surgical navigation, surgical skill assessment, visual servoing, and augmented reality. Tracking of cylindrical surgical tools can be achieved by printing and attaching a marker to their shaft. However, the tracking error of existing cylindrical markers is still in the millimetre range, which is too large for applications such as neurosurgery requiring sub-millimetre accuracy. To achieve tool tracking with sub-millimetre accuracy, we designed an enhanced marker pattern, which is captured on images from a monocular laparoscopic camera. The images are used as input for a tracking method which is described in this paper. Our tracking method was compared to the state-of-the-art on simulation and ex vivo experiments. This comparison shows that our method outperforms the current state-of-the-art. Our marker achieves a mean absolute error of 0.28 mm and 0.45° on ex vivo data, and 0.47 mm and 1.46° on simulation. Our tracking method is real-time and runs at 55 frames per second for 720×576 image resolution.
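The accuracy figures quoted in this abstract are mean absolute translation and rotation errors against ground truth. A small NumPy sketch of how such metrics are typically computed is given below; the function name and array layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tracking_errors(est_positions, gt_positions, est_angles, gt_angles):
    """Mean absolute translation error (e.g. in mm) and rotation error
    (e.g. in degrees) over a sequence of tracked poses.

    est_positions, gt_positions: (n, 3) arrays of tool-tip positions.
    est_angles, gt_angles: (n,) arrays of rotation angles about the shaft.
    """
    # Per-frame Euclidean distance between estimated and true positions,
    # averaged over the sequence.
    t_err = np.mean(np.linalg.norm(est_positions - gt_positions, axis=1))
    # Per-frame absolute angular difference, averaged over the sequence.
    r_err = np.mean(np.abs(est_angles - gt_angles))
    return t_err, r_err
```

Reporting both terms separately, as the abstract does, matters because a marker can localise the shaft position well while still being ambiguous about rotation around the cylinder axis.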

  • Conference paper
Huang B, Zheng J-Q, Nguyen A, Tuch D, Vyas K, Giannarou S, Elson DS et al., 2021,

    Self-supervised generative adversarial network for depth estimation in laparoscopic images

    , International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer, Pages: 227-237

    Dense depth estimation and 3D reconstruction of a surgical scene are crucial steps in computer assisted surgery. Recent work has shown that depth estimation from a stereo image pair could be solved with convolutional neural networks. However, most recent depth estimation models were trained on datasets with per-pixel ground truth. Such data is especially rare for laparoscopic imaging, making it hard to apply supervised depth estimation to real surgical applications. To overcome this limitation, we propose SADepth, a new self-supervised depth estimation method based on Generative Adversarial Networks. It consists of an encoder-decoder generator and a discriminator to incorporate geometry constraints during training. Multi-scale outputs from the generator help to solve the local minima caused by the photometric reprojection loss, while the adversarial learning improves the framework generation quality. Extensive experiments on two public datasets show that SADepth outperforms recent state-of-the-art unsupervised methods by a large margin, and reduces the gap between supervised and unsupervised depth estimation in laparoscopic images.
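The photometric reprojection loss mentioned in this abstract drives self-supervised depth training by comparing a view synthesised from the predicted depth against the real frame, typically mixing an SSIM-style structural term with an L1 term. Below is a deliberately simplified global-statistics sketch in NumPy (real pipelines compute SSIM over local windows and combine several scales, as the multi-scale outputs in SADepth suggest); the function and its constants are illustrative, not the paper's code.

```python
import numpy as np

def photometric_loss(pred, target, alpha=0.85):
    """Simplified photometric loss: weighted mix of a global SSIM-style
    structural term and a mean absolute (L1) term. `pred` is the image
    reconstructed via the predicted depth; `target` is the real frame."""
    # L1 component: raw per-pixel intensity difference.
    l1 = np.abs(pred - target).mean()

    # Global SSIM-style component (real implementations use local windows).
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2          # stabilising constants
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))

    # Standard weighting: mostly structural, a little raw intensity.
    return alpha * (1 - ssim) / 2 + (1 - alpha) * l1
```

The loss is zero when the reconstruction matches the target exactly and grows as structure or intensity diverge; its many shallow local minima are one reason multi-scale outputs and adversarial training help here.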

  • Journal article
Davids J, Makariou S-G, Ashrafian H, Darzi A, Marcus HJ, Giannarou S et al., 2021,

    Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation

    , World Neurosurgery, Vol: 149, Pages: E669-E686, ISSN: 1878-8750

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.


Contact Us

General enquiries

Facility enquiries


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ
Map location