We use perceptual methods, AI, and frugal robotics innovation to deliver transformative diagnostic and treatment solutions.

Head of Group

Dr George Mylonas

B415B Bessemer Building
South Kensington Campus

+44 (0)20 3312 5145

YouTube ⇒ HARMS Lab

What we do

The HARMS lab leverages perceptually enabled methodologies, artificial intelligence, and frugal innovation in robotics (such as soft surgical robots) to deliver transformative solutions for diagnosis and treatment. Our research is driven by both problem-solving and curiosity, aiming to build a comprehensive understanding of the actions, interactions, and reactions occurring in the operating room. We focus on using robotic technologies to facilitate procedures that are not yet widely adopted, particularly in endoluminal surgery, such as advanced treatments for gastrointestinal cancer.

Meet the team

Publications

  • Journal article
    Miyashita K, Oude Vrielink T, Mylonas G, 2018, A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy, International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 659-669, ISSN: 1861-6429

    PURPOSE: Endomicroscopy (EM) provides high-resolution, non-invasive histological tissue information and can be used to scan large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required for safe and accurate tissue scanning. METHODS: A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CPDM) to manipulate an EM probe. End-effector forces are determined by measuring the tension in each cable. As a result, the instrument allows a contact force to be applied accurately to the tissue while offering high-resolution and highly repeatable probe movement. RESULTS: Force sensitivities of 0.2 and 0.6 N were found for the 1 and 2 DoF image acquisition methods, respectively. A back-stepping technique can be used when higher force sensitivity is required to acquire high-quality tissue images. This method was successful in acquiring images of ex vivo liver tissue. CONCLUSION: The proposed approach offers the high force sensitivity and precise control essential for robotic EM. The technical benefits of the current system can also be exploited in other surgical robotic applications, including safe autonomous control, haptic feedback, and palpation.

  • Conference paper
    Avila Rencoret FB, Mylonas G, Elson D, 2018, Robotic wide-field optical biopsy endoscopy, OSA Biophotonics Congress 2018, Publisher: OSA Publishing

    This paper describes a novel robotic framework for wide-field optical biopsy endoscopy, characterizes its spatial and spectral resolution and real-time hyperspectral tissue classification in vitro, and demonstrates its feasibility on fresh porcine cadaveric colon.

  • Conference paper
    Avila Rencoret FB, Mylonas GP, Elson D, 2018, Robotic Wide-Field Optical Biopsy Imaging For Flexible Endoscopy, 26th International Congress of the European Association for Endoscopic Surgery (EAES)
  • Conference paper
    Elson D, Avila Rencoret F, Mylonas G, 2018, Robotic Wide-Field Optical Biopsy Imaging for Flexible Endoscopy (Gerhard Buess Technology Award), 26th Annual International EAES Congress
  • Conference paper
    Zhao M, Oude Vrielink T, Elson D, Mylonas G et al., 2018, Endoscopic TORS-CYCLOPS: A Novel Cable-driven Parallel Robot for Transoral Laser Surgery, 26th Annual International EAES Congress
  • Journal article
    Ashraf H, Sodergren M, Merali N, Mylonas G, Singh H, Darzi A et al., 2017, Eye-tracking technology in medical education: A systematic review, Medical Teacher, Vol: 40, Pages: 62-69, ISSN: 0142-159X

    Background: Eye-tracking technology is an established research tool within allied industries such as advertising, psychology and aerospace. This review aims to consolidate literature describing the evidence for use of eye-tracking as an adjunct to traditional teaching methods in medical education. Methods: A systematic literature review was conducted in line with STORIES guidelines. A search of EMBASE, OVID MEDLINE, PsycINFO, TRIP database, and Science Direct was conducted until January 2017. Studies describing the use of eye-tracking in the training, assessment, and feedback of clinicians were included in the review. Results: Thirty-three studies were included in the final qualitative synthesis. Three studies were based on the use of gaze training, three studies on the changes in gaze behavior during the learning curve, 17 studies on clinical assessment and six studies focused on the use of eye-tracking methodology as a feedback tool. The studies demonstrated feasibility and validity in the use of eye-tracking as a training and assessment method. Conclusions: Overall, eye-tracking methodology has contributed significantly to the training, assessment, and feedback practices used in the clinical setting. The technology provides reliable quantitative data, which can be interpreted to give an indication of clinical skill, provide training solutions and aid in feedback and reflection. This review provides a detailed summary of evidence relating to eye-tracking methodology and its uses as a training method, changes in visual gaze behavior during the learning curve, eye-tracking methodology for proficiency assessment and its uses as a feedback tool.

  • Journal article
    Kogkas AA, Darzi A, Mylonas GP, 2017, Gaze-contingent perceptually enabled interactions in the operating theatre, International Journal of Computer Assisted Radiology and Surgery, Vol: 12, Pages: 1131-1140, ISSN: 1861-6410

    PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information, especially perceptually enabled information, from multiple sources could help to meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm with a laser pointer is integrated, and the set-up is used to project the surgeon's fixation point into 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92-212 cm and between the robot and the targets of 42-193 cm. The median overall system error is currently 3.98 cm. The system's real-time potential is also highlighted. CONCLUSIONS: The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.

  • Conference paper
    Oude Vrielink TJC, Darzi A, Mylonas G, 2016, microCYCLOPS: A Robotic System for Microsurgical Applications, 6th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery (CRAS 2016)
  • Conference paper
    Avila-Rencoret F, Oude Vrielink T, Elson DS, Mylonas G et al., 2016, EndoSDR: Concurrent Endoscopic Screening, Diagnosis, and Removal of GI cancers (prize winner), Business Engineering and Surgical Technologies Innovation Symposium (BEST)
  • Conference paper
    Avila Rencoret FB, Elson D, Mylonas G, 2016, A Robotic Hyperspectral Scanning Framework for Endoscopy, CRAS - Workshop on Computer/Robot Assisted Surgery

    Gastrointestinal (GI) endoscopy is the gold-standard procedure for detection and treatment of dysplastic lesions and early stage GI cancers. Despite its proven effectiveness, its sensitivity remains suboptimal due to the subjective nature of the examination, which is substantially reliant on human-operator skills. For bowel cancer, colonoscopy can miss up to 22% of dysplastic lesions, with even higher miss rates for small (<5 mm diameter) and flat lesions. We propose a robotic hyperspectral (HS) scanning framework that aims to improve the sensitivity of GI endoscopy by automated scanning and real-time classification of wide tissue areas based on their HS features. A “hot-spot” map is generated to highlight dysplastic or cancerous lesions for further scrutiny or concurrent resection. The device works as an add-on accessory to any conventional endoscope and, to our knowledge, is the first of its kind. This paper focuses on characterising its optical resolution on rigid and deformable colon phantoms. We report for the first time 2D and 3D wide-area reconstruction of endoscopic HS data with sub-millimetre optical resolution. The current setup, compatible with the anatomical dimensions of the colon, could allow the identification of flat and small pre-cancerous lesions that are currently missed. The proposed framework will lay the foundations for the next generation of augmented reality endoscopy while increasing its sensitivity and specificity.
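
The force-sensing idea in the Miyashita et al. entry above, recovering the force at the end-effector from measured cable tensions, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a point-like end-effector, known cable anchor positions, quasi-static equilibrium with gravity neglected, and all function names, layouts, and numbers below are illustrative.

# Minimal sketch (not the paper's implementation): estimating the force a
# cable-driven parallel end-effector presses onto tissue from measured cable
# tensions, assuming a point-like end-effector, known anchor positions, and
# quasi-static equilibrium with gravity neglected.

import numpy as np

def cable_unit_vectors(anchors, effector_pos):
    """Unit vectors pointing from the end-effector towards each cable anchor."""
    directions = anchors - effector_pos            # shape (n_cables, 3)
    lengths = np.linalg.norm(directions, axis=1, keepdims=True)
    return directions / lengths

def contact_force_estimate(anchors, effector_pos, tensions):
    """Net force the cables exert on the end-effector; under quasi-static
    contact this resultant is what the probe presses onto the tissue."""
    U = cable_unit_vectors(anchors, effector_pos)
    return U.T @ tensions                          # newtons, world frame

# Illustrative example: three cables with hypothetical anchors on a ring.
anchors = np.array([[0.05, 0.0, 0.1],
                    [-0.025, 0.043, 0.1],
                    [-0.025, -0.043, 0.1]])        # metres, made-up layout
effector_pos = np.array([0.0, 0.0, 0.02])
tensions = np.array([1.2, 1.1, 1.15])              # newtons, from tension sensors

print(contact_force_estimate(anchors, effector_pos, tensions))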
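
The gaze-contingent framework in the Kogkas et al. entry above combines wearable eye-tracking with SLAM to locate the surgeon's fixation point in 3D. The sketch below shows only that geometric step, under stated assumptions: the headset pose is already available from SLAM, the gaze direction comes from the eye tracker in the headset frame, and the scene is stood in for by a single known plane rather than a reconstructed map. All values are hypothetical, not from the paper.

# Minimal sketch (assumptions, not the framework's code): turning a gaze
# direction from a head-mounted eye tracker into a 3-D fixation point in the
# room frame, given the headset pose from SLAM and a single known plane.

import numpy as np

def gaze_ray_in_world(T_world_headset, gaze_dir_headset):
    """Express the gaze ray (origin + unit direction) in the world frame."""
    R = T_world_headset[:3, :3]
    origin = T_world_headset[:3, 3]
    direction = R @ gaze_dir_headset
    return origin, direction / np.linalg.norm(direction)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Ray/plane intersection; returns None if the ray misses the plane."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None
    s = ((plane_point - origin) @ plane_normal) / denom
    return origin + s * direction if s > 0 else None

# Hypothetical values: headset 1.6 m above the floor, looking down towards
# a table surface modelled as the z = 0.9 m plane.
T_world_headset = np.eye(4)
T_world_headset[:3, 3] = [0.0, 0.0, 1.6]
gaze_dir_headset = np.array([0.0, 0.8, -0.6])      # eye-tracker output, headset frame

origin, direction = gaze_ray_in_world(T_world_headset, gaze_dir_headset)
fixation_3d = intersect_plane(origin, direction,
                              plane_point=np.array([0.0, 0.0, 0.9]),
                              plane_normal=np.array([0.0, 0.0, 1.0]))
print(fixation_3d)   # 3-D point a robot-held laser pointer could be aimed at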
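
The hyperspectral “hot-spot” map described in the Avila Rencoret et al. entry directly above can be illustrated with a simple per-pixel spectral classifier. This is not the framework's classifier: it is a minimal nearest-reference sketch that assumes a pre-assembled hyperspectral cube and made-up reference spectra, just to show how a wide-area scan can be turned into a map flagging regions for further scrutiny.

# Minimal sketch (illustrative only): labelling each pixel of a hyperspectral
# cube as "suspicious" or "normal" by comparing its normalised spectrum with
# reference mean spectra. Band count, references, and threshold are made up.

import numpy as np

def normalise_spectra(cube):
    """L2-normalise each pixel spectrum so classification depends on spectral
    shape rather than overall brightness. cube has shape (rows, cols, bands)."""
    norms = np.linalg.norm(cube, axis=-1, keepdims=True)
    return cube / np.maximum(norms, 1e-12)

def hot_spot_map(cube, normal_ref, lesion_ref, margin=0.0):
    """Mark a pixel as a hot spot when its spectrum is closer (by Euclidean
    distance) to the lesion reference than to the normal-tissue reference."""
    spectra = normalise_spectra(cube)
    d_normal = np.linalg.norm(spectra - normal_ref, axis=-1)
    d_lesion = np.linalg.norm(spectra - lesion_ref, axis=-1)
    return d_lesion + margin < d_normal            # boolean (rows, cols) map

# Toy example: a 64x64 scan with 32 spectral bands and random references.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 32))
normal_ref = normalise_spectra(rng.random((1, 1, 32)))[0, 0]
lesion_ref = normalise_spectra(rng.random((1, 1, 32)))[0, 0]

mask = hot_spot_map(cube, normal_ref, lesion_ref)
print(mask.shape, mask.mean())   # fraction of pixels flagged for scrutiny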

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.


Contact Us

General enquiries

Facility enquiries


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ
Map location