EP4536124A1 - Digital guidance and training platform for microsurgery of the retina and vitreous - Google Patents
Info
- Publication number
- EP4536124A1 (application EP23738273.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- surgical
- surgical tool
- tissue
- visual
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/70—Manipulators specially adapted for use in surgery
- A61B34/76—Manipulators having means for providing feel, e.g. force or tactile feedback
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
- A61B2034/254—User interfaces for surgical systems being adapted depending on the stage of the surgical procedure
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B2034/301—Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/30—Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
- A61B2090/306—Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure using optical fibres
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/371—Surgical systems with images on a monitor during operation with simultaneous use of two cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/373—Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
- A61B2090/3735—Optical coherence tomography [OCT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/20—Surgical microscopes characterised by non-optical aspects
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/007—Methods or devices for eye surgery
- A61F9/008—Methods or devices for eye surgery using laser
- A61F2009/00861—Methods or devices for eye surgery using laser adapted for treatment at a particular location
- A61F2009/00863—Retina
Definitions
- Cataract extraction with lens implantation and vitrectomy procedures are among the most frequently performed ophthalmic surgeries in the United States and abroad. Although these procedures are generally considered safe and effective, surgical complications remain a cause of postoperative visual morbidity, including but not limited to retinal detachment, macular edema, intraocular bleeding, glaucoma, and permanent loss of vision.
- FIG. 1 illustrates a block diagram of an example implementation of an image-guided surgical system (IGSS) according to this disclosure.
- FIG. 3 illustrates a feedback loop formed by the components of the IGSS according to this disclosure.
- FIG. 4 illustrates a method for developing and displaying image-guided tools for cataract and/or vitreoretinal surgical procedures according to this disclosure.
- FIG. 5 illustrates an example display of an augmented image displayed during the capsulorhexis phase of a cataract surgical procedure according to this disclosure.
- FIG. 6 illustrates an example display of an augmented image displayed during the phacoemulsification phase of a cataract surgical procedure according to this disclosure.
- FIG. 8 illustrates an example display of an augmented image displayed when no surgical instrument is inserted into the pupil during a cataract surgical procedure according to this disclosure.
- FIGS. 9A and 9B show an example of image enhancement according to this disclosure.
- FIG. 10 shows an example of concurrent tool and tissue tracking according to this disclosure.
- FIG. 11 shows an example of tool and tissue tracking with automated laser control according to this disclosure.
- FIG. 13 shows another example of concurrent tool and tissue tracking with collision avoidance according to this disclosure.
- FIG. 14 shows an example of a system using a surgical procedure template according to this disclosure.
- Ophthalmic microsurgery entails the use of mechanical and motorized instruments to manipulate delicate intraocular tissues.
- Great care must be afforded to tissue-instrument interactions, as damage to delicate intraocular structures such as the neurosensory retina, optic nerve, lens capsule, iris, and corneal endothelium can result in significant visual morbidity.
- the surgical guidance system described herein provides a feedback loop whereby the location of a surgical instrument in relation to delicate tissues (e.g., ocular tissues) and the effect of instrument-tissue interactions can be used to guide surgical maneuvers.
- the AI model may intraoperatively identify the location and size of the pupil for tracking and segmentation, the surgical instruments used in the procedure that have entered the anterior chamber, capsular bag, vitreous body and posterior segment, and other physical components of the eye and/or instruments being used, as well as the surgical phase being performed and other details about the procedure itself.
- the AI model may provide augmented visual images to the surgeon in real time that show or identify the surgical instruments' location in the intraocular compartment, the phase of the surgical procedure, a suggested idealized tool path or other course of action to be performed by the surgeon using a tracked instrument, such as a path or location of an incision, target tissue or fragment to be cut or removed, site for application of laser energy, or the like, or other functional recommendation, and other information and features that may aid the surgeon during the surgical procedure.
- a system as disclosed herein also may display a surgical instrument's location, surgical phase, and/or other informational features in conjunction with or over visual images from the imaging system, preferably in real time.
- the IGSS 100 display device 110 may also display quantitative or qualitative information to the surgeon such as for example, movement or acceleration of surgical instruments and tissues, fluidic parameters such as turbulence, tissue mobility, and chamber instability, warnings regarding potentially unsafe conditions, such as deviation of a surgical instrument out of the field of view of the surgeon or imaging system, imminent collision of a surgical instrument with an intraocular structure or other tissue, likelihood of damage to or removal of tissues or structures intended to be preserved intact, or conditions of turbulent flow associated with surgical complications.
- the imaging system 102 may be integrated with the computer processor 106 and the display device 110.
- the display device 110 may be a part of a stand-alone imaging system, not an integral part of the IGSS 100, that may be interfaced and used with an existing imaging system 102.
- the IGSS 100 may also include one or more display devices 110, audio speakers 112, haptic systems 114, or other feedback devices for receiving augmented images and other feedback developed by the IGSS 100.
- an AI model that utilizes a deep-learning neural network, such as a region-based convolutional neural network (R-CNN), a convolutional neural network (CNN), a segmentation network (SN), or the like, may be used to augment visual images from the imaging system 102.
- an AI model may be implemented in a deep learning neural network (NN) module 120 stored in memory device 108.
- the memory device 108 also may store a processor operating system that is executed by the processor 106, as well as a computer vision system interface 125 for constructing augmented images.
- when executing the stored processor operating system, the processor 106 is arranged to obtain the visual images of the surgical field from the imaging system 102 and output augmented image data, using data provided by the NN 120.
- the augmented image data is converted to augmented images by the computer vision interface 125 and fed back to the surgeon on the display device 110.
- the processor 106 may also provide other forms of feedback to the surgeon such as audio alerts or warnings to the speaker 112, or vibrations or rumbles generated by the haptic system 114 to a surface of the imaging system 102 or to the surgical instrument 116.
- the audio warnings and vibrations alert the surgeon to movement of the surgical instrument 116 associated with a potential for suboptimal execution or complications, such as, for example, unintended deviation into a particular location or plane during the surgical procedure.
- FIG. 2 illustrates a region-based convolutional neural network (R-CNN) algorithm 200 that can be used to develop augmented visual images.
- the R-CNN algorithm then generates region proposals 220 using an edge box algorithm 230.
- the R-CNN algorithm can produce at least 2000 region proposals.
- the individual region proposals 240 are fed into a convolutional neural network (CNN) 250 that acts as a feature extractor where the output dense layer consists of the features extracted from the input image 210.
- the extracted features identify the presence of the object within the selected region proposal, generating the output 260 of the surgical phase being performed.
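By way of illustration only, the R-CNN flow just described might be sketched as follows in Python. The phase labels, network shape, and the external region proposer are assumptions made for this sketch, not details taken from this disclosure (which names an edge-box algorithm for proposals):

```python
# Illustrative sketch of the R-CNN phase-recognition flow described above.
# The phase labels and network shape are assumptions; region proposals are
# assumed to come from an external proposer such as an edge-box algorithm.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

PHASES = ["idle", "capsulorhexis", "phacoemulsification", "cortex removal"]  # assumed

class PhaseCNN(nn.Module):
    """CNN feature extractor whose dense output layer scores the surgical phase."""
    def __init__(self, n_phases: int = len(PHASES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_phases)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def classify_phase(frame: torch.Tensor, proposals, model: PhaseCNN) -> str:
    """Crop each (top, left, height, width) region proposal from a (3, H, W)
    float frame, score it with the CNN, and vote on the phase across proposals."""
    votes = torch.zeros(len(PHASES))
    with torch.no_grad():
        for top, left, h, w in proposals:
            crop = TF.resized_crop(frame, top, left, h, w, [64, 64])
            votes += model(crop.unsqueeze(0)).softmax(dim=1).squeeze(0)
    return PHASES[int(votes.argmax())]
```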
- the display device 110 may take the form of one or more viewfinders such as the oculars of an SM or DSM, a high-definition display monitor, or a head-mounted display (such as those used for augmented reality and/or virtual reality systems), and the like.
- various operational features of the surgical instrumentation 116 may be automatically adjusted by an automated or semi-automated process as disclosed herein.
- the IGSS 100 may adjust the power to an ultrasonic phacoemulsification probe during emulsification of the lens nucleus during cataract surgery.
- the power driving the ultrasonics may be reduced, modulated, or shut-off in the event that suboptimal or high-risk conditions occur, or if the surgical instrument 116 exhibits unintended deviation into a particular location or plane, automatically, semi-automatically, and/or responsive to user input.
- the fluidics controller used in a cataract surgical system used with phacoemulsification probes or irrigation-aspiration probes, and used to aspirate emulsified lens particles, lens material, or intraocular fluids may be automatically modulated by the feedback system to alter the vacuum generated by an associated vacuum pump based on detected changes in the behavior of tissues, surgical instruments, or other parameters of the surgical procedure.
- the vacuum produced by the pump may be increased when the aspiration instrument is in the center of the surgical field removing hardened emulsified lens particles or decreased as it enters the softer outer lens cortex, in order to optimize instrument function.
- embodiments disclosed herein may allow for adjusting the vacuum, flow, cutting rate, or duty cycle of a vitrectomy probe during pars plana vitrectomy.
- the parameters of the vitrectomy probe may be modulated or shut-off in the event that suboptimal or high-risk conditions occur, if the surgical instrument exhibits unintended deviation into a particular location or plane, or the like.
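As a rough illustration of this kind of closed-loop parameter control, the sketch below reduces or cuts ultrasonic power under risky conditions. The risk score, thresholds, and controller interface are assumptions for illustration, not the control law of this disclosure:

```python
# Illustrative sketch of power modulation for a phacoemulsification probe.
# The risk levels, thresholds, and state interface are assumptions.
from dataclasses import dataclass

@dataclass
class SurgicalState:
    risk_level: float            # 0.0 (safe) to 1.0 (high risk), from the AI model
    tool_in_exclusion_zone: bool # e.g., unintended deviation into a prohibited plane
    nominal_power: float         # percent, as set by the surgeon

def modulate_phaco_power(state: SurgicalState) -> float:
    """Reduce, modulate, or shut off ultrasonic power under risky conditions."""
    if state.tool_in_exclusion_zone:
        return 0.0                                           # hard shut-off
    if state.risk_level > 0.7:
        return state.nominal_power * 0.25                    # strong reduction
    if state.risk_level > 0.4:
        return state.nominal_power * (1.0 - state.risk_level)  # proportional cut
    return state.nominal_power                               # normal operation
```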
- the feedback from the processing subsystem may also be directly applied (not shown) to the surgical instrument 116 simultaneously as the augmented visual image is displayed to the surgeon 122.
- haptic feedback such as for example, vibration or rumble could be sent to the surgical instrument 116 held in the surgeon's hand, providing tactile feedback as a warning to the surgeon.
- the surgical instrument 116 can be automatically retracted from the area of concern by the motorized instrument manipulators or prevented from ingress into a particular zone of risk or prohibited location or plane.
- resistance to movement of the surgical instrument could be induced by the haptic system 114 to prevent movement of the surgical instrument 116 into a particular zone of risk or prohibited location or plane.
- in parallel, focus assessment may be performed in which feedback on the optimal image focus may be provided to the surgeon, or directly to the SM, DSM, or other imaging device, in order to assess or optimize image focus.
- out-of-focus objects may be detected using computer vision and/or neural network algorithms applied to the network-detected and segmented surgical instruments and tissues, providing feedback to the surgeon to facilitate optimal visualization, or directly to visualization instrumentation.
- visual images in the form of digital image data from the imaging system 102 are input into the NN module 120 of the AI model.
- the NN module 120 may analyze the digital image data to determine the tissue boundaries and/or layers, so that the data output by the AI model indicates the tissue boundaries/layers that may be added to the raw image data for display to the surgeon.
- the displayed tissue boundaries/layers assist the surgeon in avoiding contact between the surgical instrument 116 and sensitive tissues.
- the AI model may also provide instrument guidance for spatial orientation and/or optimizing instrument parameters related to function, such as aspiration and associated vacuum/flow rates, ultrasound parameters, etc.
- the NN module 120 algorithms of this embodiment use datasets that include both the source visual image and the segmented images, with training performed in a supervised or self-supervised manner.
- in the case of supervised learning, the trained AI model implementing the NN module 120 uses digital image data from the imaging system, labelled by experts, as a training set.
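A minimal sketch of such supervised training on pairs of source images and expert-labeled segmentation masks is shown below. The architecture, class set, and hyperparameters are placeholders; this disclosure does not prescribe them:

```python
# Minimal sketch of supervised training on (source image, expert-labeled mask)
# pairs, as described above. The model choice and class count are assumptions.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

N_CLASSES = 4  # assumed: background, pupil, instrument, sensitive tissue
model = fcn_resnet50(num_classes=N_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One supervised step: images are (B, 3, H, W) floats; masks are
    (B, H, W) long tensors of expert-labeled class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]   # (B, N_CLASSES, H, W) per-pixel scores
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```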
- the AI model may also be used to pre-process the input visual images received from the imaging system 102.
- pre-processing of the input images improves image resolution for the surgeon for use in real time, as a form of image enhancement that allows the surgeon to appreciate details of the image that may otherwise be obscured or not apparent in un-processed imaging.
- FIG. 4 illustrates a flow chart depicting a method 460 that implements an example process as disclosed herein for cataract and/or vitreoretinal surgical procedures employing an image-guided tool of the present disclosure.
- the processor 106 receives digital image data representing visual images captured in real-time from the imaging system 102.
- the digital image data, along with the AI-trained data from the NN module 120, is input to the processor 106 in step 464.
- the processor 106 identifies and outputs data identifying region proposals for the pupil's location and area as described earlier in the discussion for R-CNN.
- step 470 augmented visual images are constructed by the computer vision systems interface 125.
- the augmented visual images are output to the surgeon's SM eyepiece or the display device 110. Additionally, feedback signals may also be output including haptic and/or audible signals applied to the haptic system 114 or speaker 112.
- the method described by steps 462-470 is performed for each image frame captured from the imaging system 102, always acquiring the last available frame from the imaging system 102 at a minimum video streaming rate of 60 frames per second, or other frame rates that may be suitable.
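The per-frame loop of steps 462-470 might be sketched as follows. The two helper functions are hypothetical stand-ins for the NN module 120 and the computer vision interface 125, and the webcam capture stands in for the surgical imaging feed:

```python
# Sketch of the per-frame loop (steps 462-470): always process the most recent
# frame. Helper functions below are hypothetical placeholders.
import cv2
import numpy as np

def propose_pupil_regions(frame: np.ndarray):
    """Placeholder for NN inference (steps 464-468, hypothetical)."""
    h, w = frame.shape[:2]
    return [(w // 2, h // 2, min(h, w) // 4)]  # assumed: one (cx, cy, radius)

def build_augmented_image(frame: np.ndarray, regions) -> np.ndarray:
    """Placeholder for the computer vision interface 125 (step 470)."""
    out = frame.copy()
    for cx, cy, r in regions:
        cv2.circle(out, (cx, cy), r, (0, 255, 0), 2)  # e.g., pupil-boundary overlay
    return out

cap = cv2.VideoCapture(0)  # stand-in for the imaging system feed (step 462)
while cap.isOpened():
    ok, frame = cap.read()                      # latest available frame
    if not ok:
        break
    overlay = build_augmented_image(frame, propose_pupil_regions(frame))
    cv2.imshow("augmented", overlay)            # output to display device 110
    if cv2.waitKey(1) & 0xFF == ord("q"):       # ~16 ms budget targets 60 fps
        break
cap.release()
cv2.destroyAllWindows()
```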
- the feedback returned by the imaging system may include guidance for the optimal creation of a capsulorhexis, including size, location, centration, and symmetry of the rhexis, as seen in FIG. 5.
- Capsulorhexis parameters and guidance recommendations may be altered in real time based upon real-time image-based feedback on the evolving capsulorhexis as executed during the surgery.
- the augmented visual image 500 provides an identification of the phase of the surgical procedure 510, based on the type of surgical instrument 540 used in the procedure.
- Image 500 displays a rhexis template 550 and guidance instructions 530 to guide and instruct the surgeon for adjustment of the rhexis diameter as features for this surgical phase.
- Further features include visual feedback of the pupil boundary 520 where local contrast enhancement may be applied.
- FIG. 5 illustrates the surgical instrument 540 penetrating the outer tissue of the eye, either the sclera or the cornea.
- the tracking of turbulent flow during this surgical phase uses visual feedback from the NN module 120, or computer vision techniques that estimate turbulent flow and track the movement of the surgical instruments and lens fragments.
- Biomarkers associated with surgical risk may also be detected as features and information provided to the surgeon in this surgical phase. For example, rapid changes in pupillary size, reverse pupillary block, trampoline movements of the iris, spider sign of the lens capsule, and a change in the fundus red reflex may be identified and provided as feedback in real time to the surgeon.
- instrument positioning associated with surgical risk such as decentration of the tip of the phacoemulsification needle outside of the central zone, duction movements of the globe away from primary gaze during surgery and patient movement relative to the surgical instruments may be identified and provided as feedback to the surgeon as either visual warning images, or haptic and audio alarms in real-time.
- FIG. 7 displays the augmented image 700 for cortex removal. Based on the instrument 720 used, feedback information is presented to the surgeon including the procedure phase 710. Instrument 720 movement warnings and motion sensitivity thresholding 730 are also provided to aid in the removal of cortical fibers.
- the augmented image 700 can have contrast equalization applied in the form of a local image enhancement 740. As is seen in FIG. 7, this visual cue 740 is applied within and around the area of the pupil.
- the CNN recognizes the phase being performed in image 800 as “idle,” as shown at 810 of FIG. 8.
- Embodiments disclosed herein also may provide general enhancement of video and other images provided to a surgeon during a procedure.
- FIGS. 9A and 9B show an example of enhancement applied to an image captured during a surgery as disclosed herein.
- FIG. 9A shows an unenhanced image captured from a DSM, in which some retinal features are indistinct in the surgeon's view.
- the image may be, for example, a still frame of a continuous video feed available for use by a system as disclosed herein during the performance of a surgical operation.
- FIG. 9B shows the same view after being processed using artificial intelligence-based image enhancement technology as disclosed herein.
- a surgical enhancement system and process as disclosed herein may use contrast-limited adaptive histogram equalization to generate an image as shown in FIG. 9B from the image shown in FIG. 9A.
- Other techniques and tools may be used.
- the enhancement improves the resolution and visualization of retinal features and tissues, thereby providing the surgeon with an improved view of the surgical field.
- the enhanced image may be injected into the surgeon's view in the SM, DSM, or other visualization system to facilitate visualization during surgery. As previously disclosed, the enhanced image may be provided in real time during performance of the surgery.
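As one concrete possibility, contrast-limited adaptive histogram equalization is available in OpenCV. The minimal sketch below could produce an enhancement of the kind shown in FIG. 9B from an image like FIG. 9A; the clip limit and tile size are illustrative defaults, not values from this disclosure:

```python
# Minimal sketch of contrast-limited adaptive histogram equalization (CLAHE)
# applied to the luminance channel only, so color is preserved.
import cv2

def enhance_frame(bgr_frame):
    """Return a CLAHE-enhanced copy of a BGR frame."""
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed defaults
    merged = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)
```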
- an AI-based IGSS system as disclosed herein may allow for automated control of laser treatment that is originally placed or initiated by a surgeon.
- FIG. 11 shows another example of tool and tissue tracking according to the present disclosure that uses automated laser control.
- AI-based automated image segmentation and recognition is used to identify surgical instruments, including an endolaser probe 1110, and, concurrently, the optic nerve 1120 and laser spots 1130 as they are applied. Each identified component is segmented as previously disclosed, and the system may continuously track them within the surgical frame in real time. Identifying laser spots at the margin of the break as they are applied provides the foundation for semi-automated delivery of the retinal laser.
- the surgeon may manually direct the endolaser probe and associated aiming beam at the margin of a retinal break.
- the system then automatically detects the margin of the retinal break where the laser is to be applied to seal the break, and delivers the laser in a discrete or continuous pattern.
- the system automatically identifies the margin of the retinal break where the laser has been applied adequately.
- Feedback to the laser delivery system prevents subsequent delivery of additional laser to previously treated sites.
- as the aiming beam traverses to the untreated retina at the margin of the break, the feedback causes re-activation of the laser to achieve retinopexy. In this way, efficient, targeted laser delivery is achieved via aiming of the laser probe by the surgeon in combination with automatic laser-guided delivery by the AI-based system as disclosed.
- the network identifies areas of overlap between the aiming beam and the template 1250, and then delivers a laser spot at the overlap.
- the individual laser-induced retinal spots are recognized and registered by the AI system, as noted by the template points changing from black to green 1255.
- a feedback loop to the laser delivery system as previously disclosed herein allows for rapid, semi-automated panretinal laser photocoagulation.
- the laser is only activated when there is overlap between the laser aiming beam and the template, allowing the surgeon to continuously sweep the laser aiming beam throughout the intended target area and still achieve the desired pattern and spacing of thermal-laser-induced spots. In this way, efficient, targeted scatter laser delivery is achieved via aiming of the laser probe by the surgeon combined with laser-guided delivery by the AI network.
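The overlap-gated activation just described might be sketched as follows. The beam mask, template point list, and gating radius are illustrative assumptions; in practice these would come from the segmentation network and the selected template:

```python
# Sketch of overlap-gated laser activation: fire only when the detected aiming
# beam covers an untreated template point, and never re-treat a registered spot.
import numpy as np

def laser_gate(aim_mask: np.ndarray, template_pts: list, treated: set,
               radius: float = 5.0):
    """Return the first untreated (x, y) template point covered by the aiming
    beam, or None. `aim_mask` is a boolean (H, W) mask of the aiming-beam spot;
    `treated` holds indices of already-registered spots (black -> green)."""
    ys, xs = np.nonzero(aim_mask)
    if len(xs) == 0:
        return None                                 # no aiming beam detected
    cx, cy = xs.mean(), ys.mean()                   # centroid of the beam spot
    for i, (tx, ty) in enumerate(template_pts):
        if i in treated:
            continue                                # previously treated site
        if (tx - cx) ** 2 + (ty - cy) ** 2 <= radius ** 2:
            treated.add(i)                          # register the new spot
            return (tx, ty)                         # deliver the laser here
    return None                                     # beam over treated/empty area
```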
- Real-time tissue and tool tracking during surgery as disclosed herein allows for a number of surgical applications, including but not limited to collision avoidance, in which a warning or other signal is provided to the surgeon if a surgical instrument or tool comes into proximity or contact with a tissue or structure that has been identified as an exclusion zone.
- the retinal surface may be defined as a zone of exclusion relative to the vitrectomy probe, fiberoptic endoilluminator, laser probe, forceps, scissors, picks, cannulas, needles, intraoperative OCT probe, or endoscopes, in order to prevent damage from contact with surgical instruments.
- Automated continuous tool and tissue tracking also has applications in surgical training and surgical data science.
- Quantitative assessment of the path and behavior of instruments and tissues during surgery confers the potential to apply analytics to the execution of surgery to assess performance, progress in surgical training, conduct predictive analytics on surgical risk/complications, and to provide insights into the behaviors and patterns of surgeons possessing varying degrees of ability. Identifying patterns and features associated with successful execution and outcomes, as well as features associated with risk or complications, may further serve to inform network performance and improve the capabilities of guidance systems.
- a mobile and bullous retina is more easily unintentionally aspirated by the vitrectomy probe, and high-amplitude or variable mobility of a detached retina may be assessed by the system as features that would modulate the threshold for proximity alert.
- increasing vacuum or flow via the fluidics module would similarly result in modulation of the threshold for proximity warning.
- activation of a proximity alert may result in feedback directly to the surgical device control systems, so that surgical parameters such as vacuum, flow, duty cycle, or on/off functionality of a guillotine cutter or similar device are modulated via direct feedback from the image-processing network or the AI-based system as a whole.
- a feedback loop directly between the image-processing system and the surgical instrumentation system may modulate instrument performance. Such feedback may occur rapidly, in some cases as quickly as or more quickly than human reaction time allows, thereby potentially averting unintended consequences of tissue-instrument interactions.
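A sketch of exclusion-zone proximity alerting with a threshold modulated by retinal mobility and fluidics, per the preceding passages; the scaling of the threshold is an assumption made for illustration:

```python
# Sketch of exclusion-zone proximity alerting. The warning margin widens with
# retinal mobility and with vacuum/flow, per the passage above; the linear
# scaling is an illustrative assumption.
import cv2
import numpy as np

def proximity_alert(exclusion_mask: np.ndarray, tool_tip_xy,
                    base_threshold_px: float = 30.0,
                    retina_mobility: float = 0.0,
                    vacuum_frac: float = 0.0) -> bool:
    """True if the tool tip is within the modulated warning distance of an
    exclusion zone (e.g., the retinal surface).

    exclusion_mask: uint8 (H, W), nonzero inside the exclusion zone.
    retina_mobility, vacuum_frac: 0..1 risk modifiers from the AI model."""
    # Distance from every pixel to the nearest exclusion-zone pixel.
    outside = (exclusion_mask == 0).astype(np.uint8)
    dist = cv2.distanceTransform(outside, cv2.DIST_L2, 5)
    x, y = tool_tip_xy
    threshold = base_threshold_px * (1.0 + retina_mobility + vacuum_frac)
    return bool(dist[int(y), int(x)] < threshold)
```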
- instrument positioning associated with surgical risk such as decentration of the tip of a surgical instrument outside of the visualized zone, proximal to tissues defined as exclusion zones, or patient movement relative to the surgical instruments may be identified and provided as feedback to the surgeon as visual warning images, haptic feedback, audio alarms, or the like, or combinations thereof; or the feedback may be provided directly to the surgical instrumentation in real-time as previously disclosed.
- a customizable digital “caliper” is generated by the network and injected into or overlaid onto the image view provided to the surgeon.
- the system may generate two calipers 1410, 1420, showing the two conventional incision radii at 3.5 mm and 4.0 mm, respectively.
- this arrangement may be more accurate and repeatable than conventional techniques.
- it also allows for the incision(s) to be made at any point around the caliper rings, allowing the surgeon to position the tools in a desirable position, for example where they can be held most steady, where access to the surgical site is most convenient or safest, or where the maximum degrees of freedom of surgical instrument handling is available, or the like.
- tools also may be identified and tracked, including via overlay labels or the like, such as the cannula 1435.
- Digital calipers as shown in this example may be used more generally for other procedures, to help identify target sites for surgical incisions and placement of microsurgical trocars.
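A minimal sketch of such a caliper overlay is given below. The pixels-per-millimeter scale would come from system calibration and is assumed here, as is the choice of colors:

```python
# Sketch of the digital "caliper" overlay: two rings at the conventional
# 3.5 mm and 4.0 mm incision radii, centered on the detected limbus.
import cv2

def draw_calipers(frame, limbus_center, px_per_mm: float):
    """Overlay incision-radius rings on the live image.

    limbus_center: (x, y) integer pixel coordinates from the tracking network.
    px_per_mm: calibration scale (assumed supplied by the imaging system)."""
    for radius_mm, color in ((3.5, (0, 255, 0)), (4.0, (0, 255, 255))):
        cv2.circle(frame, limbus_center, int(radius_mm * px_per_mm),
                   color, thickness=2)
    return frame
```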
- embodiments disclosed herein include methods of operating a surgical system.
- Such a method may include features such as receiving a series of visual images from an imaging system of a surgical field; extracting one or more regions of interest in the surgical field using information provided by an AI model based on the series of visual images; identifying a surgical tool in the region(s) of interest; identifying a tissue element in the region(s) of interest; tracking the relative placement of the surgical tool and the tissue element; and providing feedback to a human operator of the system based on the relative placement of the surgical tool and the tissue element.
- the images may be received and/or processed in real time.
- the AI model, the computer processor, or a combination thereof may further be configured to identify the surgical tool in the region(s) of interest; identify a tissue element in the region(s) of interest; track the relative placement of the surgical tool and the tissue element; and provide feedback to a human operator of the system based on the relative placement of the surgical tool and the tissue element.
- the surgical tool may be communicatively coupled to the processor and configured to operate automatically based upon a signal received from the processor.
- the system also may include a template library storing surgical templates, each of which provides augmentation data for one or more surgical procedures. Templates may include, for example, one or more placements of the surgical tool to perform a surgical procedure; a visual indication of regions for application of a laser treatment; a visual indication of suitable placement for an incision or surgical removal of a portion of tissue; or any combination thereof.
- the system may generate and provide an augmentation of the images provided to the surgeon, which may include, for example, an overlay on the series of images defined by a surgical template selected from the template library.
- the augmentation may include one or more of: a visual label identifying the surgical tool, a label identifying the tissue element, or a combination thereof; a proximity warning indicating that the surgical tool is too close to the tissue element; and an indication that the surgical tool is misplaced.
- the surgical tool may include a haptic feedback mechanism, for example to provide feedback on the procedure being performed based on the identification and tracking of the placement of the tool.
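Tying the method steps together, a high-level sketch of the guidance loop is given below. All component interfaces are hypothetical stand-ins for the IGSS elements, not an API defined by this disclosure:

```python
# High-level sketch of the operating method: receive frames, extract regions
# of interest, identify tool and tissue, track relative placement, and emit
# feedback. The imaging_system, ai_model, and channel interfaces are assumed.
from typing import Iterable, Protocol

class FeedbackChannel(Protocol):
    def emit(self, message: str) -> None: ...   # display, speaker 112, or haptic 114

def run_guidance_loop(imaging_system, ai_model,
                      channels: Iterable[FeedbackChannel]) -> None:
    for frame in imaging_system.frames():                # receive visual images
        rois = ai_model.extract_regions(frame)           # region(s) of interest
        tool = ai_model.identify_tool(rois)              # surgical tool
        tissue = ai_model.identify_tissue(rois)          # tissue element
        placement = ai_model.track_placement(tool, tissue)
        if placement.too_close:                          # e.g., exclusion-zone breach
            for ch in channels:
                ch.emit(f"proximity warning: tool near {tissue.name}")
```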
- various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium.
- computer readable program code includes any type of computer code, including source code, object code, and executable code.
- computer readable medium includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
- the terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code).
- the term “communicate,” as well as derivatives thereof, encompasses both direct and indirect communication.
- the term “or” is inclusive, meaning and/or.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Surgery (AREA)
- Biomedical Technology (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Robotics (AREA)
- Artificial Intelligence (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Image Processing (AREA)
Abstract
Disclosed are an image-guided tool and method for ophthalmic surgical procedures. The artificial intelligence (AI) model develops operative image features based on the surgical instruments used in the region of interest and on the phase of the surgical procedure being performed. Augmented visual images are then constructed, which comprise the real-time visual image and the image features, along with additional features determined by the system.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263349069P | 2022-06-04 | 2022-06-04 | |
| PCT/US2023/024456 WO2023235629A1 (fr) | 2022-06-04 | 2023-06-05 | Digital guidance and training platform for microsurgery of the retina and vitreous |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4536124A1 (fr) | 2025-04-16 |
Family
ID=95065030
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23738273.4A | EP4536124A1 (fr), pending | 2022-06-04 | 2023-06-05 |
Country Status (1)
| Country | Link |
|---|---|
| EP (1) | EP4536124A1 (fr) |
- 2023-06-05: EP application EP23738273.4A, published as EP4536124A1 (fr); status: Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220346884A1 (en) | Intraoperative image-guided tools for ophthalmic surgery | |
| US20230301727A1 (en) | Digital guidance and training platform for microsurgery of the retina and vitreous | |
| EP3920858B1 (fr) | Système chirurgical guidé par image | |
| US11628019B2 (en) | Method for generating a reference information item of an eye, more particularly an optically displayed reference rhexis, and ophthalmic surgical apparatus | |
| KR20190096986A (ko) | 안과 수술을 위한 적응적 영상 정합 | |
| US12171499B2 (en) | Eye surgery surgical system having an OCT device and computer program and computer-implemented method for continuously ascertaining a relative position of a surgery object | |
| US10993838B2 (en) | Image processing device, image processing method, and image processing program | |
| US20250186258A1 (en) | Producing cuts in the interior of the eye | |
| WO2023209550A1 (fr) | Tonomètre sans contact et techniques de mesure pour une utilisation avec des outils chirurgicaux | |
| Shin et al. | Semi-automated extraction of lens fragments via a surgical robot using semantic segmentation of OCT images with deep learning-experimental results in ex vivo animal model | |
| US20230320899A1 (en) | Control apparatus, control method, program, and ophthalmic surgical system | |
| US20160296375A1 (en) | System and method for producing assistance information for laser-assisted cataract operation | |
| WO2023235629A1 (fr) | Plateforme de guidage et d'apprentissage numérique pour la microchirurgie de la rétine et du vitré | |
| EP4536124A1 (fr) | Plateforme de guidage et d'apprentissage numérique pour la microchirurgie de la rétine et du vitré | |
| US20200345449A1 (en) | Near infrared illumination for surgical procedure | |
| EP4633556A1 (fr) | Système d'opération chirurgicale ophtalmique, programme informatique et procédé de fourniture d'informations d'évaluation relatives au guidage d'un instrument chirurgical | |
| Wang et al. | Reimagining partial thickness keratoplasty: An eye mountable robot for autonomous big bubble needle insertion | |
| CN118574585A (zh) | 用于机器人显微外科手术的力反馈 | |
| US20250082412A1 (en) | Ophthalmic surgical robot | |
| US12402962B2 (en) | Robot manipulator for eye surgery tool | |
| EP4609839A1 (fr) | Système robotique chirurgical et commande de système robotique chirurgical | |
| US20250009556A1 (en) | Method and system for cataract tissue sensing | |
| JP2018175790A (ja) | 情報処理装置、情報処理方法及びプログラム | |
| WO2025229483A1 (fr) | Images augmentées pour faciliter une chirurgie ophtalmique | |
| JP2022115321A5 (fr) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20250103 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |