
WO2025176376A1 - Medical training system and medical training method - Google Patents

Medical training system and medical training method

Info

Publication number
WO2025176376A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
training system
instrument
seeking
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/050466
Other languages
English (en)
Inventor
Lars VEUM
Marcel Hohl
Nicolas IMHOF
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VIRTAMED AG
Original Assignee
VIRTAMED AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VIRTAMED AG filed Critical VIRTAMED AG
Publication of WO2025176376A1
Legal status: Pending


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/285: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for injections, endoscopy, bronchoscopy, sigmoidoscopy, insertion of contraceptive devices or enemas
    • G09B23/30: Anatomical models
    • G09B23/32: Anatomical models with moving parts

Definitions

  • the present invention relates to a medical training system for medical procedures, especially for surgical operations, and a method for medical training.
  • Medical procedures, especially surgical operations such as endoscopy, arthroscopy, laparoscopy, and other minimally invasive surgery applications, can be trained in medical training systems, also called medical training simulators, which simulate the medical procedure setup.
  • the users, namely trainees, physicians, and surgeons, learn to master the indirect hand-eye coordination required for the manipulation of medical instrumentation, such as an endoscope or an ultrasound probe, in addition to the conventional medical instruments and procedures.
  • Computerized medical training simulators enable the users to develop and improve their practice in a virtual reality environment before practicing in the real world-operation room.
  • Box Trainers typically consist of a box with at least one insert designed to mimic specific anatomical features or surgical scenarios.
  • the inserts are tangible and represent, for example, cavities, incisions, or pathways for tangible instruments to navigate.
  • the Box Trainers are used for training specific skills, such as performing specific movements.
  • WO 2018/209274 A1 and US 10 902 745 B2 describe such Box Trainers.
  • medical training systems provide a mixed reality scenario where the user jointly interacts with real objects in a physical environment and in a related virtual environment.
  • These simulators include a screen for displaying the virtual reality and a human anatomy model in real size, such as a joint model or an organ model, as well as a tangible medical procedure instrument.
  • the model is adapted with sensors and mobile members for guiding, tracking, and controlling the medical instrument operation within the anatomy model.
  • Such medical training systems are for example disclosed in WO 2014/041491 A1 , US 2015/0325151 A1 , and WO 2020/164829 A1.
  • Some medical training systems provide training in the use of such teleoperated medical treating systems.
  • some medical training systems can be coupled to a surgeon console instead of the actual other system components, to provide a surgeon with a simulation performing the procedure.
  • the surgeon can learn how simulated instruments respond to manipulation of the console controls, i.e., of the master control input devices.
  • the user sees a virtual operation site and a simulated medical procedure setup.
  • Such a medical training system is for example described in EP 3 084 747 B1.
  • Other training systems providing training in the use of teleoperated medical treating systems comprise a surgeon control with master input devices as a stand-alone-solution, i.e., without being coupled to a real teleoperated medical treating system.
  • the RoboS of the applicant is such a robotic surgery simulator, allowing independent learning by mirroring surgery robotic consoles.
  • the inventive medical training system comprises a simulator assembly configured to perform at least one medical procedure by using an instrument, preferably a tangible and/or a virtual instrument, in a simulated medical procedure setup, the simulator assembly being manually operated by a user.
  • the system further comprises a control unit providing the simulated medical procedure setup, monitoring at least the seeking path of the instrument, preferably the tangible and/or the virtual instrument, moved by the user during the simulated medical procedure.
  • the control unit is capable of differentiating between the seeking path and a retracting path of the instrument, preferably the tangible and/or the virtual instrument, and the control unit is capable of using only the seeking path of the instrument, preferably the tangible and/or the virtual instrument, in order to assess a proficiency of the user.
  • the simulator assembly is a tangible simulator assembly.
  • the instrument is a tangible or virtual instrument.
  • the system monitors more than the seeking path, preferably the seeking and the retracting path.
  • the seeking path is the way taken by an instrument until it reaches a place or region of interest.
  • the retracting path is the way taken by an instrument when leaving the place or region of interest.
  • This retracting path can also be a path that does not have the purpose of going back to a starting point, so it can be a remaining path as well.
  • the retracting path can therefore also be called a retracting or remaining path.
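The seeking/retracting distinction described above can be sketched in a few lines of Python. This is a minimal illustration, not the system's actual implementation; the sampled positions, target point, and capture radius are assumed inputs:

```python
import math

def split_seeking_retracting(samples, target, radius):
    """Split a sampled instrument trajectory into a seeking and a
    retracting phase.  `samples` is a list of (x, y, z) positions,
    `target` the place of interest, `radius` the capture distance.
    The seeking path runs until the instrument first reaches the
    target region; everything afterwards is the retracting
    (or remaining) path."""
    for i, p in enumerate(samples):
        if math.dist(p, target) <= radius:
            return samples[:i + 1], samples[i + 1:]
    return samples, []  # target never reached: all samples are seeking
```

A trajectory that moves to the target and then backs away would thus yield the approach samples as the seeking path and the rest as the retracting path.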
  • the medical training system defines a volume within which the final point is positioned and a point of origin at the instrument, preferably at the tangible instrument and/or at the virtual instrument, wherein the initial point is defined as the point where the point of origin enters the volume.
  • the medical training system may be any type of system for training medical procedures, especially surgical operations.
  • the medical training system is a simulator for minimally invasive surgery, preferably for robotic surgery, and/or for laparoscopy.
  • Figure 2 shows a measurement of an instrument path according to the state of the art.
  • Figure 15 shows a part of a real image displayed on the real screen after having completed the task of figure 6, in an enlarged and rotated view.
  • the training system shown in figure 1 is a mixed reality scenario medical training system.
  • the basic elements are known in the state of the art.
  • the mixed reality scenario medical training system comprises a main body with a control unit 1, preferably arranged within the main body, a display 2, a human anatomy model 3 in real size, and tangible instruments 40, 41.
  • the human anatomy model 3 usually represents only a specific part of a human body.
  • in figure 1, an insufflated abdomen is shown for laparoscopy training.
  • the tangible instruments 40 to be held in the hands correspond to real instruments used in laparoscopy.
  • the pedals 41 are used for activation of electrocauterization elements and other device mechanism controls.
  • in other embodiments, the tangible instruments 40, 41 are other types of medical instruments.
  • the tangible instruments 40, 41 have input sensors 44, 45 implemented within, for the acquisition of position, orientation, other degrees of freedom, and relative position to the other instruments and the anatomical body 3.
  • the anatomical body 3 also contains input sensors 44, 45 within for acquisition of parameters.
  • the input sensors connect to the control unit 1 for data acquisition. In figure 1, some exemplary placements of sensors 44, 45 are shown.
  • the control unit 1 controls the virtual views displayed on the screen or display 2 based on the activities of the user, i.e., the use of the tangible instruments 40, 41.
  • the virtual views are generated from the values of the input sensors in the tangible instruments 40, 41 and the anatomical body 3 that are controlled and manipulated by the user.
  • On the screen or display 2 a real or a virtual picture of the human anatomical part is shown as well as the position and movements of the instrument activated by the trainee or user.
  • Reference 43 in figure 1 refers to a virtual instrument visualizing the movement of the tangible instrument 40 on the screen 2.
  • the control unit 1 also has input means for user input, such as information about the user's skills and experience.
  • the input means are not shown in figure 1.
  • the control unit 1 has means of acquiring input data.
  • the control unit 1 has a memory for storing data, for example means of storing data sets for an optimal path and for data sets obtained from the sensors.
  • the control unit 1 is connectable to a cloud. In these embodiments, the control unit 1 may not comprise a memory of its own.
  • a training system simulating a teleoperated medical treating system can also be used, especially a robotic surgery system, such as a system simulating for example the well-known da Vinci® system and/or the well-known MMI Symani® Surgical System.
  • FIG. 4 shows such a training system simulating a teleoperated medical treating system in a schematic representation.
  • This system comprises a control unit 1 located in a main body, a screen or display 2, and tangible instruments, such as handheld instruments 40 and pedals 41, as well.
  • the tangible instruments do not have the same shape or size as the instruments used on the patient during robotic surgery.
  • the tangible instruments of the training system correspond to the master control input devices of the teleoperated medical treating systems or they are at least similar to them providing the same or at least some of the functionalities of the master control input devices.
  • a tablet 20 is arranged on top of the main body 10 of the Box Trainer. At least one tangible instrument 40 is connected by a wire 400 to the tablet 20. Additionally or alternatively, a wireless connection is possible as well.
  • At least one sensor 44 is located in or at the at least one through-opening 42 and/or at the at least one tangible task place 46.
  • At least one camera 45 of the tablet 20 is capable of viewing into the interior of the main body 10. Additional cameras or other optical sensors may be present at other places, detecting the interior or the outside of the Box Trainer as well.
  • the tablet 20 with its touchscreen serves as input means for user input and as the control unit of the Box Trainer.
  • the tablet 20 also displays the virtual instrument 43 visualizing the movements of the tangible instrument 40 as well as virtual task places 47 referring to the tangible task places 46.
  • the tablet 20 also provides the control unit of the training system.
  • the Box Trainer comprises a screen, a control unit and user input means instead of the tablet.
  • a program can be run on the tablet 20 which allows the tangible instrument 40 shown in figure 5a to be replaced by the user's finger or fingers.
  • the user can move his finger or his fingers on the touchscreen in order to activate, for example to move, the virtual instrument in the virtual medical scenario displayed on the touchscreen.
  • the finger itself is in this case the tangible instrument and the touchscreen the at least one sensor.
  • the tablet can also be used as a stand-alone device for medical training.
  • the combination of the tablet with the above-mentioned physical medical training systems, especially with the Box Trainer, provides a flexible and portable solution for surgical training, combining tangible surgical trainer inserts with digital interaction.
  • This versatile design provides a hybrid surgical training platform. It utilizes traditional tangible surgical instruments and allows for direct finger interaction on the touch screen, therefore adapting to various training needs and providing a portable, efficient training solution for both novice and advanced users.
  • the control unit 1 of the different training systems mentioned above preferably provides different exercises to train the skills of the user.
  • the user can be a trainee or an experienced person who tries out new treating methods or who wants to define the best way to perform a known method in an upcoming surgery or treatment.
  • the user identifies himself when starting the training system.
  • the control unit 1 will preferably assess the performance of the user in the completed exercises or at least track and/or document the paths taken during the performance.
  • the training systems act in the well-known way. This means that the movements of the real or virtual instruments activated by the user are detected, that time is determined, and that other criteria are monitored by the control unit. In the training systems, at least the time used for taking at least a part of the path and/or the length of at least a part of the path taken is monitored.
  • in the state of the art, the seeking path and the retracting path are determined and assessed together. This is shown in figure 2.
  • with the inventive medical training system and the inventive medical training method, however, only the seeking path is assessed, as can be seen in figure 3. This will be described later in the text in more detail.
  • the tangible instruments 40, 41 used in the training system are for example laparoscopic tools, like cameras, graspers, or scissors, or robotic surgery console replicas comprising master control input devices, emulating the consoles of a teleoperated medical treating system, such as the da Vinci® Surgical System or the MMI Symani® Surgical System.
  • the software component of the training system is powered by an engine, such as Unity or Unreal.
  • the engine delivers highly accurate physical simulations and rendering models.
  • the engine supports detailed anatomical visualization and dynamic interactions between the virtual instruments and simulated virtual tissues, allowing for realistic deformation, collision detection, and force feedback. These features are preferably further enhanced by physics libraries such as NVIDIA PhysX, enabling lifelike replication of surgical environments.
  • the training system comprises Aurora® electromagnetic sensors from Northern Digital Inc (NDI), renowned for their sub-millimeter spatial accuracy and rapid response times. These electromagnetic input sensors 44 provide high-resolution tracking data critical for precise instrument navigation within the simulated environment.
  • some preferred embodiments comprise optical sensors 45 for tracking the movements of the tangible instruments 40, 41.
  • at least one camera comprising optical input sensors 45, such as an Intel® RealSense camera, is used. The camera with the optical sensors 45 is capable of real-time video capture and analysis.
  • the use of at least one camera with optical sensors 45 augments the training system’s ability to evaluate orientation and spatial movement, ensuring comprehensive data collection for assessment of the proficiency of the user.
  • the sensors 44, 45 are placed at and/or in and/or near the tangible instruments 40, 41 and/or the human anatomy model 3.
  • the electromagnetic sensors 44 and optical sensors 45 provide high-resolution tracking data to map at least the tangible instrument’s 40, 41 seeking path in three-dimensional space.
  • the system identifies the seeking path as the trajectory from the instrument's starting point to a predefined target, such as a specific anatomical region or surgical site.
  • the seeking path is a critical, if not the critical, component of the training system's proficiency assessment, i.e. the assessment of the user's proficiency.
  • the control unit 1 processes the seeking path using algorithms that differentiate between the seeking and retracting phases of the movement of the tangible instruments 40, 41. These algorithms apply a seeking-retracting phase determination methodology, which isolates the data corresponding to the seeking phase while excluding or labeling data from the retracting phase. By focusing exclusively on the seeking path, the system eliminates noise introduced by the retracting path, ensuring a precise assessment of user skill and/or providing targeted training for specific upcoming surgeries.
  • the seeking path is further analyzed to calculate its trajectory, which is determined within a defined spherical volume.
  • the path length is numerically calculated using established computational methods, such as the semi-implicit Euler method or equivalent trajectory modeling techniques. This calculation builds the seeking path point by point, accounting for velocity, position, and acceleration data captured from the tangible instrument.
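The point-by-point construction of the path length can be illustrated in Python. This is a rough sketch under assumed inputs: one semi-implicit Euler step for rebuilding a position from velocity and acceleration data, and a piecewise-linear sum over the sampled positions; it is not the system's actual numerics:

```python
import math

def euler_step(pos, vel, acc, dt):
    """One semi-implicit Euler step: the velocity is updated first,
    then the position is advanced with the updated velocity."""
    vel = tuple(v + a * dt for v, a in zip(vel, acc))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

def path_length(points):
    """Build the path length point by point as the sum of the
    straight segments between consecutive sampled 3-D positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```

Integrating the tracked velocity and acceleration with `euler_step` yields the sequence of positions whose segment lengths `path_length` then accumulates.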
  • the algorithm assigns a proficiency score to the calculated path length based on predefined benchmarks.
  • a maximum score is assigned to an optimal path length, typically determined through methods such as the median or upper quartile of expert path lengths, or through contrasting groups' standard-setting methods that include consequences analysis.
  • a minimum score is assigned to path lengths exceeding a defined threshold, ensuring that the scoring unit reflects both precision and efficiency.
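A scoring rule of this kind can be sketched as follows. The linear ramp between the optimal length and the threshold is an illustrative assumption; the patent leaves the exact mapping open:

```python
def proficiency_score(path_length, optimal_length, max_length,
                      max_score=100.0, min_score=0.0):
    """Map a seeking-path length to a proficiency score: full marks
    at or below the optimal (e.g. expert-median) length, the minimum
    score at or beyond the defined threshold, and a linear ramp in
    between (the ramp is an assumed interpolation)."""
    if path_length <= optimal_length:
        return max_score
    if path_length >= max_length:
        return min_score
    frac = (path_length - optimal_length) / (max_length - optimal_length)
    return max_score - frac * (max_score - min_score)
```

For example, with an optimal length of 5 and a threshold of 10, a seeking path of length 7.5 lands halfway between the maximum and minimum score.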
  • the scoring can be applied to individual seeking paths, providing granular feedback on each discrete task performed during the simulated procedure.
  • the scoring unit can assess an accumulation of seeking paths over multiple tasks or an entire session. This cumulative scoring approach evaluates the user’s consistency and overall proficiency, enabling a more comprehensive assessment of skill development over time.
  • the scoring algorithm of the scoring unit can further integrate additional metrics, such as the time taken to complete the seeking paths, the alignment and orientation of the instrument, and the three-dimensional spatial characteristics of the trajectory.
  • the training system ensures both high-fidelity evaluations of individual tasks and holistic assessments of performance across broader training sessions, optimizing preparation for both general and procedure-specific surgical training. Based on this analysis, objective feedback is given to the user, wherein the feedback highlights deviations.
  • the control unit 1 offers corrective guidance in real time. In other embodiments, the control unit 1 just provides the analysis or offers suggestions for the next try, i.e. the next training session on the same task or on a similar task.
  • the control unit 1 is equipped with a database of optimal seeking paths, derived from expert-level procedural data.
  • the database may comprise optimal times for performing a path of a specific task and/or optimal length of a path for a specific task and/or optimal two-dimensional or three-dimensional paths and/or optimal instrument orientation for performing a specific task.
  • control unit 1 may comprise the memory and the entire software to perform the training scenarios and the assessment.
  • in other embodiments, the control unit 1 of the training system is a web-based application, eliminating the need for dedicated software installations. Trainees access the application through a standard web browser, ensuring compatibility with most modern devices.
  • the application employs libraries such as OpenCV for computer vision tasks, enabling real-time recognition and processing of the physical environment captured by the device’s camera. This technology overlays high-fidelity virtual anatomical models, procedural guides, and interactive prompts onto the live feed of a physical anatomical model or surgical dummy.
  • the system detects the incision location, depth, and angle, and overlays corresponding virtual visuals, such as blood flow or tissue layers being revealed.
  • the seeking path is tracked and analyzed using a mobile device’s camera, such as the tablet 20, and integrated algorithms powered by OpenCV.
  • the mobile app identifies the instrument's movement as it transitions from an initial position to a target area, such as a specific organ or incision site, within the mixed reality environment.
  • the seeking path is visually represented on the device’s screen, with virtual overlays illustrating the trajectory and providing real-time feedback on accuracy.
  • Some of these embodiments of the training system employ advanced computer vision techniques to detect instrument movements and classify them into seeking and retracting phases.
  • the seeking path data is analyzed for key performance metrics, including path length, efficiency, and movement precision. These metrics are compared against optimal seeking paths stored in the training system’s centralized database. For example, during a suturing simulation, the system evaluates the instrument’s path as it approaches the needle’s entry point, providing immediate feedback on alignment and trajectory. To enhance user interaction, the mobile app visualizes the seeking path with color-coded overlays that indicate areas of deviation or inefficiency. Haptic feedback reinforces proper technique, such as applying resistance when the instrument deviates from the optimal path. This focus ensures that the seeking path remains the primary driver of skill assessment across a variety of procedures, including open surgeries and minimally invasive techniques.
  • Manipulation within the mixed reality environment is achieved through the device's multipoint touch interface, such as the tablet's 20 touchscreen. Users can interact directly with the screen to perform virtual tasks, such as zooming in on anatomical details, manipulating tissue layers, or highlighting key surgical landmarks.
  • the touch interface also supports gesture-based inputs, allowing for intuitive operations such as virtual suturing, knot tying, or retracting tissue.
  • Haptic feedback from the device enhances the training experience by simulating tactile cues like tissue tension or instrument resistance.
  • the training scenarios offered by the training system cover a wide range of procedures, including both minimally invasive and open surgeries. For example:
  • Scenarios include laparoscopic navigation and robotic- assisted surgeries, where users manipulate both real and virtual instruments, with the system tracking and analyzing their movements.
  • the training system leverages the mobile device’s spatial mapping capabilities, such as depth sensors or LiDAR (if available), to ensure precise alignment of virtual overlays with real-world objects.
  • the system uses adaptive algorithms within OpenCV to estimate spatial relationships and accurately track real-world instrument interactions.
  • the web app incorporates a centralized database for tracking trainee progress and storing performance metrics. Metrics such as incision precision, path length, instrument efficiency, and suture quality are analyzed in real time and compared against expert-level benchmarks. Trainees receive immediate feedback via visual overlays, haptic cues, and auditory alerts, fostering iterative learning and skill refinement.
  • a medical training task is shown, which may be performed for example by a box trainer or a robotic surgery simulator as mentioned above.
  • a user shall place a virtual object 30 into a recess 31 having the same shape as the virtual object 30.
  • the training session requires that the user moves the virtual object 30 by using a tangible instrument, wherein the display 2 shows the virtual object 30 being moved by a virtual instrument 43.
  • the tangible instrument is one of the tangible instruments 40, 41 described above. Activation and/or movement of the tangible instrument 40, 41 causes the virtual instrument 43 of the medical training system to move as well, wherein the virtual instrument 43 holds the virtual object 30 and is capable of releasing the virtual object 30 into the virtual recess 31 when the tangible instrument 40, 41 is activated by the user accordingly.
  • the seeking path until the virtual object 30 is placed within the virtual recess 31 is detected and analyzed, preferably measured.
  • the medical training system used for performing this task may be a robotic surgical training system as described above.
  • the system includes a physical interface for the user to control, such as at least one tangible instrument 40, 41, a computation device comprising the control unit 1, the display 2, and a software implementation of a training scenario run by the control unit 1.
  • the training scenario which is the basis of the training session comprises an initial starting point and the final goal task with optional intermediate steps.
  • the final goal task in the example shown in figure 6 is the correct placement of the virtual object 30 into the virtual recess 31.
  • the shape of the second sphere 60 is shown as a round ball. It can have any three-dimensional shape surrounding a volume. It can even be a two-dimensional area, such as a square, a rectangle or a circle, or a single point can be used as an indicator as well. If more than one virtual instrument is used, preferably all instruments comprise a separate second sphere 60.
  • the control unit 1 also detects when the second sphere 60 leaves the interior of the first sphere 6.
  • the way between the point of interest, for example the position of the tangible instrument 40, 41 and of the virtual instrument 43 when the task is fulfilled, and the position where the second sphere 60 leaves the first sphere 6 defines the retracting path.
  • the tracking occurs in three degrees of freedom.
  • the position of the virtual instrument 43 is recorded until the final task is completed, in this example, until the virtual object 30 is placed within the recess 31.
  • the path between the initial point 70 and the final point 71 or point of interest is the seeking path 7 which has been taken in order to fulfill the given task.
  • the initial point 70 forms the first end of the seeking path 7 to be assessed and the final point 71 forms the second end of this seeking path 7.
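The sphere-based start and end of tracking can be sketched in Python. This is a deliberately simplified model: recording starts when the instrument's reference point (the center of the second sphere) enters the first sphere around the goal and stops when it leaves again; in the system itself the seeking path additionally ends at task completion, which is omitted here:

```python
import math

def track_path_in_sphere(samples, goal_center, goal_radius):
    """Record the instrument path while its reference point is
    inside the first sphere around the goal: tracking starts when
    the point first enters the sphere (the initial point) and stops
    when it leaves the sphere again."""
    path = []
    for p in samples:
        if math.dist(p, goal_center) <= goal_radius:
            path.append(p)        # inside the first sphere: record
        elif path:
            break                 # left the sphere again: stop
    return path
```

The first recorded sample plays the role of the initial point 70; in the full system the final point 71 would be fixed earlier, at the moment the task is fulfilled.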
  • the first and the second spheres 6, 60 are preferably hard-coded, wherein the spheres preferably vary depending on the task to be fulfilled and the instruments to be used.
  • in some embodiments, at least one of the first and second spheres or both spheres 6, 60 are set dynamically, based on parameters of the chosen training scenario or on former training scenarios already performed. Some embodiments allow at least one of the two spheres, preferably both spheres 6, 60, to be defined based on accumulated user data, preferably after an appropriate statistical analysis has been completed. Figure 7 shows both spheres 6, 60.
  • the tracked seeking path can be used for statistical analysis for differentiation, e.g., between different population groups such as expertise levels or educational backgrounds.
  • the length of the chosen seeking path can be determined and compared with a predefined target path length.
  • Figure 7 shows the seeking path 7 which has been taken for placing the virtual object 30 into the virtual recess 31 of figure 6. In figure 6 the process is still in progress and the task is not yet fulfilled.
  • the seeking path 7 is also shown in the screen of the embodiment according to figure 5a.
  • Some embodiments and methods do not only monitor the seeking path but also include the orientation of the tangible instruments 40, 41 and/or virtual instruments 43, adding up to an additional three degrees of freedom. Some embodiments monitor more than one tangible instrument 40, 41 and/or virtual instrument 43 in the same training session, preferably at the same time, wherein the seeking path of each of these instruments 40, 41, 43 is monitored and analyzed. For example, by monitoring more than one instrument, a dominant hand seeking path can be scored differently than the seeking path performed by a non-dominant hand.
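Per-hand scoring of this kind could be combined into a session figure as sketched below. The function name, the hand labels, and the weighting scheme are all illustrative assumptions, not anything prescribed by the system:

```python
def session_score(hand_scores, weights=None):
    """Combine per-instrument seeking-path scores (e.g. dominant
    vs. non-dominant hand) into one session score; the default
    weights are an illustrative assumption."""
    if weights is None:
        weights = {"dominant": 0.6, "non_dominant": 0.4}
    total = sum(weights.get(hand, 1.0) * s for hand, s in hand_scores.items())
    norm = sum(weights.get(hand, 1.0) for hand in hand_scores)
    return total / norm
```

With the default weights, a dominant-hand score of 100 and a non-dominant score of 50 average to a session score of 80.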
  • the seeking path is the length of the path between the initial point and the final point. Both points depend on the training system's settings. The settings may depend on at least one of the following: the task to be fulfilled, the tangible or virtual instruments used, the user's previous performance and the user's inputs.
  • the initial point can be varied.
  • the final point can be varied such that it does not have to mark the point where the task is finally fulfilled; it can also mark another point on the way to fulfilling the task. This other final point is still located within the first sphere 6, chosen by the training system based on assessment results, predefined settings, or user settings.
  • the point of interest or the final point does not have to be the place of the fulfilment of a final task but may be an important place on the way there.
  • the seeking path to be assessed separately is therefore only the way until this important point, even if the instrument is moved further to fulfill the final task.
  • the further movement of the instrument from this important point to the final task would be considered as a retracting path, i.e., retracting from this important point. Therefore, the training system may treat either the entire way to a final task or only a part of this way as the seeking path to be assessed separately.
  • the training system may define the initial point and the final point based on the numerical data achieved from the movement of the virtual instrument in the virtual space and/or from sensors arranged in the real world detecting the movements of the tangible instruments in the real space.
  • Figures 10 to 15 show screenshots of a display of an already physically realized embodiment of the medical training system. The figures show only a part of the display. The training system and the task are those described above with regard to figures 6 and 7.
  • the virtual instrument 43 holding the virtual object 30 and especially the second sphere 60 have not yet reached the first sphere 6 surrounding the place of goal, i.e. the virtual recess 31.
  • the tracking of the path has not been activated yet.
  • the second sphere 60 has already penetrated the first sphere 6.
  • the center position of the second sphere 60 entering the first sphere 6 marks the initial point at which the tracking started.
  • the virtual object 30 is still not placed within the virtual recess 31.
  • the control unit 1 still tracks every activation and every movement of the tangible instruments 40, 41 thereby tracking the movement of the virtual instrument 43 shown on the display 2.
  • the area surrounding the virtual recess 31 and/or a background may change its color and/or a visual or an auditory alarm may be given to the user.
  • the monitored seeking path 7 is already shown on the display during the performance of the task or it is shown afterwards, when the task is completed and/or an analysis was performed. In figures 12 and 13, the seeking path 7 is already shown.
  • the second sphere 60 of the virtual instrument 43 leaves the interior of the first sphere 6.
  • the path of the virtual instrument 43 shown on the display 2 is the seeking path 7 only. There is no line between the free first end 70 of the path 7 shown and the retracted virtual instrument 43 in the present position.
  • the retracting path may be monitored and analyzed as well, but separately from the seeking path.
  • the seeking path 7 can be analyzed by the user visually by rotating and/or enlarging the image shown on the display 2.
  • the first end of the path 7, corresponding to the initial point, is marked with reference number 70.
  • the second end of the path 7, corresponding to the final point, is marked with reference number 71.
  • Figures 16 and 17 show other screenshots of the display of the already physically realized embodiment of the medical training system described in Figures 10 to 15. Instead of an abstract task, the training scenario now shows a real medical scenario, namely a salpingotomy for an ectopic pregnancy in the fallopian tube.
  • the virtual instrument 43, with its second sphere 60 marking the origin point of the real medical instrument used in this medical treatment, just enters the first sphere 6.
  • a second virtual instrument 430 is present, wherein its position and movement is not part of the seeking path assessment performed at this stage.
  • the final point was reached and the assessed seeking path 7 is shown. As mentioned above, this seeking path does not have to be the entire path the virtual instrument 43 takes until it has completed all its tasks.
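The path-separation logic described in the bullets above — tracking begins once the instrument's second sphere penetrates the first sphere (that penetration marking the initial point), the seeking path is accumulated up to the final point, and any subsequent movement is treated as the retracting path — can be sketched as follows. This is a minimal illustration under assumed geometry, not the claimed implementation: the names `SeekingPathTracker`, `sample` and `mark_final`, and the use of a simple sphere-overlap test, are hypothetical.

```python
import math


def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class SeekingPathTracker:
    """Separates the seeking path from the retracting path.

    Tracking starts when the instrument sphere (radius r_instr around the
    sampled position) penetrates the goal sphere (center `goal`, radius
    r_goal). The seeking path ends at the final point; samples taken after
    that are logged as the retracting path and assessed separately.
    """

    def __init__(self, goal, r_goal, r_instr):
        self.goal, self.r_goal, self.r_instr = goal, r_goal, r_instr
        self.seeking = []      # sampled points of the seeking path
        self.retracting = []   # sampled points after the final point
        self.final_reached = False

    def sample(self, p):
        if self.final_reached:
            self.retracting.append(p)          # movement after the final point
        elif self.seeking:
            self.seeking.append(p)             # tracking already active
        elif dist(p, self.goal) < self.r_goal + self.r_instr:
            self.seeking.append(p)             # spheres overlap: initial point

    def mark_final(self, p):
        """Record the final point and switch to retracting-path tracking."""
        self.sample(p)
        self.final_reached = True

    def seeking_length(self):
        """Total length of the monitored seeking path."""
        return sum(dist(a, b) for a, b in zip(self.seeking, self.seeking[1:]))
```

Samples taken outside the goal sphere before penetration are ignored, which mirrors the behavior shown in the screenshots: no line is drawn between the free first end of the path and the retracted instrument.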


Abstract

A medical training system comprises a simulator assembly designed for performing at least one medical intervention using an instrument in a simulated medical intervention setup, the simulator assembly being manually operated by a user. The system further comprises a control unit that provides the simulated medical intervention setup and monitors at least one seeking path of the instrument moved by the user during the simulated medical intervention. The control unit is able to distinguish between the seeking path and a retracting path, and it uses only the seeking path of the instrument to assess the user's skills. The medical training system according to the invention makes it possible to assess a user's skills with a high level of accuracy.
PCT/EP2025/050466 2024-02-23 2025-01-09 Medical training system and medical training method Pending WO2025176376A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP24159342.5 2024-02-23
EP24159342 2024-02-23

Publications (1)

Publication Number Publication Date
WO2025176376A1 true WO2025176376A1 (fr) 2025-08-28

Family

ID=90057442

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2025/050466 Pending WO2025176376A1 (fr) 2024-02-23 2025-01-09 Système de formation médicale et procédé de formation médicale

Country Status (1)

Country Link
WO (1) WO2025176376A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023215822A2 * 2022-05-05 2023-11-09 Virginia Commonwealth University Methods and systems for surgical training

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023215822A2 * 2022-05-05 2023-11-09 Virginia Commonwealth University Methods and systems for surgical training

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHMARRA MAGDALENA KAROLINA: "TrEndo Tracking System Motion Analysis in Minimally Invasive Surgery Proefschrift", 1 January 2009 (2009-01-01), XP093268468, ISBN: 978-90-8891-084-5, Retrieved from the Internet <URL:https://repository.tudelft.nl/file/File_65e9efc3-93e3-41ff-902d-129a2aed477e?preview=1> *
GAUTIER BENJAMIN ET AL: "Real-Time 3D Tracking of Laparoscopy Training Instruments for Assessment and Feedback", FRONTIERS IN ROBOTICS AND AI, vol. 8, 4 November 2021 (2021-11-04), XP093091334, DOI: 10.3389/frobt.2021.751741 *
MAGDALENA K CHMARRA ET AL: "Retracting and seeking movements during laparoscopic goal-oriented movements. Is the shortest path length optimal?", SURGICAL ENDOSCOPY ; AND OTHER INTERVENTIONAL TECHNIQUES OFFICIAL JOURNAL OF THE SOCIETY OF AMERICAN GASTROINTESTINAL AND ENDOSCOPIC SURGEONS (SAGES) AND EUROPEAN ASSOCIATION FOR ENDOSCOPIC SURGERY (EAES), SPRINGER-VERLAG, NE, vol. 22, no. 4, 20 August 2007 (2007-08-20), pages 943 - 949, XP019631613, ISSN: 1432-2218 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25701332

Country of ref document: EP

Kind code of ref document: A1