WO2025191612A1 - System and method for providing reality-virtuality continuum platform for efficient simulation training - Google Patents
Info
- Publication number
- WO2025191612A1 (PCT/IN2025/050362)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- trainee
- pseudo
- real
- probe
- clinical procedure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4245—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
- A61B8/4254—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient using sensors mounted on the probe
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/28—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
- G09B23/30—Anatomical models
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H80/00—ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00681—Aspects not otherwise provided for
- A61B2017/00707—Dummies, phantoms; Devices simulating patient or parts of patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
Definitions
- the present disclosure relates to the field of mixed reality environments. More particularly, the present disclosure relates to providing a system and method for facilitating a reality-virtuality continuum platform to enable self-directed simulation training.
- MR-based training integrates elements of virtual reality (VR) and augmented reality (AR) to create an environment where the trainees can practice procedures, interact with virtual objects, and receive real-time guidance.
- One such training scenario may be ultrasound training. Ultrasound is a widely used imaging tool in modern medicine due to its non-invasive nature, making it a preferred choice for evaluating a variety of medical conditions.
- Ultrasound offers versatility in diagnosing various medical conditions and guiding procedures, thus enabling timely treatment of the diagnosed condition.
- performing an ultrasound requires significant expertise and training.
- the accuracy of the ultrasound procedure largely depends on the operator's skill in handling ultrasound probe, interpreting images, and distinguishing normal anatomy from abnormalities. Therefore, healthcare professionals undergo specialized training to develop a certain proficiency in ultrasound techniques before applying them to patients.
- Ultrasound training may involve hardware-based simulations which incorporate the use of physical phantoms and simulators that mimic the tactile and mechanical properties of human tissue. These may also include robotic systems.
- software-based simulation utilizes computer-generated graphics and algorithms to simulate ultrasound images and scenarios by integrating VR and AR technologies. Hybrid simulations then combine elements of both hardware- and software-based approaches to create a more comprehensive training experience.
- the present disclosure provides a system and method for facilitating training for performing a clinical procedure using the mixed reality environment.
- This summary is not an extensive overview and is intended to neither identify key or critical elements nor delineate the scope of such elements. Its purpose is to present some concepts of the described features in a simplified form as a prelude to the more detailed description that is presented later.
- a system to facilitate training, for performing the clinical procedure, using the mixed reality environment may comprise a headset configured to display the mixed reality environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to a trainee wearing the headset.
- the one or more real-world objects comprises a pseudo probe configured to emulate the tactile characteristics of a real-world probe.
- the system may comprise a pseudo phantom configured to simulate anatomical structures encountered during diagnostics.
- the system may comprise a control unit configured to continuously receive, from one or more sensors, real-time data indicative of manipulation of the pseudo probe and interaction with the pseudo phantom by the trainee.
- control unit is further configured to adapt the virtual tutor to provide the trainee with a demonstration of how to optimally perform the clinical procedure.
- the real-time data includes at least one of a motion tracking data, a pressure data, and an orientation data of the pseudo probe, while the trainee is manipulating the pseudo probe and interacting with the pseudo phantom.
- control unit may be further configured to generate an assessment report of the trainee’s overall performance comprising the detected one or more deficiencies and a performance score.
- control unit may be further configured to display, on a virtual screen, simulated anatomical structure corresponding to the manipulation of the pseudo probe and the interaction with the pseudo phantom by the trainee.
- a method of facilitating training, for performing the clinical procedure, using the mixed reality environment comprises a headset displaying the mixed reality environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to a trainee wearing the headset.
- the one or more real-world objects comprises a pseudo probe emulating the tactile characteristics of a real-world probe.
- the method further comprises a pseudo phantom simulating anatomical structures encountered during diagnostics.
- the method of facilitating training further comprises continuously receiving, from one or more sensors, real-time data indicative of manipulation of the pseudo probe and interaction with the pseudo phantom by the trainee.
- the method then comprises mapping the real-time data onto a pre-defined dataset corresponding to the clinical procedure.
- the real-time data is indicative of the trainee’s performance.
- the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe and interactions with the pseudo phantom to optimally perform the clinical procedure, and the pre-defined dataset is stored in a memory.
- the method further comprises detecting one or more deficiencies in the trainee’s performance indicated by the real-time data based on the mapping, and adapting a virtual tutor to render one or more corrective feedbacks to the trainee in the mixed reality environment, based on the detected one or more deficiencies in the trainee’s performance.
- FIG. 1 depicts an environment illustrating an existing ultrasound simulation technique, as per existing prior art.
- FIG. 2 depicts a system to facilitate training, for performing a clinical procedure using the mixed reality environment, in accordance with an embodiment of the present disclosure.
- FIG. 3 depicts an exemplary environment illustrating a physical world scenario for ultrasound training, in accordance with an embodiment of the present disclosure.
- FIG. 4 depicts an exemplary environment illustrating a virtual instructor interacting in real time with a trainee for ultrasound training, in accordance with an embodiment of the present disclosure.
- FIG. 5 is a flowchart of a method of facilitating training, for performing a clinical procedure, using a mixed reality environment, in accordance with an embodiment of the disclosure.
- An exemplary aspect of the disclosure may provide method(s) and system(s) to facilitate a reality-virtuality continuum platform to enable self-directed simulation training.
- the present disclosure provides a method and system to facilitate training using the mixed reality environment to the trainee to perform a clinical procedure.
- the clinical procedure may be but not limited to an ultrasound clinical procedure, an orthopedic surgery clinical procedure, a dental surgery clinical procedure etc.
- the subsequent paragraphs disclose the mixed reality environment to facilitate training for the trainee to perform the ultrasound clinical procedure.
- FIG. 1 depicts an environment (100) illustrating an existing ultrasound simulation technique, as per existing prior art.
- the environment (100) imparts a static three-dimensional (3D) reconstruction of pre-recorded ultrasound images to enable simulation training to the healthcare professionals.
- a physical phantom (102) is used to represent a physical object or simulations to test and calibrate ultrasound equipment while imparting training to the professionals.
- an ultrasound probe (104) is used by the professionals to train on the physical phantom (102), and a display unit (106) is used to display the ultrasound images.
- a processing unit (108) is used in conjunction with the physical phantom (102), the ultrasound probe (104), and the display unit (106).
- one or more real ultrasound images are acquired from the patients using the ultrasound machines and the acquired ultrasound images are processed and reconstructed into static 3D models.
- These static 3D reconstructions of the ultrasound images are then integrated into the physical phantom (102), either by embedding them directly into the material or by overlaying them onto the surface.
- These physical phantoms (102) are physical models which are designed to mimic human tissue properties and are often made from materials like silicone or gelatine and may contain embedded structures or features to enhance realism.
- these physical phantoms (102) may often degrade over time, requiring frequent replacements, further adding to the cost and maintenance efforts.
- FIG. 2 depicts a system (200) to facilitate training for performing the clinical procedure using the mixed reality (MR) training environment, in accordance with an embodiment of the present disclosure.
- the MR training environment may be generated using the one or more elements of the system (200).
- the system (200) may comprise various elements such as a control unit (202), one or more sensors (204), an input/output (I/O) interface (206), a memory (208), a network interface (210), an image processing unit (212), a headset (214), a pseudo probe (216), a pseudo phantom (218), an Artificial Intelligence/Machine Learning (AI/ML) model (220), a communication network (222), and a database (224), but not limited thereto.
- control unit (202) may be designed to operate in environments requiring low-power consumption.
- the control unit (202) may comprise specialized units such as integrated system (bus) controllers, memory management control units, digital signal processing units, etc.
- the control unit (202) may be equipped with high-speed multi-core processing capabilities to enable the system (200) to facilitate training, for performing the clinical procedure, using the MR training environment.
- the one or more sensors (204) may capture real-time data related to the trainee’s actions and interactions, while the trainee (not shown in figure) is performing the clinical procedure.
- the one or more sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors.
- the communication network (222) may be one of a wired connection or a wireless connection
- Examples of the communication network (222) may include, but are not limited to, the Internet, a cloud network, Cellular or Wireless Mobile Network (such as Long-Term Evolution and 5G New Radio), a Wireless Fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), or a Metropolitan Area Network (MAN).
- Various components of the system (200) and the trainee may be configured to connect to the communication network (222) in accordance with various wired and wireless communication protocols.
- wired and wireless communication protocols may include, but are not limited to, at least one of a Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hyper-Text Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, light fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless access point (AP), device-to-device communication, cellular communication protocols, and Bluetooth (BT) communication protocols.
- the network interface (210) may be implemented by using various known technologies to support wired or wireless communication of the system (200) with the database (224), via the communication network (222).
- the network interface (210) may include, but is not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, or a local buffer circuitry.
- the network interface (210) may be configured to communicate via wireless communication with networks, such as the Internet, an Intranet, or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and a metropolitan area network (MAN).
- the wireless communication may be configured to use one or more of a plurality of communication standards, protocols and technologies, such as Global System for Mobile (GSM) Communications, Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), Long Term Evolution (LTE), 5G NR, code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (such as Institute of Electrical and Electronics Engineers (IEEE) 802.11a, IEEE 802.11b, IEEE 802.11g or IEEE 802.11n), voice over Internet Protocol (VoIP), light fidelity (Li-Fi), Worldwide Interoperability for Microwave Access (Wi-MAX), a protocol for email, instant messaging, and a Short Message Service (SMS).
- the system (200) comprises the image processing unit (212).
- the image processing unit (212) may process the images which may be stored in the memory (208).
- the image processing unit (212) may include functions such as, but not limited to, filtering of images to remove noise, enhancing image visibility, etc.
- the images may be one or more ultrasound images which may be reconstructed into one or more 3D images. These ultrasound images may then be pre-processed by the image processing unit (212) using adaptive filters and algorithms to remove noise, enhance image quality (super resolution), scale, resize, and pad the ultrasound images. A minimal illustrative sketch of such a pre-processing step is shown below.
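- As a minimal, non-limiting sketch of this pre-processing (assuming NumPy/SciPy are available, and using a median filter, contrast stretching, interpolation-based resizing, and zero-padding as stand-ins for the adaptive filters and algorithms referred to above), the pre-processing performed by the image processing unit (212) might resemble:

```python
import numpy as np
from scipy import ndimage


def preprocess_ultrasound(frame: np.ndarray,
                          target_shape=(512, 512),
                          median_size: int = 3) -> np.ndarray:
    """Denoise, enhance, resize, and pad a single 2D ultrasound frame.

    Hypothetical sketch only: the disclosure states that the image
    processing unit (212) removes noise, enhances quality, scales,
    resizes, and pads the images, but does not fix the algorithms.
    """
    # Simple speckle/noise reduction with a median filter
    denoised = ndimage.median_filter(frame.astype(np.float32), size=median_size)

    # Basic "enhancement": stretch intensities to the full [0, 1] range
    lo, hi = denoised.min(), denoised.max()
    enhanced = (denoised - lo) / (hi - lo + 1e-8)

    # Resize while preserving the aspect ratio
    scale = min(target_shape[0] / enhanced.shape[0],
                target_shape[1] / enhanced.shape[1])
    resized = ndimage.zoom(enhanced, zoom=scale, order=1)

    # Pad symmetrically with zeros up to the target shape
    pad_h = max(0, target_shape[0] - resized.shape[0])
    pad_w = max(0, target_shape[1] - resized.shape[1])
    return np.pad(resized,
                  ((pad_h // 2, pad_h - pad_h // 2),
                   (pad_w // 2, pad_w - pad_w // 2)),
                  mode="constant")
```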
- the images being processed by the image processing unit (212) are specific to the type of diagnostic clinical procedure being performed by the trainee.
- the images being processed by the image processing unit (212) may be those of a fetal analysis, along with uterine parts of the body, when the diagnostic training is being provided in the field of obstetrics, or those of muscular and tissue analysis of a hand when the training is aimed at evaluating the health of the muscular tissues of a limb.
- images being processed by the image processing unit (212) may be of the anatomies of the fetus while performing the ultrasound clinical procedure.
- the system (200) comprises the headset (214) which is worn by the trainee to facilitate the reality-virtuality continuum platform to enable self-directed simulation training.
- the reality-virtuality continuum platform may be enabled by the integration of the real world with the virtual world to generate the MR training environment.
- the integration may be enabled through a combination of one or more sensors (204) embedded in the headset (214).
- the integration may be enabled by the image processing unit (212) which may present virtual anatomical structures to the trainee (302).
- the integration may be enabled by the pseudo probe (216) which may be embedded with one or more sensors (204) to track the movement, orientation, and applied force, while interacting with the pseudo phantom (218).
- the integration may be enabled by the pseudo probe (216) comprising sensors to detect applied force, pressure distribution, and depth of penetration.
- the integration may be enabled by the pseudo phantom (218), which may be an active sensing system equipped with one or more sensors (204) to detect external rigid/soft body interaction, movements and deformations within the pseudo phantom (218).
- the headset (214) may be a virtual reality (VR) headset which immerses the trainee (302) in the MR training environment.
- the headset (214) may be a mixed reality (MR) headset which combines elements of both VR and augmented reality (AR) technologies.
- the I/O interface (206) may be configured to receive the inputs from the control unit (202) or the image processing unit (212) or the AI/ML model (220) and may be further configured to output the received inputs to the headset (214) worn by the trainee (302).
- the headset (214) may overlay one or more virtual anatomical structures creating the MR training environment, allowing the trainee (302) to interact with both the virtual and physical objects simultaneously, as discussed in subsequent paragraphs.
- the system comprises the pseudo probe (216) which may be used in the ultrasound simulation training.
- These pseudo probes (216) may simulate the functionality and appearance of real ultrasound probes within the virtual or augmented reality environment through the one or more sensors (204) embedded within the pseudo probe (216). These one or more sensors (204) may detect the movement, orientation and applied force of the pseudo probe (216) while the trainee is performing the clinical procedure on the pseudo phantom (218).
- the pseudo probe (216) may allow the trainees to practice ultrasound scanning techniques, such as probe manipulation and image acquisition, using a computer interface or specialized simulation software.
- the one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors embedded in the pseudo probe (216) may detect the real-time data including the movement, orientation and applied force of the pseudo probe (216). Further, this real-time data may be transmitted to the system (200) via the I/O interface (206) and compared against the pre-defined dataset, allowing the system (200) to assess the trainee’s performance in real time.
- the pseudo probe (216) may be a 2D probe which produces two-dimensional images of the scanned area.
- the pseudo probe (216) may be a 3D probe which may capture multiple two- dimensional (2D) images from different angles and combine them to create a three-dimensional representation of the scanned area.
- the 3D ultrasound probe may be particularly useful in obstetrics as well as in other areas such as musculoskeletal and vascular imaging.
- the pseudo probe (216) may further provide haptic feedback to the trainee to mimic the tactile sensations of using an ultrasound probe on the anatomy, thus providing a realistic training experience with sensations of resistance and texture variations.
- the system comprises the pseudo phantom (218) which simulates anatomical structures, tissue properties, and imaging characteristics encountered in real-world clinical scenarios.
- the pseudo phantoms (218) may be used to provide the trainees with realistic imaging scenarios for practicing ultrasound techniques, image interpretation, and procedural skills in the virtual environment.
- the pseudo phantoms (218) may be digitally simulated within the virtual or augmented reality environment to allow the trainee (302) to access a wide range of anatomical models without the need for multiple physical phantoms.
- these pseudo phantoms (218) offer flexibility, accessibility, and cost-effectiveness compared to physical phantoms, making them valuable tools in the ultrasound education and training.
- the pseudo phantom (218) used in the present disclosure may be constituted of different shapes and sizes depending on the medical field for which it is being used.
- the pseudo phantom (218) may constitute a round, curved structure representing a pregnant belly if the training is being conducted for fetal analysis, or it may represent a hand for examining muscular tissues, etc.
- the pseudo phantom (218) may be inflated or deflated to represent the different case scenarios encountered while examining the actual patients.
- the pseudo phantom (218) may be inflated to different levels to replicate the growth of the fetus in different trimesters of pregnancy, or the pseudo phantom (218) may be inflated/deflated to represent an obese/weak person, or it may be inflated to replicate inflammatory conditions due to an associated disease in the muscular tissues of the patient.
- the system comprises the AI/ML model (220).
- the AI/ML model (220) works in conjunction with the memory (208) to enable a virtual tutor (not shown in Figure) to render one or more corrective feedbacks to the trainee in the MR environment.
- the AI/ML model (220) may be trained on a massive human motion dataset.
- a computational model may then estimate the pseudo probe's (216) pose (position and orientation) at every instance for guidance and corrective feedback.
- This pose information may then be adapted into kinematic movements of the virtual tutor using the AI/ML model (220), as sketched below.
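- The following is a simplified, hypothetical sketch of these two steps: smoothing the raw sensor readings into a pose estimate, and retargeting that pose to the virtual tutor's hand (the actual joint solution would be left to an inverse-kinematics solver in the rendering engine, which is not shown and not specified by the disclosure):

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class ProbePose:
    position: np.ndarray      # (x, y, z) in metres, phantom frame (assumed)
    orientation: np.ndarray   # unit quaternion (w, x, y, z) (assumed)


def smooth_pose(prev: ProbePose, raw: ProbePose, alpha: float = 0.3) -> ProbePose:
    """Exponentially smooth noisy sensor readings into a stable pose estimate."""
    pos = (1 - alpha) * prev.position + alpha * raw.position
    # Normalised linear interpolation is used here as a simple stand-in for slerp
    q = (1 - alpha) * prev.orientation + alpha * raw.orientation
    return ProbePose(pos, q / np.linalg.norm(q))


def retarget_to_tutor(pose: ProbePose, tutor_shoulder: np.ndarray) -> dict:
    """Map the estimated probe pose to a target hand pose for the virtual
    tutor (402); an IK solver (not shown) would turn this into joint angles."""
    return {
        "hand_position": pose.position,
        "hand_orientation": pose.orientation,
        "reach_vector": pose.position - tutor_shoulder,
    }
```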
- FIG. 3 depicts an exemplary environment (300) illustrating a physical world scenario for ultrasound training, in accordance with an embodiment of the present disclosure.
- Figure 3 depicts the physical/real world in which the trainee (302) is performing the clinical procedure using the MR training environment generated using the system (200).
- the headset (214) may display the MR environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to the trainee (302) wearing the headset (214). By overlaying one or more simulated components onto the real-world objects, the system (200) may create an interactive and immersive training scenario for the training purposes.
- the one or more real-world objects comprise the pseudo probe (216) configured to emulate the tactile characteristics of a real-world probe.
- the pseudo probe (216) may replicate the tactile characteristics of an actual ultrasound probe, allowing the trainee (302) to develop proper hand-eye coordination and probe manipulation techniques.
- the one or more real-world objects may comprise the pseudo phantom (218) which may be configured to simulate anatomical structures encountered during diagnostics ensuring that the trainee (302) may practice scanning techniques in a realistic manner.
- the one or more simulated components may comprise a probe and an ultrasound machine. The subsequent paragraphs explain the MR environment where the components from both the physical world as well as the virtual world may be integrated together to facilitate training, for performing the clinical procedure by the trainee (302).
- FIG. 4 depicts an exemplary environment (400) illustrating the virtual tutor (402) interacting in real time with the trainee (302) for ultrasound training, in accordance with an embodiment of the present disclosure.
- the trainee (302) in the MR environment (400) may see a virtual patient (404) instead of just the pseudo phantom (218).
- the virtual patient (404) may be over-laid/superimposed on the pseudo phantom (218) such that the trainee (302) is physically/actually scanning the pseudo phantom (218), but virtually the trainee (302) may be scanning the virtual patient (404).
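- The disclosure does not specify how the virtual patient (404) is aligned with the pseudo phantom (218). One common, purely illustrative approach is landmark-based rigid registration (Kabsch / orthogonal Procrustes), sketched below under the assumption that corresponding landmark points are available on the tracked phantom and on the virtual patient model:

```python
import numpy as np


def register_virtual_patient(phantom_points: np.ndarray,
                             model_points: np.ndarray):
    """Estimate the rigid transform (R, t) that superimposes the virtual
    patient model onto the tracked pseudo phantom, given N matching
    3D landmark points (N x 3 arrays) on each.
    """
    p_c = phantom_points.mean(axis=0)
    m_c = model_points.mean(axis=0)

    # Cross-covariance of the centred point sets
    H = (model_points - m_c).T @ (phantom_points - p_c)

    # Orthogonal Procrustes solution, with a reflection correction
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = p_c - R @ m_c
    return R, t   # a model point x maps to R @ x + t in the phantom frame
```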
- the MR environment (400) provides the trainee (302) an immersive and real experience of performing the clinical procedure on real patients.
- the clinical procedure may be performed by the trainee (302) using the pseudo probe (216) and the results of performed clinical procedure may be displayed in a spatial environment for example but not limited to the virtual screen (406).
- the virtual screen (406) may not be restrictive to only showing the results of the performed clinical procedure, but the whole clinical procedure carried out by the trainee (302) may also be viewed on the virtual screen (406).
- multiple virtual screens (406) may be used to display the performance of the trainee (302) along with displaying the whole clinical procedure.
- the virtual screen (406) may appear as a floating display within the MR or VR headset (214), allowing the trainee (302) to view the results in the MR training environment through the one or more sensors (204) embedded in the headset (214).
- the one or more sensors (204) embedded in the headset (214) may be but not limited to one or more infrared sensors, motion sensors, depth sensors etc.
- control unit (202) may be configured to adapt the virtual tutor (402) to provide the trainee (302) with a demonstration of how to optimally perform the clinical procedure.
- the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may enable the virtual tutor (402) to fully demonstrate the ultrasound clinical procedure for the trainee (302) to observe and learn from the demonstration.
- the virtual tutor (402) may perform the whole clinical procedure on the generated virtual patient (404), or provide a partial training based on the trainee's (302) instructions.
- the demonstration may include guidance on the use of the pseudo probes (216) and the pseudo phantoms (218) to help trainees (302) develop proficiency in handling and maneuvering the pseudo probes (216) and the pseudo phantoms (218), for performing clinical procedure in the MR training environment.
- the virtual tutor (402) may instruct the trainee (302) on but not limited to proper probe positioning, angling, and pressure application on the pseudo phantom (218) using the pseudo probe (216), to obtain optimal ultrasound results.
- the virtual tutor (402) may provide the demonstration to the trainee (302) through visual cues and audio prompts.
- the demonstration provided by the virtual tutor (402) may include one or more pre-recorded clinical procedures stored in the memory (208) that may be referred by the trainee (302) multiple times.
- the following paragraphs may now explain in detail the exemplary MR environment (400) to facilitate training, for performing the clinical procedure by the trainee (302) and further render one or more corrective feedbacks to the trainee (302) in the MR training environment.
- the control unit (202) of the system (200) may continuously receive, from the one or more sensors (204), the real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302), while the trainee (302) is performing the clinical procedure in the MR environment.
- the real-time data may include at least one of a motion tracking data, a pressure data, and an orientation data of the pseudo probe (216), while the trainee (302) is manipulating the pseudo probe (216) and interacting with the pseudo phantom (218).
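- As a purely illustrative sketch, one such real-time sample streamed from the one or more sensors (204) to the control unit (202) might be structured as follows (the field names and units are assumptions, not taken from the disclosure):

```python
from dataclasses import dataclass, field
from typing import Tuple
import time


@dataclass
class ProbeSample:
    """One real-time reading covering the motion tracking, pressure, and
    orientation data captured while the trainee manipulates the pseudo probe."""
    timestamp: float = field(default_factory=time.time)
    position_mm: Tuple[float, float, float] = (0.0, 0.0, 0.0)      # motion tracking
    velocity_mm_s: Tuple[float, float, float] = (0.0, 0.0, 0.0)    # motion tracking
    orientation_quat: Tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)
    contact_pressure_kpa: float = 0.0                              # pressure data
```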
- the motion tracking data captured from the one or more motion sensors (204) indicative of the performance of the trainee (302) may ensure precise monitoring of the probe's (216) position and movement.
- the captured real time data from the one or more motion sensors may be compared against the predefined dataset, allowing the system (200) to assess the trainee’s (302) performance in real time.
- the pressure data may capture the amount of force applied by the trainee (302), while performing the clinical procedure.
- the orientation data may track the angle and alignment of the pseudo probe (216) while the trainee (302) is interacting with the pseudo phantom (218) to ensure correct positioning of the pseudo probe (216) to develop proficiency in training.
- control unit (202) may map the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302). For example, the control unit (202) may perform mapping by comparing the real-time data collected from the trainee’s (302) actions to the pre-defined dataset that represents the optimal execution of the clinical procedure.
- the one or more sensors (204) may capture the real-time data while the trainee (302) manipulates the pseudo probe (216) and interacts with the pseudo phantom (218) and may transmit to the control unit (202) via the I/O interface (206). Further, the control unit (202) upon receiving the real-time data from the one or more sensors (204) may then retrieve the corresponding pre-defined dataset from the memory (208).
- the control unit (202) may then map the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302) by using the AI/ML model (220).
- the real-time data may be indicative of the trainee’s (302) performance and the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the training clinical procedure by the trainee (302), using the MR environment (400).
- the pre-defined dataset may include a plurality of optimal pseudo probe (216) manipulations and interactions with the pseudo phantom (218) which may be stored in the database (224) of the system (200) for continuous reference and improvement.
- control unit (202) may detect one or more deficiencies in the trainee’s (302) performance indicated by the real-time data based on the mapping of the real-time data associated with the trainee’s (302) actions with the predefined data set.
- the AI/ML model (220) may be employed in the present disclosure to analyse the real-time data, and compare the real-time data associated with the trainee’s (302) actions with the pre-defined dataset, to provide effective mapping.
- the AI/ML model (220) may be trained on a large dataset related to the clinical procedure performed by the trainee (302). A simplified sketch of the mapping and deficiency-detection steps follows.
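- One simplified realisation of the mapping and deficiency-detection steps, assuming the real-time data and the pre-defined dataset are time-stamped sequences of probe readings and that fixed tolerances (values invented here for illustration) separate acceptable from deficient technique:

```python
import numpy as np

# Hypothetical tolerances; the disclosure does not specify numeric thresholds.
POSITION_TOL_MM = 10.0
PRESSURE_TOL_KPA = 2.0
ANGLE_TOL_DEG = 15.0


def detect_deficiencies(trainee: list, reference: list) -> list:
    """Map each trainee sample onto the nearest-in-time reference sample and
    flag deviations that exceed the assumed tolerances. Each sample is a dict
    with keys 't', 'position', 'pressure', and 'tilt_deg'."""
    ref_times = np.array([r["t"] for r in reference])
    deficiencies = []
    for s in trainee:
        r = reference[int(np.argmin(np.abs(ref_times - s["t"])))]
        pos_err = np.linalg.norm(np.array(s["position"]) - np.array(r["position"]))
        pressure_err = abs(s["pressure"] - r["pressure"])
        angle_err = abs(s["tilt_deg"] - r["tilt_deg"])
        if pos_err > POSITION_TOL_MM:
            deficiencies.append({"t": s["t"], "type": "probe position",
                                 "deviation_mm": float(pos_err)})
        if pressure_err > PRESSURE_TOL_KPA:
            deficiencies.append({"t": s["t"], "type": "applied pressure",
                                 "deviation_kpa": float(pressure_err)})
        if angle_err > ANGLE_TOL_DEG:
            deficiencies.append({"t": s["t"], "type": "probe orientation",
                                 "deviation_deg": float(angle_err)})
    return deficiencies
```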
- control unit (202) may adapt the virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) in the MR environment, based on the detected one or more deficiencies in the trainee’s (302) performance.
- the one or more sensors (204) may be used to capture real-time data related to the trainee’s (302) actions and interactions, while the trainee (302) is performing the clinical procedure.
- the sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors.
- the virtual tutor (402) may receive inputs via the I/O interface (206), the control unit (202) and from the AI/ML model (220) to provide real-time corrective feedbacks to the trainee (302) within the MR environment.
- the corrective feedbacks may comprise one or more of: guiding steps by the virtual tutor (402), visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee (302).
- the guiding steps by the virtual tutor (402) may comprise step-by- step instructions, helping the trainee (302) correct errors and adopt proper techniques.
- the whole training clinical procedure may be performed by the virtual tutor (402) on the virtual patient (404).
- the virtual tutor (402) may provide corrective feedbacks such as visual cues, audio prompts, or haptic signals to guide the trainee (302).
- the visual cues may direct the trainee’s (302) attention to critical aspects of pseudo probe (216) manipulation while interacting with the pseudo phantom (218).
- audio prompts may provide verbal instruction or alerts to the trainee (302).
- the haptic signals such as vibrations in the pseudo probe (216) may be provided through a haptic device coupled to the pseudo probe (216). These haptic signals may simulate real-world tactile sensations, guiding the trainee (302) to adjust pressure, angle, or positioning appropriately while interacting with the pseudo phantom (218).
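- A hedged sketch of how a detected deficiency might be routed to these feedback channels is shown below; `tutor`, `headset`, and `probe` stand for interfaces to the virtual tutor (402), headset (214), and the haptic device coupled to the pseudo probe (216), none of whose APIs are defined by the disclosure:

```python
def render_corrective_feedback(deficiency: dict, tutor, headset, probe) -> None:
    """Dispatch one detected deficiency to a visual, audio, or haptic channel.

    Illustrative only: the method names on tutor/headset/probe are invented
    placeholders for whatever rendering and haptic interfaces are used.
    """
    kind = deficiency["type"]
    if kind == "probe position":
        # Visual cue: highlight the target scan region in the headset view
        headset.show_visual_cue(region="target_scan_area")
        tutor.speak("Move the probe towards the highlighted region.")
    elif kind == "applied pressure":
        # Haptic signal: vibrate the pseudo probe until the pressure is corrected
        probe.vibrate(intensity=min(1.0, deficiency["deviation_kpa"] / 5.0))
        tutor.speak("Ease the pressure applied on the phantom.")
    elif kind == "probe orientation":
        # Guiding step: the virtual tutor demonstrates the correct angling
        tutor.demonstrate_step("probe_angling")
        tutor.speak("Tilt the probe slightly to realign with the target plane.")
```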
- control unit (202) may generate an assessment report of the trainee’s (302) overall performance comprising the detected one or more deficiencies and a performance score.
- the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may compare the real time data associated with the trainee (302) when the trainee (302) has completed the clinical procedure and may then generate the assessment report by comparing it with the predefined data set.
- the AI/ML model (220) may detect one or more deficiencies in the trainee’s (302) actions during comparison and generate the assessment report.
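- A minimal sketch of assembling such an assessment report is given below; the scoring rule (fraction of time-stamps without a flagged deficiency, scaled to 100) is an assumption, since the disclosure only states that the report contains the detected deficiencies and a performance score:

```python
from collections import Counter


def build_assessment_report(deficiencies: list, total_samples: int) -> dict:
    """Summarise a completed session into an assessment report."""
    by_type = Counter(d["type"] for d in deficiencies)
    flagged = len({d["t"] for d in deficiencies})   # time-stamps with any issue
    score = 100.0 * max(0, total_samples - flagged) / max(1, total_samples)
    return {
        "performance_score": round(score, 1),
        "deficiency_counts": dict(by_type),
        "deficiencies": deficiencies,
    }
```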
- the AI/ML model (220) using the control unit (202) and upon receiving an instruction from the trainee (302), may adapt the virtual tutor (402) to demonstrate the correct way of carrying out the clinical procedure thereby enhancing the skill of the trainee (302).
- control unit (202) may display, on the virtual screen (406), a simulated anatomical structure corresponding to the manipulation of the pseudo probe (216) and the interaction with the pseudo phantom (218) by the trainee (302). For example, if the trainee (302) positions the pseudo probe (216) over an area representing the abdomen of the pseudo phantom (218) and applies a specific pressure, the control unit (202) may recognize the placement on the pseudo phantom (218) and may display a simulated liver, kidney, or any other relevant organ corresponding to the placement. Further, if the trainee (302) adjusts the probe's (216) angle on the pseudo phantom (218), the virtual screen (406) may update to reflect the new view.
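- The example above can be expressed as a simple lookup from probe placement to the simulated view shown on the virtual screen (406); the regions, organs, and thresholds below are illustrative values only, not taken from the disclosure:

```python
# Hypothetical mapping from phantom region to displayed organ
ANATOMY_BY_REGION = {
    "upper_right_abdomen": "liver",
    "flank": "kidney",
    "lower_abdomen": "bladder",
}


def anatomy_to_display(region: str, pressure_kpa: float, tilt_deg: float) -> str:
    """Pick the simulated anatomical view for the current probe placement."""
    if pressure_kpa < 0.5:
        return "no_contact"                      # probe not coupled to the phantom
    organ = ANATOMY_BY_REGION.get(region, "unspecified_tissue")
    plane = "transverse" if abs(tilt_deg) < 30 else "sagittal"
    return f"{organ}_{plane}_view"
```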
- FIG. 5 is a flowchart (500) that may correspond to a method (500) of facilitating training, for performing the ultrasound clinical procedure, using the MR training environment. Further, for ease of explanation, in the embodiments described below, the method (500) may be implemented by the system (200), as described with reference to Figs. 1-4. The method (500) illustrated in the flowchart may start from step (502).
- the method (500) may comprise the headset (214) displaying the MR environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to the trainee (302) wearing the headset (214).
- the method (500) may create an interactive and immersive training scenario for the training purposes.
- the one or more real-world objects comprise the pseudo probe (216) emulating the tactile characteristics of a real-world probe.
- the pseudo probe (216) may replicate the tactile characteristics of an actual ultrasound probe, allowing the trainee (302) to develop proper hand-eye coordination and probe manipulation techniques.
- the one or more real-world objects may comprise the pseudo phantom (218) which may simulate anatomical structures encountered during diagnostics ensuring that the trainee (302) may practice scanning techniques in a realistic manner as explained above in Figure 3, and the same has not been repeated for the sake of brevity.
- the one or more simulated components may comprise the probe and the ultrasound machine.
- the method (500) may further comprise adapting the virtual tutor (402) to provide the trainee (302) with a demonstration of how to optimally perform the clinical procedure.
- the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may enable the virtual tutor (402) to fully demonstrate the ultrasound clinical procedure for the trainee (302) to observe and learn from the demonstration.
- the virtual tutor (402) may perform the whole clinical procedure on the generated virtual patient (404), or provide partial training based on the trainee's (302) instructions.
- the demonstration may include guidance on the use of the pseudo probes (216) and the pseudo phantoms (218) to help trainees (302) develop proficiency in handling and maneuvering the pseudo probes (216) and the pseudo phantoms (218) for performing clinical procedure in the MR training environment.
- the virtual tutor (402) may instruct the trainee (302) on but not limited to proper probe positioning, angling, and pressure application on the pseudo phantom (218) using the pseudo probe (216) to obtain optimal ultrasound results.
- the virtual tutor (402) may provide the demonstration to the trainee (302) through visual cues and audio prompts.
- the demonstration provided by the virtual tutor (402) may include one or more pre-recorded clinical procedures stored in the memory (208) that may be referred by the trainee (302) multiple times.
- the method (500) may comprise continuously receiving, from one or more sensors (204), real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302), while the trainee (302) is performing the clinical procedure in the MR environment (400).
- These manipulations may include one or more optimal techniques for performing the intended clinical procedure, such as any optimal movement, orientation and applied force required by the trainee (302) while the trainee (302) is using the pseudo probe (216) for interacting with the pseudo phantom (218) to perform the clinical procedure.
- the one or more sensors (204) may capture the real-time data related to the trainee’s actions and interactions, while the trainee (302) is performing the clinical procedure.
- the one or more sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors. These one or more sensors (204) may continuously receive the real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom by the trainee (302).
- the one or more sensors (204) may be embedded within the pseudo probe (216) itself to detect the movement, orientation and applied force of the pseudo probe (216) while the trainee (302) is performing the clinical procedure on the pseudo phantom (218).
- the one or more sensors (204) may be positioned on the pseudo probe (216) to register for example but not limited to applied force, pressure distribution, and depth of penetration on the pseudo phantom (218).
- the pseudo phantom (218) may also be an active sensing system equipped with one or more sensors (204) to detect any external rigid/soft body interaction, movements and deformations with the pseudo phantom (218).
- the one or more sensors (204) may also be embedded in the headset (214) worn by the trainee (302) to track the trainee’s (302) movements.
- the real-time data may include at least one of the motion tracking data, the pressure data, and the orientation data of the pseudo probe (216), while the trainee (302) is manipulating the pseudo probe (216) and interacting with the pseudo phantom (218).
- the motion tracking data may ensure precise monitoring of the probe's (216) position and movement, allowing the system (200) to assess the trainee’s (302) ultrasound scanning technique.
- the pressure data may capture the amount of force applied by the trainee (302), while performing the clinical procedure.
- the orientation data may track the angle and alignment of the pseudo probe (216) while the trainee (302) is interacting with the pseudo phantom (218) to ensure correct positioning of the pseudo probe (216) to develop proficiency in training.
- the method (500) may recite mapping the real-time data onto the predefined dataset corresponding to the clinical procedure performed by the trainee (302).
- the real-time data may be indicative of the trainee’s (302) performance and the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the training clinical procedure by the trainee (302), using the mixed reality environment.
- the pre-defined dataset may include a plurality of optimal pseudo probe (216) manipulations and interactions with the pseudo phantom (218) which may be stored in the database (224) of the system (200) for continuous reference and improvement.
- the method (500) may perform mapping by comparing the real-time data collected from the trainee’s (302) actions to the pre-defined dataset that represents the optimal execution of the clinical procedure.
- the one or more sensors (204) may capture the real-time data while the trainee (302) manipulates the pseudo probe (216) and interacts with the pseudo phantom (218) and may transmit it to the control unit (202) via the I/O interface (206).
- the method (500) upon receiving the real-time data from the one or more sensors (204) may then retrieve the corresponding pre-defined dataset from the memory (208).
- the method (500) may then map the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302) by using the AI/ML model (220).
- the method (500), at step (508), may comprise detecting one or more deficiencies in the trainee’s (302) performance indicated by the real-time data based on the mapping of the real-time data associated with the trainee’s (302) actions with the pre-defined data set.
- the AI/ML model (220) may be employed in the present disclosure to analyse the real-time data, and to compare the real-time data associated with the trainee’s (302) actions with the pre-defined dataset, to provide effective mapping.
- the AI/ML model (220) may be trained on a large dataset related to the clinical procedure performed by the trainee (302).
- the method (500), at step (510) may comprise adapting the virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) in the MR environment, based on the detected one or more deficiencies in the trainee’s (302) performance, as explained above in Figure 4.
- the one or more sensors (204) may be used to capture real-time data related to the trainee’s actions and interactions, while the trainee (302) is performing the clinical procedure.
- the sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors.
- the virtual tutor (402) may receive inputs via the I/O interface (206), the control unit (202) and from the AI/ML model to provide real-time corrective feedbacks to the trainee (302) within the MR environment.
- the corrective feedbacks may comprise one or more of: guiding steps by the virtual tutor, visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee (302).
- the guiding steps by the virtual tutor (402) may comprise step-by- step instructions, helping the trainee (302) correct errors and adopt proper techniques.
- the whole training clinical procedure may be performed by the virtual tutor (402) on the virtual patient (404).
- the virtual tutor (402) may provide corrective feedbacks such as visual cues, audio prompts, or haptic signals to guide the trainee (302).
- the visual cues may direct the trainee’s (302) attention to critical aspects of pseudo probe (216) manipulation while interacting with the pseudo phantom (218).
- audio prompts may provide verbal instruction or alerts to the trainee (302).
- the haptic signals such as vibrations in the pseudo probe (216), may be provided through the haptic device coupled to the pseudo probe (216). These haptic signals may simulate real-world tactile sensations, guiding the trainee (302) to adjust pressure, angle, or positioning appropriately while interacting with the pseudo phantom (218).
- the method (500) further comprises generating the assessment report of the trainee’s (302) overall performance comprising the detected one or more deficiencies and a performance score.
- the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may compare the real time data associated with the trainee (302) when the trainee (302) has completed the clinical procedure and may then generate the assessment report by comparing it with the predefined data set.
- the AI/ML model (220) may detect one or more deficiencies in the trainee’s actions during comparison and generate the assessment report.
- the AI/ML model (220) using the control unit (202) and upon receiving an instruction from the trainee (302), may adapt the virtual tutor (402) to demonstrate the correct way of carrying out the clinical procedure thereby enhancing the skill of the trainee (302).
- the method (500) further comprises displaying, on the virtual screen (406), a simulated anatomical structure corresponding to the manipulation of the pseudo probe (216) and the interaction with the pseudo phantom (218) by the trainee (302), as illustrated in Figure 4.
- the virtual screen (406) may be rendered using edge computing, whereby the images may be displayed in the spatial environment in addition to being displayed on the screen (406).
- the virtual screen (406) may be dragged by the trainee (302) to any location in the environment (400) which is comfortable for the trainee (302) to look at while performing the clinical procedure.
- the virtual screen (406) may show the results of the performed clinical procedure and also the whole clinical procedure carried out by the trainee (302). For example, multiple virtual screens (406) may be used to display the performance of the trainee (302) along with displaying the whole clinical procedure.
- where it is described that a component or feature “can,” “may,” “could,” “should,” “would,” “preferably,” “possibly,” “typically,” “optionally,” “for example,” “often,” or “might” (or other such language) be included or have a characteristic, that component or feature is not required to be included or to have the characteristic. Such component or feature may be optionally included in some embodiments, or it may be excluded.
- certain ones of the operations herein may be modified or further amplified as described below. Moreover, in some embodiments additional optional operations may also be included. It should be appreciated that each of the modifications, optional additions or amplifications described herein may be included with the operations herein either alone or in combination with any others among the features described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Educational Technology (AREA)
- General Engineering & Computer Science (AREA)
- Educational Administration (AREA)
- Business, Economics & Management (AREA)
- Pathology (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Surgery (AREA)
- Medicinal Chemistry (AREA)
- Animal Behavior & Ethology (AREA)
- Heart & Thoracic Surgery (AREA)
- Radiology & Medical Imaging (AREA)
- Veterinary Medicine (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Chemical & Material Sciences (AREA)
- Molecular Biology (AREA)
- Algebra (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Mathematical Physics (AREA)
- Pure & Applied Mathematics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biophysics (AREA)
- Human Computer Interaction (AREA)
- Instructional Devices (AREA)
Abstract
Present disclosure provides a method and system of facilitating training, for performing a clinical procedure, using a mixed reality (MR) environment. The method (500) comprises a headset (214) displaying the MR environment corresponding to the clinical procedure, by overlaying one or more simulated components onto one or more real-world objects to a trainee (302) wearing the headset (214). The method (500) further comprises continuously receiving, from one or more sensors, real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302). The method (500) then comprises mapping the real-time data onto a pre-defined dataset corresponding to the clinical procedure, detecting one or more deficiencies in the trainee's performance indicated by the real-time data based on the mapping, and adapting a virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) based on the detected one or more deficiencies in the trainee's performance.
Description
“SYSTEM AND METHOD FOR PROVIDING REALITY- VIRTUALITY CONTINUUM PLATFORM FOR EFFICIENT SIMULATION TRAINING”
TECHNICAL FIELD
[0001] The present disclosure relates to the field of mixed reality environments. More particularly, the present disclosure relates to providing a system and method for facilitating a reality-virtuality continuum platform to enable self-directed simulation training.
BACKGROUND OF THE INVENTION
[0002] The following description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] In various fields, effective training is essential for ensuring accuracy, efficiency, and safety while performing any complex clinical procedure. Traditional training methods, such as hands-on practice, classroom instruction, and simulation-based learning, often have limitations in terms of accessibility, cost, and real-time feedback. With advancements in mixed reality (MR) technology, immersive training solutions have emerged, allowing trainees to engage in interactive and realistic simulations that replicate real-world scenarios. MR-based training integrates elements of virtual reality (VR) and augmented reality (AR) to create an environment where the trainees can practice procedures, interact with virtual objects, and receive real-time guidance.
[0004] One such training scenario may be ultrasound training. Ultrasound is a widely used imaging tool in modern medicine due to its non-invasive nature, making it a preferred choice for evaluating a variety of medical conditions. Ultrasound offers versatility in diagnosing various medical conditions and guiding procedures, thus enabling timely treatment of the diagnosed condition. Despite its numerous advantages, performing an ultrasound requires significant expertise and training. The accuracy of the ultrasound procedure largely depends on the operator's skill in handling the ultrasound probe, interpreting images, and distinguishing normal anatomy from abnormalities. Therefore, healthcare professionals undergo specialized training to develop a certain proficiency in ultrasound techniques before applying them to patients.
[0005] Ultrasound training may involve hardware-based simulations which incorporate the use of physical phantoms and simulators that mimic the tactile and mechanical properties of human tissue. These may also include robotic systems. Further, software-based simulation utilizes computer-generated graphics and algorithms to simulate ultrasound images and scenarios by integrating VR and AR technologies. Hybrid simulations then combine elements of both hardware- and software-based approaches to create a more comprehensive training experience.
[0006] However, existing ultrasound training techniques typically rely on human instructors to provide guidance, feedback, and evaluation during training. Also, none of the conventional techniques offer real-time assistance during training while the trainee is actively performing ultrasound procedures in a mixed reality training environment.
[0007] Therefore, there is a need for techniques to reduce the workload of the human instructors during training, and to ensure that the trainees receive consistent and accurate guidance throughout their training.
[0008] The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY OF THE INVENTION
[0009] The following presents a simplified summary to provide a basic understanding of some aspects for facilitating a reality-virtuality continuum platform to enable self-directed simulation training. In an embodiment, the present disclosure provides a system and method for facilitating training for performing a clinical procedure using the mixed reality environment. This summary is not an extensive overview and is intended to neither identify key or critical elements nor delineate the scope of such elements. Its purpose is to present some concepts of the described features in a simplified form as a prelude to the more detailed description that is presented later.
[0010] In an exemplary aspect of the disclosure, a system to facilitate training, for performing the clinical procedure, using the mixed reality environment is disclosed. The system may comprise a headset configured to display the mixed reality environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to a trainee wearing the headset. The one or more real-world objects comprise a pseudo probe configured to emulate the tactile characteristics of a real-world probe. The system may comprise a pseudo phantom configured to simulate anatomical structures encountered during diagnostics. Further, the system may comprise a control unit configured to continuously receive, from one or more sensors, real-time data indicative of manipulation of the pseudo probe and interaction with the pseudo phantom by the trainee. The control unit may then be configured to map the real-time data onto a pre-defined dataset corresponding to the clinical procedure. For example, the real-time data is indicative of the trainee’s performance. In another example, the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe and interactions with the pseudo phantom to optimally perform the clinical procedure. In an embodiment, the system may comprise a memory to store the pre-defined dataset. Further, the control unit may be configured to detect one or more deficiencies in the trainee’s performance indicated by the real-time data based on the mapping. The control unit may then be configured to adapt a virtual tutor to render one or more corrective feedbacks to the trainee in the mixed reality environment, based on the detected one or more deficiencies in the trainee’s performance.
[0011] In a non-limiting embodiment of the present disclosure, the control unit is further configured to adapt the virtual tutor to provide the trainee with a demonstration of how to optimally perform the clinical procedure.
[0012] In another non-limiting embodiment of the present disclosure, the real-time data includes at least one of a motion tracking data, a pressure data, and an orientation data of the pseudo probe, while the trainee is manipulating the pseudo probe and interacting with the pseudo phantom.
[0013] In yet another non-limiting embodiment of the present disclosure, the corrective feedbacks comprise one or more of: guiding steps by the virtual tutor, visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee.
[0014] In yet another non-limiting embodiment of the present disclosure, the control unit may be further configured to provide haptic signals to the trainee through a haptic device coupled to the pseudo probe.
[0015] In yet another non-limiting embodiment of the present disclosure, the control unit may be further configured to generate an assessment report of the trainee’s overall performance comprising the detected one or more deficiencies and a performance score.
[0016] In yet another non-limiting embodiment of the present disclosure, the control unit may be further configured to display, on a virtual screen, a simulated anatomical structure corresponding to the manipulation of the pseudo probe and the interaction with the pseudo phantom by the trainee.
[0017] In another exemplary aspect of the present disclosure, a method of facilitating training, for performing the clinical procedure, using the mixed reality environment is disclosed. The method comprises a headset displaying the mixed reality environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to a trainee wearing the headset. The one or more real-world objects comprise a pseudo probe emulating the tactile characteristics of a real-world probe, and a pseudo phantom simulating anatomical structures encountered during diagnostics. The method of facilitating training further comprises continuously receiving, from one or more sensors, real-time data indicative of manipulation of the pseudo probe and interaction with the pseudo phantom by the trainee. The method then comprises mapping the real-time data onto a pre-defined dataset corresponding to the clinical procedure. For example, the real-time data is indicative of the trainee’s performance. In another example, the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe and interactions with the pseudo phantom to optimally perform the clinical procedure, and the pre-defined dataset is stored in a memory. The method further comprises detecting one or more deficiencies in the trainee’s performance indicated by the real-time data based on the mapping, and adapting a virtual tutor to render one or more corrective feedbacks to the trainee in the mixed reality environment, based on the detected one or more deficiencies in the trainee’s performance.
[0018] It is to be understood that the aspects and embodiments of the disclosure described above may be used in any combination with each other. Several of the aspects and embodiments may be combined to form a further embodiment of the disclosure.
[0019] The above summary is provided merely for the purpose of summarizing some example embodiments to provide a basic understanding of some aspects of the disclosure. Accordingly, it will be appreciated that the above-described embodiments are merely examples and should not be construed to narrow the scope or spirit of the disclosure in any way. It will be appreciated that the scope of the disclosure encompasses many potential embodiments in addition to those here summarized, some of which will be further described below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, explain the disclosed principles.
[0021] FIG. 1 depicts an environment illustrating an existing ultrasound simulation technique, as per existing prior art.
[0022] FIG. 2 depicts a system to facilitate training, for performing a clinical procedure using the mixed reality environment, in accordance with an embodiment of the present disclosure.
[0023] FIG. 3 depicts an exemplary environment illustrating a physical world scenario for ultrasound training, in accordance with an embodiment of the present disclosure.
[0024] FIG. 4 depicts an exemplary environment illustrating a virtual instructor interacting in real time with a trainee for ultrasound training, in accordance with an embodiment of the present disclosure.
[0025] FIG. 5 is a flowchart of a method of facilitating training, for performing a clinical procedure, using a mixed reality environment, in accordance with an embodiment of the disclosure.
DETAILED DESCRIPTION
[0026] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following
detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims. Additional illustrative embodiments are listed below.
[0027] An exemplary aspect of the disclosure may provide method(s) and system(s) to facilitate a reality-virtuality continuum platform to enable self-directed simulation training. In an embodiment, the present disclosure provides a method and system to facilitate training of a trainee, using the mixed reality environment, to perform a clinical procedure. For example, the clinical procedure may be, but is not limited to, an ultrasound clinical procedure, an orthopedic surgery clinical procedure, a dental surgery clinical procedure, etc. In an embodiment, the subsequent paragraphs disclose the mixed reality environment to facilitate training for the trainee to perform the ultrasound clinical procedure.
[0028] FIG. 1 depicts an environment (100) illustrating an existing ultrasound simulation technique, as per existing prior art. The environment (100) imparts a static three-dimensional (3D) reconstruction of pre-recorded ultrasound images to enable simulation training to the healthcare professionals. In the environment (100), a physical phantom (102) is used to represent a physical object or simulations to test and calibrate ultrasound equipment while imparting training to the professionals. Further, an ultrasound probe (104) is used by the professionals to train on the physical phantom (102), and a display unit (106) is used to display the ultrasound images. A processing unit (108) is used in conjunction with the physical phantom (102), the ultrasound probe (104), and the display unit (106).
[0029] In this prior art technique, one or more real ultrasound images are acquired from the patients using the ultrasound machines and the acquired ultrasound images are processed and reconstructed into static 3D models. These static 3D reconstructions of the ultrasound images are then integrated into the physical phantom (102), either by embedding them directly into the material or by overlaying them onto the surface. These physical phantoms (102) are physical models which are designed to mimic human tissue properties and are often made from materials like silicone or gelatine and may contain embedded structures or features to enhance realism. However, these physical phantoms (102) may often degrade over time, requiring frequent replacements, further adding to the cost and maintenance efforts. Hence, fabricating physical phantoms (102) to mimic human tissue properties is an expensive and time-consuming process, limiting their accessibility and scalability.
[0030] Also, the existing technique does not offer real-time assistance while the trainee is actively performing ultrasound procedures using the mixed reality (MR) training environment. Therefore, there is a need for technologies which not only effectively overcome the challenges associated with the existing methodologies but also provide an enhanced training experience to the trainee using the MR training environment. One such technique is proposed in the present disclosure and is discussed in the forthcoming paragraphs in conjunction with FIGS. 2-5 of the present disclosure.
[0031] FIG. 2 depicts a system (200) to facilitate training for performing the clinical procedure using the mixed reality (MR) training environment, in accordance with an embodiment of the present disclosure. For example, the MR training environment may be generated using the one or more elements of the system (200). In an embodiment, the system (200) may comprise various elements such as a control unit (202), one or more sensors (204), an input/output (I/O) interface (206), a memory (208), a network interface (210), an image processing unit (212), a headset (214), a pseudo probe (216), a pseudo phantom (218), an Artificial Intelligence/Machine Learning (AI/ML) model (220), a communication network (222), and a database (224), but not limited thereto.
[0032] In yet another embodiment, the control unit (202) may be designed to operate in environments requiring low-power consumption. The control unit (202) may comprise specialized units such as integrated system (bus) controllers, memory management control units, digital signal processing units, etc. The control unit (202) may be equipped with high-speed multi-core processing capabilities to enable the system (200) to facilitate training, for performing the clinical procedure, using the MR training environment. In an embodiment, the one or more sensors (204) may capture real-time data related to the trainee’s actions and interactions, while the trainee (not shown in figure) is performing the clinical procedure. For example, the one or more sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors. These one or more sensors (204) may continuously capture the real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee. The one or more sensors (204) may be embedded within the pseudo probe (216) itself to detect the movement, orientation and applied force of the pseudo probe (216) while the trainee is performing the clinical procedure on the pseudo phantom (218). Additionally, the one or more sensors (204) may be positioned on the pseudo probe (216) to register for example but not limited to applied force, pressure
distribution, and depth of penetration on the pseudo phantom (218). In an embodiment, the pseudo phantom (218) may also be an active sensing system equipped with one or more sensors (204) to detect any external rigid/soft body interaction, movements and deformations with the pseudo phantom (218). In yet another embodiment, the one or more sensors (204) may also be embedded in the headset (214) worn by the trainee to track the trainee’s movements.
[0033] In yet another embodiment, the I/O interface (206) may include suitable logic, circuitry, and interfaces that may be configured to provide an output (such as, first and second output variations) for display on a virtual screen (not shown in figure) of the system (200). Further, the memory (208) may include suitable logic, circuitry, and interfaces that may be configured to store one or more instructions to be executed by the control unit (202). In certain examples, the memory (208) may represent any type of non-transitory computer readable medium such as random-access memory (RAM), read only memory (ROM), magnetic disk or tape, optical disk, flash memory, or holographic memory. In an embodiment, the memory (208) may include a combination of random-access memory and read only memory and may include data/instructions related to processing of one or more components of the system (200).
[0034] In an exemplary embodiment, the memory (208) may include a pre-defined dataset comprising a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the clinical procedure by the trainee. These manipulations may include one or more optimal techniques for performing the intended clinical procedure, such as any optimal movement, orientation and applied force required by the trainee while the trainee is using the pseudo probe (216) for interacting with the pseudo phantom (218) to perform the clinical procedure. These pre-defined datasets stored in the memory (208) may be compared against the real-time data indicative of the trainee’s performance, allowing the system (200) to assess the trainee’s performance in real time. In an embodiment, the pre-defined dataset may be the 3D reconstructed ultrasound images. These pre-defined datasets may be registered into the pseudo phantom (218) based on its anatomical structure and position. After registering the 3D reconstructed ultrasound images on the pseudo phantom (218), any plane or slice of the 3D reconstructed ultrasound images may be displayed on the virtual screen based on the current position and orientation of the pseudo probe (216) while in contact with the pseudo phantom (218).
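By way of a non-limiting illustration only, the following Python sketch shows one possible way a 2D plane could be resampled from a 3D reconstructed ultrasound volume given the current position and orientation of the pseudo probe (216). The volume layout, the function name extract_slice, and all parameters are assumptions introduced solely for this example and do not represent the disclosed implementation.

```python
# A minimal sketch: resampling a 2D plane from a 3D reconstructed ultrasound
# volume using the pseudo probe's position and orientation. All names and the
# volume layout are illustrative assumptions, not the patented implementation.
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, origin, normal, up, size=128, spacing=1.0):
    """Sample a size x size plane from `volume` (a 3D array) centred at
    `origin`, oriented by the plane `normal` and the in-plane `up` vector."""
    normal = normal / np.linalg.norm(normal)
    up = up - np.dot(up, normal) * normal          # make `up` orthogonal to normal
    up = up / np.linalg.norm(up)
    right = np.cross(normal, up)                   # second in-plane axis

    half = (size - 1) / 2.0
    u, v = np.meshgrid(np.arange(size) - half, np.arange(size) - half)
    # World coordinates of every pixel on the plane.
    pts = (origin[:, None, None]
           + spacing * (u * right[:, None, None] + v * up[:, None, None]))
    # Interpolate the volume at those coordinates (order=1 -> trilinear).
    return map_coordinates(volume, pts, order=1, mode="nearest")

# Example: a synthetic 64^3 volume and a probe pose facing straight down.
vol = np.random.rand(64, 64, 64).astype(np.float32)
img = extract_slice(vol,
                    origin=np.array([32.0, 32.0, 32.0]),
                    normal=np.array([0.0, 0.0, 1.0]),
                    up=np.array([0.0, 1.0, 0.0]),
                    size=64)
print(img.shape)  # (64, 64)
```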
[0035] In an exemplary embodiment, the network interface (210) may include suitable logic, circuitry, and interfaces that may be configured to facilitate communication between the system (200) and the database (224), via the communication network (222). The communication network (222) may be one of a wired connection or a wireless connection. Examples of the communication network (222) may include, but are not limited to, the Internet, a cloud network, Cellular or Wireless Mobile Network (such as Long-Term Evolution and 5G New Radio), a Wireless Fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), or a Metropolitan Area Network (MAN). Various components of the system (200) and the trainee may be configured to connect to the communication network (222) in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, at least one of a Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hyper-Text Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, light fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless access point (AP), device to device communication, cellular communication protocols, and Bluetooth (BT) communication protocols.
[0036] The network interface (210) may be implemented by using various known technologies to support wired or wireless communication of the system (200) with the database (224), via the communication network (222). The network interface (210) may include, but is not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, or a local buffer circuitry. The network interface (210) may be configured to communicate via wireless communication with networks, such as the Internet, an Intranet, or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and a metropolitan area network (MAN). The wireless communication may be configured to use one or more of a plurality of communication standards, protocols and technologies, such as Global System for Mobile (GSM) Communications, Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), Long Term Evolution (LTE), 5G NR, code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (such as Institute of Electrical and Electronics Engineers (IEEE) 802.11a, IEEE 802.11b, IEEE 802.11g or IEEE 802.11n), voice over Internet Protocol (VoIP), light fidelity (Li-Fi),
Worldwide Interoperability for Microwave Access (Wi-MAX), a protocol for email, instant messaging, and a Short Message Service (SMS).
[0037] In yet another embodiment, the system (200) comprises the image processing unit (212). The image processing unit (212) may process the images which may be stored in the memory (208). For example, the image processing unit (212) may include functions such as, but not limited to, filtering of the image to remove noise, enhancing image visibility, etc. In an embodiment of the present disclosure, the images may be one or more ultrasound images which may be reconstructed into one or more 3D images. These ultrasound images may then be pre-processed by the image processing unit (212) using adaptive filters and algorithms to remove noise, enhance image quality (super resolution), scale, resize, and also pad the ultrasound images.
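By way of a non-limiting illustration only, the sketch below outlines the kind of pre-processing pipeline described above (denoising, contrast enhancement, resizing and padding of a single ultrasound frame). The specific filters, parameters and the function name preprocess_frame are assumptions made for this example.

```python
# A minimal sketch of a pre-processing pipeline for a 2D ultrasound frame:
# denoising, contrast stretching, resizing and padding. Filter choices and
# parameters are illustrative assumptions only.
import numpy as np
from scipy.ndimage import median_filter, zoom

def preprocess_frame(frame, target=(256, 256)):
    frame = frame.astype(np.float32)
    frame = median_filter(frame, size=3)                      # speckle/noise reduction
    lo, hi = np.percentile(frame, (1, 99))
    frame = np.clip((frame - lo) / max(hi - lo, 1e-6), 0, 1)  # contrast stretch
    # Scale to fit inside the target size while preserving aspect ratio.
    scale = min(target[0] / frame.shape[0], target[1] / frame.shape[1])
    frame = zoom(frame, scale, order=1)
    # Pad symmetrically up to the target size.
    pad_y = target[0] - frame.shape[0]
    pad_x = target[1] - frame.shape[1]
    return np.pad(frame, ((pad_y // 2, pad_y - pad_y // 2),
                          (pad_x // 2, pad_x - pad_x // 2)))

print(preprocess_frame(np.random.rand(480, 640)).shape)  # (256, 256)
```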
[0038] The images being processed by the image processing unit (212) are specific to the type of diagnostic clinical procedure being performed by the trainee. In a non-limiting exemplary scenario, the images being processed by the image processing unit (212) may be those of a fetal analysis along with uterine parts of the body when the diagnostic training is being provided in the field of obstetrics, and they may be those of the muscular and tissue analysis of a hand when the training is for evaluating the health of the muscular tissues of a limb. In yet another non-limiting exemplary scenario, the images being processed by the image processing unit (212) may be of the anatomies of the fetus while performing the ultrasound clinical procedure.
[0039] In yet another embodiment, the system (200) comprises the headset (214) which is worn by the trainee to facilitate the reality-virtuality continuum platform to enable self-directed simulation training. For example, the reality-virtuality continuum platform may be enabled by the integration of the real world with the virtual world to generate the MR training environment. In an embodiment, the integration may be enabled through a combination of one or more sensors (204) embedded in the headset (214). In another example, the integration may be enabled by the image processing unit (212) which may present virtual anatomical structures to the trainee (302). In yet another example, the integration may be enabled by the pseudo probe (216) which may be embedded with one or more sensors (204) to track the movement, orientation, and applied force, while interacting with the pseudo phantom (218). In yet another example, the integration may be enabled by the pseudo probe (216) comprising sensors to detect applied force, pressure distribution, and depth of penetration. In yet another
embodiment, the integration may be enabled by the pseudo phantom (218) which may be the active sensing system equipped with one or more sensors (204) to detect the external rigid/soft body interaction, movements and deformations within the pseudo phantom (218). In one nonlimiting embodiment, the headset (214) may be a virtual reality (VR) headset which immerses the trainee (302) in the MR training environment. In another non-limiting embodiment, the headset (214) may be a mixed reality (MR) headset which combines elements of both VR and augmented reality (AR) technologies. In an embodiment, the I/O interface (206) may be configured to receive the inputs from the control unit (202) or the image processing unit (212) or the AI/ML model (220) and may be further configured to output the received inputs to the headset (214) worn by the trainee (302). The headset (214) may overlay one or more virtual anatomical structures creating the MR training environment, allowing the trainee (302) to interact with both the virtual and physical objects simultaneously, as discussed in subsequent paragraphs.
[0040] In yet another embodiment, the system comprises the pseudo probe (216) which may be used in the ultrasound simulation training. These pseudo probes (216) may simulate the functionality and appearance of real ultrasound probes within the virtual or augmented reality environment through the one or more sensors (204) embedded within the pseudo probe (216). These one or more sensors (204) may detect the movement, orientation and applied force of the pseudo probe (216) while the trainee is performing the clinical procedure on the pseudo phantom (218). The pseudo probe (216) may allow the trainees to practice ultrasound scanning techniques, such as probe manipulation and image acquisition, using a computer interface or specialized simulation software. For example, the one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors embedded in the pseudo probe (216) may detect the real-time data including the movement, orientation and applied force of the pseudo probe (216). Further, this real-time data may be transmitted to the system (200) via the I/O interface (206) and compared against the pre-defined dataset, allowing the system (200) to assess the trainee’s performance in real time. In one non-limiting embodiment, the pseudo probe (216) may be a 2D probe which produces two-dimensional images of the scanned area. In another non-limiting embodiment, the pseudo probe (216) may be a 3D probe which may capture multiple two-dimensional (2D) images from different angles and combine them to create a three-dimensional representation of the scanned area. The 3D ultrasound probe may be particularly useful in obstetrics as well as in other areas such as musculoskeletal and vascular imaging. In yet another non-limiting embodiment, the pseudo probe (216) may further provide haptic feedback to the
trainee to mimic the tactile sensations of using an ultrasound probe on the anatomy, thus providing a realistic training experience with sensations of resistance and texture variations.
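By way of a non-limiting illustration only, the following sketch shows one possible per-frame data sample a sensor-equipped pseudo probe (216) might stream to the system (200). The field names, units and serialisation format are assumptions for illustration; the disclosure does not specify a packet format.

```python
# A minimal sketch of the per-frame sample a sensor-equipped pseudo probe might
# stream to the control unit. Field names and units are illustrative assumptions.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class ProbeSample:
    timestamp: float          # seconds since the session started
    position: tuple           # (x, y, z) in millimetres, phantom frame
    orientation: tuple        # unit quaternion (w, x, y, z)
    applied_force: float      # newtons, from the pressure sensor
    contact_depth: float      # millimetres of indentation into the phantom

def encode(sample: ProbeSample) -> bytes:
    """Serialise one sample for transmission over the I/O interface."""
    return json.dumps(asdict(sample)).encode("utf-8")

sample = ProbeSample(time.time(), (12.0, 40.5, 3.2), (1.0, 0.0, 0.0, 0.0), 2.4, 1.1)
print(encode(sample))
```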
[0041] In yet another embodiment, the system comprises the pseudo phantom (218) which simulates anatomical structures, tissue properties, and imaging characteristics encountered in real-world clinical scenarios. The pseudo phantoms (218) may be used to provide the trainees with realistic imaging scenarios for practicing ultrasound techniques, image interpretation, and procedural skills in the virtual environment. The pseudo phantoms (218) may be digitally simulated within the virtual or augmented reality environment to allow the trainee (302) to access a wide range of anatomical models without the need for multiple physical phantoms. Hence, these pseudo phantoms (218) offer flexibility, accessibility, and cost-effectiveness compared to physical phantoms, making them valuable tools in ultrasound education and training. In one non-limiting embodiment, the pseudo phantom (218) used in the present disclosure may be of different shapes and sizes depending on the medical field for which it is being used. In a non-limiting exemplary scenario, the pseudo phantom (218) may have a round, circular structure representing the belly of a pregnant woman if the training is being conducted for fetal analysis, or it may represent a hand for examining the muscular tissues, etc. In another non-limiting embodiment, the pseudo phantom (218) may be inflated or deflated to represent the different case scenarios encountered while examining actual patients. For example, the pseudo phantom (218) may be inflated at different levels to replicate the growth of the fetus in different trimesters of pregnancy, or the pseudo phantom (218) may be inflated/deflated to represent an obese/weak person, or it may be inflated to simulate inflammatory conditions due to an associated disease in the muscular tissues of the patients.
[0042] In yet another embodiment, the system comprises the AI/ML model (220). The AI/ML model (220) works in conjunction with the memory (208) to enable a virtual tutor (not shown in the figure) to render one or more corrective feedbacks to the trainee in the MR environment. For example, the AI/ML model (220) may be trained on a massive human motion dataset. Further, based on the pre-defined dataset, a computational model may then estimate the pseudo probe's (216) pose (position and orientation) at every instance for guidance and corrective feedback. This pose information may then be adapted into kinematic movements of the virtual tutor using the AI/ML model (220), as sketched below. Moving on, a detailed explanation of the physical world scenario to facilitate training in the MR environment is provided in the forthcoming paragraphs.
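By way of a non-limiting illustration only, the sketch below turns an estimated target pose of the pseudo probe (216) into a short interpolated motion trajectory that a virtual tutor's hand could play back. Simple position and rotation interpolation is used purely as an illustrative stand-in for the AI/ML model (220) described above.

```python
# A minimal sketch: converting a target probe pose into a short keyframe
# trajectory for a virtual tutor's hand. The interpolation approach is an
# illustrative assumption, not the trained AI/ML model of the disclosure.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def tutor_trajectory(current_pos, current_quat, target_pos, target_quat, steps=30):
    """Return `steps` interpolated (position, quaternion) keyframes."""
    times = np.linspace(0.0, 1.0, steps)
    rots = Rotation.from_quat([current_quat, target_quat])   # (x, y, z, w) order
    slerp = Slerp([0.0, 1.0], rots)
    positions = [(1 - t) * np.asarray(current_pos) + t * np.asarray(target_pos)
                 for t in times]
    quats = slerp(times).as_quat()
    return list(zip(positions, quats))

frames = tutor_trajectory([0, 0, 0], [0, 0, 0, 1], [50, 20, 10], [0, 0.383, 0, 0.924])
print(len(frames), frames[-1][0])
```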
[0043] FIG. 3 depicts an exemplary environment (300) illustrating a physical world scenario for ultrasound training, in accordance with an embodiment of the present disclosure. Figure 3 depicts the physical/real world in which the trainee (302) is performing the clinical procedure using the MR training environment generated using the system (200). In an embodiment of the present disclosure, the headset (214) may display the MR environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to the trainee (302) wearing the headset (214). By overlaying one or more simulated components onto the real-world objects, the system (200) may create an interactive and immersive training scenario for the training purposes. For example, the one or more real-world objects comprise the pseudo probe (216) configured to emulate the tactile characteristics of a real-world probe. The pseudo probe (216) may replicate the tactile characteristics of an actual ultrasound probe, allowing the trainee (302) to develop proper hand-eye coordination and probe manipulation techniques. Further, the one or more real-world objects may comprise the pseudo phantom (218) which may be configured to simulate anatomical structures encountered during diagnostics, ensuring that the trainee (302) may practice scanning techniques in a realistic manner. In another embodiment, the one or more simulated components may comprise a probe and an ultrasound machine. The subsequent paragraphs explain the MR environment where the components from both the physical world as well as the virtual world may be integrated together to facilitate training, for performing the clinical procedure by the trainee (302).
[0044] FIG. 4 depicts an exemplary environment (400) illustrating the virtual tutor (402) interacting in real time with the trainee (302) for ultrasound training, in accordance with an embodiment of the present disclosure. In an embodiment, the trainee (302) in the MR environment (400) may see a virtual patient (404) instead of just the pseudo phantom (218). The virtual patient (404) may be over-laid/superimposed on the pseudo phantom (218) such that the trainee (302) is physically/actually scanning the pseudo phantom (218), but virtually the trainee (302) may be scanning the virtual patient (404). The MR environment (400) provides the trainee (302) an immersive and real experience of performing the clinical procedure on real patients. The clinical procedure may be performed by the trainee (302) using the pseudo probe (216) and the results of the performed clinical procedure may be displayed in a spatial environment, for example, but not limited to, the virtual screen (406). In an embodiment, the virtual screen (406) may not be restricted to only showing the results of the performed clinical procedure, but the whole clinical procedure carried out by the trainee (302) may also
be viewed on the virtual screen (406). For example, multiple virtual screens (406) may be used to display the performance of the trainee (302) along with displaying the whole clinical procedure. In an embodiment, the virtual screen (406) may appear as a floating display within the MR or VR headset (214), allowing the trainee (302) to view the results in the MR training environment through the one or more sensors (204) embedded in the headset (214). The one or more sensors (204) embedded in the headset (214) may be but not limited to one or more infrared sensors, motion sensors, depth sensors etc.
[0045] In an embodiment, to facilitate training for performing the clinical procedure, the control unit (202) may be configured to adapt the virtual tutor (402) to provide the trainee (302) with a demonstration of how to optimally perform the clinical procedure. For example, the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may enable the virtual tutor (402) to fully demonstrate the ultrasound clinical procedure for the trainee (302) to observe and learn from the demonstration. In an embodiment, the virtual tutor (402) may perform the whole clinical procedure on the generated virtual patient (404), or provide partial training based on the trainee's (302) instructions. Additionally, the demonstration may include guidance on the use of the pseudo probes (216) and the pseudo phantoms (218) to help trainees (302) develop proficiency in handling and maneuvering the pseudo probes (216) and the pseudo phantoms (218), for performing the clinical procedure in the MR training environment. For example, the virtual tutor (402) may instruct the trainee (302) on, but not limited to, proper probe positioning, angling, and pressure application on the pseudo phantom (218) using the pseudo probe (216), to obtain optimal ultrasound results. In an embodiment, the virtual tutor (402) may provide the demonstration to the trainee (302) through visual cues and audio prompts. In yet another embodiment, the demonstration provided by the virtual tutor (402) may include one or more pre-recorded clinical procedures stored in the memory (208) that may be referred to by the trainee (302) multiple times. The following paragraphs explain in detail the exemplary MR environment (400) to facilitate training, for performing the clinical procedure by the trainee (302), and to further render one or more corrective feedbacks to the trainee (302) in the MR training environment.
[0046] In an embodiment, to facilitate training, for performing the clinical procedure, the control unit (202) of the system (200) may continuously receive, from the one or more sensors (204), the real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302), while the trainee (302) is performing the
clinical procedure in the MR environment. The real-time data may include at least one of a motion tracking data, a pressure data, and an orientation data of the pseudo probe (216), while the trainee (302) is manipulating the pseudo probe (216) and interacting with the pseudo phantom (218). In an embodiment, the motion tracking data captured from the one or more motion sensors (204) indicative of the performance of the trainee (302) may ensure precise monitoring of the probe's (216) position and movement. For example, the captured real time data from the one or more motion sensors may be compared against the predefined dataset, allowing the system (200) to assess the trainee’s (302) performance in real time. In an embodiment, the pressure data may capture the amount of force applied by the trainee (302), while performing the clinical procedure. In an embodiment, the orientation data may track the angle and alignment of the pseudo probe (216) while the trainee (302) is interacting with the pseudo phantom (218) to ensure correct positioning of the pseudo probe (216) to develop proficiency in training.
[0047] Further, the control unit (202) may map the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302). For example, the control unit (202) may perform the mapping by comparing the real-time data collected from the trainee’s (302) actions to the pre-defined dataset that represents the optimal execution of the clinical procedure. In an embodiment, the one or more sensors (204) may capture the real-time data while the trainee (302) manipulates the pseudo probe (216) and interacts with the pseudo phantom (218) and may transmit it to the control unit (202) via the I/O interface (206). Further, the control unit (202), upon receiving the real-time data from the one or more sensors (204), may then retrieve the corresponding pre-defined dataset from the memory (208). The control unit (202) may then map the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302) by using the AI/ML model (220). In an embodiment, the real-time data may be indicative of the trainee’s (302) performance and the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the training clinical procedure by the trainee (302), using the MR environment (400). In an embodiment, the pre-defined dataset may include a plurality of optimal pseudo probe (216) manipulations and interactions with the pseudo phantom (218) which may be stored in the database (224) of the system (200) for continuous reference and improvement.
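By way of a non-limiting illustration only, the following sketch shows one simple way the mapping step could be realised: resampling the trainee's recorded probe trajectory and a reference trajectory from the pre-defined dataset onto a common length and computing per-sample deviations. The resampling approach and the function names are assumptions made for this example.

```python
# A minimal sketch of "mapping" trainee data onto a reference trajectory by
# resampling both to a common length and measuring per-sample deviations.
import numpy as np

def resample(traj, n):
    """Linearly resample an (m, d) trajectory to n samples."""
    traj = np.asarray(traj, dtype=float)
    src = np.linspace(0, 1, len(traj))
    dst = np.linspace(0, 1, n)
    return np.stack([np.interp(dst, src, traj[:, k]) for k in range(traj.shape[1])],
                    axis=1)

def map_to_reference(trainee_traj, reference_traj, n=100):
    """Return per-sample Euclidean deviation between trainee and reference."""
    a = resample(trainee_traj, n)
    b = resample(reference_traj, n)
    return np.linalg.norm(a - b, axis=1)

deviation = map_to_reference(np.random.rand(80, 3) * 50, np.random.rand(120, 3) * 50)
print(deviation.mean())
```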
[0048] In an embodiment of the present disclosure, the control unit (202) may detect one or more deficiencies in the trainee’s (302) performance indicated by the real-time data, based on the mapping of the real-time data associated with the trainee’s (302) actions with the pre-defined dataset. For example, the AI/ML model (220) may be employed in the present disclosure to analyse the real-time data and compare the real-time data associated with the trainee’s (302) actions with the pre-defined dataset, to provide effective mapping. The AI/ML model (220) may be trained on a large dataset related to the clinical procedure performed by the trainee (302).
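By way of a non-limiting illustration only, the sketch below shows a rule-based stand-in for the deficiency detection step, applying fixed thresholds to aggregate error metrics derived from the mapping. The thresholds and deficiency labels are invented for illustration; the disclosure leaves this analysis to the trained AI/ML model (220).

```python
# A minimal sketch of rule-based deficiency detection on top of the mapped data.
# Thresholds and deficiency labels are illustrative assumptions only.
def detect_deficiencies(mean_position_error_mm, mean_force_n, mean_angle_error_deg):
    deficiencies = []
    if mean_position_error_mm > 10.0:
        deficiencies.append("probe placement deviates from the optimal scan path")
    if mean_force_n > 8.0:
        deficiencies.append("excessive pressure applied to the pseudo phantom")
    elif mean_force_n < 1.0:
        deficiencies.append("insufficient contact pressure for a usable image")
    if mean_angle_error_deg > 15.0:
        deficiencies.append("probe angulation outside the recommended range")
    return deficiencies

print(detect_deficiencies(12.3, 0.6, 5.0))
```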
[0049] In an embodiment, the control unit (202) may adapt the virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) in the MR environment, based on the detected one or more deficiencies in the trainee’s (302) performance. The one or more sensors (204) may be used to capture real-time data related to the trainee’s (302) actions and interactions, while the trainee (302) is performing the clinical procedure. For example, the sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors.
[0050] Further, the virtual tutor (402) may receive inputs via the I/O interface (206), the control unit (202) and from the AI/ML model (220) to provide real-time corrective feedbacks to the trainee (302) within the MR environment. For example, the corrective feedbacks may comprise one or more of: guiding steps by the virtual tutor (402), visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee (302).
[0051] In an embodiment, the guiding steps by the virtual tutor (402) may comprise step-by-step instructions, helping the trainee (302) correct errors and adopt proper techniques. For example, the whole training clinical procedure may be performed by the virtual tutor (402) on the virtual patient (404). In another example, if the trainee (302) applies incorrect pressure or angle, the virtual tutor (402) may provide corrective feedbacks such as visual cues, audio prompts, or haptic signals to guide the trainee (302).
[0052] In an embodiment, the visual cues may direct the trainee’s (302) attention to critical aspects of pseudo probe (216) manipulation while interacting with the pseudo phantom (218).
In another embodiment, audio prompts may provide verbal instruction or alerts to the trainee (302).
[0053] In another embodiment, the haptic signals, such as vibrations in the pseudo probe (216), may be provided through a haptic device coupled to the pseudo probe (216). These haptic signals may simulate real-world tactile sensations, guiding the trainee (302) to adjust pressure, angle, or positioning appropriately while interacting with the pseudo phantom (218).
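By way of a non-limiting illustration only, the following sketch routes detected deficiencies to the three feedback channels discussed above (visual cues, audio prompts, and haptic signals). The channel objects and their methods are hypothetical placeholders introduced so the sketch runs stand-alone.

```python
# A minimal sketch of routing detected deficiencies to visual, audio and haptic
# feedback channels. All channel methods are hypothetical placeholders.
class FeedbackRouter:
    def __init__(self, visual, audio, haptic):
        self.visual, self.audio, self.haptic = visual, audio, haptic

    def render(self, deficiency: str):
        if "pressure" in deficiency:
            self.haptic.vibrate(pattern="double_pulse")       # nudge via the probe
            self.audio.say("Ease off the pressure slightly.")
        elif "angulation" in deficiency or "placement" in deficiency:
            self.visual.show_arrow(target="optimal_probe_pose")
            self.audio.say("Tilt the probe towards the highlighted marker.")
        else:
            self.audio.say(deficiency)                        # generic verbal prompt

# Stub channels so the sketch runs stand-alone.
class _Stub:
    def __getattr__(self, name):
        return lambda *a, **k: print(f"[{name}] {a} {k}")

router = FeedbackRouter(_Stub(), _Stub(), _Stub())
router.render("excessive pressure applied to the pseudo phantom")
```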
[0054] In yet another embodiment, the control unit (202) may generate an assessment report of the trainee’s (302) overall performance comprising the detected one or more deficiencies and a performance score. For example, the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may compare the real-time data associated with the trainee (302), once the trainee (302) has completed the clinical procedure, with the pre-defined dataset, and may then generate the assessment report. For example, the AI/ML model (220) may detect one or more deficiencies in the trainee’s (302) actions during the comparison and generate the assessment report. Further, upon generating the assessment report for the trainee (302), the AI/ML model (220), using the control unit (202) and upon receiving an instruction from the trainee (302), may adapt the virtual tutor (402) to demonstrate the correct way of carrying out the clinical procedure, thereby enhancing the skill of the trainee (302).
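By way of a non-limiting illustration only, the sketch below assembles an assessment report with a single weighted performance score. The metric weights and the 0-100 scale are assumptions made for this example and are not prescribed by the disclosure.

```python
# A minimal sketch of assembling an assessment report with a weighted score.
# Weights, tolerances and the 0-100 scale are illustrative assumptions.
def assessment_report(trainee_id, deficiencies, position_err, force_err, angle_err):
    # Normalise each error to [0, 1] against an assumed tolerable maximum.
    penalties = [min(position_err / 20.0, 1.0) * 0.4,
                 min(force_err / 5.0, 1.0) * 0.3,
                 min(angle_err / 30.0, 1.0) * 0.3]
    score = round(100 * (1.0 - sum(penalties)), 1)
    return {"trainee": trainee_id,
            "deficiencies": deficiencies,
            "performance_score": score}

print(assessment_report("trainee-302",
                        ["probe placement deviates from the optimal scan path"],
                        position_err=12.3, force_err=1.2, angle_err=4.0))
```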
[0055] In yet another embodiment, the control unit (202) may display, on the virtual screen (406), a simulated anatomical structure corresponding to the manipulation of the pseudo probe (216) and the interaction with the pseudo phantom (218) by the trainee (302). For example, if the trainee (302) positions the pseudo probe (216) over an area representing the abdomen of the pseudo phantom (218) and applies a specific pressure, the control unit (202) may recognize the placement on the pseudo phantom (218) and may display a simulated liver, kidney, or any other relevant organ corresponding to the placement. Further, if the trainee (302) adjusts the probe's (216) angle on the pseudo phantom (218), the virtual screen (406) may update to reflect the new view. In one non-limiting embodiment, the virtual screen (406) may be obtained by the process of edge computing where the images may be displayed in the spatial environment, in addition to being displayed on the screen (406). In another non-limiting embodiment, the virtual screen (406) may be dragged by the trainee (302) to any location in the environment (400) which is comfortable for the trainee (302) to look at while performing the clinical procedure.
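By way of a non-limiting illustration only, the following sketch resolves the probe's position on the pseudo phantom (218) to the simulated organ shown on the virtual screen (406) using a simple region lookup. The region boxes and organ names are illustrative assumptions, not a registered anatomical atlas.

```python
# A minimal sketch of resolving the probe's surface position on the pseudo
# phantom to the simulated anatomy shown on the virtual screen. Region boxes
# and organ names are illustrative assumptions only.
REGIONS = {
    # name: ((x_min, x_max), (y_min, y_max)) in the phantom's surface coordinates (mm)
    "liver":  ((0, 120), (0, 80)),
    "kidney": ((0, 120), (80, 160)),
    "uterus": ((120, 260), (40, 160)),
}

def organ_under_probe(x_mm, y_mm):
    for organ, ((x0, x1), (y0, y1)) in REGIONS.items():
        if x0 <= x_mm <= x1 and y0 <= y_mm <= y1:
            return organ
    return None

print(organ_under_probe(60, 30))    # -> 'liver'
print(organ_under_probe(200, 100))  # -> 'uterus'
```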
[0056] FIG. 5 is a flowchart of a method (500) of facilitating training, for performing the clinical procedure, using the MR training environment, in accordance with an embodiment of the disclosure. FIG. 5 is described in conjunction with FIGS. 1-4. With reference to FIG. 5, there is shown a flowchart (500) that may correspond to a method (500) of facilitating training, for performing the ultrasound clinical procedure, using the MR training environment. Further, for ease of explanation, in the embodiments described below, the method (500) may be implemented by the system (200), as described with reference to FIGS. 1-4. The method (500) illustrated in the flowchart may start from step (502).
[0057] At step (502), the method (500) may comprise the headset (214) displaying the MR environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to the trainee (302) wearing the headset (214). By overlaying one or more simulated components onto the real-world objects, the method (500) may create an interactive and immersive training scenario for the training purposes. For example, the one or more real-world objects comprise the pseudo probe (216) emulating the tactile characteristics of a real-world probe. The pseudo probe (216) may replicate the tactile characteristics of an actual ultrasound probe, allowing the trainee (302) to develop proper hand-eye coordination and probe manipulation techniques. Further, the one or more real-world objects may comprise the pseudo phantom (218) which may simulate anatomical structures encountered during diagnostics, ensuring that the trainee (302) may practice scanning techniques in a realistic manner as explained above in Figure 3, and the same has not been repeated for the sake of brevity. In another embodiment, the one or more simulated components may comprise a probe and an ultrasound machine.
[0058] In an embodiment, to facilitate training for performing the clinical procedure, the method (500) may further comprise adapting the virtual tutor (402) to provide the trainee (302) with a demonstration of how to optimally perform the clinical procedure. For example, the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may enable the virtual tutor (402) to fully demonstrate the ultrasound clinical procedure for the trainee (302) to observe and learn from the demonstration. In an embodiment, the virtual tutor (402) may perform the whole clinical procedure on the generated virtual patient (404), or provide partial training based on the trainee's (302) instructions. Additionally, the demonstration may include guidance on the use of the pseudo probes (216) and the pseudo phantoms (218) to help trainees
(302) develop proficiency in handling and maneuvering the pseudo probes (216) and the pseudo phantoms (218) for performing clinical procedure in the MR training environment. For example, the virtual tutor (402) may instruct the trainee (302) on but not limited to proper probe positioning, angling, and pressure application on the pseudo phantom (218) using the pseudo probe (216) to obtain optimal ultrasound results. In an embodiment, the virtual tutor (402) may provide the demonstration to the trainee (302) through visual cues and audio prompts. In yet another embodiment, the demonstration provided by the virtual tutor (402) may include one or more pre-recorded clinical procedures stored in the memory (208) that may be referred by the trainee (302) multiple times.
[0059] Further, at step (504), the method (500) may comprise continuously receiving, from one or more sensors (204), real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302), while the trainee (302) is performing the clinical procedure in the MR environment (400). These manipulations may include one or more optimal techniques for performing the intended clinical procedure, such as any optimal movement, orientation and applied force required by the trainee (302) while the trainee (302) is using the pseudo probe (216) for interacting with the pseudo phantom (218) to perform the clinical procedure. In an embodiment, the one or more sensors (204) may capture the real-time data related to the trainee’s actions and interactions, while the trainee (302) is performing the clinical procedure. For example, the one or more sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors. These one or more sensors (204) may continuously receive the real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom by the trainee (302). The one or more sensors (204) may be embedded within the pseudo probe (216) itself to detect the movement, orientation and applied force of the pseudo probe (216) while the trainee (302) is performing the clinical procedure on the pseudo phantom (218). Additionally, the one or more sensors (204) may be positioned on the pseudo probe (216) to register for example but not limited to applied force, pressure distribution, and depth of penetration on the pseudo phantom (218). In an embodiment, the pseudo phantom (218) may also be the active sensing system equipped with one or more sensors (204) to detect any external rigid/soft body interaction, movements and deformations with the pseudo phantom (218). In yet another embodiment, the one or more sensors (204) may also be embedded in the headset (214) worn by the trainee (302) to track the trainee’s (302) movements.
[0060] Further, the real-time data may include at least one of the motion tracking data, the pressure data, and the orientation data of the pseudo probe (216), while the trainee (302) is manipulating the pseudo probe (216) and interacting with the pseudo phantom (218). In an embodiment, the motion tracking data may ensure precise monitoring of the probe's (216) position and movement, allowing the system (200) to assess the trainee’s (302) ultrasound scanning technique. In an embodiment, the pressure data may capture the amount of force applied by the trainee (302), while performing the clinical procedure. In an embodiment, the orientation data may track the angle and alignment of the pseudo probe (216) while the trainee (302) is interacting with the pseudo phantom (218) to ensure correct positioning of the pseudo probe (216) to develop proficiency in training.
[0061] At step (506), the method (500) may comprise mapping the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302). In an embodiment, the real-time data may be indicative of the trainee’s (302) performance and the pre-defined dataset may indicate a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the training clinical procedure by the trainee (302), using the mixed reality environment. In an embodiment, the pre-defined dataset may include a plurality of optimal pseudo probe (216) manipulations and interactions with the pseudo phantom (218) which may be stored in the database (224) of the system (200) for continuous reference and improvement. For example, in an embodiment of the present disclosure, the method (500) may perform the mapping by comparing the real-time data collected from the trainee’s (302) actions to the pre-defined dataset that represents the optimal execution of the clinical procedure. In an embodiment, the one or more sensors (204) may capture the real-time data while the trainee (302) manipulates the pseudo probe (216) and interacts with the pseudo phantom (218) and may transmit it to the control unit (202) via the I/O interface (206). Further, the method (500), upon receiving the real-time data from the one or more sensors (204), may then retrieve the corresponding pre-defined dataset from the memory (208). The method (500) may then map the real-time data onto the pre-defined dataset corresponding to the clinical procedure performed by the trainee (302) by using the AI/ML model (220).
[0062] The method (500), at step (508), may comprise detecting one or more deficiencies in the trainee’s (302) performance indicated by the real-time data, based on the mapping of the real-time data associated with the trainee’s (302) actions with the pre-defined dataset. For example, the AI/ML model (220) may be employed in the present disclosure to analyse the real-time data and compare the real-time data associated with the trainee’s (302) actions with the pre-defined dataset, to provide effective mapping. The AI/ML model (220) may be trained on a large dataset related to the clinical procedure performed by the trainee (302).
[0063] The method (500), at step (510) may comprise adapting the virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) in the MR environment, based on the detected one or more deficiencies in the trainee’s (302) performance, as explained above in Figure 4. The one or more sensors (204) may be used to capture real-time data related to the trainee’s actions and interactions, while the trainee (302) is performing the clinical procedure. For example, the sensors (204) may be but not limited to one or more motion sensors, pressure sensors, orientation sensors, and haptic sensors.
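By way of a non-limiting illustration only, the sketch below ties steps (502) to (510) together as one control loop. Every component is a hypothetical stand-in for the corresponding element of the system (200); only the control flow mirrors the method (500).

```python
# A minimal sketch of the overall training loop corresponding to steps 502-510.
# All components are hypothetical stand-ins so the sketch runs stand-alone.
import random

class StubSensors:
    def read(self):
        return {"position_error_mm": random.uniform(0, 20)}

class StubTutor:
    def render_corrective_feedback(self, deficiencies):
        print("virtual tutor:", "; ".join(deficiencies))

def run_training_session(sensors, tutor, frames=5, tolerance_mm=10.0):
    deficiencies_seen = []
    print("step 502: headset displays the MR environment")
    for _ in range(frames):                               # step 504: stream samples
        sample = sensors.read()
        deviation = sample["position_error_mm"]           # step 506: map to reference
        if deviation > tolerance_mm:                      # step 508: detect deficiency
            d = ["probe placement deviates from the optimal scan path"]
            deficiencies_seen += d
            tutor.render_corrective_feedback(d)           # step 510: corrective feedback
    return {"deficiencies": deficiencies_seen}

print(run_training_session(StubSensors(), StubTutor()))
```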
[0064] Further, the virtual tutor (402) may receive inputs via the I/O interface (206), the control unit (202) and from the AI/ML model to provide real-time corrective feedbacks to the trainee (302) within the MR environment. For example, the corrective feedbacks may comprise one or more of: guiding steps by the virtual tutor, visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee (302).
[0065] In an embodiment, the guiding steps by the virtual tutor (402) may comprise step-by-step instructions, helping the trainee (302) correct errors and adopt proper techniques. For example, the whole training clinical procedure may be performed by the virtual tutor (402) on the virtual patient (404). In another example, if the trainee (302) applies incorrect pressure or angle, the virtual tutor (402) may provide corrective feedbacks such as visual cues, audio prompts, or haptic signals to guide the trainee (302).
[0066] In an embodiment, the visual cues may direct the trainee’s (302) attention to critical aspects of pseudo probe (216) manipulation while interacting with the pseudo phantom (218). In another embodiment, audio prompts may provide verbal instruction or alerts to the trainee (302).
[0067] In another embodiment, the haptic signals, such as vibrations in the pseudo probe (216), may be provided through the haptic device coupled to the pseudo probe (216). These haptic signals may simulate real-world tactile sensations, guiding the trainee (302) to adjust pressure, angle, or positioning appropriately while interacting with the pseudo phantom (218).
[0068] In yet another embodiment, the method (500) further comprises generating the assessment report of the trainee’s (302) overall performance comprising the detected one or more deficiencies and a performance score. For example, the AI/ML model (220) in conjunction with the memory (208) and the control unit (202) may compare the real-time data associated with the trainee (302), once the trainee (302) has completed the clinical procedure, with the pre-defined dataset, and may then generate the assessment report. For example, the AI/ML model (220) may detect one or more deficiencies in the trainee’s (302) actions during the comparison and generate the assessment report. Further, upon generating the assessment report for the trainee (302), the AI/ML model (220), using the control unit (202) and upon receiving an instruction from the trainee (302), may adapt the virtual tutor (402) to demonstrate the correct way of carrying out the clinical procedure, thereby enhancing the skill of the trainee (302).
[0069] In yet another embodiment, the method (500) further comprises displaying, on the virtual screen (406), a simulated anatomical structure corresponding to the manipulation of the pseudo probe (216) and the interaction with the pseudo phantom (218) by the trainee (302), as illustrated in Figure 4. In one non-limiting embodiment, the virtual screen (406) may be obtained by the process of edge computing where the images may be displayed in the spatial environment, in addition to being displayed on the screen (406). In another non-limiting embodiment, the virtual screen (406) may be dragged by the trainee (302) to any location in the environment (400) which is comfortable for the trainee (302) to look at while performing the clinical procedure. In another embodiment, the virtual screen (406) may show the results of the performed clinical procedure and also the whole clinical procedure carried out by the trainee (302). For example, multiple virtual screens (406) may be used to display the performance of the trainee (302) along with displaying the whole clinical procedure.
[0070] The order in which the flowchart (500) is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to
implement the flowchart (500) or alternate methods. Additionally, individual blocks may be deleted from the flowchart (500) without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0071] The illustrated steps are set out to explain the exemplary embodiments shown, and it may be anticipated that ongoing technological development will change the way particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
[0072] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As may be appreciated by one of skill in the art, the order of steps in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.
[0073] Various embodiments of the present invention are described with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure may satisfy applicable legal requirements. The term “or” is used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms “illustrative,” “example,” and “exemplary” are used to be examples with no indication of quality level. Like numbers refer to like elements throughout.
[0074] The phrases “in an embodiment,” “in one embodiment,” “according to one embodiment,” and the like generally mean that the feature, structure, or characteristic following the phrase may be included in at least one embodiment of the present disclosure and may be
included in more than one embodiment of the present disclosure (importantly, such phrases do not necessarily refer to the same embodiment).
[0075] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations.
[0076] If the specification states a component or feature “can,” “may,” “could,” “should,” “would,” “preferably,” “possibly,” “typically,” “optionally,” “for example,” “often,” or “might” (or other such language) be included or have a characteristic, that component or feature is not required to be included or to have the characteristic. Such component or feature may be optionally included in some embodiments, or it may be excluded.
[0077] In some example embodiments, certain ones of the operations herein may be modified or further amplified as described below. Moreover, in some embodiments additional optional operations may also be included. It should be appreciated that each of the modifications, optional additions or amplifications described herein may be included with the operations herein either alone or in combination with any others among the features described herein.
[0078] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain, having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Although the figures only show certain components of the apparatus and systems described herein, it is understood that various other components may be used in conjunction with the system described herein. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, the steps in the method described above may not necessarily occur in the order depicted in the accompanying diagrams, and in some cases one or more of the steps depicted may occur substantially simultaneously, or additional steps may be involved. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims
1. A system (200) to facilitate training, for performing a clinical procedure using a mixed reality environment, the system (200) comprising: a headset (214) configured to display the mixed reality environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to a trainee (302) wearing the headset (214), wherein the one or more real-world objects comprise a pseudo probe (216) configured to emulate the tactile characteristics of a real-world probe, and a pseudo phantom (218) configured to simulate anatomical structures encountered during diagnostics; and a control unit (202) configured to: continuously receive, from one or more sensors (204), real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302); map the real-time data onto a pre-defined dataset corresponding to the clinical procedure, wherein the real-time data is indicative of the trainee’s performance, wherein the pre-defined dataset indicates a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the clinical procedure, and wherein the pre-defined dataset is stored in a memory (208); detect one or more deficiencies in the trainee’s (302) performance indicated by the real-time data, based on the mapping; and adapt a virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) in the mixed reality environment, based on the detected one or more deficiencies in the trainee’s (302) performance.
2. The system (200) as claimed in claim 1, wherein the control unit (202) is further configured to adapt the virtual tutor (402) to provide a demonstration to optimally perform the clinical procedure to the trainee (302).
3. The system (200) as claimed in claim 1, wherein the real-time data includes at least one of motion tracking data, pressure data, and orientation data of the pseudo probe (216), while the trainee (302) is manipulating the pseudo probe (216) and interacting with the pseudo phantom (218).
4. The system (200) as claimed in claim 1, wherein the one or more corrective feedbacks comprise one or more of: guiding steps by the virtual tutor (402), visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee (302).
5. The system (200) as claimed in claim 4, wherein the control unit (202) is further configured to provide haptic signals to the trainee (302) through a haptic device coupled to the pseudo probe (216).
6. The system (200) as claimed in claim 1, wherein the control unit (202) is further configured to generate an assessment report of the trainee’s overall performance comprising the detected one or more deficiencies and a performance score.
7. The system (200) as claimed in claim 1, wherein the control unit (202) is further configured to display, on a virtual screen (406), a simulated anatomical structure corresponding to the manipulation of the pseudo probe (216) and the interaction with the pseudo phantom (218) by the trainee (302).
8. A method (500) of facilitating training, for performing a clinical procedure, using a mixed reality environment, the method (500) comprising: a headset (214) displaying (502) the mixed reality environment corresponding to the clinical procedure, by visualizing the overlay of one or more simulated components onto one or more real-world objects to a trainee (302) wearing the headset (214), wherein the one or more real-world objects comprise a pseudo probe (216) emulating the tactile characteristics of a real-world probe, and a pseudo phantom (218) simulating anatomical structures encountered during diagnostics, wherein facilitating training further comprises: continuously receiving (504), from one or more sensors (204), real-time data indicative of manipulation of the pseudo probe (216) and interaction with the pseudo phantom (218) by the trainee (302); mapping (506) the real-time data onto a pre-defined dataset corresponding to the clinical procedure, wherein the real-time data is indicative of the trainee’s performance, wherein the pre-defined dataset indicates a plurality of manipulations of the pseudo probe (216) and interactions with the pseudo phantom (218) to optimally perform the clinical procedure, and wherein the pre-defined dataset is stored in a memory (208); detecting (508) one or more deficiencies in the trainee’s performance indicated by the real-time data, based on the mapping; and adapting (510) a virtual tutor (402) to render one or more corrective feedbacks to the trainee (302) in the mixed reality environment, based on the detected one or more deficiencies in the trainee’s performance.
9. The method (500) as claimed in claim 8, further comprises: adapting the virtual tutor (402) to provide a demonstration to optimally perform the clinical procedure to the trainee (302).
10. The method (500) as claimed in claim 8, wherein the real-time data includes at least one of motion tracking data, pressure data, and orientation data of the pseudo probe (216), while the trainee (302) is manipulating the pseudo probe (216) and interacting with the pseudo phantom (218).
11. The method (500) as claimed in claim 8, wherein the one or more corrective feedbacks comprise one or more of: guiding steps by the virtual tutor (402), visual cues, audio prompts, and haptic signals to indicate the detected one or more deficiencies and recommend corrective actions to the trainee (302).
12. The method (500) as claimed in claim 11, wherein haptic signals are provided to the trainee (302) through a haptic device coupled to the pseudo probe (216).
13. The method (500) as claimed in claim 8, further comprises: generating an assessment report of the trainee’s overall performance comprising the detected one or more deficiencies and a performance score.
14. The method (500) as claimed in claim 8, further comprises: displaying, on a virtual screen (406), a simulated anatomical structure corresponding to the manipulation of the pseudo probe (216) and the interaction with the pseudo phantom (218) by the trainee (302).
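By way of illustration only, the following is a minimal sketch of the control-unit workflow recited in claims 1 and 8 (continuously receiving real-time data, mapping it onto a pre-defined dataset, detecting deficiencies, and adapting the virtual tutor's feedback). The Sample structure, the REFERENCE dataset layout, the numeric ranges, and the function names are assumptions made for this sketch, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Sample:
    step: str        # e.g. "probe_placement" or "sweep" (assumed step names)
    pressure: float
    angle_deg: float


# Pre-defined dataset: per step, the pressure/angle ranges assumed here to
# represent optimal performance of the clinical procedure.
REFERENCE: Dict[str, Dict[str, Tuple[float, float]]] = {
    "probe_placement": {"pressure": (1.0, 3.0), "angle_deg": (80.0, 100.0)},
    "sweep":           {"pressure": (1.5, 3.5), "angle_deg": (60.0, 120.0)},
}


def detect_deficiencies(samples: List[Sample]) -> List[str]:
    """Map real-time samples onto the reference ranges and collect deviations."""
    issues: List[str] = []
    for s in samples:
        ref = REFERENCE.get(s.step)
        if ref is None:
            continue
        lo, hi = ref["pressure"]
        if not lo <= s.pressure <= hi:
            issues.append(f"{s.step}: pressure {s.pressure:.1f} outside {lo}-{hi}")
        lo, hi = ref["angle_deg"]
        if not lo <= s.angle_deg <= hi:
            issues.append(f"{s.step}: angle {s.angle_deg:.0f} outside {lo}-{hi}")
    return issues


def adapt_virtual_tutor(deficiencies: List[str]) -> List[str]:
    """Turn detected deficiencies into corrective feedback messages the
    virtual tutor could render (visual cue, audio prompt, haptic signal)."""
    return [f"Corrective feedback: {d}" for d in deficiencies] or ["Well done."]


if __name__ == "__main__":
    stream = [Sample("probe_placement", 4.2, 95.0), Sample("sweep", 2.0, 130.0)]
    for msg in adapt_virtual_tutor(detect_deficiencies(stream)):
        print(msg)
```

A real system would, of course, stream sensor samples continuously and render the feedback through the headset; this sketch only isolates the mapping-and-feedback logic.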
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202441017793 | 2024-03-12 | | |
| IN202441017793 | 2024-03-12 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025191612A1 (en) | 2025-09-18 |
Family
ID=97062919
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2025/050362 (WO2025191612A1, pending) | System and method for providing reality-virtuality continuum platform for efficient simulation training | 2024-03-12 | 2025-03-12 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025191612A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107067856B (en) * | 2016-12-31 | 2020-03-27 | 歌尔科技有限公司 | Medical simulation training system and method |
| CN114038259A (en) * | 2021-10-20 | 2022-02-11 | 俞正义 | 5G virtual reality medical ultrasonic training system and method thereof |
Similar Documents
| Publication | Title |
|---|---|
| CN102834854B | Ultrasonic simulation training system |
| Basdogan et al. | VR-based simulators for training in minimally invasive surgery |
| US10453360B2 | Ultrasound simulation methods |
| Sutherland et al. | An augmented reality haptic training simulator for spinal needle procedures |
| US20160328998A1 | Virtual interactive system for ultrasound training |
| US20150056591A1 | Device for training users of an ultrasound imaging device |
| CN115457008B | Abdominal cavity real-time puncture virtual simulation training method and device |
| JP2012503501A | Simulation of medical image diagnosis |
| CN110174953A | Prosthetic replacement surgery simulation system and construction method based on mixed reality technology |
| Allgaier et al. | LiVRSono - virtual reality training with haptics for intraoperative ultrasound |
| KR100551201B1 | Dental training and evaluation system using haptic interface based on volume model |
| Wagner et al. | Intraocular surgery on a virtual eye |
| WO2025191612A1 | System and method for providing reality-virtuality continuum platform for efficient simulation training |
| JP2008134373A | Method and system of preparing biological data for operation simulation, operation simulation method, and operation simulator |
| Stallkamp et al. | UltraTrainer - a training system for medical ultrasound examination |
| CN116782850A | Ultrasonic simulation system |
| CN116631252A | Physical examination simulation system and method based on mixed reality technology |
| CN111768494B | Method for training reduction of joint dislocation |
| Ourahmoune et al. | A virtual environment for ultrasound examination learning |
| CN115953532A | Method and device for displaying ultrasonic image for teaching and teaching system of ultrasonic image |
| Ullrich et al. | Virtual needle simulation with haptics for regional anaesthesia |
| CN117131712B | A virtual and real emergency rescue simulation system and method |
| CN119648495B | Mixed reality-based simulated medical visualization system, method and device |
| CN117173268A | Method and system for constructing virtual dynamic ultrasonic image based on CT data |
| Zara et al. | Haptic Training Simulators Design |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25771707; Country of ref document: EP; Kind code of ref document: A1 |