WO2024154647A1 - Learning method and learning system (Procédé d'apprentissage et système d'apprentissage)
- Publication number
- WO2024154647A1 (PCT/JP2024/000462)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- learning
- human body
- learner
- model
- mixed reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/28—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
- G09B23/30—Anatomical models
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
Definitions
- the present invention relates to a learning method and a learning system. More specifically, the present invention relates to a learning method and a learning system that enable a learner to learn a procedure, or the steps of a procedure, to be performed by applying a learning target instrument to a real human body model while viewing a virtual augmentation of a three-dimensional anatomical model projected onto the human body model via a worn mixed reality display.
- traditionally, in medical learning settings such as university medical faculties, nursing schools, and vocational schools, human body models known as medical simulators, such as the one described in Non-Patent Document 1 below, have been used to study and train in human anatomy and medical procedures.
- the human body model described in Non-Patent Document 1 is a model of part of the human body (lower jaw to chest to right shoulder) tailored to the purpose of learning (instruction and practice of medical techniques such as puncture and intravenous catheter management), with the bone structure and blood vessels of the modeled parts accurately reproduced. It is said to enable practical training from selecting the puncture position to inserting the catheter.
- the simulated experience system described in Patent Document 1 includes a video display device; a main body formed in the same or nearly the same shape as an object of education, research, or training; a controller provided on the main body and having a signal transmitter capable of transmitting a signal for synchronizing the movement of an image of a controlled object simulating the object displayed on the video display device with the operation of the main body; a signal receiver connected to the controller and the video display device and capable of receiving a signal transmitted from the signal transmitter; a calculation unit connected to the signal receiver and capable of analyzing the operation of the main body based on the received signal and calculating operation data; an image generation unit capable of generating an image of the controlled object based on data on the shape of the object; a synchronization processing unit capable of processing the operation data calculated by the calculation unit so that the image of the controlled object generated by the image generation unit moves in accordance with the operation of the main body; and a computer having an image output unit capable of outputting the image of the controlled object processed by the synchronization processing unit to the video display device.
- Patent Document 1 allows the user to touch a main body that has the same or nearly the same shape as an educational object and to operate an image of the controlled object imitating the object displayed on the video display device, stimulating the user's senses of sight and touch and providing an intuitive and immersive simulated experience.
- the human body models of Non-Patent Document 1, which represent existing technology, are real objects that provide a sense of realism and texture during learning, making them well suited to confirming procedures that use the target instruments.
- however, these human body models are often models of only the parts of the human body relevant to the learning purpose, and the model usually does not include surrounding human body structures outside the learning target, making it impossible or unsuitable to use for purposes other than the intended learning.
- while full-body human body models do exist, they are expensive compared to partial models, and the disparity between sophisticated parts and simpler parts makes it difficult to call them suitable for general-purpose use.
- the simulated experience system described in Patent Document 1 allows learning by visually recognizing objects such as organs displayed on a video display device and touching a controller that imitates those objects, but the objects remain virtual images with no physical substance.
- the simulated experience system is therefore not suitable for practical training such as confirming the steps of procedures that use the target instruments.
- the present invention has been devised in light of the above, and aims to provide a learning method and learning system that enable a learner to learn a procedure, or the steps of a procedure, to be performed by applying a learning target instrument to a real human body model while viewing a virtual augmentation of a three-dimensional anatomical model projected onto the human body model via a worn mixed reality display.
- the learning method of the present invention is carried out using a human body model as a projection target, a mixed reality display capable of visualizing a first virtual augmentation, which is a three-dimensional anatomical model, superimposed on a part or all of the human body model, and a learning target instrument, which is a medical or examination instrument, for learning a procedure. It includes a first step in which a learner wears the mixed reality display and views the human body model and the first virtual augmentation projected onto it through the mixed reality display, and a second step in which the learner, having completed the first step, picks up the learning target instrument and applies it to the human body model while viewing the human body model and the first virtual augmentation, thereby learning the procedure or the steps of the procedure to be performed.
- the learner completes the preparation by wearing the mixed reality display, and can then view the human body model and the first virtual augmentation projected onto it through the mixed reality display.
- the term "learner” is used to include not only pre-employment students in medical schools, nursing schools, vocational schools, etc. who aim to become medical professionals, but also those who are already medical professionals. For example, new graduates from medical schools, etc. may become familiar with different types of equipment than those used in school, or may undergo practical training beyond what they received in school. Also, even those who have already graduated from medical schools, etc. may undergo training to become familiar with newly introduced equipment, new treatment methods, or as part of continuous learning to further improve their skills. Therefore, these individuals are also included in the term “learner.”
- the learning method of the present invention is essentially used by learners, but this does not exclude instructors (those who perform the method as a demonstration when explaining it before or after learning).
- Examples of the "first virtual augmentation” include images of human organs and bones. Projecting images of organs, etc. onto appropriate locations on the human body model increases the sense of immersion in learning and also allows for advance confirmation of the positions of organs, etc. Furthermore, the “first virtual augmentation” includes not only single images of human organs, etc., but also projection of multiple images in an overlapping manner (superimposed manner).
- One example of projection in an overlapping manner is a manner in which an image of a bone and an image of an organ located beneath that bone are projected in an overlapping manner. In this case, more practical learning is possible by referring to the arrangement of the multiple projected images.
- the image data projected in the "first virtual augmentation" may be installed in the mixed reality display or stored in an auxiliary storage device connected to the display unit, or may be received from an external device such as a server connected to the mixed reality display by wired or wireless means. Furthermore, image processing related to the "first virtual augmentation" may be performed by a function provided in the mixed reality display, or the display may receive data processed by an external device such as a server connected to it by wireless means or the like.
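- purely as an illustrative sketch of the data-sourcing alternatives just described (bundled with the display, read from auxiliary storage, or fetched from a connected server), the following Python fragment shows one way this could look; the directory, server URL, file format, and function names are assumptions, not part of this disclosure:

```python
# Sketch only: image data for the first virtual augmentation may live on the
# display / auxiliary storage, or be fetched from an external server.
from pathlib import Path
from urllib.request import urlopen

LOCAL_DIR = Path("augmentations")           # hypothetical on-device store
SERVER_URL = "http://example.local/models"  # hypothetical external server

def load_augmentation(name: str) -> bytes:
    """Return image data for one anatomical overlay (e.g. 'bones', 'organs')."""
    local = LOCAL_DIR / f"{name}.glb"
    if local.exists():                       # installed / auxiliary storage
        return local.read_bytes()
    with urlopen(f"{SERVER_URL}/{name}.glb") as resp:  # wired/wireless fetch
        return resp.read()

bone_model = load_augmentation("bones")
```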
- the mixed reality display can also project the first virtual augmentation onto disembodied parts of the human body model (parts that have no substance, which can also be regarded as empty space). For example, if the only part of the solid human body model in front of the viewer's eyes is the torso, the first virtual augmentation of the disembodied parts, such as the head, lower limbs, and arms, can be visualized in three dimensions as if they were actually attached to the human body model. Furthermore, the display of the disembodied parts can be switched on and off as needed.
- the mixed reality display thus allows a variety of learning to be carried out even if the learner does not own multiple human body models, and switching between displaying and hiding the disembodied parts can be expected to improve the efficiency and effectiveness of learning, while also reducing the acquisition and operating costs associated with owning multiple human body models.
- when learning the procedure or the steps of the treatment, the learner can pick up the learning target instrument that would be used in the actual treatment, examination, etc., and apply it to the human body model onto which the first virtual augmentation is projected.
- medical instruments include, for example, puncture needles, drainage tubes, suture needles, syringes, forceps, scalpels and other medical blades, plates and pins used in fracture treatment, etc.
- examination instruments include, for example, probes in ultrasound diagnostic equipment, electrodes in electrocardiogram measuring equipment, endoscopes, etc. It goes without saying that the medical instruments and examination instruments mentioned above are merely examples, and various instruments can be the subject of the study.
- the learning method of the present invention makes it possible to change the content of the first virtual augmentation projected by the mixed reality display, and by projecting various images or videos (first virtual augmentation) onto one human body model, it is possible to obtain the same effect as actually owning multiple human body models.
- the learning method of the present invention therefore entails lower procurement costs and less storage space when not in use than conventional learning methods.
- in addition, the mixed reality display can be used to carry out various types of learning even if the learner does not own multiple human body models, and switching between display and non-display of the non-physical parts can be expected to improve the efficiency and effectiveness of learning, while the introduction and operating costs associated with owning multiple human body models can be reduced.
- the above-mentioned learning method may be such that the mixed reality display is capable of visualizing a second virtual augmentation, which is a facial expression model of a patient, superimposed on the facial portion of the human body model; in the first and second steps, the second virtual augmentation is applied to the human body model, and in at least the second step, the learner studies while viewing the facial expression model of the patient projected by the second virtual augmentation as appropriate, or studies while viewing the facial expression model and engaging in conversation according to the situation.
- the second virtual augmentation is visualized (applied) superimposed on the facial portion of the human body model, allowing the learner to study while also visually checking the facial expression model of the patient role projected by the second virtual augmentation as appropriate, or to study while visually checking the facial expression model of the patient role and also engaging in conversation according to the situation.
- this learning method allows learners to practice the procedure or treatment as if they were dealing with an actual patient, without having to prepare an actual patient or a role-playing patient (in other words, even though they are only dealing with a human body model). Learners are thus provided with visual stimulation as if they were actually treating a patient in a real setting (in other words, they get a realistic visual experience), which is expected to result in even greater learning effects.
- the second virtual augmentation may be applied to the human body model not only in the second step, but also in the first step.
- not only can the learner directly learn the technique; he or she can also train to observe changes in the patient's facial expression due to a sudden change in the patient's condition before the procedure begins, and to talk to the patient to ease their anxiety and tension.
- the facial expression model of the patient represented by the second virtual augmentation may be, for example, a standard model preset in the mixed reality display, or it may be that of the teaching instructor or a fellow attendee receiving instruction. If the teaching instructor or a fellow attendee is used as the facial expression model of the patient, a sense of tension is provided during learning, which is expected to result in a high learning effect (conversely, it may also provide humor, and a high learning effect can be expected in a relaxed atmosphere).
- image processing software may be used to display real-time facial expressions captured by a camera. This may also be used in conjunction with a setting that generates a sound such as "Ouch!" in accordance with the facial expression of a standard model playing the patient role represented in the second virtual augmentation.
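- a minimal sketch of the sound-cue pairing described above, in which a sound such as "Ouch!" is emitted when the projected facial expression changes; the expression labels, file names, and playback helper are illustrative assumptions:

```python
# Sketch only: pair each facial-expression state of the second virtual
# augmentation with an optional sound cue.
EXPRESSION_SOUNDS = {
    "pain": "ouch.wav",      # e.g. "Ouch!" on a pained expression
    "distress": "groan.wav",
    "neutral": None,
}

def on_expression_change(expression: str, play_sound=print) -> None:
    """Trigger the sound cue paired with the new expression, if any."""
    clip = EXPRESSION_SOUNDS.get(expression)
    if clip is not None:
        play_sound(clip)  # inject a real audio backend in an actual system

on_expression_change("pain")  # -> plays (here: prints) "ouch.wav"
```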
- the model and the learner may be set to be able to converse, allowing training in how to respond flexibly to situations.
- the above-mentioned learning method may also be such that, among the facial expression models visualized in the second virtual augmentation, at least the depiction of the eyes is set so that the gaze can be moved toward the learner at any time, and the gaze can be recognized by the learner via the mixed reality display.
- when the learner is studying while visually checking the facial expression model of the patient role projected by the second virtual augmentation, the line of sight of the patient role can be changed and the learner can recognize this.
- the timely movement of the gaze may be, for example, a standard action preset (programmed) in the mixed reality display, but is not limited to this. A supervising instructor or attendee observing the learner's training status may use software or the like to intentionally move the gaze, or a pressure sensor or the like provided in the human body model may be linked to the software of the mixed reality display so that the gaze moves toward the learner at the appropriate time when a specified pressure is detected. Furthermore, when linked to a pressure sensor or the like provided in the human body model, the presence or absence of gaze movement and the duration of the gaze movement may be set to vary depending on the strength of the detected pressure.
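- the pressure-linked gaze behavior described above could be sketched as follows, assuming hypothetical sensor values and thresholds; the units and scaling are illustrative only:

```python
# Sketch only: gaze turns toward the learner when detected pressure exceeds a
# preset value, and the gaze duration varies with the detected strength.
PRESSURE_THRESHOLD = 2.0  # assumed units (N); preset in the display software

def gaze_duration(pressure: float,
                  threshold: float = PRESSURE_THRESHOLD,
                  base: float = 1.0, scale: float = 0.5) -> float:
    """Seconds the gaze stays on the learner; 0.0 means no gaze movement."""
    if pressure <= threshold:
        return 0.0
    return base + scale * (pressure - threshold)  # stronger press, longer gaze

for p in (1.5, 2.5, 6.0):
    print(p, "->", gaze_duration(p), "s")
```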
- the second virtual augmentation may be applied to the human body model in the first step as well as in the second step.
- the facial expression model in the learning method of this aspect may be a standard model preset in the mixed reality display, or may be that of a teaching instructor or a person in attendance, etc.
- the above-mentioned learning method may be such that the first virtual augmentation allows medical images and three-dimensional anatomical images, set to an appropriate size to fit the human body model, to be superimposed on the same screen.
- medical images includes two-dimensional computed tomography images (hereinafter referred to as “2D CT images”), magnetic resonance images (hereinafter referred to as “2D MRI images”), ultrasound images (hereinafter referred to as “echo images”), X-ray images (hereinafter referred to as “X-ray images”), nuclear medicine images (hereinafter referred to as “RI images”), etc., and does not exclude other types of medical images.
- the learner can learn while appropriately viewing the stereoscopic, life-size medical images and 3D anatomical images superimposed and projected by the first virtual augmentation.
- the learner can learn by associating the spatial positional relationships between the medical images (especially 2D ones) and the 3D anatomical images. This allows training in understanding the relationship between the medical images and the 3D anatomical images, which many students and inexperienced practitioners (doctors and technicians) have struggled with, deepens their understanding, and can be expected to further improve the learning effect.
- the medical images and 3D anatomical images may be, for example, standard healthy images, but are not limited to these; they may be images of organs, bones, etc. of a patient or individual with a lesion or specific characteristics, or medical images actually taken of an individual together with 3D anatomical images constructed from those medical images.
- if images of organs, bones, etc. of a patient with a lesion are applied, they can be used not only by learners but also by medical professionals, including doctors, in considering treatment plans and pre-operative meetings before actual surgery or treatment.
- “Able to superimpose medical images and 3D anatomical images on the same screen” means that medical images and 3D anatomical images can be superimposed on the same screen, and includes both simultaneous overlapping display and switching between medical images and 3D anatomical images that are displayed alternately in the same position.
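- the two presentation modes just defined (simultaneous overlapping display, or alternating display in the same position) could be modeled as in the following illustrative sketch, whose class and method names are assumptions:

```python
# Sketch only: same-screen superimposition of a medical image and a 3D
# anatomical image, either overlapped or alternated in place.
from dataclasses import dataclass, field

@dataclass
class SameScreenView:
    layers: list = field(default_factory=lambda: ["2D CT image", "3D anatomy"])
    overlap: bool = True   # True: superimpose; False: alternate in place
    current: int = 0

    def visible(self) -> list:
        if self.overlap:
            return self.layers              # both shown, superimposed
        return [self.layers[self.current]]  # one at a time, same position

    def toggle(self) -> None:
        """Switch which image is shown when in alternating mode."""
        self.current = (self.current + 1) % len(self.layers)

view = SameScreenView(overlap=False)
print(view.visible()); view.toggle(); print(view.visible())
```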
- the above-mentioned learning method may be such that a pseudo-structure reproducing any one of skin, muscle, bone, blood vessel, membrane or organ is applied to at least a specific part of the human body model that is the learning target, or a combination of multiple pseudo-structures may be applied.
- with this learning method, when applying the learning target instrument to the human body model in training that simulates examination or treatment, a pseudo-structure that reproduces skin, etc. is applied to the area where the examination or treatment is performed, so that a tactile response closer to that of the human body can be obtained.
- This allows the learner to receive tactile stimulation as if they were performing treatment on a patient in an actual location (in other words, they can get a realistic sense of touch), and a higher learning effect can be expected.
- the human body model may be one in which multiple pseudo structures of skin, muscle, bone, blood vessel, membrane or organ are applied to a specific part to be studied. For example, if skin, bone and target organ are selected as pseudo structures and these are arranged in a superimposed (layered) manner, in training for puncturing the target organ, the learner can learn by actual touch how to palpate the gap between the bones, how much force to use to pierce the skin, how to insert the needle between the bones, and how much force and depth to use to insert the needle into the target organ. Furthermore, if pseudo structures of muscle, membrane and blood vessel are applied, the learner can feel the resistance of the muscle or membrane when puncturing, and learn the technique of inserting the needle between the muscle fibers or blood vessels without damaging them.
- although the pseudo-structure is applied to "at least the specific part that is the learning target," it may be applied to the entire human body model, not just a part of it. If the pseudo-structure is applied to the entire human body model, the structure becomes more complex, but one human body model can be used to train for a variety of examinations and procedures, improving convenience and versatility.
- the simulated structures may be made of soft materials for skin, muscles, blood vessels, membranes, and organs, and semi-hard or hard materials for bones.
- with this learning method, when applying the learning target instrument to the human body model in the training simulating the above-mentioned examination or treatment, a pseudo-structure made of a material with a hardness corresponding to the part of the body to be examined or treated is applied, providing a tactile response that is even closer to that of the human body.
- This allows the learner to receive tactile stimulation as if they were actually performing treatment on a patient in an actual setting (in other words, they can get a realistic sense of touch), and an even greater learning effect can be expected.
- as the soft materials, materials such as resin and rubber that have a hardness close to that of the target skin, muscles, blood vessels, membranes, and organs are preferably used.
- as the hard materials, materials such as resin, rubber, stone, and metal that have a hardness close to that of the target bones are preferably used.
- as the semi-hard materials, resins and rubbers that have been prepared to be harder may be used, for example when reproducing cartilage.
- the above-mentioned learning method may be configured such that the human body model and the learning target instrument are provided with a position sensor and a pressure sensor, and when the position information detected by the position sensor and/or the pressure value detected by the pressure sensor exceeds a preset value, the second virtual augmentation changes to an expression expressing discomfort or distress.
- the facial expression model represented by the second virtual augmentation changes to one that expresses discomfort or distress, allowing the learner to immediately determine that they have performed an inappropriate treatment, and to train in performing treatment while observing (visually checking) the facial expression, just as they would when performing treatment on a real human.
- whether or not a treatment would cause pain in a human body is determined by whether or not the position information and/or pressure value detected by the position sensor and pressure sensor installed on the human body model and the learning target instrument exceed a preset value.
- if so, the facial expression model represented by the second virtual augmentation displayed on the mixed reality display changes to an expression expressing discomfort or distress.
- for example, a pressure sensor or the like is provided on a specific part of the human body model that is the learning target, and during training in which an examination device (e.g., a probe of an ultrasound diagnostic device), which is the learning target instrument, is pressed against that part, a pressure value is transmitted by the pressure sensor or the like attached to the human body model. If the pressure value received by the mixed reality display, directly or via an external device that analyzes it, is inappropriate (e.g., the pressing force is too strong), the facial expression model represented by the second virtual augmentation displayed on the mixed reality display changes to an expression expressing discomfort or distress.
- this can be used in conjunction with a setting that generates a sound such as "Ouch!" in response to changes in facial expression that express discomfort or distress, which increases the sense of realism and allows the training to be done with a sense of tension.
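- as an illustration only, the threshold rule described above (sensor readings compared against preset values, with the expression switching to one of discomfort or distress when exceeded) might look like the following sketch; the threshold values and units are assumptions:

```python
# Sketch only: map position/pressure sensor readings to the patient model's
# expression; exceeding either preset value triggers the distress expression.
MAX_PRESSURE = 5.0      # assumed limit for, e.g., an ultrasound probe press
FORBIDDEN_DEPTH = 30.0  # assumed needle-depth limit from the position sensor

def expression_for(pressure: float, depth: float) -> str:
    """Return the expression the second virtual augmentation should show."""
    if pressure > MAX_PRESSURE or depth > FORBIDDEN_DEPTH:
        return "distress"  # the display changes; a sound cue may fire as well
    return "neutral"

print(expression_for(pressure=6.2, depth=12.0))  # -> distress
print(expression_for(pressure=2.1, depth=12.0))  # -> neutral
```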
- the above-mentioned learning method may be such that the human body model is provided with a camera capable of photographing the learner at the eye position if the model has a head, or at a position equivalent to the eye if the model does not have a head.
- with this learning method, it is possible to capture video of the learner from the perspective of the patient during training (learning) that simulates examinations and treatments (in other words, video taken from the human body model's side), and to obtain that video.
- the learner can objectively view the facial expressions and behavior of the practitioner as seen by the patient during actual treatment, and can also experience the psychology of the patient, such as how they are perceived by the patient, making it possible to learn about both the treating side and the treated side in a single training session.
- the aforementioned "camera” may be provided in any way that allows it to photograph the learner, but if it is a fixed structure, it is preferable that it has a wide angle of view, and it may also be a movable structure.
- movable structures include a structure in which the head or eye unit (including those located in a position equivalent to the eye unit) can be operated manually, automatically, or remotely to face the learner and photograph the learner.
- the images captured by the camera may be displayed in real time on a mixed reality display worn by the learner, or may be recorded on a hard disk or the like of a personal computer (hereinafter referred to as "PC") connected to the mixed reality display.
- the images may be displayed in a sub-window that opens next to the image of the human body model that the learner is looking at, or may be displayed full screen by appropriately switching with the image of the human body model that the learner is looking at.
- the learner, etc. can check the images on another monitor or the like after the training (learning).
- the captured images may be simultaneously output to a large monitor, in which case the images can be shared with other learners waiting other than the learner currently training (learning), and the learner who will next begin training can easily understand the patient's perspective and state of mind, which is expected to improve the learning efficiency of the entire learner group.
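- the routing options described above for the camera's video (sub-window or full-screen display on the learner's mixed reality display, recording to a connected PC, and mirroring to a large monitor) could be sketched as follows; the sink functions are hypothetical stand-ins for real outputs:

```python
# Sketch only: deliver each captured frame from the patient's-eye camera to
# every active output (sub-window, PC recording, shared large monitor).
from typing import Callable

def route_frame(frame: bytes, sinks: list[Callable[[bytes], None]]) -> None:
    """Deliver one captured frame to every active output."""
    for sink in sinks:
        sink(frame)

subwindow = lambda f: print("sub-window:", len(f), "bytes")
recorder  = lambda f: print("recorded to PC disk:", len(f), "bytes")
monitor   = lambda f: print("large monitor:", len(f), "bytes")

route_frame(b"\x00" * 1024, [subwindow, recorder, monitor])
```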
- the above-mentioned learning method may be such that the human body model is installed in an indoor space that can be used as a teaching space, the mixed reality display is capable of visualizing a third virtual augmentation, which is a model of an examination device and/or a treatment device, superimposed on the indoor space, and in both or either of the first and second steps, the learner learns the procedure or treatment to be performed while viewing the third virtual augmentation projected onto the indoor space via the mixed reality display.
- the expression "examination device and/or treatment device” used in this embodiment means both an examination device and a treatment device, and either an examination device or a treatment device.
- the indoor space can be viewed as an examination room or treatment room, and the human body model placed there can be viewed as a subject to be examined or treated, etc., for learning purposes.
- the learner may learn the procedure or treatment to be performed while also viewing the third virtual augmentation projected into the indoor space via the mixed reality display.
- the learner may learn or practice preparation of the examination device, etc. and checking of procedures before starting, and may also learn or practice how to position themselves and move according to the positions of the examination device, etc. and the human body model after starting.
- Examples of images projected by the third virtual augmentation include MRI (Magnetic Resonance Imaging) inspection equipment, CT (Computed Tomography) inspection equipment, X-ray inspection equipment, radiation therapy equipment, proton beam therapy equipment, and endoscopic equipment (endoscopes and the surgical tools used therewith).
- Learners can learn and train their movements by moving a human body model relative to the inspection equipment projected by virtual augmentation, or by moving the arm of the inspection equipment projected by virtual augmentation and applying it to the human body model.
- this learning method allows students to practice the procedure or treatment they need to perform without having to prepare actual examination or treatment equipment, as if they were treating a patient in a space with actual examination or treatment equipment.
- This provides students with visual stimulation as if they were actually treating a patient in a real setting (in other words, they get a realistic visual experience), and is expected to have an even greater learning effect.
- the learning system of the present invention comprises a human body model to be projected onto, a mixed reality display that can visualize a first virtual augmentation, which is a three-dimensional anatomical model, superimposed on a part or all of the human body model and that can be worn on the learner's head, and a learning target instrument that is a medical instrument or examination instrument to be used for procedural learning.
- the learner can view the human body model and the first virtual augmentation projected onto it via the mixed reality display.
- the learner can pick up the tool to be studied that will be used in the actual treatment, examination, etc., and apply the tool to be studied to the human body model onto which the first virtual augmentation is projected.
- the learner can visually obtain a sense of realism and immersion, as if they were in front of a patient, and because they hold the actual tools being studied and apply them to the human body model, rather than using a virtual image that does not involve any sensation, they can obtain a tactile response that is close to that of actual work.
- the learner receives visual and tactile stimulation that makes them feel as if they are actually performing treatment on a patient in an actual location (in other words, they get realistic visual and tactile sensations), and a high learning effect can be expected.
- the "human body model” may be anything onto which the first virtual augmentation described above can be projected, and for example, a life-size model, such as a so-called training model doll, is preferably used. Furthermore, as described below, the human body model does not necessarily have to be full-body size, since the mixed reality display can project the first virtual augmentation onto parts of the human body model that do not have a physical body, and it may be in a form consisting of only the parts particularly necessary for learning (for example, only the torso without the upper limbs, lower limbs, or head).
- the "learning subject instrument” may be any medical or testing instrument that is the subject of skill learning, and various instruments may be the subject of learning.
- medical instruments include needles, tubes, scalpels, and other medical blades
- testing instruments include probes in ultrasound diagnostic equipment and electrodes in electrocardiogram measuring equipment.
- the "mixed reality display" is at least capable of visualizing the first virtual augmentation, which is a three-dimensional anatomical model, on a part or all of the human body model in a superimposed manner, and is constructed so as to be wearable on the learner's head.
- a head-mounted display (goggles, glasses, helmet, etc.) capable of implementing so-called XR (Extended reality) technology is used.
- XR: Extended Reality
- MR: Mixed Reality
- AR: Augmented Reality
- Examples of the "first virtual augmentation” include images of human organs and bones. Projecting images of organs, etc. onto appropriate locations on the human body model increases the sense of immersion in learning and also allows for advance confirmation of the positions of organs, etc. Furthermore, the “first virtual augmentation” includes not only single images of human organs, etc., but also projection of multiple images in an overlapping manner (superimposed manner).
- One example of projection in an overlapping manner is a manner in which an image of a bone and an image of an organ located beneath that bone are projected in an overlapping manner. In this case, more practical learning is possible by referring to the arrangement of the multiple projected images.
- the image data projected in the "first virtual augmentation" may be installed in the mixed reality display or stored in an auxiliary storage device connected to the display unit, or may be received from an external device such as a server connected to the mixed reality display by wired or wireless means. Furthermore, image processing related to the "first virtual augmentation" may be performed by a function provided in the mixed reality display, or the display may receive data processed by an external device such as a server connected to it by wireless means or the like.
- the content of the first virtual augmentation projected by the mixed reality display can be changed, and by projecting various images or videos (first virtual augmentation) onto one human body model, it is possible to obtain the same effect as having multiple human body models. Compared to the previously described conventional systems for learning about treatments and examinations, the system therefore entails lower procurement costs and requires less storage space when not in use.
- the mixed reality display can project the first virtual augmentation onto the non-physical parts of the human body model, so that the non-physical parts projected by the first virtual augmentation can be displayed as if they were actually attached to the human body model.
- the non-physical parts can be switched between display and non-display as needed.
- the mixed reality display allows various learning activities to be carried out even if the user does not own multiple human models, and switching between display and non-display of the non-physical parts is expected to improve the efficiency and effectiveness of learning, and can also reduce the introduction and operating costs associated with owning multiple human models.
- the learning system described above may also be configured so that the mixed reality display can visualize a second virtual augmentation, which is a facial expression model of the patient, superimposed on the face of the human body model.
- the second virtual augmentation can be visualized (applied) superimposed on the facial portion of the human body model.
- This allows the learner to study while also visually checking the facial expression model of the patient role projected by the second virtual augmentation as needed, or to study while visually checking the facial expression model of the patient role as needed and engaging in conversation according to the situation.
- this learning is not only for directly acquiring skills, but also for training in observing changes in the patient's facial expression due to a sudden change in the patient's condition before the start of the procedure, and in conversation to ease the patient's anxiety and tension.
- with this learning system, it is possible to train in the procedure or treatment to be performed as if the learner were dealing with a real patient, without having to prepare an actual patient or a role-playing patient.
- This provides the student with visual stimulation as if they were actually performing treatment on a patient in a real setting, and is expected to have an even greater learning effect.
- the facial expression model of the patient represented in the second virtual augmentation may be a standard model or a person present, as described above, and in the case of a person present, it may be possible to display real-time facial expressions processed by video processing software. Also, as described above, this may be used in conjunction with a setting in which sound is generated in accordance with the facial expression of the standard model represented in the second virtual augmentation, and if a person present is used as the facial expression model, it may be set so that the model and the learner can converse, allowing training in how to respond flexibly to situations.
- the learning system described above may also be configured such that the gaze of at least the depiction of the eyes in the facial expression model visualized in the second virtual augmentation can be moved toward the learner at any time, and the gaze can be recognized by the learner via the mixed reality display.
- the timely movement of the gaze can be, for example, a standardized action preset in the mixed reality display; a learning instructor observing the learner's training status may use software or the like to intentionally move the gaze; or a pressure sensor or the like provided on the human body model may be linked to the software of the mixed reality display so that the gaze moves toward the learner at the appropriate time when a specified pressure is detected.
- when linked to a pressure sensor or the like provided on the human body model, the presence or absence of gaze movement and the duration of the gaze movement can be set to vary depending on the strength of the detected pressure.
- the second virtual augmentation may be applied to the human body model not only in the second step but also in the first step.
- the facial expression model in the learning system of this embodiment may be a standard model preset in the mixed reality display, or may be that of a learning instructor or a companion, etc.
- the above-mentioned learning system may be one in which the first virtual augmentation is provided so that medical images and three-dimensional anatomical images, set to an appropriate size to fit the human body model, can be superimposed on the same screen.
- the term "medical images" here includes the above-mentioned two-dimensional CT images, etc., and does not exclude other types of medical images.
- the learner can study while appropriately viewing the stereoscopic, life-size medical images and 3D anatomical images superimposed and projected by the first virtual augmentation.
- the spatial positional relationships between the medical images (particularly 2D ones) and the 3D anatomical images can be associated and studied on the same screen. This allows training in understanding the relationship between the medical images and the 3D anatomical images, which many students and inexperienced practitioners have struggled with, deepens their understanding, and is expected to further improve the learning effect.
- “Able to superimpose medical images and 3D anatomical images on the same screen” means that medical images and 3D anatomical images can be superimposed on the same screen, and includes both simultaneous overlapping display and switching between medical images and 3D anatomical images that are displayed alternately in the same position.
- the medical images and 3D anatomical images may be standard healthy images, images of organs, bones, etc. of a patient or individual with a lesion or specific characteristics, or images of a photographed individual's medical images and 3D anatomical images constructed based on the medical images.
- if images of organs, bones, etc. of a patient with a lesion are used, they can be used not only by learners but also by medical professionals, including doctors, in considering treatment plans and pre-operative meetings before carrying out actual surgery or treatment.
- the learning system described above may be configured such that the human body model and the learning target instrument are provided with a position sensor and a pressure sensor, the mixed reality display has a receiving function capable of receiving the position information detected by the position sensor and the pressure value detected by the pressure sensor, and a set-value storage function, and when both or either of the position information and the pressure value received by the receiving function exceeds a preset value, the second virtual augmentation changes to an expression expressing discomfort or distress.
- the facial expression model represented by the second virtual augmentation changes to one that expresses discomfort or distress. This allows the learner to immediately determine whether they have performed an inappropriate treatment, and allows training to be performed while observing the facial expression in the same way as when performing treatment on a real human.
- whether or not a treatment would cause pain in a human body is determined by whether or not the position information and/or pressure value detected by the position sensor and pressure sensor installed on the human body model and the learning target instrument exceed a preset value.
- if so, the facial expression model represented by the second virtual augmentation displayed on the mixed reality display changes to an expression expressing discomfort or distress.
- for example, a pressure value is transmitted by a pressure sensor or the like attached to the human body model, and if the pressure value received by the mixed reality display, directly or via an external device that analyzes it, is inappropriate, the facial expression model represented by the second virtual augmentation displayed on the mixed reality display changes to an expression expressing discomfort or distress.
- a setting in which sound is generated in accordance with the change to an expression expressing discomfort or agony may be used in combination, in which case the sense of realism is increased and training can be performed with a sense of tension.
- the aforementioned learning system may be such that a pseudo-structure reproducing one of skin, muscle, bone, blood vessel, membrane or organ is applied to at least a specific part of the human body model that is the subject of learning, or a combination of multiple such pseudo-structures may be applied.
- with this learning system, when applying the learning target instrument to the human body model in training that simulates examinations and treatments, a pseudo-structure that reproduces skin, etc. is applied to the area where the examination or treatment is performed, so that a tactile response closer to that of the human body is obtained. This allows the learner to receive tactile stimulation as if they were actually performing treatment on a patient in the actual field, and a higher learning effect can be expected.
- the human body model may also be one in which multiple pseudo structures of skin, muscle, bone, blood vessel, membrane or organ are applied to a specific part to be studied. For example, if skin, bone and target organ are selected as pseudo structures and arranged in a superimposed manner, in training for puncturing the target organ, the learner can learn, through actual touch, how to palpate the gap between the bones, how much force to use to pierce the skin, how to insert the needle between the bones, and how much force and depth to use to insert the needle into the target organ. Furthermore, if pseudo structures of muscle, membrane and blood vessel are applied, the learner can feel the resistance of the muscle or membrane when puncturing, and learn the technique of inserting the needle between the muscle fibers or blood vessels without damaging them.
- the pseudo-structure may be applied not only to a part of the human body model, but also to the entire body.
- one human body model can be used to train a variety of tests and procedures, which is convenient and further improves versatility.
- the simulated structures may be made of soft materials for skin, muscles, blood vessels, membranes, and organs, and semi-hard or hard materials for bones.
- with this learning system, when applying the learning target instrument to the human body model in the training that mimics the above-mentioned examination or treatment, a pseudo-structure made of a material with a hardness corresponding to the part of the body to be examined or treated is applied, so that a tactile response that is even closer to that of the human body can be obtained.
- This provides the learner with a tactile stimulation that makes them feel as if they are actually performing treatment on a patient in the actual field, and an even greater learning effect can be expected.
- the learning system described above may be such that the human body model is provided with a camera capable of photographing the learner at the eye position if the model has a head, or at a position equivalent to the eye if the model does not have a head.
- with the learning system of this embodiment, it is possible to capture video of the learner from the patient's perspective during training (learning) that simulates an examination or treatment, and to obtain that video.
- the learner can objectively view the facial expressions and behavior of the practitioner as seen by the patient during actual treatment, and can also experience the psychology of the patient, such as how they are perceived by the patient, making it possible to learn about both the treating side and the treated side in a single training session.
- the aforementioned "camera” may be provided in any way that allows it to photograph the learner, but if it is a fixed structure, it is preferable that it has a wide angle of view, and it may also be a movable structure.
- movable structures include a structure in which the head or eye unit (including those located in a position equivalent to the eye unit) can be operated manually, automatically, or remotely to face the learner and photograph the learner.
- the images captured by the camera may be displayed in real time on a mixed reality display worn by the learner, or may be recorded on a hard disk of a personal computer connected to the mixed reality display.
- the images may be displayed in a sub-window that opens next to the image of the human body model that the learner is looking at, or may be displayed full screen by appropriately switching with the image of the human body model that the learner is looking at.
- the learner can check the images on another monitor after the training (learning).
- the captured images may be simultaneously output to a large monitor, in which case the images can be shared with other learners waiting other than the learner currently training (learning), and the learner who will next begin training can easily understand the patient's perspective and state of mind, which is expected to improve the learning efficiency of the entire group of learners.
- the learning system described above may also be configured so that the mixed reality display can visualize a third virtual augmentation, which is a model of an examination device and/or a treatment device, superimposed on the indoor space in which the human body model is placed.
- the expression "examination device and/or treatment device” used in this embodiment means both an examination device and a treatment device, and either an examination device or a treatment device.
- the indoor space can be viewed as an examination room or treatment room, and the human body model placed there can be viewed as the subject to be examined or treated, etc., for learning.
- learning or training can be performed on preparation of the examination device, etc., and check procedures before starting, and learning or training on how to handle the situation and the learner's movements according to the positions of the examination device, etc. and the human body model after starting can also be performed.
- this learning system allows learners to practice the procedures or treatments they need to perform without having to prepare actual examination or treatment equipment, as if they were treating a patient in a space with actual examination or treatment equipment. This provides learners with visual stimulation as if they were actually treating a patient in a real setting, and is expected to have an even greater learning effect.
- the learning method and learning system of the present invention allow a learner to learn a procedure, or the steps of a procedure, to be performed by applying the learning target instrument to a real human body model while viewing a virtual augmentation of a three-dimensional anatomical model projected onto the human body model via a worn mixed reality display.
- FIG. 1 is a schematic diagram showing a configuration of a learning system according to a first embodiment of the present invention
- FIG. 2 is an image diagram showing a human body model and a first virtual augmentation projected onto the human body model in the learning system shown in FIG. 1
- FIG. 3 shows a learning system according to a second embodiment of the present invention, in which (a) is an oblique view showing the state before the first virtual augmentation and the second virtual augmentation are projected onto the human body model, (b) is an oblique view showing the state after they have been projected onto the human body model, and (c) is an oblique view showing the state after only the bone image has been erased from the first virtual augmentation projected in (b).
- FIG. 4 is a front view of a modified example (variant example 5) of the learning system according to the second embodiment shown in FIG. 3, showing a state in which the first virtual augmentation, the second virtual augmentation (chest CT image), virtual operation buttons, etc. are projected onto the human body model.
- FIG. 5 is an explanatory diagram of the usage state of the learning system shown in FIG. 4, in which (a) shows a state in which the display position of the second virtual augmentation (transverse chest CT image) projected onto the human body model has been lowered by operation to approximately the middle position in the chest height direction, and (b) shows a state in which the display position has been lowered further than in (a).
- FIG. 6 is an explanatory diagram of the usage state of the learning system shown in FIG. 4.
- FIG. 7 is an explanatory diagram of the usage state of the learning system shown in FIG. 4, in which (a) shows a state in which the display position of the second virtual augmentation (sagittal chest CT image) projected onto the human body model is approximately the middle position in the chest width direction, and (b) shows a state in which a 3D anatomical image is simultaneously superimposed on the image of (a) by operation.
- FIG. 8 is an explanatory diagram showing the configuration of a human body model used in a learning system according to a third embodiment of the present invention.
- FIG. 9 is an image diagram showing a third virtual augmentation projected onto a classroom in a learning system according to a fourth embodiment of the present invention.
- a learning system 1 used in a classroom R1 includes a human body model 2 as a projection target, a mixed reality display 3 worn by a learner H, and a learning target tool 4 as a target for learning manual techniques by the learner H. Each part of the learning system 1 will be described in detail below.
- the human body model 2 can be the projection target for the first virtual augmentation described below. In this embodiment, it is a life-size training model doll with a head and torso, and a commercially available chest drain insertion simulator is used.
- the mixed reality display 3 is a device capable of visualizing a first virtual augmentation 31, which is a three-dimensional anatomical model, in a superimposed manner on a part or all of the human body model 2, and is provided so as to be wearable on the head of the learner H.
- the mixed reality display 3 is a head-mounted display (goggle type) capable of implementing MR technology (see FIG. 1 ).
- the first virtual augmentation 31 is an image of a person's bones, organs, blood vessels, and trachea, and the data of these images is installed in the memory function unit of the mixed reality display, and the image generation function unit constructs an image in which multiple images are superimposed (superimposed mode), and the image is projected via the display unit.
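- as an illustrative sketch only, the superimposed-mode construction described above (stacking the stored bone, organ, blood vessel, and trachea images into one composite for projection) might be implemented along the following lines, here using the Pillow library; the layer file names and z-order are assumptions:

```python
# Sketch only: alpha-composite the stored anatomy layers, in order, into a
# single image for projection via the display unit.
from PIL import Image

LAYERS = ["bones.png", "organs.png", "vessels.png", "trachea.png"]  # hypothetical

def compose(layer_files: list[str]) -> Image.Image:
    """Stack the anatomy layers bottom-up into one projected composite."""
    base = Image.open(layer_files[0]).convert("RGBA")
    for name in layer_files[1:]:
        base.alpha_composite(Image.open(name).convert("RGBA"))
    return base
```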
- the learning object instrument 4 is a subject of procedure learning for the learner H, and in this embodiment is a chest drain catheter and an inner tube (medical instrument) (see FIGS. 1 and 2).
- the learning method includes at least the following first and second steps.
- (1) in the first step, the learner H wears the mixed reality display 3 on his/her head and visually recognizes the human body model 2 and the first virtual augmentation 31 projected thereon through the mixed reality display 3; (2) in the second step, the learner H, having completed the first step, picks up the learning target instrument 4 and applies it to the human body model 2 while visually checking the human body model 2 and the first virtual augmentation 31, thereby learning the procedure or treatment to be performed.
- Before learning begins, the instructor places the human body model 2 in an appropriate position (on the bed in FIG. 1), wears the mixed reality display 3, and sets the system up so that the first virtual augmentation 31 is correctly projected onto the human body model 2.
- Specifically, the instructor, wearing the mixed reality display 3, operates the virtual controller that appears on the display to select the first virtual augmentation 31 to be projected, and adjusts its position and the like so that it is visualized superimposed on the human body model 2.
- The learner H can view the human body model 2 and the first virtual augmentation 31 projected onto it via the mixed reality display 3.
- The learner H can pick up the learning target tool 4, which is used in actual treatment, and apply it to the human body model 2 onto which the first virtual augmentation 31 is projected.
- In this way, the learner H visually obtains a sense of realism and immersion, as if standing before a patient, and because he or she holds the actual learning target tool 4 and applies it to the physical human body model 2, rather than manipulating a purely virtual image that offers no sensation, the tactile response is close to that of actual work.
- The learner H thus receives visual and tactile stimulation as if actually performing treatment on a patient in a clinical setting, and a high learning effect can be expected.
- Although the human body model 2 includes only the head and torso, the mixed reality display 3 can project a first virtual extension, such as virtual upper and lower limbs, three-dimensionally and superimposed onto the parts of the human body model 2 that have no physical counterpart, allowing the learner to visually identify and learn about the body parts connected to the part being studied.
- The mixed reality display 3 can also switch these non-physical parts between displayed and hidden as necessary.
- The content of the first virtual augmentation 31 to be projected can easily be changed by installing new image data, and by projecting various images and videos onto a single human body model, the same effect as having multiple human body models can be obtained.
- Various types of learning can therefore be carried out by changing the contents of the first virtual extension 31 or by switching the non-physical parts between displayed and hidden. This improves the efficiency and effectiveness of learning, reduces the introduction (procurement) and operating costs associated with owning multiple human body models, and requires less storage space when the system is not in use.
- The first virtual augmentation may use not only images of standard organs but also images of the affected areas of individual patients collected during prior examinations.
- In such simulated training, by installing or updating the data for the first virtual augmentation, it is possible to learn the examination of new or special cases and to practice and become proficient in new treatment methods.
- The learning system 1a is another embodiment (the second embodiment) of the learning system 1 and includes a human body model 2a, a mixed reality display 3a, and a learning target tool 4a.
- The learning system 1a shares structures and effects with the learning system 1 of the first embodiment; descriptions of the common structures and effects are therefore omitted, and only the differences are described below.
- The mixed reality display 3a is not illustrated, but is designated "3a" for convenience in explaining its differences from the mixed reality display 3.
- The human body model 2a serves as the projection target for the first virtual extension and the second virtual extension described below; in this embodiment it is a life-size training mannequin having a head, torso, and the upper half of the thighs (see FIG. 3(a); no special mechanisms such as a chest drain insertion part are provided).
- The mixed reality display 3a is a head-mounted display (goggle type) capable of implementing MR technology, and is configured to visualize, in addition to the first virtual augmentation 31 described above, a second virtual augmentation 32, which is a facial expression model of the patient, superimposed on the facial portion of the human body model 2a (see FIGS. 3(b) and (c)).
- The second virtual augmentation 32 is an image of a person's face. Its data is installed in the memory function unit of the mixed reality display; the image generation function unit constructs an image superimposed on the facial portion of the human body model 2a (superimposed mode), and this image is projected via the display unit.
- The learning target tool 4a is the instrument with which the learner practices the procedure; in this embodiment it is an ultrasound diagnostic device and its probe (examination tool) (see FIGS. 3(a) to (c)).
- The second virtual extension 32 is visualized superimposed on the facial portion of the human body model 2a (see FIGS. 3(b) and (c)). This allows the learner to study while appropriately observing the facial expression model projected as the second virtual extension 32. Beyond directly acquiring skills, learners can therefore also train in observing changes in the expression of the patient (the human body model 2a) caused by a sudden change in condition before the procedure begins, and in conversing with the patient to ease anxiety and tension.
- According to the learning system 1a and the learning method using it, it is possible to train in the procedure or treatment to be performed as if dealing with a real patient, without having to prepare an actual patient or a role-playing patient.
- This provides the learner with visual stimulation as if actually treating a patient in a real setting, and an even greater learning effect can be expected.
- In Modification 1, the learning system 1a is configured to capture facial images of the instructor or attendees, taken by the mixed reality display 3a or an external terminal, as the facial expression model of the patient role represented by the second virtual augmentation 32, and to display the expressions in real time after processing by image processing software.
- The facial expression model and the learner can converse with each other, enabling training in responding flexibly to situations.
- The learning system 1a according to Modification 1 otherwise has the same structure and effects as the learning system 1a according to the second embodiment; descriptions of the common structure and effects are omitted, the same reference numerals are used, and Modification 1 is not shown in the drawings.
- According to Modification 1, the facial expression model of a person in attendance plays the role of the patient, which creates a sense of tension during learning and is expected to yield a high learning effect (conversely, a good learning effect may also be achieved in a relaxed atmosphere by introducing humor).
- Learning can also include training in which the learner appropriately observes the patient's facial expression model and holds conversation suited to the situation (for example, explanations or small talk to ease the patient's anxiety and tension). A minimal sketch of the real-time capture follows.
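- A minimal sketch of the real-time capture, assuming OpenCV for the camera input; the headset-side projection call is a named placeholder, not an actual API of the display.

```python
# Sketch only: grab frames of an attendee's face and hand them to a
# placeholder that stands in for mapping onto the mannequin's face region.
import cv2

def project_onto_model_face(frame) -> None:
    """Placeholder for the (hypothetical) headset call that maps the frame
    onto the facial portion of the human body model (second virtual extension)."""
    pass

def stream_expression_model(camera_index: int = 0, max_frames: int = 300) -> None:
    cap = cv2.VideoCapture(camera_index)   # instructor/attendee-facing camera
    try:
        for _ in range(max_frames):        # bounded loop for the sketch
            ok, frame = cap.read()
            if not ok:
                break
            mirrored = cv2.flip(frame, 1)  # stand-in for the image processing step
            project_onto_model_face(mirrored)
    finally:
        cap.release()
```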
- In Modification 2, a pressure sensor (not shown) is provided in the torso of the human body model 2a, and a position sensor (not shown) is provided at the tip of the learning target tool 4a.
- The mixed reality display 3a has a receiving function for the position information detected by the position sensor, a receiving function for the pressure value detected by the pressure sensor, and a set-value storage function. When either or both of the received position information and pressure value exceed a preset value, the second virtual augmentation 32 changes to an expression of discomfort or agony.
- The learning system 1a according to Modification 2 otherwise has the same structure and effects as the learning system 1a according to the second embodiment (and Modification 1); descriptions of the common structure and effects are omitted, the same reference numerals are used, and Modification 2 is not shown in the drawings.
- When the learner H presses the probe against the abdomen or another part of the human body model 2a, a pressure value is transmitted by the pressure sensor provided in the human body model 2a.
- The mixed reality display 3a receives the transmitted pressure value; if the value is inappropriate, the facial expression model represented by the second virtual augmentation 32 on the display changes to an expression of discomfort or distress.
- According to the learning system 1a of Modification 2 and the learning method using it, when the learning target tool 4a is applied to the human body model 2a during training simulating the above-mentioned examination, performing a treatment that would cause pain in a human causes the facial expression model represented by the second virtual extension 32 to change to an expression of discomfort or agony.
- This allows the learner H to recognize immediately that an inappropriate treatment has been performed, and to train while observing the facial expression just as when treating a real human (see the sketch below).
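- The threshold behaviour of Modification 2 can be illustrated with the following sketch; the sensor units, limit values, and expression names are assumptions, as the embodiment does not specify them.

```python
# Sketch only: compare incoming pressure/position readings against preset
# values and switch the facial-expression model when a limit is exceeded.
from dataclasses import dataclass

PRESSURE_LIMIT_KPA = 30.0  # assumed preset value
DEPTH_LIMIT_MM = 45.0      # assumed preset value for the probe-tip position

@dataclass
class ExpressionState:
    current: str = "neutral"

def update_expression(state: ExpressionState, pressure_kpa: float, depth_mm: float) -> str:
    """Select the expression for this reading: discomfort when a limit is
    exceeded, agony when pressure is far beyond it, neutral otherwise."""
    if pressure_kpa > PRESSURE_LIMIT_KPA or depth_mm > DEPTH_LIMIT_MM:
        state.current = "agony" if pressure_kpa > 2 * PRESSURE_LIMIT_KPA else "discomfort"
    else:
        state.current = "neutral"
    return state.current

state = ExpressionState()
print(update_expression(state, pressure_kpa=38.0, depth_mm=20.0))  # -> "discomfort"
```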
- The learning system 1a according to Modification 3 otherwise has the same structure and effects as the learning system 1a according to the second embodiment; descriptions of the common structure and effects are omitted, the same reference numerals are used, and Modification 3 is not shown in the figures.
- In Modification 3, the line of sight of the patient role can be changed, and the learner H can perceive this.
- Actual patients and those receiving medical treatment may look at the practitioner's face when they feel anxiety or pain. According to the learning system 1a of Modification 3 and the learning method using it, the line of sight of the visualized facial expression model moves toward the learner H at appropriate moments, giving the learner a sense of tension and realism similar to actual treatment, so a further improvement in the learning effect can be expected.
- This timely movement of the gaze can be performed automatically or manually.
- Automatic movement includes a mode in which the gaze follows a standard pattern preset in the mixed reality display, and a mode in which, by linking a pressure sensor or the like on the human body model with the software of the mixed reality display, the gaze is made to move toward the learner when pressure is detected at a specific part of the model.
- Manual movement includes a mode in which the instructor or another person intentionally moves the gaze. Both modes are combined in the sketch below.
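- A minimal sketch of the gaze selection, assuming an event flag for the sensor linkage and a string override for the manual mode; the target names and idle pattern are illustrative assumptions.

```python
# Sketch only: choose the gaze target each frame; a manual override wins,
# a pressure event steers the gaze to the learner, otherwise a preset
# idle pattern runs.
import random
from typing import Optional

IDLE_PATTERN = ["ceiling", "window", "learner"]  # assumed standard operation

def gaze_target(pressure_event: bool, manual_override: Optional[str] = None) -> str:
    if manual_override is not None:   # instructor moves the gaze intentionally
        return manual_override
    if pressure_event:                # sensor linkage: look toward the learner
        return "learner"
    return random.choice(IDLE_PATTERN)

print(gaze_target(pressure_event=True))  # -> "learner"
```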
- In Modification 4, a camera capable of photographing the learner H is provided at the position of the eye 321 on the head of the human body model 2a.
- The camera is a fixed wide-angle camera, and the head of the human body model 2a is structured so that it can move left and right and up and down.
- The image captured by the camera can be displayed in real time on the mixed reality display worn by the learner H, and can also be recorded on the hard disk or the like of a personal computer wirelessly connected to the display.
- The learning system 1a according to Modification 4 otherwise has the same structure and effects as the learning system 1a according to the second embodiment; descriptions of the common structure and effects are omitted, the same reference numerals are used, and Modification 4 is not shown in the drawings.
- According to Modification 4, video observing the learner H can be captured from the patient's viewpoint during training simulating examinations and treatments, and the video can be obtained in real time.
- After training, the learner can review the video recorded on the personal computer's hard disk on another monitor.
- The learner H can thus objectively view the practitioner's facial expressions and actions as seen by the patient during actual treatment, and can experience the patient's psychology, such as how the practitioner is perceived; both the treating side and the treated side can be learned about in a single training session.
- The captured images can also be output simultaneously to a large monitor and shared with the other learners waiting their turn, in addition to the learner currently training. This helps the next learner understand the patient's perspective and state of mind before beginning, which is expected to improve the learning efficiency of the whole group.
- The camera is not limited to the structure described above and may, for example, be movable, as long as it is capable of photographing the learner.
- A camera capable of photographing the learner may also be provided at a position equivalent to the eye.
- If a stereo camera is installed, another learner playing the role of the patient can wear a mixed reality display and observe the image, immersing himself or herself in the three-dimensional environment from the patient's perspective and experiencing the patient's psychology.
- In Modification 5, the learning system 1a' uses a human body model 2a' (headless, torso only). As a first virtual extension 31a' applied to the human body model 2a', a two-dimensional CT image 311 (corresponding to the above-mentioned "medical image"; the same applies below) of organs, sized appropriately to fit the human body model 2a', and a three-dimensional anatomical image 312 can be superimposed on the same screen.
- FIGS. 4 to 7 show the entire image displayed on the mixed reality display 3a in the learning system 1a', as viewed by the learner H.
- The first virtual extension 31a' shown in these figures is a two-dimensional CT image 311 of the chest (multiple overlapping chest CT cross-sectional images).
- Several handles (three in this modification) and buttons (eight in this modification) rendered as virtual extensions are displayed around the human body model 2a'.
- Each handle is used to change the display position of an image and is operated by the learner H grasping it on the display screen.
- Each button is used to switch the display of an image and is operated by the learner H pressing it on the display screen.
- In the learning system 1a', the learner H can grasp the first handle 313 in the displayed image and move it up and down to display a chest CT transverse image at any height. For example, when the learner H grasps the first handle 313 and moves it downward from the position shown in FIG. 4, the position of the displayed chest CT transverse image gradually lowers, as shown in FIGS. 5(a) and (b). Conversely, when the learner H raises the grasped first handle 313, the position of the chest CT transverse image gradually rises (not shown). In other words, the raising and lowering of the first handle 313 is linked to the display position of the chest CT transverse image.
- Likewise, the learner H can grasp the second handle 314 in the displayed image and move it forward and backward to display a chest CT coronal section image at any depth.
- When the learner H pushes the grasped second handle 314 toward the back, the position of the displayed chest CT coronal section image gradually moves toward the back, as shown in FIGS. 6(a) and (b).
- Conversely, when the learner H pulls the grasped second handle 314 toward the front, the position of the chest CT coronal section image gradually moves toward the front (not shown).
- In other words, the forward and backward movement of the second handle 314 is linked to the display position of the chest CT coronal section image.
- The learner H can also grasp the third handle 315 in the displayed image and move it left and right (left and right in FIGS. 4 to 6; front and back in FIG. 7(a)) to display a chest CT sagittal image at any position.
- When the learner H grasps the third handle 315 and pushes it from the position shown in FIG. 7(a) toward the back of the displayed image, the position of the displayed chest CT sagittal image gradually moves toward the back.
- Conversely, when the learner H pulls the grasped third handle 315 toward the viewer, the position of the chest CT sagittal image gradually moves toward the viewer (not shown).
- In other words, the left-right (front-back) movement of the third handle 315 is linked to the display position of the chest CT sagittal image. The linkage for all three planes is summarised in the sketch below.
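- The handle-to-slice linkage for all three planes can be captured in one sketch; the volume shape, axis convention, and normalised 0.0 to 1.0 handle travel are assumptions, not details given by the embodiment.

```python
# Sketch only: each handle's normalised travel indexes the CT volume along
# its anatomical plane (transverse / coronal / sagittal).
import numpy as np

ct_volume = np.zeros((200, 512, 512), dtype=np.int16)  # dummy chest CT volume

HANDLE_AXES = {"first": 0, "second": 1, "third": 2}  # assumed axis convention

def slice_for_handle(volume: np.ndarray, handle: str, travel: float) -> np.ndarray:
    """Map a handle's 0.0-1.0 travel to the corresponding CT cross-section."""
    axis = HANDLE_AXES[handle]
    index = int(travel * (volume.shape[axis] - 1))
    return np.take(volume, index, axis=axis)

mid_transverse = slice_for_handle(ct_volume, "first", 0.5)  # mid-chest transverse image
```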
- The first button 316 is an ON/OFF switch for displaying the 2D CT image 311.
- When it is ON, the learning system 1a' displays the 2D CT image 311 (in FIGS. 4 to 7(a), a 2D CT image of the chest), projecting it superimposed onto the human body model 2a'.
- The second button 317 is an ON/OFF switch for displaying the 3D anatomical image 312.
- When it is ON, the learning system 1a' displays the 3D anatomical image 312 (in FIG. 7(b), a 3D anatomical image of the chest), projecting it superimposed onto the human body model 2a'.
- By operating these buttons, the images shown in FIGS. 7(a) and (b) can be displayed alternately; the button behaviour is sketched below.
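- The two buttons amount to independent ON/OFF flags, as in this sketch; the class and method names are assumptions for illustration.

```python
# Sketch only: independent toggles for the 2D CT image (311) and the 3D
# anatomical image (312), allowing alternate or simultaneous display.
class OverlayState:
    def __init__(self) -> None:
        self.show_ct_2d = False
        self.show_anatomy_3d = False

    def press_first_button(self) -> None:    # toggles the 2D CT image
        self.show_ct_2d = not self.show_ct_2d

    def press_second_button(self) -> None:   # toggles the 3D anatomical image
        self.show_anatomy_3d = not self.show_anatomy_3d

    def visible_layers(self) -> list:
        layers = []
        if self.show_ct_2d:
            layers.append("2D CT image 311")
        if self.show_anatomy_3d:
            layers.append("3D anatomical image 312")
        return layers

s = OverlayState()
s.press_first_button()
s.press_second_button()
print(s.visible_layers())  # both projected at once, as in FIG. 7(b)
```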
- The learner H can study while appropriately viewing the life-size 2D CT image 311 and 3D anatomical image 312 superimposed and projected stereoscopically as the first virtual extension 31a', and can ultimately learn the spatial positional relationship between the 2D CT image 311 and the 3D anatomical image 312 by associating the two dimensions with the three dimensions.
- The 2D CT image 311 and the 3D anatomical image 312 may be images of a healthy standard body, or images of the organs, bones, and the like of a patient or individual with a lesion or specific characteristics.
- In the latter case, the images can be used not only by learners but also by medical professionals, including doctors, to consider treatment plans and to hold pre-operative meetings before carrying out actual surgery or treatment.
- "Able to superimpose 2D CT images and 3D anatomical images on the same screen" covers both the case where the 2D CT images and 3D anatomical images are displayed alternately in the same position by switching, and the case where they are displayed overlapping each other simultaneously.
- The learning system 1b is another embodiment (the third embodiment) of the learning system 1 and includes a human body model 2b, a mixed reality display 3, and a learning target tool 4. Since the mixed reality display 3 and the learning target tool 4 are the same as in the learning system 1 of the first embodiment, descriptions of the common structures and effects are omitted, and the human body model 2b, which differs, is described below.
- The mixed reality display 3 is not illustrated but is designated "3" for convenience of description.
- The human body model 2b serves as a projection target for the first virtual augmentation 31 and the second virtual augmentation 32 (see FIG. 4).
- The human body model 2b is a life-size training mannequin having a head, torso, and the upper half of the thighs. A cavity is formed in the chest, and a pseudo-structure 21 (see the dashed line in FIG. 4) reproducing the arrangement, shape, and texture of the skin, muscles, bones, pleura, lungs, and blood vessels is fitted into the cavity.
- The skin, muscles, pleura, lungs, and blood vessels are made of soft materials, and the bones are made of hard materials.
- The pseudo-structure may be assembled from purchased (ready-made) models of individual organs and the like, or manufactured by the user with a 3D printer, as described below.
- When the pseudo-structure is produced in-house, it can be made using a 3D printer, resin film (wrap), foamed resin (sponge), various hard and soft resin materials, or a combination of these. Furthermore, whether produced in-house or outsourced, the pseudo-structure can reproduce not only standard organs but also an individual patient's organs captured during a prior examination. In that case, compared with training on a generic model, simulated training that is more faithful to the case and that incorporates visual and tactile sensations can be carried out before operating on a patient with a specific condition.
- Because the pseudo-structure 21 reproduces the shape and hardness of the areas to be treated, such as the skin, a tactile response closer to that of the human body is obtained. The learner H thus receives tactile stimulation as if actually treating a patient in the field, and a higher learning effect can be expected.
- Because the human body model 2b has a pseudo-structure 21 in which the elements are arranged in layers, in training for the procedure of puncturing a target organ as shown in FIG. 4 (using the learning target tool 4), the learner H can learn the techniques with a sense of touch close to that of an actual human body: palpating to find the space between the bones, gauging the force with which to puncture the skin, membrane, and muscle while feeling their resistance, guiding the needle between the bones without damaging muscle fibers or blood vessels, and judging the force and depth with which the needle reaches the target organ.
- Each image projected as the first virtual extension 31 is set to be projected at a position overlapping its equivalent in the pseudo-structure 21 (see the sketch below).
- For example, the bone portion of the pseudo-structure 21 and the projected image of the bones in the first virtual extension 31 occupy overlapping positions.
- This allows the learner to visually recognize, via the mixed reality display 3, the course of the projected blood vessels and the arrangement of the organs and bones (first virtual extension 31), as well as the projected facial expression model (second virtual extension 32); learning through vision is naturally possible as well. (In FIG. 4, the configuration of the pseudo-structure 21 is emphasized for clarity; the image projected as the first virtual extension 31, as shown in FIG. 2, is omitted and only its position is indicated.)
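- The overlap can be thought of as anchoring each virtual layer at the pose of its physical counterpart, as in this sketch; the tracking source and pose values are assumptions for illustration.

```python
# Sketch only: render each virtual anatomy layer at the world pose of its
# physical equivalent inside the pseudo-structure, so bone imagery lands
# on the model's bones.
import numpy as np

T_world_from_model = np.eye(4)                 # mannequin pose from spatial tracking (assumed)
T_world_from_model[:3, 3] = [0.0, 0.9, 1.5]    # e.g. lying on a bed in front of the learner

ELEMENT_POSES = {                              # model-space poses measured at setup (assumed)
    "bones": np.eye(4),
    "lungs": np.eye(4),
    "vessels": np.eye(4),
}

def world_pose(element: str) -> np.ndarray:
    """Pose at which the matching virtual layer must be rendered to overlap
    its physical counterpart."""
    return T_world_from_model @ ELEMENT_POSES[element]

print(world_pose("bones")[:3, 3])  # translation of the bone layer in world space
```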
- The learning system 1c is another embodiment (the fourth embodiment) of the learning system 1 and includes a human body model 2, a mixed reality display 3c, and a learning target tool 4c. Since the human body model 2 is the same as in the learning system 1 of the first embodiment, descriptions of the common structure and effects are omitted, and the mixed reality display 3c and the learning target tool 4c, which differ, are described below.
- The mixed reality display 3c is not illustrated but is designated "3c" for convenience in explaining its differences from the mixed reality display 3.
- The mixed reality display 3c is a head-mounted display (goggle type) capable of implementing MR technology. In addition to the first virtual augmentation 31 and the second virtual augmentation 32 described above, it is configured to visualize a third virtual augmentation 33, which is an examination device (in this embodiment, a CT examination device), superimposed on the classroom R2, the indoor space in which the human body model 2 is installed (see FIG. 5).
- The third virtual augmentation 33 is an image of the examination device 331. Its data is installed in the memory function unit of the mixed reality display; the image generation function unit constructs an image superimposed on the indoor space of the classroom R2, and this image is projected via the display unit.
- Of the examination device 331, the gantry portion is projected into empty space, and the cradle portion is projected superimposed onto an ordinary bed on which the human body model 2 is placed.
- The learning target tool 4c is an AED (automated external defibrillator).
- The learner can view the third virtual extension 33 projected into the space of the classroom R2 via the mixed reality display 3c, and can study by regarding the classroom R2 as an examination room and the human body model 2 installed there (onto which the first virtual extension 31 and the second virtual extension 32 are projected) as the subject to be examined (see FIG. 5).
- The learner can then study or train in preparing the examination device 331 and in the pre-start check procedures while viewing the image of the third virtual extension 33, and, after starting, in moving appropriately according to the positions of the examination device 331 and the human body model 2.
- Because a contrast agent is used in a CT scan, there is a possibility that the subject may develop drug-induced shock.
- With the learning system 1c and a learning method using it, resuscitation training using the learning target tool 4c (AED) can be conducted on the human body model 2 on the assumption that such shock has occurred.
- In other words, the learning system 1c reproduces a special environment within the classroom R2 and makes it possible to simulate responses to situations that may arise in that environment.
- The learning system 1c and the learning method using it allow learners to practice the procedures or treatments they must perform as if treating a patient in a space containing actual examination equipment, without having to prepare that equipment. This provides the learner with visual stimulation as if actually treating a patient in a real setting, and an even greater learning effect can be expected. Furthermore, they make it easy to simulate the presence of virtual examination equipment even when no actual equipment exists, helping learners become accustomed to the surrounding environment (examination room or operating room).
- In each of the learning systems described above, the external output of the mixed reality display can be used to project the image seen by the learner H onto another display, allowing the learner's perspective, situation, and other experiences (successful or unsuccessful examples, etc.) to be shared with other learners. Instructors can also provide guidance and advice to learners while watching these displays.
- The various virtual augmentations displayed on the mixed reality display 3 (3a, 3c) can also be given a function that superimposes, on the relevant area or instrument, a message about the area to be treated or about the correctness of the instrument selected for a particular disease or symptom. For example, when the appropriate puncture area or range is touched, a pop-up message such as "correct" can be displayed, improving the effectiveness of self-study. A sketch of this hit test follows.
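- The pop-up feedback reduces to a hit test against a preset region, as in this sketch; the coordinates, units, and region bounds are assumptions for illustration.

```python
# Sketch only: return the self-study pop-up message for a touched point,
# based on whether it falls inside the preset appropriate puncture range.
def puncture_feedback(x_mm: float, y_mm: float,
                      region=((20.0, 60.0), (35.0, 80.0))) -> str:
    (x_min, x_max), (y_min, y_max) = region
    if x_min <= x_mm <= x_max and y_min <= y_mm <= y_max:
        return "correct"
    return "try again"

print(puncture_feedback(42.0, 50.0))  # -> "correct"
```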
- In each of the learning systems 1 (1a, 1a', 1b, 1c) described above, the first virtual augmentation 31 and the second virtual augmentation 32 displayed on the mixed reality display 3 (3a, 3c) can project facial expressions and body types of people of any age and gender, selected from preset data. This allows more practical learning while visually recognizing differences in body types and organs due to age and gender. In addition, since there is no need to prepare human body models of different ages, genders, and body types, the introduction (procurement) and operating costs associated with owning multiple human body models are reduced, and less storage space is required when not in use.
- Each of the learning systems 1 (1a, 1a', 1b, 1c) described above also makes it possible to learn about rare anatomical or clinical cases.
- Medical accidents can occur when rare anatomical or clinical cases are encountered, and some reported cases have led to the death of the patient.
- For example, a surgeon's lack of knowledge or experience can cause damage to blood vessels during treatment, leading to serious medical accidents.
- In each learning system 1 (1a, 1a', 1b, 1c), it is possible to select which 3D anatomical images to display, making it possible to reproduce and learn about rare anatomical or clinical cases; this can also be used to learn about the risks and the medical safety of procedures in rare cases such as those mentioned above.
- In each of the learning systems 1a, 1a', 1b, and 1c described above, changes in the patient's gaze can be reproduced and displayed as the second virtual extension, so that the patient's gaze turns toward the operator; this increases the sense of realism (especially the sense of tension) for the learner playing the operator's role, and a further improvement in the learning effect can be expected.
- Conventionally, 2D CT images are not observed at life size, and there has been no environment in which 2D CT images and 3D anatomical images can be studied life-size and stereoscopically. No simulator to date has allowed life-size 2D CT images and 3D anatomical images to be superimposed simultaneously and the spatial positional relationship between the two and three dimensions to be learned by association.
- Furthermore, each of the learning systems 1a, 1a', 1b, and 1c adds interactivity to the above-mentioned effects, in effect anthropomorphizing the human body model, which is otherwise a mere machine or tool. Conventionally, there have been no learning systems that use an anthropomorphized, interactive human body model; the learning methods using these systems are therefore expected to be more efficient and effective than learning with a conventional human body model.
Abstract
The invention provides a learning method and a learning system that enable a learner to learn the procedure or measures for a technique to be performed, using learning target tools on a real human body model, while viewing a virtual extension of a physical anatomical model projected onto the human body model via a worn mixed reality device. The learning system (1) comprises: a human body model (2) serving as the projection target; a mixed reality display (3) capable of visualizing a first virtual extension (31), consisting of a physical anatomical model, superimposed on all or part of the human body model (2), the mixed reality display (3) being worn on the head of a learner (H); and a learning target tool (4), which is a medical tool or examination tool with which the technique is to be learned.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024571725A (JPWO2024154647A1) | 2023-01-18 | 2024-01-11 | |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023-006226 | 2023-01-18 | ||
| JP2023006226 | 2023-01-18 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024154647A1 (fr) | 2024-07-25 |
Family
ID=91955894
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2024/000462 (ceased) | Procédé d'apprentissage et système d'apprentissage | 2023-01-18 | 2024-01-11 |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JPWO2024154647A1 (fr) |
| WO (1) | WO2024154647A1 (fr) |
- 2024-01-11: JP application JP2024571725A, published as JPWO2024154647A1 (ja), active, pending
- 2024-01-11: WO application PCT/JP2024/000462, published as WO2024154647A1 (fr), not active, ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004348091A (ja) * | 2003-03-26 | 2004-12-09 | National Institute Of Advanced Industrial & Technology | 実体模型及びこれを用いた手術支援システム |
| JP2012181364A (ja) * | 2011-03-01 | 2012-09-20 | Morita Mfg Co Ltd | 医療用実習装置及び実習用パーツ |
| JP2018112646A (ja) * | 2017-01-11 | 2018-07-19 | 村上 貴志 | 手術トレーニングシステム |
| JP2022507622A (ja) * | 2018-11-17 | 2022-01-18 | ノバラッド コーポレーション | 拡張現実ディスプレイでの光学コードの使用 |
| JP2021096413A (ja) * | 2019-12-19 | 2021-06-24 | 国立大学法人北海道大学 | 気管内吸引の訓練装置 |
Non-Patent Citations (1)
| Title |
|---|
| ANONYMOUS: "Not just viewing, but also dissection, with VR pre-operative simulation", 5 January 2017 (2017-01-05), XP093193328, Retrieved from the Internet <URL:https://www.moguravr.com/spectovive-vr-operation/> * |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119626060A (zh) * | 2024-12-19 | 2025-03-14 | 中国人民解放军陆军军医大学第二附属医院 | 一种互动性穿刺活检术教学系统 |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2024154647A1 (fr) | 2024-07-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11195340B2 (en) | Systems and methods for rendering immersive environments | |
| Issenberg et al. | Simulation and new learning technologies | |
| US20030031993A1 (en) | Medical examination teaching and measurement system | |
| US20120270197A1 (en) | Physiology simulation garment, systems and methods | |
| KR20180058656A (ko) | 현실-증강된 형태학적 방법 | |
| Mostafa et al. | Designing NeuroSimVR: a stereoscopic virtual reality spine surgery simulator | |
| Kuchenbecker et al. | Evaluation of a vibrotactile simulator for dental caries detection | |
| US20230169880A1 (en) | System and method for evaluating simulation-based medical training | |
| Simon et al. | Design and evaluation of UltRASim: An immersive simulator for learning ultrasound-guided regional anesthesia basic skills | |
| WO2024154647A1 (fr) | Procédé d'apprentissage et système d'apprentissage | |
| RU2687564C1 (ru) | Система обучения и оценки выполнения медицинским персоналом инъекционных и хирургических минимально-инвазивных процедур | |
| Vincent-Lambert et al. | A guide for the assessment of clinical competence using simulation | |
| CN118486218A (zh) | 脊柱内镜手术模拟训练系统和方法 | |
| Beltes et al. | Dental Education Tools in Digital Dentistry | |
| Coles | Investigating augmented reality visio-haptic techniques for medical training | |
| Dumay | Medicine in virtual environments | |
| Brown | Simulation Technology | |
| Haase et al. | Virtual reality and habitats for learning microsurgical skills | |
| Violante | Virtual Reality Simulation Transforms Medical Education: Can It Advance Student’s Surgical Skills and Application? | |
| Crossan | The design and evaluation of a haptic veterinary palpation training simulator | |
| Botelho et al. | Virtual Reality for Pediatric Trauma Education-A Preliminary Face and Content Validation Study | |
| Luursema et al. | Stereopsis in medical virtual-learning-environments | |
| Norkhairani et al. | Simulation for laparoscopy surgery with haptic element for medical students in HUKM: a preliminary analysis | |
| Sainsbury | Development and evaluation summaries of a percutaneous nephrolithotomy (PCNL) surgical simulator | |
| Woo | Immersive Learning of Bimanual Haptic Intravenous Needle Insertion in Virtual Reality: Developing a Simulator for Nursing Students |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24744570; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2024571725; Country of ref document: JP |
| | NENP | Non-entry into the national phase | Ref country code: DE |