
WO2024154647A1 - Learning method and learning system - Google Patents

Learning method and learning system

Info

Publication number
WO2024154647A1
WO2024154647A1 (international application PCT/JP2024/000462)
Authority
WO
WIPO (PCT)
Prior art keywords
learning
human body
learner
model
mixed reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2024/000462
Other languages
French (fr)
Japanese (ja)
Inventor
礼司 片山
圭 山田
正樹 樫原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kurume University
Original Assignee
Kurume University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kurume University filed Critical Kurume University
Priority to JP2024571725A (national phase publication JPWO2024154647A1)
Publication of WO2024154647A1
Current legal status: Ceased

Classifications

    • G06T 19/00 — Manipulating 3D models or images for computer graphics (Section G, Physics; G06, Computing or calculating, counting; G06T, Image data processing or generation, in general)
    • G09B 23/28 — Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes, for medicine (G09, Education; G09B, Educational or demonstration appliances)
    • G09B 23/30 — Anatomical models
    • G09B 5/02 — Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G09B 9/00 — Simulators for teaching or training purposes

Definitions

  • The present invention relates to a learning method and a learning system. More specifically, it relates to a learning method and a learning system that enable a learner to learn the steps of a procedure or treatment to be performed by applying a learning target instrument to a real human body model while viewing, through a worn mixed reality display, a virtual augmentation of a physical anatomical model projected onto that model.
  • Traditionally, in medical learning settings such as university medical faculties, nursing high schools, and vocational schools, human body models known as medical simulators, such as the one described in Non-Patent Document 1 below, have been used for study and training in human anatomy and procedures.
  • The human body model described in Non-Patent Document 1 reproduces part of the human body (lower jaw to chest to right shoulder) tailored to the purpose of learning (instruction and practice of medical techniques such as puncture and intravenous catheter management); the bone structure and blood vessels of the modeled parts are accurately reproduced, and it is said to enable practical training from selecting the puncture position through inserting the catheter.
  • The simulated experience system described in Patent Document 1 includes: a video display device; a main body formed in the same or nearly the same shape as an object of education, research, or training; a controller provided on the main body, having a signal transmitter capable of transmitting a signal for synchronizing the movement of an image of a controlled object simulating the object displayed on the video display device with the operation of the main body; a signal receiver connected to the controller and the video display device and capable of receiving the transmitted signal; and a computer having a calculation unit, connected to the signal receiver, that analyzes the operation of the main body from the received signal and calculates operation data, an image generation unit that generates an image of the controlled object from data on the shape of the object, a synchronization processing unit that processes the operation data so that the generated image of the controlled object moves in accordance with the operation of the main body, and an image output unit that outputs the processed image of the controlled object to the video display device.
  • Patent Document 1 thus lets the user touch a main body with the same or nearly the same shape as an educational object while operating an image imitating that object on the video display device, stimulating the user's senses of sight and touch and providing an intuitive, immersive simulated experience.
  • The human body models of Non-Patent Document 1, which represent existing technology, are real objects that provide a sense of realism and texture during learning, making them well suited to confirming procedures that use the target instruments.
  • However, these human body models are usually limited to the body parts relevant to the learning purpose and omit surrounding structures not intended for learning, making them impossible or unsuitable to use outside that purpose.
  • Although full-body models do exist, they are expensive compared with partial models, and the level of detail differs from part to part, so they can hardly be called general purpose.
  • The simulated experience system described in Patent Document 1 allows learning by viewing objects such as organs on a video display device and touching a controller that imitates them, but the objects themselves remain virtual, with no physical substance.
  • The system is therefore not suited to practical training such as confirming the steps of procedures that use the target instruments.
  • The present invention was devised in light of the above, and aims to provide a learning method and learning system that enable a learner to learn the steps of a procedure or treatment to be performed by applying the target instruments to a real human body model while viewing, through a worn mixed reality display, a virtual augmentation of a physical anatomical model projected onto that model.
  • The learning method of the present invention is carried out using a human body model as a projection target; a mixed reality display capable of visualizing a first virtual augmentation, which is a physical anatomical model, superimposed on part or all of the human body model; and a learning target instrument, which is a medical or examination instrument, for learning a procedure. It includes a first step in which a learner wears the mixed reality display and views, through it, the human body model and the first virtual augmentation projected onto the model, and a second step in which the learner, having completed the first step, picks up the learning target instrument and applies it to the human body model while viewing the model and the first virtual augmentation, thereby learning the steps of the procedure or treatment to be performed.
  • In the first step, the learner completes preparation by wearing the mixed reality display, through which the human body model and the first virtual augmentation projected onto it can be viewed.
  • The term "learner" covers not only pre-employment students at medical schools, nursing schools, vocational schools, and the like who aim to become medical professionals, but also those who already are. For example, new graduates of medical schools may need to become familiar with equipment different from what they used at school, or may undergo practical training beyond what school provided; those who graduated some time ago may train on newly introduced equipment or new treatment methods, or continue learning to further improve their skills. These individuals are therefore also included in the term "learner."
  • The learning method of the present invention is primarily used by learners, but this does not exclude instructors (those who perform the method as a model when explaining it before or after learning).
  • Examples of the "first virtual augmentation” include images of human organs and bones. Projecting images of organs, etc. onto appropriate locations on the human body model increases the sense of immersion in learning and also allows for advance confirmation of the positions of organs, etc. Furthermore, the “first virtual augmentation” includes not only single images of human organs, etc., but also projection of multiple images in an overlapping manner (superimposed manner).
  • One example of projection in an overlapping manner is a manner in which an image of a bone and an image of an organ located below the same bone image are projected in an overlapping manner. In this case, more practical learning is possible by referring to the arrangement of the multiple projected images.
  • The image data projected as the "first virtual augmentation" may be installed in the mixed reality display, stored in an auxiliary storage device connected to it, or received from an external device such as a server connected to the mixed reality display by wire or wirelessly. Likewise, the image processing for the "first virtual augmentation" may be performed by a function built into the mixed reality display, or the display may receive data already processed by an external device such as a wirelessly connected server; the sketch below illustrates the two data paths.
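  • As an illustration of the two sourcing options above, the following minimal Python sketch loads augmentation image data either from storage installed on (or attached to) the display or from a connected server. All names here (AnatomyLayer, load_local, fetch_remote, the /layers endpoint) are hypothetical, not part of the patent.

```python
import json
import pathlib
import urllib.request
from dataclasses import dataclass

@dataclass
class AnatomyLayer:
    name: str       # e.g. "bones", "organs" (illustrative layer names)
    mesh_path: str  # location of the image/mesh data to project

def load_local(index_file: str) -> list[AnatomyLayer]:
    """Read layer definitions installed on the display or on attached storage."""
    entries = json.loads(pathlib.Path(index_file).read_text())
    return [AnatomyLayer(**e) for e in entries]

def fetch_remote(server_url: str) -> list[AnatomyLayer]:
    """Receive layer definitions from an external server (wired or wireless)."""
    with urllib.request.urlopen(f"{server_url}/layers") as resp:
        return [AnatomyLayer(**e) for e in json.load(resp)]
```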
  • The mixed reality display can also project the first virtual augmentation onto disembodied parts of the human body model (parts that have no substance, which can also be regarded as empty space). For example, if the only physical part of the model in front of the viewer is the torso, the first virtual augmentation of the missing parts, such as the head, lower limbs, and arms, can be visualized in three dimensions as if they were real parts added to the model. The display of these disembodied parts can be switched on and off as needed.
  • The mixed reality display thus allows a variety of learning to be done even without owning multiple human body models; switching the disembodied parts between display and non-display is expected to improve the efficiency and effectiveness of learning, and the introduction and operating costs associated with owning multiple models can also be reduced (a minimal toggle sketch follows).
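  • A minimal sketch of the show/hide switching described above, assuming a torso-only physical model; the VirtualBody class and part names are illustrative, not from the patent.

```python
class VirtualBody:
    def __init__(self, physical_parts: set[str], all_parts: set[str]):
        # parts with no physical counterpart are rendered virtually
        self.disembodied = all_parts - physical_parts
        self.visible = set(self.disembodied)  # shown by default

    def toggle(self, part: str) -> None:
        """Switch one disembodied part between display and non-display."""
        if part not in self.disembodied:
            return  # physical parts are always present; nothing to toggle
        if part in self.visible:
            self.visible.discard(part)
        else:
            self.visible.add(part)

body = VirtualBody(physical_parts={"torso"},
                   all_parts={"torso", "head", "arms", "lower_limbs"})
body.toggle("head")  # hide the virtual head for this exercise
```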
  • In the second step, when learning the steps of the procedure or treatment, the learner picks up the learning target instrument that would be used in the actual treatment or examination and applies it to the human body model onto which the first virtual augmentation is projected.
  • Medical instruments include, for example, puncture needles, drainage tubes, suture needles, syringes, forceps, scalpels and other medical blades, and plates and pins used in fracture treatment.
  • Examination instruments include, for example, probes for ultrasound diagnostic equipment, electrodes for electrocardiographs, and endoscopes. Needless to say, these are merely examples, and various instruments can be the subject of learning.
  • The learning method of the present invention makes it possible to change the content of the first virtual augmentation projected by the mixed reality display; by projecting various images or videos (first virtual augmentations) onto one human body model, the same effect as actually owning multiple models can be obtained.
  • Compared with conventional learning methods, the learning method of the present invention therefore requires lower procurement costs and less storage space when not in use.
  • The above-mentioned learning method may be such that the mixed reality display can visualize a second virtual augmentation, which is a facial expression model of a patient role, superimposed on the facial portion of the human body model; the second virtual augmentation is applied to the human body model in the first and second steps, and in at least the second step the learner studies while viewing, as appropriate, the patient's facial expression model projected by the second virtual augmentation, or while viewing it and engaging in conversation according to the situation.
  • In other words, the second virtual augmentation is visualized (applied) superimposed on the facial portion of the human body model, allowing the learner to study while also visually checking the projected facial expression model of the patient role as appropriate, or to study while checking it and conversing according to the situation.
  • This learning method allows trainees to practice the procedure or treatment as if they were dealing with an actual patient, without having to prepare an actual patient or a role-playing patient (in other words, even though they are only dealing with a human body model); the trainee receives visual stimulation as if actually treating a patient in a real setting (that is, a realistic visual experience), which is expected to yield an even greater learning effect.
  • The second virtual augmentation may be applied to the human body model not only in the second step but also in the first step.
  • In that case, not only can the trainee directly learn the technique; he or she can also train to observe changes in the patient's facial expression caused by a sudden change in the patient's condition before the procedure begins, and to talk to the patient to ease anxiety and tension.
  • The facial expression model of the patient represented by the second virtual augmentation may be, for example, a standard model preset in the mixed reality display, or it may be the face of the instructor or a fellow attendee. If the instructor or a fellow attendee serves as the patient's facial expression model, a sense of tension is introduced into the learning, which is expected to produce a strong learning effect (conversely, it may introduce humor and support effective learning in a relaxed atmosphere).
  • Image processing software may also be used to display, in real time, facial expressions captured by a camera. This may be combined with a setting that generates a sound such as "Ouch!" in accordance with the facial expression of the standard model playing the patient role in the second virtual augmentation; a brief sketch of the live-expression option follows.
  • The model and the learner may also be set up to converse, allowing training in how to respond flexibly to situations.
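  • A hedged sketch of the live-expression option described above: frames from a camera pointed at the instructor or attendee become the patient-face texture. OpenCV's capture API is real; apply_face_texture() is a hypothetical stand-in for the display's texture-update call.

```python
import cv2  # OpenCV

def apply_face_texture(frame) -> None:
    """Stand-in for the mixed reality display's face-texture update call."""
    print("updating patient face texture:", frame.shape)

def stream_instructor_face(device_index: int = 0, max_frames: int = 300) -> None:
    cap = cv2.VideoCapture(device_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break  # camera unavailable or stream ended
            apply_face_texture(frame)
    finally:
        cap.release()
```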
  • The above-mentioned learning method may also be such that, in the facial expression model visualized by the second virtual augmentation, at least the depiction of the eyes is set so that the gaze can be moved toward the learner at appropriate times, and the learner can recognize that gaze via the mixed reality display.
  • In other words, while the learner is studying and visually checking the facial expression model of the patient role projected by the second virtual augmentation, the patient role's line of sight can be changed, and the learner can recognize this.
  • The timely movement of the gaze may be, for example, a standard action preset (programmed) in the mixed reality display, but is not limited to this. A supervising instructor or attendee observing the learner's training may use software or the like to move the gaze intentionally, or a pressure or pressure-sensitive sensor provided in the human body model may be linked to the software of the mixed reality display so that the gaze moves toward the learner when a specified pressure is detected. Furthermore, when linked to such a sensor, the presence or absence of gaze movement and its duration may be set to vary with the strength of the detected pressure, as in the sketch below.
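  • A minimal sketch of the pressure-linked gaze rule just described; the threshold and duration values are illustrative assumptions, not values from the patent.

```python
def gaze_response(pressure: float,
                  threshold: float = 2.0,  # assumed trigger pressure (N)
                  strong: float = 5.0      # assumed "strong touch" pressure (N)
                  ) -> tuple[bool, float]:
    """Return (move_gaze_toward_learner, hold_duration_in_seconds)."""
    if pressure < threshold:
        return (False, 0.0)  # below the trigger: no gaze movement
    # stronger touch -> longer gaze, capped at 3 s (illustrative scaling)
    duration = min(3.0, 1.0 + 2.0 * (pressure - threshold) / (strong - threshold))
    return (True, duration)

print(gaze_response(1.0))  # (False, 0.0)
print(gaze_response(5.0))  # (True, 3.0)
```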
  • As noted above, the second virtual augmentation may be applied to the human body model in the first step as well as in the second step.
  • The facial expression model in this aspect of the learning method may be a standard model preset in the mixed reality display, or may be that of a teaching instructor, an attendee, or the like.
  • The above-mentioned learning method may be such that the first virtual augmentation allows medical images and three-dimensional anatomical images, set to an appropriate size to fit the human body model, to be superimposed on the same screen.
  • Here, "medical images" includes two-dimensional computed tomography images (hereinafter "2D CT images"), two-dimensional magnetic resonance images ("2D MRI images"), ultrasound images ("echo images"), X-ray images, and nuclear medicine images ("RI images"), and does not exclude other types of medical images.
  • The learner can thus learn while viewing, as appropriate, the stereoscopic, life-size medical images and 3D anatomical images superimposed and projected by the first virtual augmentation.
  • The learner can study the spatial relationship between the medical images (especially 2D ones) and the 3D anatomical images by associating them, which enables training in understanding how the two relate, something many students and inexperienced practitioners (doctors and technicians) have struggled with; this deepens understanding and can be expected to further improve the learning effect.
  • The medical images and 3D anatomical images may be, for example, standard images of a healthy body, but are not limited to this; they may be images of organs, bones, etc. of a patient or individual with a lesion or specific characteristics, or an individual's actual captured medical images together with 3D anatomical images constructed from them.
  • When images of the organs, bones, etc. of a patient with a lesion are applied, they can be used not only by learners but also by medical professionals, including doctors, when considering treatment plans and holding pre-operative meetings before actual surgery or treatment.
  • "Able to superimpose medical images and 3D anatomical images on the same screen" means that both kinds of image can share the screen, covering both simultaneous overlapping display and alternating display in which the two are switched at the same position; the sketch below illustrates the two modes.
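  • A minimal sketch of the two same-screen modes named above (simultaneous overlap, and alternating switch at the same position); the class and layer names are illustrative only.

```python
class SameScreenView:
    def __init__(self):
        self.mode = "overlap"           # "overlap" or "alternate"
        self.current = "medical_image"  # layer shown in alternate mode

    def visible_layers(self) -> list[str]:
        if self.mode == "overlap":
            return ["medical_image", "3d_anatomy"]  # both at once
        return [self.current]                       # one at a time

    def switch(self) -> None:
        """In alternate mode, swap which image occupies the position."""
        self.current = ("3d_anatomy" if self.current == "medical_image"
                        else "medical_image")

view = SameScreenView()
view.mode = "alternate"
view.switch()
print(view.visible_layers())  # ['3d_anatomy']
```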
  • The above-mentioned learning method may be such that a pseudo-structure reproducing any one of skin, muscle, bone, blood vessel, membrane, or organ, or a combination of multiple such pseudo-structures, is applied to at least the specific part of the human body model that is the learning target.
  • With this learning method, when the learning target instrument is applied to the human body model in training that simulates an examination or treatment, a pseudo-structure reproducing skin or the like is present where the examination or treatment is performed, so a tactile response closer to that of the human body is obtained.
  • This allows the learner to receive tactile stimulation as if performing treatment on a patient in an actual setting (in other words, a realistic sense of touch), and a higher learning effect can be expected.
  • The human body model may have multiple pseudo-structures of skin, muscle, bone, blood vessel, membrane, or organ applied to the specific part to be studied. For example, if skin, bone, and a target organ are selected as pseudo-structures and arranged in a superimposed (layered) manner, then in training for puncturing the target organ the learner can learn by actual touch how to palpate the gap between the bones, how much force is needed to pierce the skin, how to insert the needle between the bones, and with what force and to what depth to insert the needle into the target organ. If pseudo-structures of muscle, membrane, and blood vessel are also applied, the learner can feel the resistance of the muscle or membrane when puncturing, and learn the technique of inserting the needle between muscle fibers or blood vessels without damaging them.
  • Although the pseudo-structure is applied to "at least the specific part to be studied," it may be applied to the entire human body model rather than just a part. Applying it to the whole body makes the structure more complex, but one model can then be used to train a variety of examinations and procedures, improving convenience and versatility.
  • The pseudo-structures may be made of soft materials for skin, muscles, blood vessels, membranes, and organs, and of semi-hard or hard materials for bones.
  • With this learning method, when the learning target instrument is applied to the human body model in training that simulates the above-mentioned examination or treatment, a pseudo-structure made of a material whose hardness corresponds to the body part being examined or treated is present, providing a tactile response even closer to that of the human body.
  • This allows the learner to receive tactile stimulation as if actually performing treatment on a patient in a real setting (in other words, a realistic sense of touch), and an even greater learning effect can be expected.
  • For the soft materials, resins, rubbers, and similar materials with a hardness close to that of the target skin, muscles, blood vessels, membranes, and organs are preferably used.
  • For the hard materials, resins, rubbers, stone, and metal with a hardness close to that of the target bones are preferably used.
  • Semi-hard materials, such as resins and rubbers prepared to be harder, may be used when reproducing cartilage.
  • The above-mentioned learning method may be configured such that the human body model and the learning target instrument are provided with a position sensor and a pressure or pressure-sensitive sensor, and when the position information detected by the position sensor and/or the pressure value detected by the pressure or pressure-sensitive sensor exceeds a preset value, the second virtual augmentation changes to an expression of discomfort or distress.
  • In that case, the facial expression model represented by the second virtual augmentation changes to one expressing discomfort or distress, allowing the learner to determine immediately that an inappropriate action was performed, and to train in performing treatment while observing (visually checking) the facial expression, just as when treating a real person.
  • Whether or not an action would cause pain in a human body is judged by whether the position information and/or pressure value detected by the position sensor and the pressure or pressure-sensitive sensor installed on the human body model and the learning target instrument exceeds the preset value.
  • When the preset value is exceeded, the facial expression model represented by the second virtual augmentation displayed on the mixed reality display changes to an expression of discomfort or distress.
  • For example, a pressure or pressure-sensitive sensor is provided on the specific part of the human body model that is the learning target, and during training in pressing an examination device that is the learning target instrument (e.g., the probe of an ultrasound diagnostic device) against that part, the sensor attached to the model transmits a pressure value; if the value received by the mixed reality display, directly or via an external device that analyzes it, is inappropriate (e.g., the pressing force is too strong), the facial expression model represented by the second virtual augmentation changes to an expression of discomfort or distress.
  • This can be combined with a setting that generates a sound such as "Ouch!" in response to the change to an expression of discomfort or distress, which increases the sense of realism and lets the training proceed with a sense of tension; a minimal sketch of this sensor-to-expression rule follows.
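  • A minimal sketch, under assumed threshold values, of the rule just described: when received pressure and/or position error exceed preset values, the facial expression switches to distress and a voice cue plays. The numbers and function names are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class SetValues:
    max_pressure: float = 4.0         # N; assumed preset limit
    max_position_error: float = 10.0  # mm off the intended site; assumed

def play_sound(path: str) -> None:
    print(f"[audio] {path}")  # stand-in for the real playback call

def evaluate(pressure: float, position_error: float, limits: SetValues) -> str:
    """Return the expression state for the second virtual augmentation."""
    if (pressure > limits.max_pressure
            or position_error > limits.max_position_error):
        play_sound("ouch.wav")  # hypothetical sound asset
        return "distress"
    return "neutral"

print(evaluate(5.2, 3.0, SetValues()))  # distress: pressing too hard
print(evaluate(2.0, 1.0, SetValues()))  # neutral
```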
  • The above-mentioned learning method may be such that the human body model is provided with a camera capable of photographing the learner, placed at the eye position if the model has a head, or at a position corresponding to the eyes if it does not.
  • With this learning method, it is possible to capture, and later obtain, video of the learner during training (learning) that simulates examinations and treatments, shot from the human body model's side, that is, from the patient's point of view.
  • As a result, the learner can objectively view the facial expressions and behavior that a patient would see from the practitioner during actual treatment, and can also experience the patient's psychology, such as how the practitioner is perceived; both the treating and the treated side can thus be learned in a single training session.
  • The aforementioned "camera" may be provided in any way that allows it to photograph the learner; if fixed, a wide angle of view is preferable, and it may also be movable.
  • Movable arrangements include a structure in which the head or eye unit (including one located at a position corresponding to the eyes) can be operated manually, automatically, or remotely so as to face and photograph the learner.
  • The images captured by the camera may be displayed in real time on the mixed reality display worn by the learner, or recorded on the hard disk or the like of a personal computer (hereinafter "PC") connected to the display.
  • When displayed in real time, the images may appear in a sub-window that opens next to the learner's view of the human body model, or may be shown full screen, switching with that view as appropriate.
  • When recorded, the learner and others can review the images on another monitor or the like after the training (learning).
  • The captured images may also be output simultaneously to a large monitor, in which case they can be shared with the other learners waiting their turn; the learner who trains next can readily grasp the patient's perspective and state of mind, which is expected to improve the learning efficiency of the whole group. The sketch below illustrates routing one camera frame to these destinations.
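  • A minimal sketch of fanning a captured frame out to the three destinations named above (sub-window, recording on the connected PC, shared large monitor); the sink functions are illustrative stand-ins, not an API from the patent.

```python
from typing import Any, Callable

Frame = Any  # placeholder for an image-frame type

def to_sub_window(frame: Frame) -> None:
    """Show beside the model view on the learner's display (stand-in)."""

def to_recorder(frame: Frame) -> None:
    """Append to a file on the PC connected to the display (stand-in)."""

def to_wall_monitor(frame: Frame) -> None:
    """Mirror to a large monitor for waiting learners (stand-in)."""

def route_frame(frame: Frame, sinks: list[Callable[[Frame], None]]) -> None:
    for sink in sinks:
        sink(frame)

route_frame(object(), [to_sub_window, to_recorder, to_wall_monitor])
```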
  • The above-mentioned learning method may be such that the human body model is installed in an indoor space usable as a teaching space, and the mixed reality display can visualize a third virtual augmentation, which is a model of an examination device and/or a treatment device, superimposed on that indoor space; in both or either of the first and second steps, the learner learns the procedure or treatment to be performed while also viewing, via the mixed reality display, the third virtual augmentation projected onto the indoor space.
  • The expression "examination device and/or treatment device" used in this embodiment covers both an examination device and a treatment device together, or either one alone.
  • The indoor space can be treated as an examination or treatment room, and the human body model placed there as the subject to be examined or treated, for learning purposes.
  • The learner may learn the procedure or treatment to be performed while additionally viewing the third virtual augmentation projected into the indoor space via the mixed reality display.
  • For example, the learner may learn or practice preparing the examination device and checking procedures before starting, and, after starting, may learn or practice responses and his or her own movements according to the positions of the examination device and the human body model.
  • Examples of images projected by the third virtual augmentation include MRI (magnetic resonance imaging) equipment, CT (computed tomography) equipment, X-ray equipment, radiation therapy equipment, proton beam therapy equipment, and endoscopic equipment (endoscopes and the surgical tools used with them).
  • Learners can train their movements by moving the human body model relative to the projected equipment, or by moving the projected equipment's arm and applying it to the model, as in the placement sketch below.
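  • A minimal sketch of anchoring a piece of virtual equipment in the room's coordinate frame and nudging its arm relative to the mannequin; the classes, poses, and numbers are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float  # metres in the room coordinate frame

class VirtualEquipment:
    def __init__(self, kind: str, pose: Pose):
        self.kind = kind  # e.g. "CT gantry" (illustrative)
        self.pose = pose

    def move_arm(self, dx: float, dy: float, dz: float) -> None:
        """Reposition the projected arm relative to the room/mannequin."""
        self.pose = Pose(self.pose.x + dx, self.pose.y + dy, self.pose.z + dz)

ct = VirtualEquipment("CT gantry", Pose(1.5, 0.0, 0.0))
ct.move_arm(-0.2, 0.0, 0.1)  # practise aligning the gantry with the model
print(ct.pose)  # Pose(x=1.3, y=0.0, z=0.1)
```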
  • This learning method allows learners to practice the procedure or treatment they need to perform without preparing actual examination or treatment equipment, as if they were treating a patient in a space containing that equipment.
  • This provides visual stimulation as if actually treating a patient in a real setting (in other words, a realistic visual experience), and an even greater learning effect can be expected.
  • The learning system of the present invention comprises a human body model serving as the projection target; a mixed reality display that can visualize a first virtual augmentation, which is a physical anatomical model, superimposed on part or all of the human body model, and that can be worn on the learner's head; and a learning target instrument, which is a medical or examination instrument used for procedural learning.
  • Wearing the mixed reality display, the learner can view the human body model and the first virtual augmentation projected onto it.
  • The learner can pick up the learning target instrument that would be used in the actual treatment or examination and apply it to the human body model onto which the first virtual augmentation is projected.
  • The learner thus gains a visual sense of realism and immersion, as if standing before a patient, and because the actual instrument is held and applied to a physical model, rather than to a virtual image that involves no sensation, a tactile response close to that of real work is obtained.
  • The learner therefore receives visual and tactile stimulation as if actually treating a patient in a real setting (in other words, realistic visual and tactile sensations), and a high learning effect can be expected.
  • The "human body model" may be anything onto which the first virtual augmentation described above can be projected; for example, a life-size model such as a so-called training mannequin is preferably used. Moreover, as described below, the model need not be full-body size, since the mixed reality display can project the first virtual augmentation onto the parts that are physically absent; it may consist only of the parts needed for learning (for example, only the torso, without upper limbs, lower limbs, or head).
  • The "learning target instrument" may be any medical or examination instrument that is the subject of skill learning, and various instruments may be the subject of learning.
  • Medical instruments include needles, tubes, scalpels, and other medical blades.
  • Examination instruments include probes for ultrasound diagnostic equipment and electrodes for electrocardiographs.
  • The "mixed reality display" is at least capable of visualizing the first virtual augmentation, a physical anatomical model, superimposed on part or all of the human body model, and is constructed to be wearable on the learner's head.
  • For example, a head-mounted display (goggles, glasses, helmet, etc.) capable of implementing so-called XR (extended reality) technology, which encompasses MR (mixed reality) and AR (augmented reality), is used.
  • Examples of the "first virtual augmentation” include images of human organs and bones. Projecting images of organs, etc. onto appropriate locations on the human body model increases the sense of immersion in learning and also allows for advance confirmation of the positions of organs, etc. Furthermore, the “first virtual augmentation” includes not only single images of human organs, etc., but also projection of multiple images in an overlapping manner (superimposed manner).
  • One example of projection in an overlapping manner is a manner in which an image of a bone and an image of an organ located below the same bone image are projected in an overlapping manner. In this case, more practical learning is possible by referring to the arrangement of the multiple projected images.
  • The image data projected as the "first virtual augmentation" may be installed in the mixed reality display, stored in an auxiliary storage device connected to it, or received from an external device such as a server connected to the mixed reality display by wire or wirelessly. Likewise, the image processing for the "first virtual augmentation" may be performed by a function built into the mixed reality display, or the display may receive data already processed by an external device such as a wirelessly connected server.
  • The content of the first virtual augmentation projected by the mixed reality display can be changed, and by projecting various images or videos (first virtual augmentations) onto one human body model, the same effect as having multiple models can be obtained; compared with the conventional systems for learning treatments and examinations described earlier, the system therefore requires lower procurement costs and less storage space when not in use.
  • The mixed reality display can project the first virtual augmentation onto the physically absent parts of the human body model, so that those parts appear as if actually added to the model.
  • The physically absent parts can be switched between display and non-display as needed.
  • The mixed reality display thus allows various learning activities to be carried out even without owning multiple human body models; switching the absent parts between display and non-display is expected to improve the efficiency and effectiveness of learning, and the introduction and operating costs associated with owning multiple models can also be reduced.
  • The learning system described above may also be configured so that the mixed reality display can visualize a second virtual augmentation, which is a facial expression model of a patient role, superimposed on the face of the human body model.
  • In that case, the second virtual augmentation is visualized (applied) superimposed on the facial portion of the human body model.
  • This allows the learner to study while visually checking, as needed, the facial expression model of the patient role projected by the second virtual augmentation, or to study while checking it and engaging in conversation according to the situation.
  • Such learning serves not only to acquire the technique directly, but also to train in observing changes in the patient's facial expression caused by a sudden change in condition before the procedure starts, and in conversation to ease the patient's anxiety and tension.
  • With this learning system, the student can train in the procedure or treatment to be performed as if dealing with a real patient, without preparing an actual or role-playing patient.
  • This provides the student with visual stimulation as if actually treating a patient in a real setting, and an even greater learning effect can be expected.
  • As described above, the facial expression model of the patient in the second virtual augmentation may be a standard model or a person present, and in the latter case real-time facial expressions processed by video software may be displayed. This may be combined with a setting that generates sound in accordance with the standard model's expression; if a person present serves as the expression model, the model and the learner may be set up to converse, allowing training in responding flexibly to situations.
  • The learning system described above may also be configured so that, in the facial expression model visualized by the second virtual augmentation, the gaze of at least the depicted eyes can be moved toward the learner at appropriate times, and the learner can recognize that gaze via the mixed reality display.
  • The timely gaze movement can be, for example, a standard action preset in the mixed reality display; a learning instructor observing the learner's training may use software or the like to move the gaze intentionally; or a pressure sensor or the like provided on the human body model may be linked to the software of the mixed reality display so that the gaze moves toward the learner when a specified pressure is detected.
  • When linked to a pressure sensor or the like on the human body model, the presence or absence of gaze movement and its duration can be set to vary with the strength of the detected pressure.
  • The second virtual augmentation may be applied to the human body model not only in the second step but also in the first step.
  • The facial expression model in this embodiment of the learning system may be a standard model preset in the mixed reality display, or may be that of a learning instructor, a companion, or the like.
  • The above-mentioned learning system may be one in which the first virtual augmentation is provided so that medical images and three-dimensional anatomical images, set to an appropriate size to fit the human body model, can be superimposed on the same screen.
  • "Medical images" here includes the two-dimensional CT images and other image types mentioned above, and does not exclude other kinds of medical images.
  • The learner can study while viewing, as appropriate, the stereoscopic, life-size medical images and 3D anatomical images superimposed and projected by the first virtual augmentation.
  • The spatial relationship between the medical images (particularly 2D ones) and the 3D anatomical images can be associated and studied on the same screen, enabling training in understanding how the two relate, something many students and inexperienced practitioners have struggled with; this deepens understanding and is expected to further improve the learning effect.
  • "Able to superimpose medical images and 3D anatomical images on the same screen" means that both kinds of image can share the screen, covering both simultaneous overlapping display and alternating display switched at the same position.
  • The medical images and 3D anatomical images may be standard images of a healthy body, images of the organs, bones, etc. of a patient or individual with a lesion or specific characteristics, or an individual's captured medical images together with 3D anatomical images constructed from them.
  • When images of the organs, bones, etc. of a patient with a lesion are used, they can serve not only learners but also medical professionals, including doctors, when considering treatment plans and holding pre-operative meetings before actual surgery or treatment.
  • The learning system described above may be configured such that the human body model and the learning target instrument are provided with a position sensor and a pressure or pressure-sensitive sensor; the mixed reality display has a receiving function capable of receiving the position information detected by the position sensor and the pressure value detected by the pressure or pressure-sensitive sensor, together with a set-value storage function; and when both or either of the received position information and pressure value exceeds a preset value, the second virtual augmentation changes to an expression of discomfort or distress.
  • In that case, the facial expression model represented by the second virtual augmentation changes to one expressing discomfort or distress. This lets the learner determine immediately that an inappropriate action was performed, and allows training while observing the facial expression, just as when treating a real person.
  • Whether an action would cause pain in a human body is judged by whether the position information and/or pressure value detected by the position sensor and the pressure or pressure-sensitive sensor installed on the human body model and the learning target instrument exceeds the preset value.
  • When it does, the facial expression model represented by the second virtual augmentation displayed on the mixed reality display changes to an expression of discomfort or distress.
  • For example, a pressure value is transmitted by a pressure or pressure-sensitive sensor attached to the human body model, and if the value received by the mixed reality display, directly or via an external device that analyzes it, is inappropriate, the facial expression model represented by the second virtual augmentation changes to an expression of discomfort or distress.
  • A setting that generates sound in accordance with the change to an expression of discomfort or distress may be used in combination, in which case the sense of realism increases and training can proceed with a sense of tension. A sketch of the receiving and set-value storage functions follows.
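  • A minimal sketch of the display-side receiving and set-value storage functions described above: sensor packets arrive, are compared with stored limits, and the expression state is updated. The packet format, limit values, and class name are assumptions for illustration.

```python
import json

class DisplayReceiver:
    def __init__(self, max_pressure: float = 4.0, max_pos_error: float = 10.0):
        # set-value storage function: preset limits kept on the display
        self.limits = {"pressure": max_pressure, "pos_error": max_pos_error}
        self.expression = "neutral"

    def on_packet(self, payload: bytes) -> None:
        """Receiving function: parse a sensor packet and update the expression."""
        data = json.loads(payload)  # e.g. {"pressure": 5.2, "pos_error": 3.0}
        exceeded = (data.get("pressure", 0.0) > self.limits["pressure"]
                    or data.get("pos_error", 0.0) > self.limits["pos_error"])
        self.expression = "distress" if exceeded else "neutral"

rx = DisplayReceiver()
rx.on_packet(b'{"pressure": 5.2, "pos_error": 3.0}')
print(rx.expression)  # distress
```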
  • The aforementioned learning system may be such that a pseudo-structure reproducing one of skin, muscle, bone, blood vessel, membrane, or organ, or a combination of multiple such pseudo-structures, is applied to at least the specific part of the human body model that is the subject of learning.
  • With this learning system, when the learning target instrument is applied to the human body model in training simulating examinations and treatments, a pseudo-structure reproducing skin or the like is present where the examination or treatment is performed, so a tactile response closer to that of the human body is obtained. The learner receives tactile stimulation as if actually treating a patient in the field, and a higher learning effect can be expected.
  • The human body model may also have multiple pseudo-structures of skin, muscle, bone, blood vessel, membrane, or organ applied to the specific part to be studied. For example, if skin, bone, and a target organ are selected and arranged in a superimposed manner, then in training for puncturing the target organ the learner can learn, through actual touch, how to palpate the gap between the bones, how much force is needed to pierce the skin, how to insert the needle between the bones, and with what force and to what depth to insert the needle into the target organ. If pseudo-structures of muscle, membrane, and blood vessel are also applied, the learner can feel their resistance when puncturing, and learn to insert the needle between muscle fibers or blood vessels without damaging them.
  • The pseudo-structure may be applied not only to a part of the human body model but to the entire body.
  • In that case, one model can be used to train a variety of examinations and procedures, which is convenient and further improves versatility.
  • The pseudo-structures may be made of soft materials for skin, muscles, blood vessels, membranes, and organs, and of semi-hard or hard materials for bones.
  • With this learning system, when the learning target instrument is applied to the human body model in the training that mimics the above-mentioned examination or treatment, a pseudo-structure made of a material whose hardness corresponds to the body part involved is present, so a tactile response even closer to that of the human body is obtained.
  • This provides the learner with tactile stimulation as if actually treating a patient in the field, and an even greater learning effect can be expected.
  • The learning system described above may be such that the human body model is provided with a camera capable of photographing the learner, placed at the eye position if the model has a head, or at a position corresponding to the eyes if it does not.
  • With the learning system of this embodiment, it is possible to capture, and later obtain, video of the learner undergoing training (learning) that simulates an examination or treatment, shot from the model's, that is, the patient's, point of view.
  • As a result, the learner can objectively view the facial expressions and behavior that a patient would see from the practitioner during actual treatment, and can also experience the patient's psychology, such as how the practitioner is perceived; both the treating and the treated side can thus be learned in a single training session.
  • The aforementioned "camera" may be provided in any way that allows it to photograph the learner; if fixed, a wide angle of view is preferable, and it may also be movable.
  • Movable arrangements include a structure in which the head or eye unit (including one at a position corresponding to the eyes) can be operated manually, automatically, or remotely so as to face and photograph the learner.
  • The images captured by the camera may be displayed in real time on the mixed reality display worn by the learner, or recorded on the hard disk of a PC connected to the display.
  • When displayed in real time, the images may appear in a sub-window next to the learner's view of the human body model, or may be shown full screen, switching with that view as appropriate.
  • When recorded, the learner can review the images on another monitor after the training (learning).
  • The captured images may also be output simultaneously to a large monitor, in which case they can be shared with the other learners waiting their turn; the learner who trains next can readily grasp the patient's perspective and state of mind, which is expected to improve the learning efficiency of the whole group.
  • The learning system described above may also be configured so that the mixed reality display can visualize a third virtual augmentation, which is a model of an examination device and/or a treatment device, superimposed on the indoor space in which the human body model is placed.
  • The expression "examination device and/or treatment device" used in this embodiment covers both an examination device and a treatment device together, or either one alone.
  • The indoor space can be treated as an examination or treatment room, and the human body model placed there as the subject to be examined or treated, for learning purposes.
  • Learning or training can cover preparing the examination device and checking procedures before starting, and, after starting, responses and the learner's movements according to the positions of the examination device and the human body model.
  • This learning system allows students to practice the procedures or treatments they need to perform without preparing actual examination or treatment equipment, as if treating a patient in a space containing that equipment. This provides visual stimulation as if actually treating a patient in a real setting, and an even greater learning effect can be expected.
  • The learning method and learning system of the present invention enable a learner to learn the steps of a procedure or treatment to be performed by applying the learning target instrument to a real human body model while viewing, through a worn mixed reality display, a virtual augmentation of a physical anatomical model projected onto that model.
  • FIG. 1 is a schematic diagram showing a configuration of a learning system according to a first embodiment of the present invention
  • FIG. 2 is an image diagram showing a human body model and a first virtual augmentation projected onto the human body model in the learning system shown in FIG. 1
  • FIG. 3 shows a learning system according to a second embodiment of the present invention, in which (a) is an oblique view before the first and second virtual augmentations are projected onto the human body model, (b) is an oblique view after they have been projected, and (c) is an oblique view after only the bone image has been erased from the first virtual augmentation projected in (b).
  • FIG. 4 is a front view of a modified example (variant 5) of the learning system according to the second embodiment shown in FIG. 3, showing the first virtual augmentation, the second virtual augmentation (chest CT image), virtual operation buttons, and the like projected onto the human body model.
  • FIG. 5 is an explanatory diagram of the learning system of FIG. 4 in use, in which (a) shows the display position of the second virtual augmentation (transverse chest CT image) projected onto the human body model lowered by operation to approximately the middle of the chest in the height direction, and (b) shows the display position lowered further than in (a).
  • FIG. 6 is an explanatory diagram of the learning system of FIG. 4 in use.
  • FIG. 7 is an explanatory diagram of the learning system of FIG. 4 in use, in which (a) shows the display position of the second virtual augmentation (sagittal chest CT image) projected onto the human body model at approximately the middle of the chest in the width direction, and (b) shows a 3D anatomical image simultaneously superimposed on the view of (a) by operation.
  • FIG. 8 is an explanatory diagram showing the configuration of a human body model used in a learning system according to a third embodiment of the present invention.
  • FIG. 9 is an image diagram showing a third virtual augmentation projected onto a classroom in a learning system according to a fourth embodiment of the present invention.
  • a learning system 1 used in a classroom R1 includes a human body model 2 as a projection target, a mixed reality display 3 worn by a learner H, and a learning target tool 4 as a target for learning manual techniques by the learner H. Each part of the learning system 1 will be described in detail below.
  • the human body model 2 can be the projection target for the first virtual augmentation described below. In this embodiment, it is a life-size training model doll with a head and torso, and a commercially available chest drain insertion simulator is used.
  • The mixed reality display 3 is a device capable of visualizing a first virtual augmentation 31, which is a physical anatomical model, superimposed on a part or all of the human body model 2, and is provided so as to be wearable on the head of the learner H.
  • The mixed reality display 3 is a head-mounted display (goggle type) capable of implementing MR technology (see FIG. 1).
  • The first virtual augmentation 31 consists of images of a person's bones, organs, blood vessels, and trachea; the data for these images is installed in the memory function unit of the mixed reality display, the image generation function unit constructs an image in which the multiple images are superimposed (superimposed mode), and the resulting image is projected via the display unit. A minimal illustrative sketch of such layer compositing is given below.
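The following is a minimal sketch, not the patent's actual implementation, of how an image generation function unit might composite a superimposed-mode image from separate anatomy layers and hide a single layer (as when only the bone image is erased, FIG. 3(c)). The `AnatomyLayer` class, layer names, and use of Pillow are assumptions for illustration.

```python
# Compositing a "superimposed mode" image from anatomy layers (sketch).
from dataclasses import dataclass
from PIL import Image

@dataclass
class AnatomyLayer:
    name: str
    image: Image.Image      # RGBA image aligned to the mannequin's frame
    visible: bool = True

def compose_superimposed(layers, size=(512, 512)):
    """Alpha-composite all visible layers, back to front."""
    canvas = Image.new("RGBA", size, (0, 0, 0, 0))
    for layer in layers:
        if layer.visible:
            canvas = Image.alpha_composite(canvas, layer.image)
    return canvas

# Synthetic stand-ins for the installed image data (bones, organs, vessels).
def solid(color, size=(512, 512), alpha=96):
    return Image.new("RGBA", size, color + (alpha,))

layers = [
    AnatomyLayer("bones",   solid((220, 220, 220))),
    AnatomyLayer("organs",  solid((180, 60, 60))),
    AnatomyLayer("vessels", solid((60, 60, 200))),
]

frame = compose_superimposed(layers)            # projected via the display unit
layers[0].visible = False                       # hide only the bone image
frame_without_bones = compose_superimposed(layers)
```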
  • The learning target instrument 4 is the subject of the learner H's procedure learning, and in this embodiment is a chest drain catheter and an inner tube (medical instruments) (see FIGS. 1 and 2).
  • The learning method includes at least the following first and second steps.
  • (1) In the first step, the learner H wears the mixed reality display 3 on his/her head and visually recognizes the human body model 2 and the first virtual augmentation 31 projected thereon through the mixed reality display 3. (2) In the second step, the learner H, having completed the first step, picks up the learning target instrument 4 and applies it to the human body model 2 while visually checking the human body model 2 and the first virtual augmentation 31, thereby learning the procedure or treatment to be performed.
  • Before learning, the instructor places the human body model 2 in an appropriate position (on the bed in FIG. 1), wears the mixed reality display 3, and sets it up so that the first virtual augmentation 31 is correctly projected onto the human body model 2.
  • Specifically, the instructor, wearing the mixed reality display 3, operates the virtual controller that appears on the display to select the first virtual augmentation 31 to be projected, and adjusts its position and the like so that the first virtual augmentation 31 is visualized superimposed on the human body model 2.
  • In the first step, learner H can view the human body model 2 and the first virtual augmentation 31 projected onto it via the mixed reality display 3.
  • In the second step, learner H can pick up the learning target instrument 4 that will be used in the actual treatment and apply it to the human body model 2 onto which the first virtual augmentation 31 is projected.
  • As a result, learner H visually obtains a sense of realism and immersion, as if in front of a patient, and, because the learner holds the actual learning target instrument 4 and applies it to the human body model 2 rather than manipulating a virtual image that involves no sensation, a tactile response close to that of actual work is obtained.
  • Consequently, learner H receives visual and tactile stimulation as if actually treating a patient in a real workplace, and a high learning effect can be expected.
  • In this embodiment, the human body model 2 includes only the head and torso, but the mixed reality display 3 can project a first virtual augmentation, such as virtual upper and lower limbs, three-dimensionally and superimposed onto the parts of the human body model 2 that have no physical presence, allowing the learner to visually identify, and learn about, body parts connected to the part being studied.
  • The mixed reality display 3 can also switch the parts that have no physical presence between display and non-display as necessary.
  • The mixed reality display 3 can easily change the content of the first virtual augmentation 31 to be projected by installing image data, and by projecting various images and videos onto one human body model, it is possible to obtain the same effect as owning multiple human body models.
  • Various types of learning can thus be carried out by changing the contents of the first virtual augmentation 31 or by switching the parts without a physical body between display and non-display. This improves the efficiency and effectiveness of learning, reduces the introduction (procurement) and operating costs associated with owning multiple human body models, and requires less storage space when not in use.
  • The first virtual augmentation may use not only images of standard organs, etc., but also images of the affected areas of individual patients collected during prior examinations, etc.
  • Furthermore, by installing or updating data related to the first virtual augmentation, simulated training for the examination of new or special cases, and practice to become proficient in new treatment methods, are possible.
  • The learning system 1a is another embodiment (second embodiment) of the learning system 1, and includes a human body model 2a, a mixed reality display 3a, and a learning target instrument 4a.
  • The learning system 1a shares structures and effects with the learning system 1 of the first embodiment, so the common structures and effects are omitted, and the differing structures and effects are described below.
  • The mixed reality display 3a is not shown in the drawings, but is referred to as the mixed reality display "3a" for the convenience of explaining the differences from the mixed reality display 3.
  • The human body model 2a can be the projection target for the first virtual augmentation and the second virtual augmentation described below, and in this embodiment is a life-size training mannequin having a head, torso, and the upper half of the thighs (see FIG. 3(a); no special mechanisms such as a chest drain insertion part are provided).
  • The mixed reality display 3a is a head-mounted display (goggles type) capable of implementing MR technology, and is configured to be able to visualize, in addition to the first virtual augmentation 31 described above, a second virtual augmentation 32, which is a facial expression model of the patient role, superimposed on the facial portion of the human body model 2a (see FIGS. 3(b) and (c)).
  • The second virtual augmentation 32 is an image of a person's face; the data for this image is installed in the memory function unit of the mixed reality display, the image generation function unit constructs an image in which it is superimposed on the facial portion of the human body model 2a (superimposed mode), and the resulting image is projected via the display unit.
  • The learning target instrument 4a is an instrument for the learner's procedure learning, and in this embodiment is an ultrasound diagnostic device and its probe (examination instruments) (see FIGS. 3(a) to 3(c)).
  • The second virtual augmentation 32 is visualized (applied) superimposed on the facial portion of the human body model 2a (see FIGS. 3(b) and (c)). This allows the learner to study while appropriately viewing the facial expression model projected as the second virtual augmentation 32. Furthermore, according to the learning system 1a, in addition to directly acquiring skills, learners can also train by observing changes in the patient's (human body model 2a's) facial expression due to a sudden change in condition before the start of the procedure, and by conversing with the patient to ease their anxiety and tension.
  • According to the learning system 1a and the learning method using it, it is possible to train in the procedure or treatment to be performed as if dealing with a real patient, without having to prepare an actual patient or a person playing the patient role.
  • This provides the learner with visual stimulation as if actually treating a patient in a real setting, and an even greater learning effect can be expected.
  • In Modification 1, the learning system 1a is configured to capture facial images of the instructor or attendees, taken by the mixed reality display 3a or an external terminal, as the facial expression model of the patient role represented by the second virtual augmentation 32, and to display real-time facial expressions processed by image processing software.
  • In addition, the facial expression model and the learner are set to be able to converse with each other, so that training in responding flexibly to situations can be carried out.
  • The learning system 1a according to Modification 1 has the same structures and effects as the learning system 1a according to the second embodiment except for the above points, so a description of the common structures and effects is omitted, and the same reference numerals are used for the learning system and the names of each part in the description of the differences. Although the learning system 1a according to Modification 1 is not shown in the drawings, it is described using the same reference numerals as the second embodiment.
  • When the facial expression model of a person in attendance plays the patient role, a sense of tension is created during learning and a high learning effect is expected (conversely, humor may be provided, and a good learning effect in a relaxed atmosphere may also be possible).
  • Learning can also include training in which the facial expression model of the patient role is appropriately observed and conversation appropriate to the situation is included (for example, explanations or casual conversation to ease the patient's anxiety and tension).
  • In Modification 2, a pressure sensor (not shown) is provided in the torso of the human body model 2a, and a position sensor (not shown) is provided at the tip of the learning target instrument 4a.
  • The mixed reality display 3a has a receiving function for the position information detected by the position sensor, a receiving function for the pressure value detected by the pressure sensor, and a setting-value storage function, and is configured such that when either or both of the received position information and pressure value exceed preset setting values, the second virtual augmentation 32 changes to an expression expressing discomfort or agony. A minimal sketch of such threshold logic is given below.
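Below is a minimal sketch of the threshold logic just described, under the assumption of simple scalar sensor readings; the threshold values, units, and function names are hypothetical, not taken from the patent.

```python
# Switch the second virtual augmentation to a pained expression when either
# sensor reading exceeds its preset setting value (sketch).
PRESSURE_LIMIT_KPA = 12.0   # hypothetical preset value
DEPTH_LIMIT_MM = 35.0       # hypothetical preset value

def select_expression(pressure_kpa: float, probe_depth_mm: float) -> str:
    """Return the facial-expression state for the second virtual augmentation."""
    if pressure_kpa > PRESSURE_LIMIT_KPA or probe_depth_mm > DEPTH_LIMIT_MM:
        return "discomfort"   # expression expressing discomfort or agony
    return "neutral"

# Example: a probe press that exceeds the pressure setting triggers the change.
assert select_expression(pressure_kpa=15.2, probe_depth_mm=20.0) == "discomfort"
assert select_expression(pressure_kpa=8.0, probe_depth_mm=20.0) == "neutral"
```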
  • The learning system 1a according to Modification 2 has the same structures and effects as the learning system 1a according to the second embodiment (and Modification 1) except for the above points, so a description of the common structures and effects is omitted, and the same reference numerals are used for the learning system and the names of each part in the description of the differences. Although the learning system 1a according to Modification 2 is not shown in the drawings, it is described using the same reference numerals as the second embodiment.
  • When learner H presses the probe against the abdomen or another part of the human body model 2a, a pressure value is transmitted by the pressure sensor provided in the human body model 2a.
  • The mixed reality display 3a receives the transmitted pressure value, and if the pressure value is inappropriate, the facial expression model represented by the second virtual augmentation 32 displayed on the screen of the mixed reality display 3a changes to an expression expressing discomfort or distress.
  • According to the learning system 1a of Modification 2 and the learning method using it, when the learning target instrument 4a is applied to the human body model 2a in training simulating the above-mentioned examination, and a treatment that would cause pain in a human is performed, the facial expression model represented by the second virtual augmentation 32 changes to a facial expression expressing discomfort or agony.
  • This allows learner H to immediately determine that an inappropriate treatment has been performed, and to train in performing the treatment while observing the facial expression, just as when treating a real human.
  • The learning system 1a according to Modification 3 has the same structures and effects as the learning system 1a according to the second embodiment except for the points below, so a description of the common structures and effects is omitted, and the same reference numerals are used for the learning system and the names of each part in the description of the differences. Although the learning system 1a according to Modification 3 is not shown in the figures, it is described using the same reference numerals as the second embodiment.
  • In Modification 3, the line of sight of the patient role can be changed, and learner H can recognize this.
  • Actual patients and those receiving medical treatment may look at the practitioner's face when they feel anxious or in pain. According to the learning system 1a of Modification 3 and the learning method using it, the line of sight of the visualized facial expression model moves toward learner H at appropriate times, allowing the learner to experience a sense of tension and realism similar to that of an actual treatment, and a further improvement in the learning effect is expected.
  • The aforementioned timely movement of the gaze can be performed automatically or manually.
  • Automatic movement includes a mode in which the gaze follows a standard operation preset in the mixed reality display, and a mode in which, by linking a pressure sensor or the like provided on the human body model with the software of the mixed reality display, the gaze is set to move toward the learner when pressure is detected at a specific part of the human body model.
  • Manual movement includes a mode in which a teaching instructor or the like intentionally moves the gaze. A minimal sketch of these gaze modes is given below.
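The following sketch illustrates, under stated assumptions, the two automatic modes and the manual mode described above; all mode names and values are hypothetical, and a real system would drive an avatar's eye rig rather than return strings.

```python
# Select the gaze target of the patient-role expression model (sketch).
import time

def gaze_target(mode: str, pressure_kpa: float = 0.0,
                manual_toward_learner: bool = False,
                period_s: float = 8.0) -> str:
    if mode == "preset":
        # Standard preset operation: glance at the learner periodically.
        return "learner" if (time.time() % period_s) < 1.0 else "ceiling"
    if mode == "sensor":
        # Linked to a pressure sensor on the human body model: look at the
        # learner when pressure is detected at a specific part.
        return "learner" if pressure_kpa > 5.0 else "ceiling"
    if mode == "manual":
        # An instructor intentionally moves the gaze.
        return "learner" if manual_toward_learner else "ceiling"
    raise ValueError(f"unknown mode: {mode}")

print(gaze_target("sensor", pressure_kpa=7.5))  # -> "learner"
```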
  • In Modification 4, a camera capable of photographing learner H is provided at the position of the eye 321 on the head of the human body model 2a.
  • The camera is a fixed wide-angle camera, and the head of the human body model 2a is structured so that it can move left and right and up and down.
  • The image captured by the camera can be displayed in real time on the mixed reality display 3 worn by learner H, and can also be recorded on the hard disk or the like of a personal computer wirelessly connected to the mixed reality display 3. A minimal sketch of such live display and recording is given below.
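A minimal sketch of the live display and recording described above, assuming a webcam at the mannequin's eye position and using OpenCV; the device index, frame size, and file name are illustrative placeholders.

```python
# Show the patient's-eye view live and record it for post-training review.
import cv2

cap = cv2.VideoCapture(0)                        # eye-position camera (assumed)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("training_session.mp4", fourcc, 30.0, (640, 480))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)                          # recorded for later review
    cv2.imshow("patient's-eye view", frame)      # live view, as mirrored to the display
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
```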
  • The learning system 1a according to Modification 4 has the same structures and effects as the learning system 1a according to the second embodiment except for the above points, so a description of the common structures and effects is omitted, and the same reference numerals are used for the learning system and the names of each part in the description of the differences. Although the learning system 1a according to Modification 4 is not shown in the drawings, it is described using the same reference numerals as the second embodiment.
  • Video can thus be captured from the viewpoint of the patient observing learner H during training (learning) simulating examinations and treatments, and the video can be viewed in real time.
  • The learner can also check the video recorded on the hard disk of a personal computer on another monitor after training (learning).
  • In this way, learner H can objectively view the facial expressions and actions of the practitioner as seen by the patient during actual treatment, and can also experience the psychology of the patient, such as how the practitioner is perceived, making it possible to learn about both the treating and the treated sides in a single training session.
  • The captured images can also be output simultaneously to a large monitor, in which case they can be shared with other students waiting their turn in addition to the student currently training (studying). This makes it easier for the next student to understand the patient's perspective and state of mind before beginning training, and is expected to improve the learning efficiency of the entire group of students.
  • The "camera" has the structure described above, but is not limited to this, and may be, for example, a movable structure or the like, as long as it is capable of photographing the learner.
  • Alternatively, a camera capable of photographing the learner may be provided at a position equivalent to the eye.
  • If a stereo camera is installed as the camera, another learner playing the role of the patient or the like can wear a mixed reality display and observe the image, immersing himself in the three-dimensional environment from the perspective of the patient and also experiencing the patient's psychology.
  • The learning system 1a' of Modification 5 uses a human body model 2a' (headless, torso only), and, as a first virtual augmentation 31a' applied to the human body model 2a', a two-dimensional CT image 311 (corresponding to the above-mentioned "medical image"; the same applies below) of organs, set to an appropriate size to fit the human body model 2a', and a three-dimensional anatomical image 312 are provided so as to be superimposable on the same screen.
  • Figures 4 to 7 show the entire image displayed on the mixed reality display 3a in the learning system 1a', which is viewed by the learner H.
  • The first virtual augmentation 31a' shown in these figures is a two-dimensional CT image 311 of the chest (multiple overlapping chest CT cross-sectional images).
  • In addition, a number of handles (three in this modification) and a number of buttons (eight in this modification), rendered as virtual augmentations, are displayed around the human body model 2a'.
  • Each of the aforementioned handles is used to change the display position of an image, etc., and is operated by learner H by grasping it on the display screen.
  • Each of the aforementioned buttons is used to switch the display of an image, etc., and is operated by learner H by pressing it on the display screen.
  • The learning system 1a' allows learner H to grasp the first handle 313 in the displayed image and move it up and down to display a chest CT transverse image at any position. For example, when learner H grasps the first handle 313 and moves it downward from the position shown in FIG. 4, the position of the displayed chest CT transverse image gradually lowers, as shown in FIGS. 5(a)-(b). Conversely, when learner H raises the grasped first handle 313, the position of the chest CT transverse image gradually rises (not shown). In other words, the raising and lowering of the first handle 313 is linked to the display position of the chest CT transverse image.
  • The learning system 1a' likewise allows learner H to grasp the second handle 314 in the displayed image and move it in the forward and backward directions to display a chest CT coronal image at any position.
  • When learner H pushes the grasped second handle 314 toward the back, the position of the displayed chest CT coronal image gradually moves toward the back, as shown in FIGS. 6(a)-(b).
  • Conversely, when learner H pulls the grasped second handle 314 toward the front, the position of the chest CT coronal image gradually moves toward the front (not shown).
  • In other words, the forward and backward movement of the second handle 314 is linked to the display position of the chest CT coronal image.
  • The learning system 1a' also allows learner H to grasp the third handle 315 in the displayed image and move it left and right (left and right in FIGS. 4 to 6, front and back in FIG. 7(a)) to display a chest CT sagittal image at any position.
  • When learner H grasps the third handle 315 and pushes it from the position shown in FIG. 7(a) toward the back of the displayed image, the position of the displayed chest CT sagittal image gradually moves toward the back.
  • Conversely, when learner H pulls the grasped third handle 315 toward the viewer, the position of the chest CT sagittal image gradually moves toward the viewer (not shown).
  • In other words, the back and forth (left and right) movement of the third handle 315 is linked to the display position of the chest CT sagittal image.
  • The first button 316 is an ON/OFF switch for displaying the 2D CT image 311.
  • When the first button 316 is ON, the learning system 1a' displays the 2D CT image 311 (a 2D chest CT image in FIGS. 4 to 7(a)), that is, projects the 2D CT image 311 in a superimposed manner onto the human body model 2a'.
  • The second button 317 is an ON/OFF switch for displaying the 3D anatomical image 312.
  • When the second button 317 is ON, the learning system 1a' displays the 3D anatomical image 312 (a 3D anatomical image of the chest in FIG. 7(b)), that is, projects the 3D anatomical image 312 superimposed on the human body model 2a'.
  • By operating these buttons, the images shown in FIGS. 7(a) and (b) can be displayed alternately. A minimal sketch of this handle/button linkage is given below.
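The following is a minimal sketch of the handle/button linkage described above: each handle's displacement selects a CT slice along one axis, and the two buttons toggle the 2D CT and 3D anatomy layers. The volume shape and the class and method names are assumptions for illustration, not the patent's implementation.

```python
# Link handle drags to CT slice positions and buttons to layer visibility.
import numpy as np

class CTViewerState:
    def __init__(self, volume: np.ndarray):
        self.volume = volume                       # (z, y, x) CT voxels
        self.z, self.y, self.x = (s // 2 for s in volume.shape)
        self.show_ct = True                        # first button 316
        self.show_3d = False                       # second button 317

    def drag_first_handle(self, dz: int):          # up/down -> transverse slice
        self.z = int(np.clip(self.z + dz, 0, self.volume.shape[0] - 1))

    def drag_second_handle(self, dy: int):         # front/back -> coronal slice
        self.y = int(np.clip(self.y + dy, 0, self.volume.shape[1] - 1))

    def drag_third_handle(self, dx: int):          # left/right -> sagittal slice
        self.x = int(np.clip(self.x + dx, 0, self.volume.shape[2] - 1))

    def transverse(self):
        return self.volume[self.z] if self.show_ct else None

state = CTViewerState(np.zeros((200, 256, 256), dtype=np.int16))
state.drag_first_handle(-40)      # lower the transverse slice, as in FIG. 5
state.show_3d = True              # press the second button: overlay 3D anatomy
```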
  • According to the learning system 1a', learner H can study while appropriately viewing the stereoscopic, life-size 2D CT image 311 and 3D anatomical image 312 superimposed and projected as the first virtual augmentation 31a', and can ultimately learn the spatial positional relationship between the 2D CT image 311 and the 3D anatomical image 312 by associating the two dimensions with the three dimensions.
  • The 2D CT image 311 and the 3D anatomical image 312 may be standard images of a healthy body, or images of the organs, bones, etc. of a patient or individual with a lesion or specific characteristics.
  • In the case of images of the organs, bones, etc. of a patient with a lesion, they can be used not only by learners but also by medical professionals, including doctors, to consider treatment plans and for pre-operative meetings before carrying out actual surgery or treatment.
  • "Able to superimpose 2D CT images and 3D anatomical images on the same screen" includes both the case where the 2D CT image and the 3D anatomical image are displayed alternately at the same position by switching and the case where they are displayed simultaneously, overlapping each other.
  • The learning system 1b is another embodiment (third embodiment) of the learning system 1, and includes a human body model 2b, a mixed reality display 3, and a learning target instrument 4. Since the learning system 1b shares the structures and effects of the mixed reality display 3 and the learning target instrument 4 with the learning system 1 of the first embodiment, the description of the common structures and effects is omitted, and the differing structure and effects of the human body model 2b are described below.
  • The mixed reality display 3 is not illustrated, but for convenience of description, it is designated by the reference numeral "3".
  • The human body model 2b can be a projection target of the first virtual augmentation 31 and the second virtual augmentation 32 (see FIG. 4).
  • The human body model 2b is a life-size training mannequin having a head, a torso, and the upper half of the thighs; a cavity is formed in the chest, and a pseudo-structure 21 (see the dashed line in FIG. 4) that reproduces the arrangement, shape, and texture of the skin, muscles, bones, pleura, lungs, and blood vessels is applied to the cavity.
  • The skin, muscles, pleura, lungs, and blood vessels are made of soft materials, and the bones are made of hard materials.
  • The pseudo-structure may be assembled from purchased models (ready-made products) of individual organs or the like, or may be manufactured by the user using a 3D printer as described below.
  • When the pseudo-structure is produced in-house, it can be made using a 3D printer, a resin film material (wrap), a foamed resin material (sponge), various hard and soft resin materials, or a combination of these. Furthermore, the pseudo-structure, whether produced in-house or outsourced, can reproduce not only standard organs but also an individual patient's organs as captured during a prior examination. In this case, compared with training using a general-purpose model, simulated training that is more faithful to the case and incorporates visual and tactile sensations can be carried out before performing surgery on a patient with a specific condition.
  • According to the learning system 1b, since the pseudo-structure 21 reproduces the shape and hardness of the area to be treated, such as the skin, a tactile response closer to that of the human body can be obtained. This allows learner H to receive tactile stimulation as if actually treating a patient in the field, and a higher learning effect can be expected.
  • Since the human body model 2b has the pseudo-structure 21 with each element arranged in layers, in training for the procedure of puncturing a target organ as shown in FIG. 4 (that is, using the learning target instrument 4), learner H can learn the techniques with a sense of touch close to that of an actual human body: palpating to find the space between the bones, gauging the force needed to puncture the skin, membrane, and muscle while feeling their resistance, passing the needle between the bones without damaging muscle fibers or blood vessels, and gauging the force and depth needed for the needle to reach the target organ.
  • The image projected as the first virtual augmentation 31 is set to be projected at a position overlapping the corresponding element of the pseudo-structure 21.
  • For example, the bone part of the pseudo-structure 21 and the bone image projected as the first virtual augmentation 31 are projected at overlapping positions.
  • This allows the learner to visually recognize, via the mixed reality display 3, the course of the projected blood vessels, etc., and the arrangement of the organs and bones (first virtual augmentation 31), as well as the projected facial expression model (second virtual augmentation 32), so learning through vision is of course also possible (in FIG. 4, however, the configuration of the pseudo-structure 21 is emphasized for clarity, the image projected as the first virtual augmentation 31 as shown in FIG. 2 is omitted, and only its position is indicated). A minimal sketch of such overlay registration is given below.
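Below is a minimal sketch of keeping a projected bone image registered to the bone of the pseudo-structure 21: the overlay pose is derived from the tracked pose of the human body model plus a fixed, pre-calibrated offset. The 4x4 pose convention and all names are assumptions, not the patent's method.

```python
# Derive the world-space pose of a virtual bone layer from the model pose.
import numpy as np

def pose(tx, ty, tz):
    """Build a 4x4 homogeneous transform with translation only (sketch)."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

# Calibrated once by the instructor while adjusting the projection position:
MODEL_TO_BONE_OFFSET = pose(0.0, 0.05, 0.12)   # bone layer relative to model

def overlay_pose(model_pose_in_world: np.ndarray) -> np.ndarray:
    """World-space pose at which the bone image should be rendered."""
    return model_pose_in_world @ MODEL_TO_BONE_OFFSET

model_pose = pose(1.0, 0.8, 0.0)               # tracked mannequin pose
print(overlay_pose(model_pose)[:3, 3])         # -> [1.   0.85 0.12]
```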
  • The learning system 1c is another embodiment (fourth embodiment) of the learning system 1, and includes a human body model 2, a mixed reality display 3c, and a learning target instrument 4c. Since the learning system 1c shares the structure and effects of the human body model 2 with the learning system 1 of the first embodiment, the description of the common structure and effects is omitted, and the differing structures and effects of the mixed reality display 3c and the learning target instrument 4c are described below.
  • The mixed reality display 3c is omitted from the illustration, but for the convenience of explaining the differences from the mixed reality display 3, it is designated by the reference numeral "3c".
  • The mixed reality display 3c is a head-mounted display (goggles type) capable of implementing MR technology, and, in addition to the first virtual augmentation 31 and second virtual augmentation 32 described above, it is configured to be able to visualize a third virtual augmentation 33, which is an examination device (in this embodiment, a CT examination device), superimposed on the classroom R2, the indoor space in which the human body model 2 is installed (see FIG. 5).
  • The third virtual augmentation 33 is an image of the examination device 331; the data for this image is installed in the memory function unit of the mixed reality display, the image generation function unit constructs an image in which it is superimposed on the indoor space of the classroom R2, and the resulting image is projected via the display unit.
  • In this embodiment, the gantry portion is projected into empty space, and the cradle portion is projected superimposed onto a general bed on which the human body model 2 is placed.
  • The learning target instrument 4c is an AED (Automated External Defibrillator).
  • The learner can view the third virtual augmentation 33 projected onto the space of the classroom R2 via the mixed reality display 3c, and can study by regarding the classroom R2 as an examination room and the human body model 2 installed there (onto which the first virtual augmentation 31 and the second virtual augmentation 32 are projected) as the subject to be examined (see FIG. 5).
  • The learner can then study or train in the preparation of the examination device 331 and the pre-start check procedures while viewing the image of the third virtual augmentation 33, and, after the start, can study or train in the learner's own movements and other actions according to the positions of the examination device 331 and the human body model 2.
  • Since a contrast agent is used in a CT scan, there is a possibility that the subject may develop drug-induced shock.
  • According to the learning system 1c and a learning method using it, it is possible to conduct resuscitation training using the learning target instrument 4c (AED) on the human body model 2, assuming that such shock has occurred.
  • In this way, the learning system 1c reproduces a special environment within the classroom R2 and makes it possible to perform simulations for dealing with situations that may arise in that environment.
  • The learning system 1c and a learning method using it allow learners to practice the procedures or treatments they need to perform as if they were treating a patient in a space containing actual examination equipment, without having to prepare actual examination equipment. This provides the learner with visual stimulation as if treating a patient in an actual setting, and an even greater learning effect can be expected. Furthermore, they make it easy to have a simulated experience of virtual examination equipment even when actual equipment does not exist, and are expected to help learners become accustomed to the surrounding environment (examination room or operating room).
  • In each of the learning systems described above, the external output of the mixed reality display can be used to project the image seen by learner H onto another display, allowing learner H's perspective, situation, and other experiences (successful or unsuccessful examples, etc.) to be shared with other students. Instructors can also provide guidance and advice to students while watching the other display.
  • The various virtual augmentations displayed on the mixed reality display 3 (3a, 3c) can be given a function to superimpose, on the area to be treated or on a selected instrument, a message about the correctness of that area or instrument for a particular disease or symptom. For example, when the appropriate puncture area or range is touched, a pop-up message such as "correct" can be displayed, improving the effectiveness of self-study. A minimal sketch of such feedback is given below.
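A minimal sketch of the self-study feedback described above, assuming a spherical "correct" puncture region in model coordinates; the region values and helper names are hypothetical.

```python
# Hit-test a touched point against the appropriate puncture range and
# return the pop-up message to display (sketch).
from dataclasses import dataclass

@dataclass
class PunctureRegion:
    cx: float; cy: float; cz: float   # center of the correct region (meters)
    radius: float                     # allowed radius around it

    def contains(self, x, y, z) -> bool:
        return ((x - self.cx) ** 2 + (y - self.cy) ** 2
                + (z - self.cz) ** 2) ** 0.5 <= self.radius

def feedback(region: PunctureRegion, touch) -> str:
    return "correct" if region.contains(*touch) else "try again"

region = PunctureRegion(cx=0.12, cy=0.95, cz=0.30, radius=0.02)
print(feedback(region, (0.125, 0.947, 0.301)))   # -> "correct"
```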
  • For the first virtual augmentation 31 and the second virtual augmentation 32 displayed on the mixed reality display 3 (3a, 3c) in each of the above-mentioned learning systems 1 (1a, 1a', 1b, 1c), facial expressions and body types of people of various ages and genders can be selected from preset data and projected. This allows more practical learning while visually recognizing differences in body types and organs due to age and gender. In addition, since there is no need to prepare human body models of different ages, genders, and body types, the introduction (procurement) and operating costs associated with owning multiple human body models can be reduced, and less storage space is required when not in use.
  • Each of the learning systems 1 (1a, 1a', 1b, 1c) described above also makes it possible to learn about rare anatomical or clinical cases.
  • Medical accidents can occur when rare anatomical or clinical cases are encountered, and cases leading to the death of the patient have been reported.
  • For example, a surgeon's lack of knowledge or experience of such cases can cause damage to blood vessels during treatment, leading to serious medical accidents.
  • In each learning system 1 (1a, 1a', 1b, 1c), it is possible to select whether to display 3D anatomical images, making it possible to reproduce and learn about rare anatomical or clinical cases; this can also be used to learn about the risks, safety, and medical safety of procedures in rare cases such as those mentioned above.
  • In each of the learning systems 1a, 1a', 1b, and 1c described above, a change in the patient's gaze can be reproduced and displayed as the second virtual augmentation so that the gaze turns toward the surgeon, which increases the sense of realism (especially the sense of tension) of the learner playing the role of the surgeon, and a further improvement in the learning effect can be expected.
  • Conventionally, 2D CT images are not observed at life size, and there has been no environment in which 2D CT images and 3D anatomical images can be studied stereoscopically at life size. There have been no simulators to date that superimpose life-size 2D CT images and 3D anatomical images simultaneously and allow the spatial positional relationship between the two dimensions and the three dimensions to be learned by association.
  • Furthermore, whereas a conventional human body model is merely a machine or tool, each of the learning systems 1a, 1a', 1b, and 1c adds interactivity on top of the above-mentioned effects.
  • Conventionally, there has been no learning system that uses an anthropomorphized, interactive human body model or the like; according to the learning method using each of the learning systems 1a, 1a', 1b, and 1c, a more efficient and superior learning effect can be expected than with learning using a conventional human body model or the like.

Abstract

Provided are a learning method and a learning system by which a learner, while viewing a virtual augmentation of a physical anatomical model projected onto a human body model via a worn mixed-reality display, can learn the procedure or treatment of a technique to be performed by using a learning target instrument on a real human body model. This learning system 1 comprises: a human body model 2 to be projected onto; a mixed-reality display 3 capable of visualizing a first virtual augmentation 31, which is a physical anatomical model, superimposed on part or all of the human body model 2, the mixed-reality display 3 being worn on the head of a learner H; and a learning target instrument 4, which is a medical instrument or an examination instrument with which a technique is to be learned.

Description

Learning method and learning system

 The present invention relates to a learning method and a learning system. More specifically, it relates to a learning method and a learning system that enable a learner to learn the procedure or treatment of a technique to be performed by using a learning target instrument on a real human body model while viewing a virtual augmentation of a physical anatomical model projected onto the human body model via a worn mixed reality display.

 Traditionally, in medical learning settings such as university medical faculties and nursing high schools and vocational schools, human body models known as medical simulators have been used to study and train in human anatomy and procedures; one example is described in Non-Patent Document 1 below.

 The human body model described in Non-Patent Document 1 is a model of part of the human body (lower jaw to chest to right shoulder) tailored to the learning purpose (instruction and practice of medical techniques such as puncture and intravenous catheter management), and the skeletal structure and the course of the blood vessels of the modeled parts are accurately reproduced. It is said to enable practical training from selection of the puncture position to catheter insertion.

 On the other hand, in recent years, learning methods using simulated experience systems based on VR (Virtual Reality) rather than physical human body models have been proposed; one example is described in Patent Document 1.

 The simulated experience system described in Patent Document 1 includes: a video display device; a controller having a main body formed in the same or substantially the same shape as an object of education, research, or training, and a signal transmitter provided on the main body and capable of transmitting a signal for synchronizing the movement of an image of a manipulated object simulating the object displayed on the video display device with the movement of the main body; and a computer connected to the controller and the video display device, the computer having a signal receiver capable of receiving the signal transmitted from the signal transmitter, a calculation unit connected to the signal receiver and capable of analyzing the movement of the main body based on the received signal and calculating movement data, an image generation unit capable of generating the image of the manipulated object based on data on the shape of the object, a synchronization processing unit capable of synchronizing the movement data calculated by the calculation unit so that the image of the manipulated object generated by the image generation unit moves following the movement of the main body, and an image output unit capable of outputting the image of the manipulated object processed by the synchronization processing unit to the video display device.

 According to the simulated experience system described in Patent Document 1, by operating the manipulated image simulating the object displayed on the video display device while touching the main body having the same or substantially the same shape as the object, the user's senses of sight and touch are stimulated, and an intuitive and highly immersive simulated experience can be obtained.

Avis Co., Ltd., "Human Body: Medical and Nursing Simulator, KY11347-300 CVC Puncture Insertion Simulator 2" (human body models, medical models, medical simulators, human body dummies), Internet <URL: http://humanbody.jp/simulator/item/ky11347-300.html>

JP 2020-038272 A

 The human body model described in Non-Patent Document 1, an existing technology, is a physical object and therefore provides a sense of realism and touch during learning, making it suitable for confirming procedures when using a learning target instrument. However, such human body models are often models of only part of the human body, tailored to the learning purpose, and the model structure usually does not include the surrounding anatomy outside the learning purpose, so use outside that purpose is impossible or unsuitable. Full-body human body models do exist, but they are expensive compared with partial models, and because there is a gap between precisely structured parts and other parts, they can hardly be called suitable for general-purpose use.

 On the other hand, according to the simulated experience system described in Patent Document 1, learning can be carried out while visually recognizing an object such as an organ displayed on the video display device and touching a controller simulating the object, but the object remains an intangible virtual reality. In other words, since there is no physical object such as a human body model to which a learning target instrument can be applied, the simulated experience system is not suited to practical training such as procedures using learning target instruments or procedure confirmation.

 The present invention was conceived in view of the above points, and aims to provide a learning method and a learning system that enable a learner to learn the procedure or treatment of a technique to be performed by using a learning target instrument on a real human body model while viewing a virtual augmentation of a physical anatomical model projected onto the human body model via a worn mixed reality display.

 To achieve the above object, the learning method of the present invention is carried out using a human body model as a projection target, a mixed reality display capable of visualizing a first virtual augmentation, which is a physical anatomical model, superimposed on part or all of the human body model, and a learning target instrument, which is a medical or examination instrument for procedure learning; it comprises a first step in which a learner wears the mixed reality display and visually recognizes, through the mixed reality display, the human body model and the first virtual augmentation projected onto it, and a second step in which the learner, having completed the first step, picks up the learning target instrument and applies it to the human body model while viewing the human body model and the first virtual augmentation, thereby learning the procedure or treatment of the technique to be performed.

 Here, in the first step, the preparation is completed when the learner puts on the mixed reality display, and the learner can then view the human body model and the first virtual augmentation projected onto it through the mixed reality display.

 The term "learner" is used to include not only pre-employment students in medical schools, nursing schools, vocational schools, and the like who aim to become medical professionals, but also those who are already medical professionals. For example, a new graduate of a medical school may train to become familiar with instruments of a type different from those used at school, or undergo practical training beyond school education; even an experienced practitioner may train to master newly introduced instruments or new treatment methods, or continue learning to further improve their skills, so such persons are also included in "learner." The learning method of the present invention is essentially used by learners, but "learner" does not exclude instructors (those who carry out the present invention as a model in explanations before and after learning).

 Examples of the "first virtual augmentation" include images of human organs and bones; projecting images of organs and the like onto appropriate locations on the human body model heightens the sense of immersion in learning and also allows the positions of organs and the like to be confirmed in advance. The "first virtual augmentation" includes not only a single image of a human organ or the like, but also projection of multiple images in an overlapping manner (superimposed mode). One example of superimposed projection is a mode in which an image of a bone and an image of an organ located beneath the bone are projected overlapping each other; in this case, more practical learning is possible by referring to the arrangement of the multiple projected images.

 The image data projected as the "first virtual augmentation" may be installed on the mixed reality display or stored in an auxiliary storage device connected to the unit, or may be received from an external device such as a server connected to the mixed reality display by wire or wirelessly. Likewise, the image processing for the "first virtual augmentation" may be performed by functions provided in the mixed reality display, or the display may receive data processed by an external device such as a wirelessly connected server. A minimal sketch of these two data-supply options is given below.
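The following is a minimal sketch of the two data-supply options described in the preceding paragraph: the augmentation image data is loaded from local storage if installed there, and otherwise fetched from an external server. The directory, URL, and function name are hypothetical placeholders.

```python
# Load first-virtual-augmentation data locally if installed, else from a server.
from pathlib import Path
import urllib.request

LOCAL_DIR = Path("augmentation_data")             # data installed on the display
SERVER_URL = "https://example.org/augmentations"  # external server (assumed)

def load_augmentation(name: str) -> bytes:
    local = LOCAL_DIR / f"{name}.bin"
    if local.exists():                            # installed / auxiliary storage
        return local.read_bytes()
    with urllib.request.urlopen(f"{SERVER_URL}/{name}.bin") as resp:
        return resp.read()                        # received from external device

# e.g. data = load_augmentation("chest_bones")
```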

 The mixed reality display can also project the first virtual augmentation onto parts of the human body model that have no physical substance (absent parts, which can also be regarded as empty space). For example, when the physical human body model in front of the viewer consists of a torso only, the first virtual augmentation of the absent parts, such as the head, lower limbs, and arms, can be visualized three-dimensionally and superimposed on the model, expressing them as if a head and other parts were actually attached to the model. Furthermore, the display of the absent parts can be switched on and off as necessary. In other words, with the mixed reality display, a variety of learning can be carried out without owning multiple human body models; switching the absent parts between display and non-display can be expected to improve the efficiency and effectiveness of learning, and the introduction and operating costs associated with owning multiple human body models can also be reduced.

 Then, in the second step, in learning the procedure or treatment of the technique, the learner can pick up the learning target instrument used in actual treatment, examination, or the like, and apply it to the human body model onto which the first virtual augmentation is projected.

 This gives the learner a visual sense of presence, as if facing a patient, and, because the learner holds the actual learning target instrument and applies it to the human body model rather than a sensationless virtual image, a tactile response close to that of actual work is obtained. As a result, the learner receives visual and tactile stimulation as if treating a patient at an actual site (in other words, obtains realistic sight and touch), so a high learning effect can be expected.

 Among the "learning target instruments," examples of medical instruments include puncture needles, drainage tubes, suture needles, syringes, forceps, medical blades such as scalpels, and plates and pins used in fracture treatment; examples of examination instruments include the probe of an ultrasound diagnostic device, the electrodes of an electrocardiograph, and endoscopes. The medical and examination instruments mentioned above are merely examples, and it goes without saying that various other instruments may be targeted.

 In conventional learning of treatments and examinations, it has been common to prepare various types of human body models according to the examination site and the case; preparing multiple human body models imposes a large procurement cost and requires a large storage space when not in use. In contrast, according to the learning method of the present invention, the content of the first virtual augmentation projected by the mixed reality display can be changed, and projecting various images and videos (first virtual augmentations) onto one human body model provides effectively the same benefit as owning multiple human body models. In other words, compared with conventional learning methods, the learning method of the present invention imposes a smaller procurement burden and requires less storage space when not in use.

 Furthermore, according to the learning method of the present invention, the mixed reality display can project the first virtual augmentation onto the absent parts of the human body model as described above: for example, when the physical model consists of a torso only, the head, lower limbs, arms, and the like can be visualized three-dimensionally and superimposed as if actually attached, and the display of the absent parts can be switched on and off as necessary. Thus, a variety of learning can be carried out without owning multiple human body models, improvements in learning efficiency and effectiveness can be expected from switching the absent parts between display and non-display, and the introduction and operating costs associated with owning multiple human body models can be reduced.

 The above-described learning method may also be configured such that the mixed reality display can visualize a second virtual augmentation, which is a facial expression model of the patient role, superimposed on the facial portion of the human body model; the second virtual augmentation is applied to the human body model in the first and second steps, and at least in the second step the learner studies while appropriately viewing the facial expression model of the patient role projected by the second virtual augmentation, or studies while appropriately viewing the facial expression model and also engaging in conversation appropriate to the situation.

 According to this aspect of the learning method, at least in the second step, the second virtual augmentation is visualized (applied) superimposed on the facial portion of the human body model, so the learner can study while appropriately viewing the facial expression model of the patient role projected by the second virtual augmentation, or while viewing the facial expression model and engaging in conversation appropriate to the situation.

 In other words, according to this aspect of the learning method, without preparing an actual patient or a person playing the patient role (in other words, even though the learner is merely facing a human body model), training in the procedure or treatment to be performed can be carried out as if facing an actual patient; the learner receives visual stimulation as if treating a patient at an actual site (in other words, obtains realistic sight), and an even higher learning effect can be expected.

 The second virtual augmentation may be applied to the human body model not only in the second step but also in the first step. In that case, in addition to direct skill acquisition, training can include observing changes in the patient's facial expression due to a sudden change in condition before the procedure begins, and conversing with the patient to ease anxiety and tension.

 Furthermore, the facial expression model of the patient role represented by the second virtual augmentation may be, for example, a standard model preset in the mixed reality display, or it may be the supervising instructor or a fellow attendee receiving instruction. When the instructor or an attendee serves as the facial expression model of the patient role, a sense of tension is provided during learning and a high learning effect can be expected (conversely, humor may be provided and a good learning effect in a relaxed atmosphere may also be possible).

Furthermore, when the instructor or an attendee serves as the facial expression model of the patient role, video processing software may be used to display their real-time facial expressions captured by a camera. This may also be combined with a setting that generates a voice such as "Ouch!" to match the expression of the standard patient-role model represented in the second virtual augmentation. When the instructor or an attendee serves as a real-time facial expression model, the model and the learner may be set up to converse, enabling training in responding flexibly to circumstances.
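
As a minimal sketch of the real-time expression option just described, the following Python code captures frames from a camera pointed at the instructor and forwards them for superimposition on the mannequin's face. The use of the OpenCV library and the render_on_face hook are assumptions for illustration only, not part of the disclosed system.

```python
import cv2  # OpenCV, a common choice for camera capture (an assumption here)


def stream_instructor_face(render_on_face, camera_index=0):
    """Capture frames from a camera pointed at the instructor and pass each
    one to render_on_face, a hypothetical hook that textures the frame onto
    the facial region of the mannequin in the mixed reality display."""
    capture = cv2.VideoCapture(camera_index)
    if not capture.isOpened():
        raise RuntimeError("camera not available")
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            # Mirror horizontally so the expression faces the learner the way
            # a patient's face would, then hand it off for projection.
            render_on_face(cv2.flip(frame, 1))
    finally:
        capture.release()
```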

The above-described learning method may also be configured so that, in the facial expression model visualized in the second virtual augmentation, at least the depiction of the eyes allows the gaze to move toward the learner at appropriate times, and the learner can recognize that gaze through the mixed reality display.

According to this aspect, while the learner studies while viewing the facial expression model of the patient role projected by the second virtual augmentation, the gaze of the patient role can be changed and the learner can recognize the change.

Actual patients and examinees sometimes look at the face (expression) of the practitioner (doctor, technician, etc.) when they feel anxiety or pain. In other words, by having at least the gaze of the facial expression model visualized on the human body model move toward the learner at appropriate times (in other words, by directing the gaze at the learner), this aspect lets the learner experience tension and realism close to an actual procedure, and a further improvement in the learning effect can be expected.

In this aspect, the timely movement of the gaze may be, for example, a standard motion preset (programmed) in the mixed reality display, but is not limited to this. It may instead be an operation in which the supervising instructor or an attendee observing the learner's training intentionally moves the gaze using software or the like, or a configuration in which a pressure-sensitive sensor or pressure sensor provided on the human body model is linked to the software of the mixed reality display so that the gaze moves toward the learner when a predetermined pressure is sensed. Furthermore, when linked to a pressure-sensitive sensor on the human body model, the presence and duration of the gaze movement may be set to vary according to the strength of the sensed pressure.
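
One plausible form of the sensor-linked variant is sketched below: a pressure reading past a threshold turns the gaze toward the learner, and the hold time scales with the sensed intensity. The threshold values and all three callbacks are hypothetical, chosen only to illustrate the logic described above.

```python
import time

# Illustrative values only; a real system would calibrate these per procedure.
GAZE_TRIGGER_PRESSURE = 2.0   # sensor reading at which the gaze turns to the learner
MAX_GAZE_HOLD_SECONDS = 3.0   # upper bound on how long the gaze is held


def update_gaze(read_pressure, look_at_learner, look_neutral):
    """Poll the mannequin's pressure sensor once and, if the reading crosses
    the threshold, turn the model's gaze toward the learner, holding it
    longer for stronger pressure. All three callbacks are hypothetical hooks
    into the mixed reality display software."""
    pressure = read_pressure()
    if pressure >= GAZE_TRIGGER_PRESSURE:
        # Scale the hold time with the sensed intensity, capped at the maximum.
        hold = min(MAX_GAZE_HOLD_SECONDS, pressure / GAZE_TRIGGER_PRESSURE)
        look_at_learner()
        time.sleep(hold)
        look_neutral()
```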

In this aspect as well, the second virtual augmentation may be applied to the human body model in the first step as well as the second step. The facial expression model may likewise be a standard model preset in the mixed reality display, or may be the supervising instructor, an attendee, or the like.

The above-described learning method may also be configured so that the first virtual augmentation allows a medical image and a three-dimensional anatomical image, each sized appropriately to fit the human body model, to be superimposed on the same screen.

Here, "medical image" includes two-dimensional computed tomography images (hereinafter "2D CT images"), magnetic resonance images (hereinafter "2D MRI images"), ultrasound images (hereinafter "echo images"), radiographic images (hereinafter "X-ray images"), nuclear medicine examination images (hereinafter "RI images"), and so on, and does not exclude other types of medical images.

Conventionally, in actual clinical settings, medical images (for example, 2D CT images and 2D MRI images) have not been viewed at life size, and no environment existed for studying the aforementioned 2D medical images and 3D anatomical images stereoscopically at life size. As a result, many students and inexperienced practitioners (doctors and technicians) struggled to understand the relationship between 2D medical images and 3D anatomical images.

According to this aspect, however, the learner can study while viewing, as appropriate, the stereoscopic, life-size medical image and 3D anatomical image superimposed and projected by the first virtual augmentation. That is, the learner can study the spatial positional relationship between the medical image (particularly a 2D one) and the 3D anatomical image in association, which provides the training in understanding the relationship between medical images and 3D anatomical images that many students and inexperienced practitioners have struggled with, deepens that understanding further, and promises a further improvement in the learning effect. As the learner's understanding of this relationship deepens, it is ultimately expected that the learner (that is, a future practitioner) will come to grasp the positions of organs and the like intuitively just by looking at a medical image (particularly a 2D medical image).

In this aspect, the medical image and 3D anatomical image may be, for example, standard images of a healthy body, but are not limited to these; they may be images of the organs, bones, etc. of a patient or individual having a lesion or particular features, or medical images taken of an individual together with a 3D anatomical image constructed from those medical images. When images of the organs, bones, etc. of a patient with a lesion are applied, they can be used not only by learners but also by medical professionals, including doctors, to examine a treatment plan and hold preoperative meetings before actual surgery or treatment.

"Capable of superimposing the medical image and the 3D anatomical image on the same screen" requires only that the two can be superimposed on the same screen, and includes both a mode in which they are displayed overlapping simultaneously and a mode in which they are displayed alternately at the same position by switching.
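
The two superposition modes just defined could be realized with a simple mode switch, as in the schematic Python sketch below; the layer objects and function names are hypothetical, and only the mode logic is illustrated.

```python
from enum import Enum


class OverlayMode(Enum):
    SIMULTANEOUS = "simultaneous"  # both layers drawn together in one place
    ALTERNATE = "alternate"        # layers swapped at the same position


def layers_to_draw(medical_image, anatomy_model, mode, show_medical=True):
    """Return the ordered list of layers to render at the mannequin's
    position for one frame. Both layer arguments are hypothetical display
    objects provided by the mixed reality display software."""
    if mode is OverlayMode.SIMULTANEOUS:
        # Life-size 2D medical image drawn over the 3D anatomy so their
        # spatial correspondence is visible at a glance.
        return [anatomy_model, medical_image]
    # Alternating mode: one layer at a time; the caller toggles show_medical.
    return [medical_image] if show_medical else [anatomy_model]
```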

The above-described learning method may also be configured so that the human body model has, at least at the predetermined portion to be studied, a pseudo-structure reproducing any one of skin, muscle, bone, blood vessel, membrane, or organ, or a combination of several of these pseudo-structures.

According to this aspect, when the learning target instrument is applied to the human body model in training simulating an examination or treatment, a pseudo-structure reproducing skin or the like is present at the site being examined or treated, so a tactile response closer to that of the human body is obtained. The learner thus receives tactile stimulation as if treating a patient in an actual clinical setting (in other words, obtains realistic tactile input), and a higher learning effect can be expected.

The human body model may have a combination of several pseudo-structures of skin, muscle, bone, blood vessel, membrane, or organ applied to the predetermined portion to be studied. For example, when skin, bone, and a target organ are selected as pseudo-structures and provided in layers, in training a puncture procedure toward the target organ the learner can learn, with actual tactile feedback, palpation for the gaps between bones, the force needed to pierce the skin, passing the needle between bones, and the force and depth needed to bring the needle to the target organ. Furthermore, if pseudo-structures of muscle, membrane, and blood vessel are applied, the learner can also feel the resistance of muscle or membrane during the puncture and learn techniques for passing between muscle fibers and blood vessels without damaging them.

Although the application site of the pseudo-structures is described as "at least the predetermined portion to be studied," the pseudo-structures may be applied to the whole human body model rather than only a part of it. Applying them to the whole model complicates the structure, but allows a single model to support training for many kinds of examinations and procedures, improving convenience and further enhancing versatility.

In the above-described learning method, the pseudo-structures may be made of soft material for skin, muscle, blood vessel, membrane, and organ, and of semi-hard or hard material for bone.

According to this aspect, when the learning target instrument is applied to the human body model in the training simulating an examination or treatment described above, pseudo-structures formed of materials whose hardness matches the site being examined or treated are in place, so a tactile response even closer to that of the human body is obtained. The learner thus receives tactile stimulation as if treating a patient in an actual clinical setting (in other words, obtains realistic tactile input), and an even higher learning effect can be expected.

As the "soft material," materials such as resin and rubber whose hardness is close to that of the target skin, muscle, blood vessel, membrane, or organ are suitably used. As the "hard material," materials such as resin, rubber, stone-like materials, and metals whose hardness is close to that of the target bone are suitably used; to reproduce cartilage, a semi-hard material such as a resin or rubber prepared to be firmer may be used.

The above-described learning method may also be configured so that the human body model and the learning target instrument are provided with a position sensor and a pressure-sensitive sensor or pressure sensor, and the second virtual augmentation changes to an expression of discomfort or distress when the position information detected by the position sensor, the pressure value detected by the pressure-sensitive or pressure sensor, or both exceed preset values.

According to this aspect, when the learning target instrument is applied to the human body model in training simulating an examination or treatment, if the learner performs an action that would cause pain in a human, the facial expression model represented by the second virtual augmentation changes to an expression of discomfort or distress. The learner can therefore immediately recognize having performed an inappropriate action, and can train in performing the procedure while watching the patient's expression (visually checking it), just as when treating an actual person.

Whether an action would cause pain in a human body is judged by whether the position information and/or the pressure value detected by the position sensor and the pressure-sensitive or pressure sensor provided on the human body model and the learning target instrument exceed preset values.

For example, position information on puncture depth is transmitted by a position sensor provided on a needle serving as the learning target instrument; if the position information received by the mixed reality display, either directly or via an external device that analyzes it, is inappropriate (for example, the puncture is too deep), the facial expression model represented by the second virtual augmentation on the mixed reality display changes to an expression of discomfort or distress. Likewise, when a pressure-sensitive or pressure sensor is provided at the predetermined portion of the human body model to be studied, during training in pressing an examination device serving as the learning target instrument (for example, the probe of an ultrasound diagnostic apparatus) against that portion, the sensor transmits a pressure value; if the value received by the mixed reality display, either directly or via an external device that analyzes it, is inappropriate (for example, the pressing force is too strong), the facial expression model changes in the same way.

Furthermore, this may be combined with a setting that generates a voice such as "Ouch!" in time with the change to an expression of discomfort or distress; in that case the sense of presence increases and the training can be carried out with a sense of tension.
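
To make the threshold judgment described above concrete, the following Python sketch shows one plausible form of the check. The limit values, field names, and the pairing of an expression with an optional voice cue are all assumptions for illustration, not values disclosed in this specification.

```python
# Illustrative preset values (the stored "set values"); a real deployment
# would configure these per procedure and per mannequin.
MAX_PUNCTURE_DEPTH_MM = 25.0   # hypothetical safe needle depth
MAX_PROBE_PRESSURE = 15.0      # hypothetical safe probe pressure (sensor units)


def judge_feedback(depth_mm=None, pressure=None):
    """Compare one sensor update (position and/or pressure) against the
    preset limits and return what the mixed reality display should do:
    the expression to show and an optional voice cue to play."""
    too_deep = depth_mm is not None and depth_mm > MAX_PUNCTURE_DEPTH_MM
    too_strong = pressure is not None and pressure > MAX_PROBE_PRESSURE
    if too_deep or too_strong:
        # Inappropriate action: switch the second virtual augmentation to a
        # pained expression and, optionally, play a voice such as "Ouch!".
        return {"expression": "distress", "voice": "ouch.wav"}
    return {"expression": "neutral", "voice": None}


# Example: a needle driven 30 mm deep triggers the pained expression.
print(judge_feedback(depth_mm=30.0))  # {'expression': 'distress', 'voice': 'ouch.wav'}
```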

The above-described learning method may also be configured so that the human body model is provided with a camera capable of photographing the learner, placed at the position of the eyes if the model has a head, or at a position corresponding to the eyes if it does not.

According to this aspect, video can be captured from the viewpoint of one observing the learner during training (learning) simulating an examination or treatment (in other words, video shot from the human body model's side), and that video can be obtained. The learner can thus view objectively the practitioner's expressions and behavior as seen by the patient during an actual procedure, experience the patient's psychology, such as how the practitioner appears to the patient, and learn about both the treating and the treated sides in a single training session.

The aforementioned "camera" need only be provided so as to be able to photograph the learner; for example, a fixed camera preferably has a wide angle of view, and a movable camera may also be used. A movable arrangement includes, for example, a structure in which the head or the eye unit (including one at a position corresponding to the eyes) can be turned toward the learner by manual, automatic, or remote operation to photograph the learner.

The video captured by this camera may be displayed in real time on the mixed reality display worn by the learner, or recorded on the hard disk or the like of a personal computer (hereinafter "PC") connected to the mixed reality display. When displayed in real time, it may appear, for example, in a sub-window that opens beside the image of the human body model the learner is viewing, or be shown full-screen by switching with that image as appropriate. When recorded on a PC hard disk or the like, the learner can review it on another monitor after the training. The captured video may also be output simultaneously to a large monitor; in that case, waiting learners other than the one currently training can share the video, and the learner who trains next can more easily understand the patient's viewpoint and state of mind, so an improvement in the learning efficiency of the whole group can be expected.
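
A minimal sketch of routing the same camera feed to several outputs at once (a sub-window on the display, a file on the connected PC, a shared large monitor) is a fan-out of each frame to a list of sinks. The recorder below uses OpenCV's VideoWriter; the other sinks are hypothetical hooks named only for illustration.

```python
import cv2


def make_disk_recorder(path, fps, frame_size):
    """Create a sink that appends each frame to a video file on the PC."""
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, frame_size)
    return writer.write  # calling the sink writes one frame to the file


def broadcast(frame, sinks):
    """Forward one frame from the mannequin's eye camera to every output,
    e.g. [show_in_subwindow, make_disk_recorder(...), show_on_big_monitor]."""
    for sink in sinks:
        sink(frame)
```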

The above-described learning method may also be such that the human body model is installed in an indoor space usable as a classroom, the mixed reality display can visualize a third virtual augmentation, a model of an examination apparatus and/or treatment apparatus, superimposed on the indoor space, and in the first step, the second step, or both, the learner learns the treatment or procedure to be performed while viewing the third virtual augmentation projected onto the indoor space through the mixed reality display.

The expression "examination apparatus and/or treatment apparatus" used in this aspect covers both the case of an examination apparatus and a treatment apparatus together and the case of only one of the two.

According to this aspect, by viewing the third virtual augmentation projected onto the indoor space through the mixed reality display, the learner can treat the indoor space as an examination room or treatment room and the human body model installed there as a subject to be examined or treated.

In this case, in the first step, the second step, or both, the learner may learn the treatment or procedure to be performed while additionally viewing the third virtual augmentation projected onto the indoor space through the mixed reality display. For example, while looking at the image of the third virtual augmentation (such as an examination apparatus model) projected onto the indoor space, the learner can study and practice preparing and checking the apparatus before starting, and can also study and practice positioning and movement according to the relative positions of the apparatus and the human body model after starting.

Images projected as the third virtual augmentation include, for example, MRI (Magnetic Resonance Imaging) scanners, CT (Computed Tomography) scanners, X-ray apparatuses, radiotherapy apparatuses, proton therapy apparatuses, and endoscopic equipment (an endoscope and the surgical tools used with it). The learner's movements can also be studied and trained, such as moving the human body model relative to the virtually projected apparatus, or moving the arm of the virtually projected apparatus and applying it to the human body model.

That is, according to this aspect, the learner can train in the procedure or treatment to be performed without preparing an actual examination or treatment apparatus, as if attending to a patient in a space containing such apparatus; the learner receives visual stimulation as if treating a patient in an actual clinical setting (in other words, obtains realistic visual input), and an even higher learning effect can be expected.

To achieve the above object, the learning system of the present invention comprises a human body model serving as a projection target; a mixed reality display wearable on the learner's head and capable of visualizing a first virtual augmentation, a physical anatomical model, superimposed on part or all of the human body model; and a learning target instrument, a medical or examination instrument that is the subject of procedure learning.

By wearing the mixed reality display, the learner can view the human body model and the first virtual augmentation projected onto it through the display. In learning the steps of a procedure or a treatment, the learner can pick up the learning target instrument used in actual procedures, examinations, and the like, and apply it to the human body model onto which the first virtual augmentation is projected.

The learner thereby obtains, visually, a sense of presence and immersion as if facing a patient, and because the learner applies a real instrument to the human body model rather than handling a virtual image without tactile feel, a response close to the actual work is obtained by touch. As a result, the learner receives visual and tactile stimulation as if treating a patient in an actual clinical setting (in other words, obtains realistic sight and touch), so a high learning effect can be expected.

The "human body model" need only be a possible projection target for the first virtual augmentation described above; for example, a life-size model of the kind called a training model doll is suitably used. As described later, because the mixed reality display can project the first virtual augmentation even onto absent portions of the human body model, the model need not be full-body size and may consist only of the portions particularly needed for learning (for example, only a torso without upper limbs, lower limbs, or head).

The "learning target instrument" may be any medical or examination instrument that is the subject of procedure learning, and various instruments qualify. Medical instruments include, for example, needles, tubes, and medical blades such as scalpels; examination instruments include, for example, the probe of an ultrasound diagnostic apparatus and the electrodes of an electrocardiograph.

The "mixed reality display" need only have at least the function of visualizing the first virtual augmentation, a physical anatomical model, superimposed on part or all of the human body model, and a structure wearable on the learner's head; a head-mounted display (goggles, glasses, helmet, etc.) capable of implementing so-called XR (extended reality) technology is used. Among XR technologies, MR (mixed reality) technology is suitably used because of its high degree of fusion between the human body model and the first virtual augmentation and its excellent three-dimensional expression, but the display is not limited to this; for example, AR (augmented reality) technology can also be used and is not excluded.

Examples of the "first virtual augmentation" include images of human organs and bones; projecting such images at appropriate locations on the human body model heightens immersion in learning and also allows the positions of organs and the like to be confirmed in advance. The "first virtual augmentation" includes not only a single image of a human organ or the like but also projections in which multiple images overlap (a superimposed mode). One example of superimposed projection is an image of a bone projected over an image of an organ lying beneath that bone; in this case, more practical learning becomes possible while referring to the arrangement of the multiple projected images.

The image data projected as the "first virtual augmentation" may be installed on the mixed reality display or stored in an auxiliary storage device connected to the unit, or may be received from an external device such as a server connected to the mixed reality display by wire or wirelessly. Image processing for the "first virtual augmentation" may likewise be performed by functions built into the mixed reality display, or the display may receive data processed by an external device such as a wirelessly connected server.
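
The alternatives just described (locally installed data versus data fetched from a connected server) could be combined in a simple fallback loader, as sketched below; the path and URL arguments are hypothetical examples, not part of the disclosure.

```python
from pathlib import Path
from urllib.request import urlopen


def load_augmentation_data(local_path, server_url=None):
    """Return the raw image data for the first virtual augmentation,
    preferring local or auxiliary storage and falling back to an external
    server when no local copy exists."""
    path = Path(local_path)
    if path.exists():
        return path.read_bytes()
    if server_url is None:
        raise FileNotFoundError(f"no local data at {local_path} and no server configured")
    with urlopen(server_url) as response:  # wired or wireless network fetch
        return response.read()
```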

That is, according to the learning system of the present invention, the content of the first virtual augmentation projected by the mixed reality display can be changed; projecting various images and videos (first virtual augmentations) onto a single human body model yields effectively the same benefit as owning multiple models. Compared with the conventional systems for learning procedures and examinations described above, the burden of procurement cost is therefore smaller and less storage space is needed when the system is not in use.

Furthermore, according to the learning system of the present invention, the mixed reality display can project the first virtual augmentation even onto absent portions of the human body model, so the absent portions projected by the first virtual augmentation can be presented as if they had been added to the model and actually existed. Display of the absent portions can also be switched on and off as needed. In other words, with the mixed reality display, varied learning is possible without owning multiple human body models; switching the absent portions on and off can be expected to improve the efficiency and effectiveness of learning, and the introduction and operating costs associated with owning multiple models can be reduced.
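
The on/off switching of absent portions could be as simple as maintaining a set of currently visible virtual parts, as in the small sketch below; the part names are hypothetical.

```python
def toggle_virtual_part(visible_parts, part):
    """Show or hide one projected body part (e.g. a virtual "left_arm" added
    to a torso-only mannequin); returns the updated set used by the renderer."""
    updated = set(visible_parts)
    if part in updated:
        updated.discard(part)  # hide the absent portion
    else:
        updated.add(part)      # show it again as if it were really there
    return updated
```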

The learning system described above may also be configured so that the mixed reality display can visualize a second virtual augmentation, a facial expression model of the patient role, superimposed on the facial portion of the human body model.

According to this aspect of the learning system, the second virtual augmentation can be visualized (applied) superimposed on the facial portion of the human body model. The learner can thereby study while viewing the projected facial expression model of the patient role as appropriate, or while viewing it and engaging in conversation suited to the situation. In this case, in addition to directly acquiring the technique, the learner can also train in observing changes in the patient's expression caused by a sudden change in condition before the procedure begins, and in conversing to ease the patient's anxiety and tension.

With this learning system, the learner can train in the procedure or treatment to be performed as if facing an actual patient, without preparing an actual patient or a person playing the patient role; the learner receives visual stimulation as if treating a patient in an actual clinical setting, and an even higher learning effect can be expected.

As described above, the facial expression model of the patient role represented by the second virtual augmentation may be a standard model or an attendee; in the case of an attendee, real-time expressions processed by video processing software may be displayed. Also as described above, this may be combined with a setting that generates a voice matching the expression of the standard model, and when an attendee serves as the expression model, the model and the learner may be set up to converse, enabling training in responding flexibly to circumstances.

The learning system described above may also be configured so that, in the facial expression model visualized in the second virtual augmentation, at least the depiction of the eyes allows the gaze to move toward the learner at appropriate times, and the learner can recognize that gaze through the mixed reality display.

According to this aspect, while the learner studies while viewing the facial expression model of the patient role projected by the second virtual augmentation, the gaze of the patient role can be changed and the learner can recognize the change.

Actual patients and examinees sometimes look at the practitioner's face when they feel anxiety or pain. That is, by having at least the gaze of the facial expression model visualized on the human body model move toward the learner at appropriate times, this aspect lets the learner experience tension and realism close to an actual procedure, and a further improvement in the learning effect can be expected.

In this learning system, the timely gaze movement may be, for example, a standard motion preset in the mixed reality display; an operation in which the instructor or another observer of the learner's training intentionally moves the gaze using software or the like; or a configuration in which a pressure-sensitive sensor on the human body model is linked to the software of the mixed reality display so that the gaze moves toward the learner when a predetermined pressure is sensed. Furthermore, when linked to a pressure-sensitive sensor on the model, the presence and duration of the gaze movement may be set to vary according to the strength of the sensed pressure.

In this learning system as well, the second virtual augmentation may be applied to the human body model in the first step as well as the second step. The facial expression model may likewise be a standard model preset in the mixed reality display, or may be the supervising instructor, an attendee, or the like.

The learning system described above may also be configured so that the first virtual augmentation allows a medical image and a three-dimensional anatomical image, each sized appropriately to fit the human body model, to be superimposed on the same screen. "Medical image" here includes the 2D CT images and the like described above, and does not exclude other types of medical images.

According to this aspect, the learner can study while viewing, as appropriate, the stereoscopic, life-size medical image and 3D anatomical image superimposed and projected by the first virtual augmentation. That is, the spatial positional relationship between the medical image (particularly a 2D one) and the 3D anatomical image can be studied in association on the same screen, which provides the training in understanding the relationship between medical images and 3D anatomical images that many students and inexperienced practitioners have struggled with, deepens that understanding further, and promises a further improvement in the learning effect. As the learner's understanding of this relationship deepens, it is ultimately expected that the learner will come to grasp the positions of organs and the like intuitively just by looking at a medical image (particularly a 2D medical image).

"Capable of superimposing the medical image and the 3D anatomical image on the same screen" requires only that the two can be superimposed on the same screen, and includes both a mode in which they are displayed overlapping simultaneously and a mode in which they are displayed alternately at the same position by switching.

In this learning system, the medical image and 3D anatomical image may be standard images of a healthy body; images of the organs, bones, etc. of a patient or individual having a lesion or particular features; or medical images taken of an individual together with a 3D anatomical image constructed from them. When images of the organs, bones, etc. of a patient with a lesion are applied, they can be used not only by learners but also by medical professionals, including doctors, to examine a treatment plan and hold preoperative meetings before actual surgery or treatment.

The learning system described above may also be configured so that the human body model and the learning target instrument are provided with a position sensor and a pressure-sensitive sensor or pressure sensor; the mixed reality display has a receiving function capable of receiving the position information detected by the position sensor and the pressure value detected by the pressure-sensitive or pressure sensor, as well as a set-value storage function; and the second virtual augmentation changes to an expression of discomfort or distress when the position information, the pressure value, or both received by the receiving function exceed preset values.

According to this aspect, when the learning target instrument is applied to the human body model in training simulating an examination or treatment, if the learner performs an action that would cause pain in a human, the facial expression model represented by the second virtual augmentation changes to an expression of discomfort or distress. The learner can therefore immediately recognize having performed an inappropriate action, and can train in performing the procedure while watching the patient's expression, just as when treating an actual person.

Whether an action would cause pain in a human body is judged by whether the position information and/or the pressure value detected by the position sensor and the pressure-sensitive or pressure sensor provided on the human body model and the learning target instrument exceed preset values.

For example, position information on puncture depth is transmitted by a position sensor provided on a needle serving as the learning target instrument; if the position information received by the mixed reality display, either directly or via an external device that analyzes it, is inappropriate, the facial expression model represented by the second virtual augmentation changes to an expression of discomfort or distress. Likewise, when a pressure-sensitive or pressure sensor is provided at the predetermined portion of the human body model to be studied, during training in pressing an examination device serving as the learning target instrument against that portion, the sensor transmits a pressure value; if the value received by the mixed reality display, either directly or via an external device that analyzes it, is inappropriate, the expression model changes in the same way. This may further be combined with a setting that generates a voice in time with the change to an expression of discomfort or distress; in that case the sense of presence increases and the training can be carried out with a sense of tension.

The learning system described above may also be such that the human body model has, at least at the predetermined portion to be studied, a pseudo-structure reproducing one of skin, muscle, bone, blood vessel, membrane, or organ, or a combination of several of these pseudo-structures.

In this learning system, when the learning target instrument is applied to the human body model in training simulating an examination or treatment, a pseudo-structure reproducing skin or the like is present at the site being examined or treated, so a tactile response closer to that of the human body is obtained. The learner thus receives tactile stimulation as if treating a patient in an actual clinical setting, and a higher learning effect can be expected.

The human body model may have a combination of several pseudo-structures of skin, muscle, bone, blood vessel, membrane, or organ applied to the predetermined portion to be studied. For example, when skin, bone, and a target organ are selected as pseudo-structures and provided in layers, in training a puncture procedure toward the target organ the learner can learn, with actual tactile feedback, palpation for the gaps between bones, the force needed to pierce the skin, passing the needle between bones, and the force and depth needed to bring the needle to the target organ. Furthermore, if pseudo-structures of muscle, membrane, and blood vessel are applied, the learner can also feel the resistance of muscle or membrane during the puncture and learn techniques for passing between muscle fibers and blood vessels without damaging them.

The pseudo-structures may be applied not only to part of the human body model but to the whole of it; in that case the structure becomes more complex, but a single model can support training for many kinds of examinations and procedures, improving convenience and further enhancing versatility.

In the learning system described above, the pseudo-structures may be made of soft material for skin, muscle, blood vessel, membrane, and organ, and of semi-hard or hard material for bone.

According to this aspect, when the learning target instrument is applied to the human body model in the training simulating an examination or treatment described above, pseudo-structures formed of materials whose hardness matches the site being examined or treated are in place, so a tactile response even closer to that of the human body is obtained. The learner thus receives tactile stimulation as if treating a patient in an actual clinical setting, and an even higher learning effect can be expected.

The learning system described above may also be configured so that the human body model is provided with a camera capable of photographing the learner, placed at the position of the eyes if the model has a head, or at a position corresponding to the eyes if it does not.

According to this aspect, video can be captured from the viewpoint of one observing the learner during training simulating an examination or treatment, and that video can be obtained. The learner can thus view objectively the practitioner's expressions and behavior as seen by the patient during an actual procedure, experience the patient's psychology, such as how the practitioner appears to the patient, and learn about both the treating and the treated sides in a single training session.

The aforementioned "camera" need only be provided so as to be able to photograph the learner; for example, a fixed camera preferably has a wide angle of view, and a movable camera may also be used. A movable arrangement includes, for example, a structure in which the head or the eye unit (including one at a position corresponding to the eyes) can be turned toward the learner by manual, automatic, or remote operation to photograph the learner.

The video captured by this camera may be displayed in real time on the mixed reality display worn by the learner, or recorded on the hard disk or the like of a PC connected to the display. When displayed in real time, it may appear, for example, in a sub-window beside the image of the human body model the learner is viewing, or be shown full-screen by switching with that image as appropriate. When recorded on a PC hard disk or the like, the learner can review it on another monitor after the training. The captured video may also be output simultaneously to a large monitor; in that case, waiting learners other than the one currently training can share the video, and the learner who trains next can more easily understand the patient's viewpoint and state of mind, so an improvement in the learning efficiency of the whole group can be expected.

The learning system described above may also be configured so that the mixed reality display can visualize a third virtual augmentation, a model of an examination apparatus and/or treatment apparatus, superimposed on the indoor space in which the human body model is installed.

The expression "examination apparatus and/or treatment apparatus" used in this aspect covers both the case of an examination apparatus and a treatment apparatus together and the case of only one of the two.

According to this aspect, by viewing the third virtual augmentation projected onto the indoor space through the mixed reality display, the learner can treat the indoor space as an examination room or treatment room and the human body model installed there as a subject to be examined or treated. For example, while looking at the image of the third virtual augmentation (such as an examination apparatus model) projected onto the indoor space, the learner can study and practice preparing and checking the apparatus before starting, and can also study and practice positioning and movement according to the relative positions of the apparatus and the human body model after starting.

That is, according to this aspect, the learner can train in the procedure or treatment to be performed without preparing an actual examination or treatment apparatus, as if attending to a patient in a space containing such apparatus; the learner receives visual stimulation as if treating a patient in an actual clinical setting, and an even higher learning effect can be expected.

According to the learning method and learning system of the present invention, the learner can learn the steps of a procedure or a treatment to be performed by using the learning target instrument on a real human body model while viewing, through a worn mixed reality display, a virtual augmentation of a physical anatomical model projected onto that model.

FIG. 1 is a schematic diagram showing the configuration of a learning system according to a first embodiment of the present invention.
FIG. 2 is an image diagram showing the human body model and the first virtual augmentation projected onto it in the learning system shown in FIG. 1.
FIG. 3 shows a learning system according to a second embodiment of the present invention, in which (a) is a perspective view of the state before the first and second virtual augmentations are projected onto the human body model, (b) is a perspective view of the state in which the first and second virtual augmentations are projected onto the human body model, and (c) is a perspective view of the state in which only the bone image has been erased from the first virtual augmentation projected in (b).
FIG. 4 relates to a modification (Modification 5) of the learning system according to the second embodiment shown in FIG. 3, and is a front view showing a state in which the first virtual augmentation, a second virtual augmentation (chest CT image), virtual operation buttons, and the like are projected onto the human body model.
FIG. 5 is an explanatory diagram of the learning system shown in FIG. 4 in use, in which (a) shows a state in which the display position of the second virtual augmentation (transverse chest CT image) projected onto the human body model has been lowered by operation to approximately the middle of the chest in the height direction, and (b) shows a state in which the display position has been lowered further from the position in (a).
FIG. 6 is an explanatory diagram of the learning system shown in FIG. 4 in use, in which (a) shows a state in which the display position of the second virtual augmentation (coronal chest CT image) projected onto the human body model is at approximately the middle of the chest in the depth direction, and (b) shows a state in which the display position has been moved further back than in (a).
FIG. 7 is an explanatory diagram of the learning system shown in FIG. 4 in use, in which (a) shows a state in which the display position of the second virtual augmentation (sagittal chest CT image) projected onto the human body model is at approximately the middle of the chest in the width direction, and (b) shows a state in which a three-dimensional anatomical image is superimposed on the image of (a) by operation.
FIG. 8 is an explanatory diagram showing the configuration of the human body model used in a learning system according to a third embodiment of the present invention.
FIG. 9 is an image diagram showing the third virtual augmentation projected onto the classroom in a learning system according to a fourth embodiment of the present invention.

 Embodiments of the present invention will now be described in more detail with reference to FIGS. 1 to 9. The description proceeds in the following order: first embodiment, second embodiment, Modification 1, Modification 2, Modification 3, Modification 4, Modification 5, third embodiment, and fourth embodiment. Reference numerals in the drawings are applied only to the extent that they reduce clutter and aid understanding; where several equivalent parts share a numeral, the numeral may be applied to only some of them.

First Embodiment
(Learning System 1)
Refer to FIGS. 1 and 2. The learning system 1, used in a classroom R1, comprises a human body model 2 serving as the projection target, a mixed reality display 3 worn by a learner H, and a learning target instrument 4 with which the learner H practices a procedure. Each part of the learning system 1 is described in detail below.

(Human Body Model 2)
The human body model 2 serves as a projection target for the first virtual augmentation described below. In this embodiment it is a life-size training mannequin with a head and torso; a commercially available chest drain insertion simulator is used.

(Mixed Reality Display 3)
The mixed reality display 3 is a device capable of visualizing a first virtual augmentation 31, a physical anatomical model, superimposed on part or all of the human body model 2, and is wearable on the head of the learner H. In this embodiment, a head-mounted display (goggle type) capable of running MR technology is used as the mixed reality display 3 (see FIG. 1).

 In this embodiment, the first virtual augmentation 31 consists of images of human bones, organs, blood vessels, and trachea. The image data is installed in the storage function unit of the mixed reality display; the image generation function unit assembles it into a composite in which the individual images overlap (a superimposed form), and the composite is projected through the display unit.
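
 Although the patent describes this pipeline only functionally, the behavior (installed anatomical layers composited into one superimposed image, with individual layers shown or hidden, as when only the bone image is erased in FIG. 3(c) or the absent-limb display is toggled below) can be pictured as a simple layer stack. The following Python sketch is illustrative only; AnatomyLayer, LayerStack, and every other name in it are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class AnatomyLayer:
    """One installed image layer of the first virtual augmentation."""
    name: str          # e.g. "bones", "organs", "vessels", "trachea"
    visible: bool = True

@dataclass
class LayerStack:
    """Assembles the superimposed composite that the display unit projects."""
    layers: list = field(default_factory=list)

    def toggle(self, name: str, visible: bool) -> None:
        for layer in self.layers:
            if layer.name == name:
                layer.visible = visible

    def compose(self) -> list:
        # A real image generation unit would blend pixel data; here we
        # simply report which layers are drawn, back to front.
        return [layer.name for layer in self.layers if layer.visible]

stack = LayerStack([AnatomyLayer("bones"), AnatomyLayer("organs"),
                    AnatomyLayer("vessels"), AnatomyLayer("trachea")])
stack.toggle("bones", False)   # e.g. erase only the bone image, as in FIG. 3(c)
print(stack.compose())         # ['organs', 'vessels', 'trachea']
```

 Installing a different layer set would correspond, in the same picture, to the content changes described below, by which one mannequin stands in for several.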

(Learning Target Instrument 4)
The learning target instrument 4 is the object of the learner H's procedure training; in this embodiment it is a chest drain catheter and its inner tube (medical instruments) (see FIGS. 1 and 2).

[Operation of the Learning System 1 and a Learning Method Using It]
With reference to FIGS. 1 and 2, the operation of the learning system 1 and a learning method using it are described below. The learning method comprises at least the following first and second steps.

 Specifically:
 (1) in the first step, the learner H wears the mixed reality display 3 on the head and, through the mixed reality display 3, views the human body model 2 and the first virtual augmentation 31 projected onto it; and
 (2) in the second step, the learner H, having completed the first step, picks up the learning target instrument 4 and applies it to the human body model 2 while viewing the human body model 2 and the first virtual augmentation 31, thereby learning the procedure or treatment to be performed.

 The operation of the learning system 1 (and of the learning method using it) is as follows. In the first step, before the learner H begins, the instructor (or the learner) places the human body model 2 in a suitable position (on the bed in FIG. 1), puts on the mixed reality display 3, and sets the system up so that the first virtual augmentation 31 is projected correctly onto the human body model 2. In this setup, the instructor or other wearer of the mixed reality display 3 operates a virtual controller shown on the display to select the first virtual augmentation 31 to project, and adjusts, among other things, the position at which the first virtual augmentation 31 is visualized superimposed on the human body model 2.
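
 The disclosure leaves this alignment step abstract; one way to read the adjustment is as a stored pose offset that keeps the hologram registered to the mannequin from frame to frame. The Python sketch below illustrates that reading only; Pose, HologramAnchor, and nudge are invented names, and the coordinate values are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0       # metres, classroom coordinates (assumed)
    y: float = 0.0
    z: float = 0.0
    yaw_deg: float = 0.0

@dataclass
class HologramAnchor:
    """Holds the instructor's manual adjustment so the first virtual
    augmentation is rendered at the same pose, frame after frame."""
    pose: Pose

    def nudge(self, dx=0.0, dy=0.0, dz=0.0, dyaw=0.0):
        # Each virtual-controller input refines the stored registration.
        self.pose.x += dx
        self.pose.y += dy
        self.pose.z += dz
        self.pose.yaw_deg += dyaw

anchor = HologramAnchor(Pose(0.0, 0.9, 0.0))   # mannequin lying on the bed
anchor.nudge(dz=-0.02, dyaw=1.5)               # fine adjustment before the lesson
```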

 By wearing the mixed reality display 3, the learner H can view the human body model 2 and the first virtual augmentation 31 projected onto it. When learning the procedure or treatment, the learner H can then pick up the learning target instrument 4 used in actual practice and apply it to the human body model 2 onto which the first virtual augmentation 31 is projected.

 The learner H thus gains, visually, the sense of presence and immersion of facing an actual patient and, because the learner handles the real learning target instrument 4 and applies it to the human body model 2 rather than manipulating an intangible virtual image, gains a tactile response close to that of the actual task. As a result, the learner H receives visual and tactile stimuli as though performing a treatment on a patient in a real setting, and a high learning effect can be expected.

 Although the human body model 2 of this embodiment consists only of a head and torso, the mixed reality display 3 can also project a first virtual augmentation, such as virtual upper and lower limbs, three-dimensionally onto the portions where the model has no physical body, so the learner can also view and study the body parts continuous with (added to) the part being studied.

 Furthermore, the mixed reality display 3 can switch the display of these physically absent portions on and off as needed. In addition, the content of the projected first virtual augmentation 31 can easily be changed by installing image data; projecting various images and videos onto a single human body model gives, in practice, the same effect as owning several models.

 Accordingly, even without owning several types of human body model, varied learning can be carried out by changing the content of the first virtual augmentation 31 or by toggling the display of the physically absent portions. This improves the efficiency and effectiveness of learning, reduces the acquisition (procurement) and operating costs that owning multiple models would entail, and requires little storage space when the system is not in use.

 The first virtual augmentation may also use images of an individual patient's affected region, collected in a prior examination, rather than only images of standard organs. In that case, before treating or operating on a patient with a specific condition, simulated training (learning) that is closer to the actual case and combines sight and touch can be carried out, compared with training on a generic model. Moreover, by installing or updating the data for the first virtual augmentation, learners can study the examination of new or unusual cases and train in new treatment methods.

Second Embodiment
(Learning System 1a)
Refer to FIG. 3. The learning system 1a is another embodiment (a second embodiment) of the learning system 1 and comprises a human body model 2a, a mixed reality display 3a, and a learning target instrument 4a. Because part of its structure and effects are shared with the learning system 1 of the first embodiment, the shared structure and effects are not described again; only the differences are described below. Although the mixed reality display 3a is not illustrated in this embodiment, it is labeled "3a" for convenience in distinguishing it from the mixed reality display 3.

(Human Body Model 2a)
The human body model 2a serves as a projection target for the first virtual augmentation and for the second virtual augmentation described below. In this embodiment it is a life-size training mannequin with a head, torso, and upper half of the thighs (see FIG. 3(a); it has no special mechanism such as a chest drain insertion part).

(Mixed Reality Display 3a)
The mixed reality display 3a is a head-mounted display (goggle type) capable of running MR technology and is configured so that, in addition to the first virtual augmentation 31 described above, it can visualize a second virtual augmentation 32, a facial expression model of a patient role, superimposed on the facial portion of the human body model 2a (see FIGS. 3(b) and (c)).

 In this embodiment, the second virtual augmentation 32 is an image of a human face. The image data is installed in the storage function unit of the mixed reality display; the image generation function unit constructs an image overlaid on the facial portion of the human body model 2a (a superimposed form), and that image is projected through the display unit.

(Learning Target Instrument 4a)
The learning target instrument 4a is the object of the learner's procedure training; in this embodiment it is an ultrasound diagnostic apparatus and its probe (examination instruments) (see FIGS. 3(a) to (c)).

 According to the learning system 1a and the learning method using it, the second virtual augmentation 32, in addition to the first virtual augmentation 31, is visualized (applied) superimposed on the facial portion of the human body model 2a (see FIGS. 3(b) and (c)). The learner can therefore study while viewing, as appropriate, the facial expression model projected as the second virtual augmentation 32. The learning system 1a also supports training that goes beyond acquiring technique itself, such as observing changes in the expression of the patient (human body model 2a) caused by a sudden change in condition before a procedure begins, or conversing with the patient to ease anxiety and tension.

 With the learning system 1a and the learning method using it, the steps of a procedure or a treatment can thus be practiced as though facing an actual patient, without preparing a real patient or someone to play one; the learner receives visual stimuli as though performing a treatment on a patient in a real setting, and an even higher learning effect can be expected.

[Modification 1]
The learning system 1a according to Modification 1 is configured so that, as the facial expression model of the patient role represented by the second virtual augmentation 32, it captures a facial image of the instructor or of a person present, taken with the mixed reality display 3a or an external terminal, and displays the expression in real time after processing by video processing software. In addition, the expression model and the learner are set up so that they can converse, allowing training in responding flexibly to circumstances.

 Except for the points above, the learning system 1a according to Modification 1 has the same structure and effects as the learning system 1a of the second embodiment; descriptions of the shared structure and effects are therefore omitted, and the same reference numerals are used for the system and its parts when describing the differences. The learning system 1a according to Modification 1 is not illustrated but is described with the same reference numerals as the second embodiment.

 With the learning system 1a of Modification 1 and the learning method using it, a person present serves as the facial expression model of the patient role, which lends the session a sense of tension from which a high learning effect can be expected (conversely, it may supply humor, in which case a high learning effect in a relaxed atmosphere is also possible). Furthermore, the learning method using Modification 1 allows learning that includes viewing the patient-role expression model as appropriate while also holding conversation suited to the situation (for example, explanations or small talk to ease the patient's anxiety and tension).

[Modification 2]
In the learning system 1a according to Modification 2, a pressure sensor (not shown) is provided in the torso of the human body model 2a, and a position sensor (not shown) is provided at the tip of the learning target instrument 4a. The mixed reality display 3a has a function for receiving the position information detected by the position sensor, a function for receiving the pressure value detected by the pressure sensor, and a function for storing preset values, and it is configured so that when the received position information, the received pressure value, or both exceed the preset values, the second virtual augmentation 32 changes to an expression of discomfort or agony.

 Except for the points above, the learning system 1a according to Modification 2 has the same structure and effects as the learning system 1a of the second embodiment (and Modification 1); descriptions of the shared structure and effects are omitted, and the same reference numerals are used when describing the differences. The learning system 1a according to Modification 2 is not illustrated but is described with the same reference numerals as the second embodiment.

 During training in ultrasound examination (so-called echo examination) using the learning system 1a of Modification 2, when the learner H presses the probe against the abdomen or another part of the human body model 2a, the pressure sensor in the model transmits a pressure value. The mixed reality display 3a receives that value and, if the pressure is inappropriate, the expression model represented by the second virtual augmentation 32 on the display changes to an expression of discomfort or agony.

 In other words, with the learning system 1a of Modification 2 and the learning method using it, if, while applying the learning target instrument 4a to the human body model 2a in training that imitates the examination described above, the learner performs an action that would cause pain in a human, the expression model represented by the second virtual augmentation 32 changes to an expression of discomfort or agony. The learner H can thus judge immediately that an inappropriate action was performed, and can train while watching the patient's face, just as when treating a real person.
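
 The disclosure specifies only the rule (the expression changes when a received value exceeds its preset value), not an implementation. A minimal sketch of that comparison, with names, units, and threshold figures assumed purely for illustration, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Presets:
    """Preset values held by the display's setting-value storage function."""
    max_pressure_kpa: float
    max_tip_depth_mm: float

def select_expression(pressure_kpa: float, tip_depth_mm: float,
                      presets: Presets) -> str:
    """Return the expression for the second virtual augmentation:
    exceeding either preset makes the face show discomfort/agony."""
    if (pressure_kpa > presets.max_pressure_kpa
            or tip_depth_mm > presets.max_tip_depth_mm):
        return "agony"
    return "neutral"

presets = Presets(max_pressure_kpa=12.0, max_tip_depth_mm=35.0)
print(select_expression(pressure_kpa=15.2, tip_depth_mm=20.0, presets=presets))
# -> 'agony' (the probe was pressed too hard)
```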

[Modification 3]
In the learning system 1a according to Modification 3, the facial expression model visualized in the second virtual augmentation 32 (FIGS. 3(b) and (c)) is set so that, in the depiction of the eyes (eye portion 321), the eye line can move toward the learner H at appropriate times, and so that the learner H can recognize that eye line through the mixed reality display 3a.

 Except for the points above, the learning system 1a according to Modification 3 has the same structure and effects as the learning system 1a of the second embodiment; descriptions of the shared structure and effects are omitted, and the same reference numerals are used when describing the differences. The learning system 1a according to Modification 3 is not illustrated but is described with the same reference numerals as the second embodiment.

 With the learning system 1a of Modification 3 and the learning method using it, while the learner H studies while viewing the patient-role expression model projected by the second virtual augmentation 32, the eye line of the patient role can be changed, and the learner H can recognize the change. Actual patients and examinees often look at the practitioner's face when anxious or in pain; by moving the visualized model's eye line toward the learner H at appropriate moments, the system lets the learner experience tension and realism close to an actual procedure, and a further improvement in learning effect can be expected.

 The eye-line movement described above can be performed automatically or manually. Automatic examples include a mode in which the movement is a routine preset in the mixed reality display, and a mode in which, by linking a pressure sensor or the like provided on the human body model with the software of the mixed reality display, the eye line moves toward the learner when pressure is sensed at a predetermined part of the model. A manual example is an operation in which the instructor deliberately moves the eye line.
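
 One way to picture the automatic mode is as a look-at computation fired by the pressure trigger; the sketch below is an assumed illustration, not the disclosed implementation, and all names and values in it are hypothetical.

```python
import math

def gaze_direction(eye_pos, learner_head_pos):
    """Unit vector from the model's eye portion 321 toward the learner's
    head; the display would orient the rendered pupils along it."""
    dx, dy, dz = (learner_head_pos[i] - eye_pos[i] for i in range(3))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    return (dx / norm, dy / norm, dz / norm)

def on_pressure_sample(pressure_kpa, trigger_kpa, eye_pos, learner_head_pos):
    # Automatic mode: pressure at a predetermined part of the mannequin
    # triggers the eye-line shift toward the learner.
    if pressure_kpa >= trigger_kpa:
        return gaze_direction(eye_pos, learner_head_pos)
    return None  # keep the current, preset gaze routine

print(on_pressure_sample(9.0, 8.0, (0.0, 1.0, 0.0), (0.4, 1.6, 0.5)))
```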

[Modification 4]
In the learning system 1a according to Modification 4, a camera (not shown) capable of photographing the learner H is provided at the position of the eye portion 321 on the head of the human body model 2a. The camera is a fixed wide-angle camera, and the head of the human body model 2a is structured to move left, right, up, and down. The footage captured by the camera can be displayed in real time on the mixed reality display 3 worn by the learner H, and can also be recorded to the hard disk or the like of a personal computer wirelessly connected to the mixed reality display 3.

 Except for the points above, the learning system 1a according to Modification 4 has the same structure and effects as the learning system 1a of the second embodiment; descriptions of the shared structure and effects are omitted, and the same reference numerals are used when describing the differences. The learning system 1a according to Modification 4 is not illustrated but is described with the same reference numerals as the second embodiment.

 With the learning system 1a of Modification 4 and the learning method using it, footage can be shot from the viewpoint of someone observing the learner H during training that imitates an examination or treatment, and that footage can be obtained in real time; footage recorded on the personal computer's hard disk or the like can also be reviewed on another monitor after the session. In other words, the learner H can view objectively the practitioner's expression and behavior as seen by the patient during an actual procedure, can experience the patient's psychology, including how the practitioner appears to the patient, and can learn about both the treating side and the treated side in a single session.

 The captured footage may also be output simultaneously to a large monitor, in which case waiting learners other than the one currently training can share it; because the next learner to start training can then more easily understand the patient's viewpoint and state of mind, an improvement in learning efficiency for the whole group can be expected.

 The "camera" in this modification has the structure described above but is not limited to it; a movable structure, for example, will do, so long as the camera can photograph the learner. If the human body model has no head, the camera may be provided at the position corresponding to the eyes. If a stereo camera is installed, another learner playing the patient role can wear a mixed reality display and watch its footage, immersing themselves in a stereoscopic view from the patient's eye line and experiencing the patient's psychology as well.

[Modification 5]
The learning system 1a' according to Modification 5, shown in FIGS. 4 to 7, uses a human body model 2a' (no head, torso only) and provides, as the first virtual augmentation 31a' applied to it, a two-dimensional CT image 311 of organs (corresponding to the "medical image" mentioned above; likewise below) sized to fit the human body model 2a', and a three-dimensional anatomical image 312, which can be superimposed on the same screen. FIGS. 4 to 7 show the full view displayed on the mixed reality display 3a in the learning system 1a', which is what the learner H sees.

 Refer to FIG. 4. The first virtual augmentation 31a' shown there is a two-dimensional CT image 311 of the chest (a stack of many transverse chest CT slices). Around the human body model 2a', several handles (three in this modification) and several buttons (eight in this modification) rendered by virtual augmentation are displayed. Each handle is used to change, for example, the display position of an image; the learner H operates it by gripping it within the displayed view. Each button is used, for example, to switch which images are displayed; the learner H operates it by pressing it within the displayed view.

 In the learning system 1a', the learner H can display the transverse chest CT slice at any position by gripping the first handle 313 in the displayed view and moving it vertically. For example, when the learner H grips the first handle 313 and lowers it from the position shown in FIG. 4, the position of the displayed transverse slice descends gradually, as shown in FIGS. 5(a) and (b). Conversely, raising the gripped first handle 313 gradually raises the position of the transverse slice (not shown). The raising and lowering of the first handle 313 and the display position of the transverse chest CT slice are thus linked.

 Likewise, the learner H can display the coronal chest CT slice at any position by gripping the second handle 314 in the displayed view and moving it forward or backward. For example, when the learner H grips the second handle 314 and pushes it from the position shown in FIG. 6 toward the back of the displayed view, the position of the displayed coronal slice moves gradually toward the back, as shown in FIGS. 6(a) and (b). Conversely, pulling the gripped second handle 314 forward moves the coronal slice gradually toward the front (not shown). The forward-backward movement of the second handle 314 and the display position of the coronal chest CT slice are thus linked.

 Furthermore, the learner H can display the sagittal chest CT slice at any position by gripping the third handle 315 in the displayed view and moving it laterally (left-right in FIGS. 4 to 6; front-back in FIG. 7(a)). For example, when the learner H grips the third handle 315 and pushes it from the position shown in FIG. 7(a) toward the back of the displayed view, the position of the displayed sagittal slice moves gradually toward the back. Conversely, pulling the gripped third handle 315 forward moves the sagittal slice gradually toward the front (not shown). The movement of the third handle 315 and the display position of the sagittal chest CT slice are thus linked.
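
 The linkage described for the three handles amounts to mapping each handle's travel along its axis to a slice index in the corresponding CT stack. The Python sketch below illustrates that mapping; the travel ranges and slice counts are invented for the example.

```python
def slice_index(handle_pos: float, travel_lo: float, travel_hi: float,
                n_slices: int) -> int:
    """Map a handle's position along its axis (clamped to its travel
    range) to the index of the CT slice to display."""
    clamped = min(max(handle_pos, travel_lo), travel_hi)
    t = (clamped - travel_lo) / (travel_hi - travel_lo)
    return round(t * (n_slices - 1))

# First handle 313: vertical travel selects the transverse (axial) slice;
# second handle 314: depth travel selects the coronal slice;
# third handle 315: lateral travel selects the sagittal slice.
axial    = slice_index(0.35, 0.0, 0.60, n_slices=240)
coronal  = slice_index(0.10, 0.0, 0.30, n_slices=180)
sagittal = slice_index(0.22, 0.0, 0.50, n_slices=200)
print(axial, coronal, sagittal)   # 139 60 88
```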

 The first button 316 is an ON/OFF switch for displaying the two-dimensional CT image 311. When the learner H presses the first button 316 in the displayed view (turns the switch ON), the learning system 1a' displays the two-dimensional CT image 311 (in FIGS. 4 to 7(a), a two-dimensional CT image of the chest), projecting it superimposed on the human body model 2a'.

 The second button 317 is an ON/OFF switch for displaying the three-dimensional anatomical image 312. When the learner H presses the second button 317 in the displayed view (turns the switch ON), the learning system 1a' displays the three-dimensional anatomical image 312 (in FIG. 7(b), a three-dimensional anatomical image of the chest), projecting it superimposed on the human body model 2a'.

 Pressing the first button 316 and the second button 317 alternately displays the views shown in FIGS. 7(a) and (b) in turn. That is, with the learning system 1a' of Modification 5 and the learning method using it, the learner H can study while viewing, as appropriate, the stereoscopic, life-size two-dimensional CT image 311 and three-dimensional anatomical image 312 superimposed and projected by the first virtual augmentation 31a', and can thereby learn the spatial positional relationship between the two-dimensional CT image 311 and the three-dimensional anatomical image 312 while relating the two dimensions to the three.

 This enables training in understanding the relationship between two-dimensional CT images and three-dimensional anatomical images, something many students and inexperienced practitioners struggle with, and a deepening of that understanding, so a further improvement in learning effect can be expected. As the learner H's understanding of the relationship deepens, the learner is ultimately expected to grasp the positions of organs and other structures intuitively from a two-dimensional CT image alone.

 In the learning system 1a' of Modification 5, the two-dimensional CT image 311 and the three-dimensional anatomical image 312 may be standard healthy images, or images of the organs, bones, and the like of a patient or individual with a lesion or particular characteristics. When images of a patient's diseased organs or bones are used, the system can be used not only by learners but also by medical professionals, including physicians, to consider a treatment strategy or hold preoperative meetings before performing actual surgery or treatment.

 "Capable of superimposing the two-dimensional CT image and the three-dimensional anatomical image on the same screen" requires only that the two can be superimposed on the same screen; it covers both the mode in which switching displays them alternately at the same position and the mode in which they are displayed overlapping simultaneously.

Third Embodiment
(Learning System 1b)
Refer to FIG. 8. The learning system 1b is another embodiment (a third embodiment) of the learning system 1 and comprises a human body model 2b, the mixed reality display 3, and the learning target instrument 4. Because the structure and effects of the mixed reality display 3 and the learning target instrument 4 are shared with the learning system 1 of the first embodiment, their description is omitted; the structure and effects of the human body model 2b, which differ, are described below. Although the mixed reality display 3 is not illustrated in this embodiment, it is labeled "3" for convenience of description.

(Human Body Model 2b)
The human body model 2b serves as a projection target for the first virtual augmentation 31 and the second virtual augmentation 32 (see FIG. 8). In this embodiment, the human body model 2b is a life-size training mannequin with a head, torso, and upper half of the thighs; a cavity is formed in its chest, and a pseudo-structure 21 (see the dashed portion in FIG. 8) reproducing the arrangement, shape, and texture of skin, muscle, bone, pleura, lungs, and blood vessels is fitted into the cavity. Within the pseudo-structure 21, the skin, muscle, pleura, lungs, and blood vessels are formed of soft material, and the bone of hard material. The pseudo-structure may be assembled from purchased ready-made models of individual organs and the like, or, as described below, manufactured by the user with a 3D printer.

 When the pseudo-structure is made in-house, it may be produced with a 3D printer, or fabricated from resin film (wrap), foamed resin (sponge), various hard and soft resins, and the like, or a combination of these. Furthermore, a pseudo-structure made in-house or procured externally may reproduce not only standard organs but also the organs of an individual patient, based on data collected in a prior examination. In that case, before operating on a patient with a specific condition, simulated training that is closer to the actual case and combines sight and touch can be carried out, compared with training on a generic model.

 With the learning system 1b and the learning method using it, when the learning target instrument 4 is applied to the human body model 2b during procedure training, the pseudo-structure 21, which reproduces the shape and hardness of the skin and the other parts being treated, provides a tactile response closer to that of a human body. The learner H thus receives tactile stimuli as though treating a patient in a real setting, and an even higher learning effect can be expected.

 Moreover, because the human body model 2b of the learning system 1b has the pseudo-structure 21 in which the elements are layered, in training for a puncture of a target organ as shown in FIG. 8 (that is, using the learning target instrument 4) the learner H can practice, with a sense of touch close to that of a real human body: palpating between the bones; gauging the force needed to pierce skin, membrane, and muscle while feeling their resistance; passing the needle between the bones without injuring muscle fibers or blood vessels; and gauging the force and depth with which to bring the needle to the target organ.

 The images projected as the first virtual augmentation 31 are set to be projected at positions coinciding with their counterparts in the pseudo-structure 21; for example, the bone portion of the pseudo-structure 21 and the bone image projected as the first virtual augmentation 31 are projected at overlapping positions. Through the mixed reality display 3, the learner can therefore see the projected courses of the blood vessels and the arrangement of the organs and bones (first virtual augmentation 31), as well as the projected facial expression model (second virtual augmentation 32), so learning through sight is of course possible as well. (In FIG. 8, however, the makeup of the pseudo-structure 21 is emphasized for clarity; the images projected as the first virtual augmentation 31, such as those shown in FIG. 2, are omitted and only their positions are indicated.)

Fourth Embodiment
(Learning System 1c)
Refer to FIG. 9. The learning system 1c is another embodiment (a fourth embodiment) of the learning system 1 and comprises the human body model 2, a mixed reality display 3c, and a learning target instrument 4c. Because the structure and effects of the human body model 2 are shared with the learning system 1 of the first embodiment, their description is omitted; the structures and effects of the mixed reality display 3c and the learning target instrument 4c, which differ, are described below. Although the mixed reality display 3c is not illustrated in this embodiment, it is labeled "3c" for convenience in distinguishing it from the mixed reality display 3.

(Mixed Reality Display 3c)
The mixed reality display 3c is a head-mounted display (goggle type) capable of running MR technology and is configured so that, in addition to the first virtual augmentation 31 and the second virtual augmentation 32 described above, it can visualize a third virtual augmentation 33, an examination apparatus (in this embodiment, a CT scanner), superimposed on the classroom R2, the indoor space in which the human body model 2 is installed (see FIG. 9).

 In this embodiment, the third virtual augmentation 33 is an image of an examination apparatus 331. The image data is installed in the storage function unit of the mixed reality display; the image generation function unit constructs an image overlaid on the indoor space of the classroom R2 (a superimposed form), and that image is projected through the display unit. Within the image of the examination apparatus 331, the gantry portion is projected into empty space, while the cradle portion is projected over the ordinary bed on which the human body model 2 is placed.

(Learning Target Instrument 4c)
The learning target instrument 4c is an AED (Automated External Defibrillator).

 With the learning system 1c and the learning method using it, by viewing the third virtual augmentation 33 projected into the space of the classroom R2 through the mixed reality display 3c, the learner can treat the classroom R2 as an examination room and the human body model 2 installed there (onto which the first virtual augmentation 31 and the second virtual augmentation 32 are projected) as the person being examined (see FIG. 9). While viewing the image of the third virtual augmentation 33, the learner can study and practice the preparation and checking procedures for the examination apparatus 331 before it starts, and can also study and practice positioning and other movements appropriate to the relative placement of the examination apparatus 331 and the human body model 2 once it has started.

 In CT examinations a contrast agent is used, so the person examined may suffer drug-induced shock. With the learning system 1c and the learning method using it, resuscitation training in which the learning target instrument 4c (AED) is used on the human body model 2 can also be carried out on the assumption that such shock has occurred. In other words, the learning system 1c reproduces a special environment within the classroom R2 and enables simulation for responding to situations that may arise in that environment.

 With the learning system 1c and the learning method using it, then, the steps of a procedure or a treatment can be practiced, without preparing an actual examination apparatus, as though facing a patient in a room where such an apparatus is present; the learner receives visual stimuli as though treating a patient in a real setting, and an even higher learning effect can be expected. The system also makes it easy to simulate the experience of a virtual examination apparatus when no real one is available, and familiarity with the surrounding environment (examination room or operating room) can be expected to develop.

 With each of the learning systems 1 (1a, 1a', 1b, 1c) described above, the external output of the mixed reality display may be used to project what the learner H is seeing onto another display, so that the learner H's viewpoint, situation, and experiences (successes, failures, and so on) are shared with other students. The instructor, too, can watch the other display and give guidance or warnings to the student while they learn.

 Whereas conventional medical simulators are positioned as tools for learning and mastering medical techniques and procedures, the learning systems 1 (1a, 1a', 1b, 1c) described above and the learning methods using them place their emphasis on learning about the risks and safety of procedures and about medical safety, which conventional medical simulators could not address.

 The various virtual augmentations shown on the mixed reality display 3 (3a, 3c) may also be given a function that, for a specific disease or symptom, displays a message, superimposed on the relevant site or instrument, indicating whether the treatment site or the chosen instrument is correct. For example, when the learner touches the appropriate puncture site or range, text such as "correct" pops up, which improves the effectiveness of self-study.
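
 This feedback function reduces to a hit test of the touched point against a stored target region. The sketch below illustrates the idea under the assumption of a circular region; the names and coordinates are invented for the example.

```python
def puncture_feedback(touch_xy, target_center_xy, radius_mm: float) -> str:
    """Pop-up text for the self-study function: 'correct' when the
    touched point lies inside the preset puncture region."""
    dx = touch_xy[0] - target_center_xy[0]
    dy = touch_xy[1] - target_center_xy[1]
    return "correct" if dx * dx + dy * dy <= radius_mm * radius_mm else "incorrect"

# A touch about 2.8 mm from the centre of a 10 mm-radius target region.
print(puncture_feedback((102.0, 55.0), (100.0, 57.0), radius_mm=10.0))  # correct
```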

 Furthermore, for the first virtual augmentation 31 and the second virtual augmentation 32 shown on the mixed reality display 3 (3a, 3c) in each of the learning systems 1 (1a, 1a', 1b, 1c) described above, facial expressions and body types of men and women of all ages can be selected from preset data and projected. This allows more practical learning while observing differences in body type, organs, and the like by age and sex. In addition, because there is no need to prepare human body models of different ages, sexes, and body types, the acquisition (procurement) and operating costs of owning multiple models can be reduced, and little storage space is needed when the system is not in use.

 In addition, the learning systems 1 (1a, 1a', 1b, 1c) described above let students (including practicing technologists and physicians) learn three-dimensional images intuitively (and can be used for study that does not involve procedure training). Even for physicians, radiological technologists, and clinical laboratory technologists, mentally converting two-dimensional images into three-dimensional ones is not easy without considerable experience (without having seen many images); with these learning systems, physical anatomy can also be learned visually.

 The learning systems 1 (1a, 1a', 1b, 1c) described above also make it possible to study rare anatomical or clinical cases. In surgical procedures in clinical practice, medical accidents can occur on encountering a rare anatomical or clinical case, and cases leading to patient death have been reported. Examples include cases in which the course of a blood vessel differs from ordinary human anatomy and cases in which the very presence of a vessel is atypical; in such cases, the operator's lack of knowledge or experience may lead to vascular injury during treatment and a serious medical accident.

 With each learning system 1 (1a, 1a', 1b, 1c), however, the display of the three-dimensional anatomical image can be turned on and off, so rare anatomical or clinical cases can be reproduced, displayed, and studied, and the systems can likewise be put to use for the risks and safety of procedures, and for medical safety, in such rare cases.

 In addition, with the learning systems 1a, 1a', 1b, and 1c described above, the face of the patient role can be reproduced and displayed as the second virtual augmentation, including changes of eye line; when the patient's gaze turns toward the operator, the sense of reality (in particular, the tension) felt by the learner in the operator role is heightened, and a further improvement in learning effect can be expected.

 Furthermore, in daily clinical practice two-dimensional CT images are not observed at life size; at present there is no environment for studying two-dimensional CT images and three-dimensional anatomical images stereoscopically at life size, and no simulator has yet been found that can superimpose life-size two-dimensional CT images and three-dimensional anatomical images simultaneously and allow their spatial positional relationship to be learned by relating the two dimensions to the three. With the learning systems 1a, 1a', 1b, and 1c described above, however, life-size two-dimensional CT images and three-dimensional anatomical images can be superimposed simultaneously on the human body model (simulator), and this function strongly supports the understanding of, and training in, the relationship between two-dimensional CT images and three-dimensional anatomical images with which many learners struggle.

 Each of the learning systems described above uses the human body model (simulator), a machine or instrument, in a more anthropomorphized way; in particular, the learning systems 1a, 1a', 1b, and 1c add interactivity based on the effects described above. No learning system has previously existed that uses an anthropomorphized, interactive human body model, and the learning methods using the learning systems 1a, 1a', 1b, and 1c can be expected to deliver more efficient and superior learning effects compared with learning using conventional human body models.

 The terms and expressions used in this specification and in the claims are descriptive only and in no way limiting, and there is no intention to exclude terms and expressions equivalent to the features described herein and in the claims, or to parts of them. It goes without saying that various modifications are possible within the scope of the technical concept of the present invention. Words such as "first" and "second" do not denote rank or importance; they are used to distinguish one element from another.

1, 1a, 1a', 1b, 1c  Learning system
2, 2a, 2a', 2b  Human body model
21  Pseudo structure
3, 3a, 3c  Mixed reality display
31, 31a'  First virtual augmentation
311  Two-dimensional CT image
312  Three-dimensional anatomical image
313  First handle
314  Second handle
315  Third handle
316  First button
317  Second button
32  Second virtual augmentation
321  Eye
33  Third virtual augmentation
331  Examination device
4, 4a, 4c  Learning target instrument
H  Learner
R1, R2  Classroom

Claims (18)

1. A learning method carried out using a human body model serving as a projection target, a mixed reality display capable of visualizing, superimposed on part or all of the human body model, a first virtual augmentation that is a physical anatomical model, and a learning target instrument that is a medical instrument or examination instrument to be the subject of procedure learning, the method comprising:
a first step in which a learner wears the mixed reality display and, through the mixed reality display, views the human body model and the first virtual augmentation projected onto it; and
a second step in which the learner, having completed the first step, picks up the learning target instrument and applies it to the human body model while viewing the human body model and the first virtual augmentation, thereby learning the steps or treatment of the procedure to be performed.

2. The learning method according to claim 1, wherein the mixed reality display is capable of visualizing a second virtual augmentation, which is a facial expression model of a patient role, superimposed on a facial portion of the human body model;
in the first step and the second step, the second virtual augmentation is applied to the human body model; and
at least in the second step, the learner studies while viewing, as appropriate, the facial expression model of the patient role projected by the second virtual augmentation, or studies while viewing that facial expression model as appropriate and also engaging in conversation suited to the situation.

3. The learning method according to claim 2, wherein, in the facial expression model visualized in the second virtual augmentation, at least the depiction of the eyes is set so that the gaze can move toward the learner at appropriate times, and the learner can recognize that gaze through the mixed reality display.

4. The learning method according to claim 1 or claim 2, wherein the first virtual augmentation is provided so that a medical image and a three-dimensional anatomical image, each set to a size suited to the human body model, can be superimposed on the same screen.

5. The learning method according to claim 1 or claim 2, wherein the human body model has, applied at least to a predetermined part to be studied, a pseudo structure reproducing any one of skin, muscle, bone, blood vessel, membrane, or organ, or a combination of a plurality of such pseudo structures.

6. The learning method according to claim 5, wherein the pseudo structures are made of a soft material for skin, muscle, blood vessels, membranes, and organs, and of a semi-hard or hard material for bone.

7. The learning method according to claim 2, wherein the human body model and the learning target instrument are provided with a position sensor and a pressure-sensitive sensor or pressure sensor, and the second virtual augmentation is set to change to a facial expression expressing discomfort or agony when the position information detected by the position sensor, the pressure value detected by the pressure-sensitive sensor or pressure sensor, or both exceed preset values.

8. The learning method according to any one of claims 1, 2, 3, and 7, wherein the human body model is provided with a camera capable of photographing the learner, located at the position of the eyes if the model has a head, or at a position corresponding to the eyes if it does not.

9. The learning method according to any one of claims 1, 2, 3, and 7, wherein the human body model is installed in an indoor space usable as a classroom, and the mixed reality display is capable of visualizing a third virtual augmentation, which is a model of an examination device and/or a treatment device, superimposed on the indoor space; and
in the first step, the second step, or both, the learner learns the treatment or steps to be performed while viewing, through the mixed reality display, the third virtual augmentation projected into the indoor space.

10. A learning system comprising:
a human body model serving as a projection target;
a mixed reality display that can be worn on a learner's head and is capable of visualizing, superimposed on part or all of the human body model, a first virtual augmentation that is a physical anatomical model; and
a learning target instrument that is a medical instrument or examination instrument to be the subject of procedure learning.

11. The learning system according to claim 10, wherein the mixed reality display is set to be capable of visualizing a second virtual augmentation, which is a facial expression model of a patient role, superimposed on a facial portion of the human body model.

12. The learning system according to claim 11, wherein, in the facial expression model visualized in the second virtual augmentation, at least the depiction of the eyes is set so that the gaze can move toward the learner at appropriate times, and the learner can recognize that gaze through the mixed reality display.

13. The learning system according to claim 10 or claim 11, wherein the first virtual augmentation is provided so that a medical image and a three-dimensional anatomical image, each set to a size suited to the human body model, can be superimposed on the same screen.

14. The learning system according to claim 11, wherein the human body model and the learning target instrument are provided with a position sensor and a pressure-sensitive sensor or pressure sensor; and
the mixed reality display has a receiving function capable of receiving the position information detected by the position sensor and the pressure value detected by the pressure-sensitive sensor or pressure sensor, as well as a set-value storage function, and is set so that the second virtual augmentation changes to a facial expression expressing discomfort or agony when the position information received by the receiving function, the pressure value so received, or both exceed preset values.

15. The learning system according to any one of claims 10, 11, 12, and 14, wherein the human body model has, applied at least to a predetermined part to be studied, a pseudo structure reproducing any one of skin, muscle, bone, blood vessel, membrane, or organ, or a combination of a plurality of such pseudo structures.

16. The learning system according to claim 15, wherein the pseudo structures are made of a soft material for skin, muscle, blood vessels, membranes, and organs, and of a semi-hard or hard material for bone.

17. The learning system according to any one of claims 10, 11, 12, and 14, wherein the human body model is provided with a camera capable of photographing the learner, located at the position of the eyes if the model has a head, or at a position corresponding to the eyes if it does not.

18. The learning system according to any one of claims 10, 11, 12, and 14, wherein the mixed reality display is set to be capable of visualizing a third virtual augmentation, which is a model of an examination device and/or a treatment device, superimposed on the indoor space in which the human body model is installed.

PCT/JP2024/000462 2023-01-18 2024-01-11 Learning method and learning system Ceased WO2024154647A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2024571725A JPWO2024154647A1 (en) 2023-01-18 2024-01-11

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023006226 2023-01-18
JP2023-006226 2023-01-18

Publications (1)

Publication Number Publication Date
WO2024154647A1 true WO2024154647A1 (en) 2024-07-25

Family

ID=91955894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/000462 Ceased WO2024154647A1 (en) 2023-01-18 2024-01-11 Learning method and learning system

Country Status (2)

Country Link
JP (1) JPWO2024154647A1 (en)
WO (1) WO2024154647A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119626060A (en) * 2024-12-19 2025-03-14 中国人民解放军陆军军医大学第二附属医院 An interactive puncture biopsy teaching system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004348091A (en) * 2003-03-26 2004-12-09 National Institute Of Advanced Industrial & Technology Physical model and operation support system using the same
JP2012181364A (en) * 2011-03-01 2012-09-20 Morita Mfg Co Ltd Training device for medical purpose and training component
JP2018112646A (en) * 2017-01-11 2018-07-19 村上 貴志 Surgery training system
JP2022507622A (en) * 2018-11-17 2022-01-18 ノバラッド コーポレーション Use of optical cords in augmented reality displays
JP2021096413A (en) * 2019-12-19 2021-06-24 国立大学法人北海道大学 Training device for tracheal suction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Not just viewing, but also dissection, with VR pre-operative simulation", 5 January 2017 (2017-01-05), XP093193328, Retrieved from the Internet <URL:https://www.moguravr.com/spectovive-vr-operation/> *

Also Published As

Publication number Publication date
JPWO2024154647A1 (en) 2024-07-25

Similar Documents

Publication Publication Date Title
US11195340B2 (en) Systems and methods for rendering immersive environments
Issenberg et al. Simulation and new learning technologies
US20030031993A1 (en) Medical examination teaching and measurement system
US20120270197A1 (en) Physiology simulation garment, systems and methods
KR20180058656A (en) Reality - Enhanced morphological method
Mostafa et al. Designing NeuroSimVR: a stereoscopic virtual reality spine surgery simulator
Kuchenbecker et al. Evaluation of a vibrotactile simulator for dental caries detection
US20230169880A1 (en) System and method for evaluating simulation-based medical training
Simon et al. Design and evaluation of UltRASim: An immersive simulator for learning ultrasound-guided regional anesthesia basic skills
WO2024154647A1 (en) Learning method and learning system
Vincent-Lambert et al. A guide for the assessment of clinical competence using simulation
CN118486218A (en) Spinal endoscopic surgery simulation training system and method
Beltes et al. Dental Education Tools in Digital Dentistry
Coles Investigating augmented reality visio-haptic techniques for medical training
Dumay Medicine in virtual environments
Brown Simulation Technology
Haase et al. Virtual reality and habitats for learning microsurgical skills
Violante Virtual Reality Simulation Transforms Medical Education: Can It Advance Student’s Surgical Skills and Application?
Crossan The design and evaluation of a haptic veterinary palpation training simulator
Botelho et al. Virtual Reality for Pediatric Trauma Education-A Preliminary Face and Content Validation Study
Luursema et al. Stereopsis in medical virtual-learning-environments
Norkhairani et al. Simulation for laparoscopy surgery with haptic element for medical students in HUKM: a preliminary analysis
Sainsbury Development and evaluation summaries of a percutaneous nephrolithotomy (PCNL) surgical simulator
Woo Immersive Learning of Bimanual Haptic Intravenous Needle Insertion in Virtual Reality: Developing a Simulator for Nursing Students
Chan Development and comparison of augmented and virtual reality interactions for direct ophthalmoscopy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24744570

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2024571725

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE