WO2025206282A1 - Information processing system, prediction device, information processing method, control program, and recording medium - Google Patents
- Publication number
- WO2025206282A1 (PCT/JP2025/012706)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- subject
- information
- bone
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/13—Tomography
- A61B8/14—Echo-tomography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Definitions
- Patent Document 1 discloses a configuration for estimating whether a patient has osteoporosis based on their X-ray images.
- An information processing system includes a prediction unit that uses a prediction model to output prediction information from a first image and first data that show at least a portion of a first subject.
- the prediction model is generated by machine learning using, as explanatory variables, a third image and second data that show at least a portion of a second subject, and, as a target variable, abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from the first time point at which the third image was captured.
- the prediction information is information that indicates the possibility of an abnormality occurring in the bones of the first subject.
- An information processing method is executed by one or more computers and includes a prediction step of using a prediction model to output prediction information from a first image and first data that show at least a portion of a first subject.
- the prediction model is generated by machine learning using, as explanatory variables, a third image and second data that show at least a portion of a second subject, and, as a target variable, abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from the first time point at which the third image was captured.
- the prediction information is information that indicates the possibility of an abnormality occurring in the bones of the first subject.
- FIG. 1 is a block diagram illustrating an example of a configuration of an information processing system according to a first embodiment of the present disclosure.
- FIG. 2 is a block diagram for explaining the operation of a prediction model according to the first embodiment.
- FIG. 3 is a flowchart illustrating an example of the flow of the learning process by the learning unit of the prediction device according to the first embodiment.
- FIG. 4 is a flowchart illustrating an example of the flow of the prediction process by the prediction unit of the prediction device according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of a first image of a first subject according to the first embodiment.
- FIG. 6 is a diagram illustrating an example of segmentation of a second image according to the first embodiment.
- FIG. 7 is a block diagram illustrating an example of a configuration of an information processing system according to a second embodiment.
- FIG. 8 is a block diagram for explaining the operation of an estimation model according to the second embodiment.
- FIG. 9 is a block diagram for explaining the operation of a prediction model according to the second embodiment.
- FIG. 10 is a flowchart illustrating an example of a learning process performed by a learning unit according to the second embodiment.
- FIG. 11 is a diagram showing a plain X-ray image of the chest of a first subject according to the second embodiment.
- FIG. 12 is a flowchart illustrating an example of the flow of a prediction process performed by a prediction device according to the second embodiment.
- FIG. 13 is a diagram showing the results of fracture-risk prediction for a subject by a prediction device according to the second embodiment and the effects of each countermeasure.
- Factors that lead to fractures include a decrease in bone strength and increased stress on bones due to muscle atrophy. In order to accurately estimate the likelihood of a fracture, it is desirable to take into account not only bone strength but also the condition of the muscles. There is room for improvement in the accuracy of technology that can predict the occurrence of fractures from medical images.
- One aspect of the present disclosure has been made in consideration of the above-mentioned problems, and aims to provide an information processing system, information processing method, prediction device, control program, and recording medium that can improve the prediction accuracy of predicting information about abnormalities from medical images.
- Fig. 1 is a block diagram showing an example of the configuration of the information processing system 1 in embodiment 1.
- Fig. 1 shows the information processing system 1 including a prediction device 10, an image management device 40, an electronic medical record management device 50, and a presentation device 60. Note that the configuration of the information processing system 1 is not limited to the configuration shown in Fig. 1.
- the first image G1 may be, for example, an image captured at a medical facility at a third time point.
- the first image includes at least one of a frontal image showing a specified area from the front (for example, an image obtained by irradiating the specified area with X-rays in the front-to-back direction) and a lateral image showing the specified area from the side (for example, an image obtained by irradiating the specified area with X-rays in the left-to-right direction).
- the first image G1 is not limited to a plain X-ray image, but may be any medical image containing information about bones, such as a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, an image obtained by DXA (Dual Energy X-ray Absorptiometry), an image obtained by DES (Dual Energy Subtraction), or an ultrasound image.
- the first image G1 may be an image that includes a phantom, or an image that does not include a phantom.
- the second image G2 is, for example, an echo image showing muscles in areas corresponding to the head, chest, waist, feet, hands, etc. of the first subject (see Figure 6).
- the second image G2 may show only one area of the first subject, or multiple areas.
- the second image G2 may be a still image or a video.
- the second image G2 is not limited to an echo image, and may be any medical image containing information about muscles, such as a CT image, an MRI image, an image obtained by DXA, an image obtained by DES, or an ultrasound image.
- the second image G2 may be of a different type from the first image G1, or the same type.
- the second image G2 may be an image captured using a medium with a different wavelength from that of the first image G1.
- the second image G2 may be, for example, an image taken at a medical facility at a third time point.
- the difference in the time at which the first image G1 and the second image G2 were taken may be within a predetermined period, such as two weeks, one month, six months, or one year.
- the third time point may be set based on the date on which the first image G1 was taken, or the date on which the second image G2 was taken.
- the reference date may be either the later or earlier of the dates on which the first image G1 and the second image G2 were taken.
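The capture-date rules above can be sketched as a small validation helper. The function names, the six-month default, and the `use_later` switch are illustrative assumptions, not anything specified in the disclosure.

```python
from datetime import date

# Hypothetical helpers: the two images should be captured within a
# predetermined period, and the reference (third) time point may be based
# on either the later or the earlier of the two capture dates.
def within_period(date_g1: date, date_g2: date, max_days: int = 183) -> bool:
    """True when G1 and G2 were captured within max_days of each other."""
    return abs((date_g1 - date_g2).days) <= max_days

def reference_date(date_g1: date, date_g2: date, use_later: bool = True) -> date:
    """Pick the later (default) or earlier capture date as the reference."""
    return max(date_g1, date_g2) if use_later else min(date_g1, date_g2)
```

With a two-month gap, `within_period` accepts the pair under the six-month default; a gap over a year is rejected.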
- the second image G2 also includes at least one of a front image showing the specified area from the front (for example, an image obtained by irradiating the specified area with X-rays in the front-to-back direction) and a side image showing the specified area from the side (for example, an image obtained by irradiating the specified area with X-rays in the left-to-right direction).
- first image G1 or the second image G2 is a CT image
- information about the bone trabeculae based on a three-dimensionally constructed image may be used, or information about the bone trabeculae based on a two-dimensionally captured image may be used.
- at least one of a three-dimensional image, a cross-sectional image perpendicular to the body axis connecting the head and legs (e.g., horizontal section), and a cross-sectional image parallel to the body axis (e.g., sagittal section or coronal section) may be used.
- the prediction information output by the prediction device 10 is information indicating the possibility that an abnormality will occur in the bone of the first subject shown in the first image G1 at the fourth time point.
- the prediction information is a fracture risk indicating the possibility that a fracture will occur in the bone of the first subject at the fourth time point.
- A musculoskeletal disease is assumed as an example of the abnormality, and a fragility fracture is assumed as the fracture.
- Musculoskeletal diseases also include osteoporosis, osteoarthritis, spondylosis deformans, neurological disorders, sarcopenia, etc.
- the fourth point in time is a point in time different from the third point in time.
- the third point in time is the point in time when the first image G1 was captured.
- the fourth point in time refers to any point in time before or after the third point in time.
- the fourth point in time may include multiple points in time, such as one year before, one year after, five years after, ten years after, and thirty years after the third point in time.
- the image management device 40 is a computer that functions as a server for managing the third and fourth images.
- the third image is a plain X-ray image showing the bones of a specific area of the second subject.
- the fourth image is an echo image showing the muscles of a part of the second subject corresponding to the specific area.
- the fourth image may depict a region of the second subject that is different from the region corresponding to the above-mentioned predetermined region.
- the third image and the fourth image may be stored in separate image management devices.
- the prediction device 10 may also be configured to acquire the third image and the fourth image from the imaging device without going through the image management device 40.
- the third image is not limited to a plain X-ray image, but may be any medical image containing information about bones, such as a CT image, MRI image, DXA image, DES image, or ultrasound image.
- the fourth image is not limited to an echo image, but may be any medical image containing information about muscles, such as a CT image, MRI image, DXA image, DES image, or ultrasound image. In other words, the fourth image may be the same or different from the third image.
- the combination of types of the third and fourth images may be the same as or different from the combination of types of the first image G1 and the second image G2.
- the third image may be, for example, an image taken at a medical facility at a first time point.
- the difference in the time at which the third image and the fourth image were taken may be within a predetermined period, such as two weeks, one month, six months, or one year.
- the first time point may be set based on the date the third image was taken, or the date the fourth image was taken.
- the reference date may be either the later or earlier of the dates at which the third and fourth images were taken.
- the electronic medical record management device 50 is a computer that functions as a server for managing electronic medical record information of a first subject who has undergone a medical examination or test at a medical facility, etc.
- the image management device 40 and the electronic medical record management device 50 are connected to the acquisition unit 21 of the prediction device 10.
- the electronic medical record information includes attribute information of the first subject.
- the attribute information includes at least one of the following: the first subject's age, sex, height, weight, muscle quality, race, lifestyle information, medication information, occupational information, blood test information, urine test information, saliva test information, information on existing diseases, medical history, medical history of the first subject's family, surgery information, genetic information, childbirth information, menopausal information, items from the Fracture Risk Assessment Tool (FRAX (registered trademark)), and information regarding estimated menopause based on hormone information.
- Childbirth information may include at least one of whether or not the subject has given birth, the number of children born, etc.
- the lifestyle habits may be, for example, sleep time, wake-up time, sleep duration, daily exercise amount, meal contents, meal times, meal duration, and blood glucose level.
- Meal contents may include, for example, at least one of the name of the dish, the ingredients consumed, and the intake amount.
- Meal contents may also include, for example, an estimated intake of at least one of calcium, vitamin B, vitamin D, and vitamin K.
- the blood glucose level may be, for example, a value estimated from parameters acquired by a wearable device.
- Medication information may include, for example, information such as the name of the medication, the amount taken, and the duration of medication.
- Information regarding medications taken may include information regarding the steroid medication being used.
- Blood test information may be, for example, information regarding the results of at least one of a biochemical test, a glucose metabolism test, and an endocrine system test.
- the presentation device 60 is a device for presenting information output by the prediction device 10.
- the presentation device 60 is a computer used by medical personnel, such as doctors, affiliated with a medical facility.
- the presentation device 60 is, for example, a personal computer with an LCD display or an organic EL display, a tablet terminal, a smartphone, etc.
- the presentation device 60 is controlled by the presentation control unit 26 to present fracture risk and other predictive information.
- the presentation device 60 may also be a device that prints the predictive information on paper or the like and outputs it.
- the prediction device 10 includes a control unit 2 and a storage unit 3.
- the control unit 2 has, for example, a CPU (Central Processing Unit), and manages the operation of the prediction device 10 by comprehensively controlling each unit of the prediction device 10.
- the control unit 2 of the prediction device 10 includes an acquisition unit 21, an analysis unit 22, a correction unit 23, a learning unit 24, a prediction unit 25, and a presentation control unit 26.
- the control unit 2 and the storage unit 3 are electrically connected to each other.
- the acquisition unit 21 acquires a first image G1 and a second image G2 of the first subject from the image management device 40.
- the acquisition unit 21 also acquires a third image and a fourth image of the second subject from the image management device 40.
- the acquisition unit 21 may also acquire the first image G1 and the second image G2 input via an input device (not shown).
- the acquisition unit 21 may also extract attribute information from the first image G1 and the second image G2 if attribute information has been added to the acquired first image G1 and the second image G2.
- the third and fourth images of the multiple people are stored in the storage unit 3 as learning data 33. Furthermore, abnormality information regarding bone abnormalities that occurred in the multiple people at the second time point is stored in the storage unit 3 as training data 34.
- the analysis unit 22 segments the second image G2 to identify which bone, muscle, etc. area each pixel in the second image G2 corresponds to, thereby dividing the area. Segmentation can be performed using, for example, a convolutional neural network (CNN), a full convolutional network (FCN), a U-Net, a V-Net, etc.
- the analysis unit 22 identifies the soft tissue area.
- the analysis unit 22 analyzes information including at least one of the amount, thickness, amount of atrophy, and flexibility of the muscle and fat of the first subject.
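As a rough illustration of what the segmentation output enables, the sketch below measures tissue area and mean muscle thickness from a per-pixel label mask such as one produced by a CNN, FCN, U-Net, or V-Net. The label values, pixel spacings, and function names are assumptions for this example, not values from the disclosure.

```python
# Assumed label values for the segmented tissue classes.
FAT, MUSCLE1, MUSCLE2, BONE = 1, 2, 3, 4

def tissue_area(mask, label, mm2_per_px=0.25):
    """Area of one tissue class in mm^2 (mask is a 2-D list of labels)."""
    return sum(row.count(label) for row in mask) * mm2_per_px

def muscle_thickness(mask, label, mm_per_px=0.5):
    """Mean vertical extent of a muscle, in mm, over columns containing it."""
    n_rows, n_cols = len(mask), len(mask[0])
    heights = [sum(1 for r in range(n_rows) if mask[r][c] == label)
               for c in range(n_cols)]
    heights = [h for h in heights if h > 0]
    return (sum(heights) / len(heights)) * mm_per_px if heights else 0.0
```

On a toy 4x2 mask where MUSCLE1 fills the middle two rows, the area is 4 pixels x 0.25 mm^2 and the thickness is 2 pixels x 0.5 mm per column.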
- the correction unit 23 performs a predetermined correction on the first image G1. Specifically, the correction unit 23 performs a correction to remove the soft tissue area identified by the analysis unit 22 from the first image G1.
- soft tissue refers to tissue other than bone, such as muscle and fat.
- the third image described above may also be corrected by the correction unit 23 in the same way as the first image G1.
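The correction performed by the correction unit 23 can be pictured as masking out the identified soft-tissue pixels. The mask convention (nonzero = soft tissue) and the zero fill value are assumptions for this sketch.

```python
# Sketch of the soft-tissue removal: pixels flagged as muscle/fat in the
# mask are replaced with a fill value, leaving the bone regions of G1.
def remove_soft_tissue(image_g1, soft_mask, fill_value=0.0):
    """image_g1 and soft_mask are same-shaped 2-D lists of pixel values."""
    return [
        [fill_value if m else px for px, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image_g1, soft_mask)
    ]
```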
- the learning unit 24 performs learning processes to generate the prediction model 32. Note that if a prediction model 32 generated by another device is stored in advance in the storage unit 3, the learning unit 24 may not be necessary.
- the storage unit 3 is a computer-readable, non-transitory recording medium that stores the control program 31.
- the storage unit 3 is configured to include ROM (Read Only Memory), RAM (Random Access Memory), etc.
- the control unit 2 controls the prediction device 10 by executing the control program 31.
- the control program 31 is a control program for causing the computer to function as the information processing system 1, and causes the computer to function as the prediction unit 25.
- the storage unit 3 also stores a prediction model 32.
- the prediction model 32 is implemented using AI (Artificial Intelligence).
- the prediction model 32 is generated by machine learning using the third image captured at the first time point and the second data as explanatory variables, and abnormality information related to a fracture that occurred in the bone of the second subject at the second time point as a target variable.
- the storage unit 3 also stores the above-mentioned learning data 33 and training data 34.
- the second data is, as an example, the above-mentioned fourth image.
- the prediction model 32 may be generated by machine learning using the third image, the fourth image, bone information obtained by inputting the third image into the first estimation model, and muscle information obtained by inputting the fourth image into the second estimation model as explanatory variables, and abnormality information related to a fracture that occurred in the bone of the second subject at the second time point as a target variable.
- the prediction unit 25 may then use the prediction model to output prediction information from the first image G1, the second image G2, bone information obtained by inputting the first image G1 into the first estimation model, and muscle information obtained by inputting the second image G2 into the second estimation model.
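A toy stand-in for this training setup, assuming the images have already been reduced to feature vectors: logistic regression in which bone and muscle features are the explanatory variables and the fracture label at the second time point is the target variable. The disclosed model is a neural network, so this sketch only illustrates the variable roles; every name here is an illustrative assumption.

```python
import math

# Toy training sketch (NOT the disclosed network): features from the third
# image (bones) and fourth image (muscles) are the explanatory variables;
# the abnormality label (1 = fracture at the second time point) is the target.
def train_prediction_model(bone_feats, muscle_feats, labels, lr=0.5, epochs=300):
    X = [b + m for b, m in zip(bone_feats, muscle_feats)]  # explanatory variables
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):                                # plain SGD on log loss
        for x, y in zip(X, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                                      # gradient of log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_fracture_risk(w, b, bone_f, muscle_f):
    """Fracture risk Y in [0, 1] for one subject's feature vectors."""
    x = bone_f + muscle_f
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

Trained on a few labeled examples, the model then maps a new subject's bone and muscle features to a risk between 0 and 1.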
- Fig. 2 is a block diagram for explaining the operation of the prediction model 32 in embodiment 1.
- the prediction model 32 is, for example, a convolutional neural network (CNN). Note that the prediction model 32 may be configured using a neural network other than a convolutional neural network.
- the prediction model 32 has, for example, an input layer 32a, a hidden layer 32b, and an output layer 32c.
- the prediction model 32 is generated by machine learning using the third and fourth images captured at the first time point as explanatory variables and abnormality information regarding an abnormality that occurred in the bones of the second subject at the second time point as the target variable.
- the first point in time is the point in time when the third image is captured.
- the second point in time is a point in time different from the first point in time.
- the second point in time refers to, for example, any point in time before or after the first point in time at which the third image is captured.
- the second point in time may include multiple points in time, such as one year before, one year after, five years after, ten years after, and thirty years after the first point in time.
- the period between the first point in time and the second point in time may be the same length as the period between the third point in time and the fourth point in time, or it may be different.
- the period between the third point in time and the fourth point in time may be shorter or longer than the period between the first point in time and the second point in time.
- the second point in time may be the same as the third point in time described above. That is, for example, a third image taken five years ago may be used to train the prediction model 32 to predict the risk of fracture occurring five years later.
- the first point in time is five years ago
- the second and third points in time are the present
- the fourth point in time is five years in the future.
- the first image G1 and the second image G2 are input to the input layer 32a, and a fracture risk Y indicating the possibility of a fracture occurring in the bone of the first subject at the fourth time point is output.
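The layer structure just described (input layer 32a, hidden layer 32b, output layer 32c emitting the fracture risk Y) can be illustrated with a tiny dense network. The disclosure names a CNN, so this is only a structural sketch with made-up layer sizes and random weights.

```python
import math
import random

class TinyPredictionModel:
    """Structural sketch: input layer 32a -> hidden layer 32b -> output 32c."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.W1 = [[rng.gauss(0, 0.1) for _ in range(n_hidden)]
                   for _ in range(n_in)]                 # input -> hidden weights
        self.b1 = [0.0] * n_hidden
        self.W2 = [rng.gauss(0, 0.1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [max(0.0, sum(x[i] * self.W1[i][j] for i in range(len(x))) + self.b1[j])
             for j in range(len(self.b1))]               # hidden layer 32b (ReLU)
        z = sum(hj * wj for hj, wj in zip(h, self.W2)) + self.b2
        return 1.0 / (1.0 + math.exp(-z))                # fracture risk Y in (0, 1)
```

The sigmoid on the output layer guarantees that the emitted risk is a value strictly between 0 and 1, matching its reading as a probability-like score.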
- the prediction model 32 may output information indicating the possibility of osteoporosis occurring in the bones of the first subject at the fourth time point.
- the possibility of osteoporosis may be classified, for example, based on at least one of the presence or absence of a fracture, the possibility of a fracture, and a change in bone density.
- the possibility of osteoporosis includes "no osteoporosis," "suspected osteoporosis," or "possible osteoporosis." More specifically, primary osteoporosis may be indicated when there is no disease that reduces bone mass, secondary osteoporosis is not observed, and there is a fracture or a high possibility of a fracture.
- Osteoporosis may also be determined when the YAM indicated by the bone density estimate, which is information indicating the bone density of the first subject, is less than 80% and the measurement results indicate a fracture other than the vertebral body or proximal femur. Osteoporosis may also be determined when the YAM indicated by the bone density estimate is 70% or less.
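The YAM-based criteria in the preceding paragraph can be written as a simple decision function. This is a sketch of the thresholds as stated above, not a clinical tool; the function and parameter names are assumptions.

```python
# Decision sketch of the YAM criteria described above: osteoporosis when
# YAM is 70% or less, or when YAM is below 80% and a fragility fracture
# other than of the vertebral body or proximal femur is present.
def classify_osteoporosis(yam_percent, has_other_fracture):
    if yam_percent <= 70.0:
        return "osteoporosis"
    if yam_percent < 80.0 and has_other_fracture:
        return "osteoporosis"
    return "no osteoporosis"
```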
- Fig. 3 is a flowchart showing an example of the flow of the learning process by the learning unit 24 of the prediction device 10.
- the learning unit 24 executes the learning process shown in Fig. 3 and stores the prediction model 32 in the storage unit 3 before the prediction process by the prediction unit 25, which will be described later.
- the learning unit 24 acquires a third image of the second subject via the acquisition unit 21 (S1).
- the second subject is, for example, multiple people.
- the second subject may be a non-human, such as an animal such as a dog, cat, or horse. It may be the same species as the first subject, or a different species.
- the second subject need not be multiple people and may be a single person.
- the third image is a plain X-ray image taken at a first time point, showing the bones of a specified region of the second subject. Note that multiple pieces of data taken at different times for the same person may be used as the third image.
- the learning unit 24 acquires attribute information for each second subject from the electronic medical record management device 50 and associates the attribute information with each third image.
- the learning unit 24 acquires a fourth image of the second subject via the acquisition unit 21 (S2).
- the fourth image may be an echo image of the muscles of the second subject at a location corresponding to the above-mentioned specified location, imaged at the first time point.
- the third image may at least depict the same location as the location from which the fourth image was taken.
- the third image may also depict a location different from the location from which the fourth image was taken. Note that S1 and S2 may be performed in the opposite order, or at the same timing.
- the third image may be at least one of a CT image, an MRI image, an image obtained by DXA, an image obtained by DES, and an ultrasound image.
- the fourth image may be at least one of a CT image, an MRI image, an image obtained by DXA, an image obtained by DES, and an ultrasound image.
- the learning unit 24 acquires abnormality information regarding an abnormality that occurred in the bones of the second subject at the second time point via the acquisition unit 21 (S3).
- the abnormality may be a musculoskeletal disorder, such as a fracture. That is, the information is regarding a fracture that occurred in the second subject at the second time point.
- abnormalities include, in addition to fractures, osteoporosis, osteoarthritis, spondylosis deformans, nerve disorders, sarcopenia, etc.
- Fig. 4 is a flowchart showing an example of the flow of the prediction process by the prediction device 10.
- Fig. 5 is a diagram showing an example of a first image G1 of a first subject.
- the acquisition unit 21 acquires a first image G1 of the first subject from the image management device 40 (S11).
- the first image G1 may be, for example, a plain X-ray image showing a chest bone B of the first subject, as shown in FIG. 5.
- the acquisition unit 21 acquires a second image G2 of the first subject from the image management device 40 (S12).
- the second image G2 may be an echo image showing the muscles of the area corresponding to the chest of the first subject. Note that S11 and S12 may be performed in the opposite order, or at the same time.
- the analysis unit 22 segments the second image G2 and analyzes information about the muscles and fat of the first subject (S13). In S13, the analysis unit 22 segments the second image G2, i.e., divides the multiple types of muscles, fat, etc. that appear in the second image G2 into regions.
- FIG. 6 is a diagram showing an example of segmentation of the second image G2.
- FIG. 6 shows an echo image of the back of the first subject.
- the back of the first subject is divided into the regions of subcutaneous fat F, first muscle M1, second muscle M2, and bone B.
- Alternatively, the image may be divided into a single muscle region grouping multiple types of muscle, subcutaneous fat F, and bone B.
- the analysis unit 22 can analyze the amount of the first muscle M1 by determining the thickness of the first muscle M1, for example, as shown by the open arrow in Figure 6.
- the analysis unit 22 can also analyze the amount of fat, the amount of muscle atrophy, flexibility, etc. of the first subject.
- the analysis unit 22 can calculate the amount of fat of the first subject by segmenting the second image G2 to calculate the fat area or by determining the brightness of the echo image.
- the analysis unit 22 can also determine the amount of muscle atrophy of the first subject by comparing the average muscle thickness measurements of people of the same age as the first subject with the muscle thickness of the first subject.
- the analysis unit 22 can also determine the flexibility of the first subject's muscles by capturing a video of the echo image of the first subject and analyzing the muscle movement.
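The age-comparison step for atrophy can be sketched as below. The reference table of average muscle thicknesses is invented for illustration and is not data from the disclosure.

```python
# Hypothetical age-matched reference muscle thicknesses in mm (made-up
# values), keyed by age decade, standing in for population averages.
AGE_AVG_THICKNESS_MM = {40: 30.0, 50: 28.0, 60: 25.0, 70: 22.0}

def atrophy_percent(age_decade, measured_mm):
    """Muscle atrophy as a percentage below the age-group average."""
    avg = AGE_AVG_THICKNESS_MM[age_decade]
    return max(0.0, (avg - measured_mm) / avg * 100.0)
```

A subject in their sixties with a 20 mm measurement against a 25 mm average would register 20% atrophy; a measurement at or above the average registers zero.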
- After S13, the correction unit 23 performs a predetermined correction to remove the soft tissue identified by the analysis unit 22, i.e., tissue areas other than bone, from the first image G1 (S14). Specifically, the correction unit 23 removes muscle and/or fat, etc., other than bone identified by the analysis unit 22, from the first image G1. It is preferable that the second image G2 show a region corresponding to that of the first image G1.
- the prediction unit 25 reads the prediction model 32 from the storage unit 3, inputs the first image G1 corrected by the correction unit 23 and the second image G2 to the input layer 32a of the prediction model 32, and outputs the fracture risk Y from the output layer 32c (S15: prediction step).
- the prediction unit 25 may use a prediction model 32 to output, as prediction information, an influence level indicating the degree of influence that each of the first image G1 and the second image G2 has on the fracture of the first subject from the first image G1 and the second image G2.
- information may be output indicating that the influence level of the bone condition shown in the first image G1 is 70% and that of the muscle condition shown in the second image G2 is 30%.
- the prediction model 32 is generated by machine learning using the third image and the fourth image as explanatory variables and the influence level indicating the degree of influence on the fracture of the second subject at the second time point as the objective variable.
- the prediction unit 25 may use the prediction model 32 to output a predicted fracture time, which is the time when a fracture is likely to occur.
- the prediction model 32 is generated by machine learning using the third and fourth images as explanatory variables and the time when a fracture occurs in the bone of the second subject as the objective variable.
- the predicted fracture time may be in years, for example, 5 years or 10 years, or may be in months, for example, 5 years and 6 months or 10 years and 6 months.
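Since the predicted fracture time may be expressed either in whole years or in years and months, a small helper along these lines could format a horizon given in months (the helper is illustrative and not part of the disclosure):

```python
def format_horizon(months: int) -> str:
    """Render a prediction horizon given in months as 'Y years' or
    'Y years and M months', matching the examples above."""
    years, rem = divmod(months, 12)
    return f"{years} years" if rem == 0 else f"{years} years and {rem} months"

print(format_horizon(120))  # 10 years
print(format_horizon(66))   # 5 years and 6 months
```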
- the presentation control unit 26 presents a message to the presentation device 60 such as, "Fracture risk Y will be 80% or higher in 10 years."
- the presentation control unit 26 may also display a diagram showing the fracture risk Y at multiple points in time.
- the presentation control unit 26 may also present a graph showing the progress of the fracture risk Y to the presentation device 60.
- the presentation control unit 26 may also present the fracture risk Y, which is the estimated result, and the actual measurement result to the presentation device 60. This completes the prediction process by the prediction device 10 shown in FIG. 4.
- the prediction unit 25 uses the prediction model 32 to output, as prediction information, the fracture risk Y, the influence level, the predicted fracture time, and the like of the first subject from the first image G1 showing the bones of the first subject and the second image G2 showing the muscles of the first subject.
- the prediction unit 25 predicts the fracture risk Y, the influence level, the predicted fracture time, and the like using the prediction model 32, with information about the bones of the first subject as input information, together with information about the muscles that support those bones.
- the prediction unit 25 can therefore accurately predict the fracture risk Y, the influence level, and the predicted fracture time of the first subject. This allows, for example, doctors at medical facilities to use the output results of the information processing system 1, such as the fracture risk Y, to diagnose the first subject, who is a patient, more appropriately. Furthermore, even doctors who do not specialize in orthopedics can diagnose patients with an accuracy close to that of an orthopedic surgeon by referring to the output results of the information processing system 1.
- the correction unit 23 performs correction to remove areas of tissue other than bone from the first image G1, thereby improving the accuracy of prediction of the fracture risk Y by the prediction unit 25.
- FIG. 7 is a block diagram showing an example of the configuration of the information processing system 1A.
- Fig. 7 shows the information processing system 1A including a prediction device 10A, an image management device 40, an electronic medical record management device 50, and a presentation device 60, but the configuration of the information processing system 1A is not limited to the configuration shown in Fig. 7.
- the prediction device 10A is a device that acquires a first image G1a of a first subject, which is the target of prediction, and outputs prediction information from the acquired first image using a prediction model 32A.
- the first subject is, for example, a human.
- the first subject may be a non-human, such as an animal such as a dog, cat, or horse.
- the first image G1a may show either the bones or muscles of the first subject, or may show at least a portion of the bones and muscles of the first subject.
- the first image G1a may be a medical image.
- the first image G1a is, for example, a plain X-ray image showing tissues of an area including at least one of the head, neck, chest, lower back, temporomandibular joints, spinal intervertebral joints, hip joints, sacroiliac joints, knee joints, ankle joints, feet, toes, shoulder joints, acromioclavicular joints, elbow joints, wrist joints, hands, and fingers of the first subject.
- the tissues are, for example, bones and muscles. Note that the tissues may be either bones or muscles.
- the plain X-ray image may include, for example, a panoramic X-ray image used for dentistry.
- a panoramic X-ray image is an image that includes multiple teeth, for example, all of the teeth.
- the first image G1a is captured at a third time point.
- the first image G1a includes at least one of a frontal image captured from the front of the first subject, for example, an image obtained by irradiating the target area with X-rays in the front-to-back direction, and a lateral image captured from the side, for example, an image obtained by irradiating the target area with X-rays in the left-to-right direction.
- the first image G1a may be, for example, a frontal chest X-ray image including a person's chest, or a frontal lumbar X-ray image including a person's lumbar region.
- when the first image G1a is a CT image, it may be, for example, at least one of a three-dimensional image, a cross-sectional image perpendicular to the body axis connecting the head and legs (e.g., a horizontal section), and a cross-sectional image parallel to the body axis (e.g., a sagittal or coronal section).
- the first image G1a may be an image that shows bones, or an image that does not show bones.
- the acquisition unit 21 acquires a first image G1a from the image management device 40, which shows at least a portion of the bones and muscles of the first subject at the third time point.
- the acquisition unit 21 also acquires attribute information of the first subject from the electronic medical record management device 50. Note that the acquisition unit 21 may also acquire the first image G1a input by an input device (not shown).
- the memory unit 3 also stores a prediction model 32A and an estimation model 35.
- the prediction model 32A and the estimation model 35 each include AI (Artificial Intelligence).
- the estimation model 35 includes at least one of a first estimation model 351, a second estimation model 352, and a third estimation model 353.
- the third image may be an image showing the same area as the first image G1a, or an image showing a different area from the first image G1a.
- the third image may be an image showing at least a part of the chest, the same as the first image G1a, or an image showing at least a part of the waist, different from the first image G1a.
- the third image may be an image in the same orientation as the first image G1a, or an image in a different orientation from the first image G1a.
- the third image may be a frontal image or a lateral image.
- Information about the bones of the second subject includes, for example, at least one of the possibility of osteoporosis, bone density, bone mass, bone quality, and muscle mass.
- Bone density may be information measured using a bone density measuring device such as a DXA device, or information obtained by estimating bone density from X-ray images using the first estimation model. Bone quality information may be, but is not limited to, at least one of bone formation markers, bone resorption markers, bone quality markers (e.g., vitamin K levels), cortical bone thickness, trabecular density, trabecular orientation, and trabecular bone score. Bone mass is the sum of bone mineral and bone matrix protein; in the present disclosure, bone mass is an index related to bone density and denotes the amount of bone tissue in the skeleton.
- As muscle mass, for example, the muscle mass [kg] of each body part measured using a body composition analyzer can be used. As muscle mass, the area [cm2] and/or width [cm] of a muscle region imaged by MRI or DXA can also be used. Furthermore, as muscle mass, the muscle thickness [cm] in an ultrasound image can be used, as can measured values [kg] from a muscle dynamometer, such as back muscle strength and/or grip strength.
- the second point in time refers to any point in time in the future or the past relative to the first point in time, at which the third image was captured.
- the second point in time may include multiple points in time, such as one year, five years, ten years, and thirty years after the first point in time.
- the first estimation model 351 is generated by machine learning using the third image as an explanatory variable and bone strength information indicating the measurement results of the bone density of the second subject as a target variable.
- the first estimation model 351 outputs information indicating the bone density of the bones of the first subject from the first image G1a.
- the bone strength information includes information regarding bone density and information regarding bone quality. Note that the information regarding bone density and information regarding bone quality may be handled separately.
- DXA stands for Dual-energy X-ray Absorptiometry.
- a DXA device, which measures bone density using DXA, irradiates the lumbar vertebrae from the front with X-rays, specifically two types of X-rays of different energies, when measuring the bone density of the lumbar vertebrae, for example.
- the DXA device may also measure the bone density of the lumbar vertebrae by irradiating the measurement area with X-rays from the side. Furthermore, the measurement area only needs to show at least a portion of the chest, proximal femur, knee joint, etc.
- the bone density of the second subject may be measured using ultrasound.
- for example, ultrasound is applied to the calcaneus to measure its bone density.
- the second estimation model 352 is generated by machine learning using the third image as an explanatory variable and bone strength information indicating the measurement results of the bone quality of the second subject as a target variable.
- the second estimation model 352 outputs information indicating the bone quality of the first subject from the first image G1a.
- the third estimation model 353 is generated by machine learning using the third image as an explanatory variable and bone load information indicating the measurement results of the muscle mass of the second subject as an objective variable.
- the third estimation model 353 outputs information indicating the muscle mass of the first subject from the first image G1a.
- the bone load information is information indicating the results of measuring at least one of the muscle mass of the second subject and the posture of the second subject. Note that there is a relationship between muscle mass and bone load such that, for example, if the mass of muscles involved in maintaining posture, such as the rectus abdominis and/or erector spinae muscles, decreases, the load on the bones increases in order to maintain posture.
- posture is indicated relative to a reference state of the second subject, for example, by the degree of inclination of the body from an upright position.
- the third estimation model 353 may be generated by machine learning using the third image as an explanatory variable and fall risk information indicating the possibility of the second subject falling as an objective variable, and may output information indicating the fall risk of the first subject from the first image G1a.
- Muscle mass can be measured using at least one of the following methods: physical function measurement, measurement using a body composition scale, locomotive syndrome test, sarcopenia diagnosis, center of gravity sway measurement, lower limb muscle strength measurement, standing speed measurement, muscle thickness measurement using MRI, DXA, or ultrasound imaging diagnosis, etc.
- the second estimation model 352 has, for example, an input layer 352a, a hidden layer 352b, and an output layer 352c.
- the second estimation model 352 includes second learned parameters that use the third image as an explanatory variable and bone strength information indicating the measurement results of the bone quality of the second subject as an objective variable.
- the first image G1a is input to the input layer 352a, and a bone quality estimate E2 is output from the output layer 352c.
- the hidden layer 352b may include, for example, multiple convolutional layers, multiple pooling layers, and a fully connected layer.
- the first image G1a is input to the input layer 353a, and a muscle mass estimate E3, which is information indicating the muscle mass of the first subject, is output from the output layer 353c.
- the hidden layer 353b may include, for example, multiple convolutional layers, multiple pooling layers, and a fully connected layer.
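The convolution, pooling, and fully connected stages mentioned for the hidden layers can be sketched as a toy single-channel forward pass; the filter and weights below are random placeholders, not learned parameters of the estimation models:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (single channel, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling."""
    h, w = img.shape
    h, w = h - h % size, w - w % size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((8, 8))      # stand-in for a small grayscale image patch
k = rng.random((3, 3))      # one 3x3 filter; real filters would be learned
feat = max_pool(np.maximum(conv2d(x, k), 0.0))  # conv -> ReLU -> pool
w = rng.random(feat.size)   # fully connected layer weights (placeholders)
score = float(feat.ravel() @ w)  # scalar passed on toward the output layer
print(feat.shape)           # (3, 3)
```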
- Fig. 9 is a block diagram for explaining the operation of the prediction model 32A in embodiment 2.
- the prediction model 32A is, for example, a convolutional neural network. Note that the prediction model 32A may be configured using a neural network other than a convolutional neural network.
- the prediction model 32A has, for example, an input layer 32a, a hidden layer 32b, and an output layer 32c.
- the prediction model 32A includes learned prediction parameters that use the third image captured at the first time point and information about the bones of the second subject as explanatory variables, and abnormality information about abnormalities that occurred in the bones of the second subject at the second time point as the objective variable.
- bone abnormalities may include fractures, bone loss, primary osteoporosis, secondary osteoporosis, osteophyte formation, osteomalacia, bone metastasis of malignant tumors, multiple myeloma, vertebral hemangioma, spinal caries, pyogenic spondylitis, Paget's disease of bone, fibrous dysplasia, ankylosing spondylitis, etc.
- the bone density estimate E1, bone quality estimate E2, and muscle mass estimate E3 input to the input layer 32a may be weighted based on the strength of the causal relationship with the occurrence of a fracture. For example, taking into account that bone strength is more influenced by bone density than by bone quality, the bone density estimate E1 may be weighted more heavily than the bone quality estimate E2. Note that weighting may be done according to predetermined standards such as guidelines, or original standards.
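A minimal sketch of the weighting just described, assuming fixed scalar weights chosen so that bone density (E1) weighs more than bone quality (E2); the specific weight values are illustrative placeholders, not standards from any guideline:

```python
def weighted_inputs(e1, e2, e3, w_density=0.5, w_quality=0.3, w_muscle=0.2):
    """Scale the three estimates before they enter the input layer 32a.
    The weight values are illustrative placeholders; the text only
    requires that bone density (E1) weigh more than bone quality (E2)."""
    return (w_density * e1, w_quality * e2, w_muscle * e3)

# Figure 13 example values for E1, E2, E3
print(weighted_inputs(0.985, 1.123, 20.54))
```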
- the estimated muscle mass value E3 is input into the prediction model 32A because a person's muscle mass correlates with the strength that supports their bones and affects, for example, the risk of fracture when they fall.
- the learning unit 24 acquires a third image of the second subject via the acquisition unit 21 (S21).
- the second subjects are multiple people who are different from each other.
- the third image is an image of the bones and muscles of the second subject captured at a first time point.
- the learning unit 24 acquires attribute information of each second subject from the electronic medical record management device 50 and links this attribute information to each third image.
- if attribute information has been linked to the third image in advance, the learning unit 24 extracts the attribute information from the third image.
- the learning unit 24 acquires information about the bones of the second subject via the acquisition unit 21 (S22).
- the information about the bones of the second subject includes information indicating the measurement results of the bone density and bone quality of the bones of the second subject, and information indicating the measurement results of the muscle mass of the second subject.
- After S22, the learning unit 24 generates an estimation model 35 by machine learning using the third image acquired in S21 as an explanatory variable and the information about the bones of the second subject acquired in S22 as an objective variable (S23).
- the learning unit 24 performs machine learning using the third image as an explanatory variable and bone strength information indicating the bone density measurement results of the bones of the second subject as a target variable, to generate a first estimation model 351.
- the learning unit 24 may input multiple third images into the first estimation model 351, compare the output bone density estimates with the bone density measurement results, and adjust the first estimation model 351 using an error backpropagation method or the like so as to reduce the error between them.
- the learning unit 24 also performs machine learning using the third image as an explanatory variable and bone strength information indicating the bone quality measurement results of the second subject as a target variable, to generate a second estimation model 352.
- the learning unit 24 then inputs multiple third images into the second estimation model 352, compares the output bone quality estimates with the bone quality measurement results, and adjusts the second estimation model 352 using backpropagation or the like to reduce the errors between them.
- the learning unit 24 also performs machine learning using the third image as an explanatory variable and bone load information indicating the muscle mass measurement results of the second subject as a target variable, to generate a third estimation model 353.
- the learning unit 24 then inputs multiple third images into the third estimation model 353, compares the output muscle mass estimates with the muscle mass measurement results, and adjusts the third estimation model 353 using backpropagation or the like to reduce the error between them.
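The adjust-to-reduce-the-error loop described for S23 can be illustrated with a toy one-parameter model fitted by gradient descent on the squared estimate/measurement error, the same principle as error backpropagation but without a deep network; the feature and measurement values are made up for the sketch:

```python
# Toy stand-in for the adjustment loop above: a one-parameter "model"
# is fitted so its estimates approach the measured values.
features = [1.0, 2.0, 3.0, 4.0]   # hypothetical per-image summary features
measured = [2.0, 4.0, 6.0, 8.0]   # matching (made-up) bone-density measurements

w = 0.0                           # learned parameter, starts untrained
lr = 0.02
for _ in range(500):
    # mean gradient of the squared error between estimate w*x and measurement y
    grad = sum(2 * (w * x - y) * x for x, y in zip(features, measured)) / len(features)
    w -= lr * grad                # move w to shrink the estimate/measurement error

print(round(w, 3))  # 2.0
```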
- After S23, the learning unit 24 generates a prediction model 32A by machine learning using the third image captured at the first time point and the information about the bones of the second subject as explanatory variables, and abnormality information about abnormalities that occurred in the bones of the second subject at the second time point as an objective variable (S24).
- the abnormality information about abnormalities that have occurred in the bones is information about fractures.
- the abnormality information may include bone loss, osteophyte formation, bone atrophy, bone sclerosis, and the like, in addition to fractures.
- the learning unit 24 stores the generated estimation model 35 and prediction model 32A in the memory unit 3 (S25).
- Fig. 11 is a diagram showing a plain X-ray image of the chest of a first subject.
- Fig. 12 is a flowchart showing an example of the flow of the prediction process by the prediction device 10A.
- Figure 11 shows the bone B and muscle M of the first subject. Note that in the plain X-ray image, bone B appears white and muscle M appears gray, making it possible to distinguish between bone B and muscle M based on differences in color and/or brightness in the plain X-ray image. This makes it possible to estimate the size, shape, etc. of bone B and muscle M.
- the acquisition unit 21 acquires a first image G1a of the first subject from the image management device 40 (S31).
- the first image G1a is a plain X-ray image showing the bones B and muscles M of the first subject.
- the estimation unit 27 reads out the first estimation model 351 from the storage unit 3, inputs the acquired first image G1a into the first estimation model 351, and outputs the bone mineral density estimate E1 (S32: first estimation step).
- the output bone mineral density estimate E1 is transmitted to the prediction unit 25.
- the estimation unit 27 may output the bone mineral density estimate E1 of the first subject at multiple future and past time points.
- the estimation unit 27 may also output the progression of the bone mineral density estimate E1 of the first subject from a past time point to a future time point.
- the estimation unit 27 then reads out the second estimation model 352 from the storage unit 3, inputs the acquired first image G1a into the second estimation model 352, and outputs the bone quality estimation value E2 (S33: second estimation step).
- the output bone quality estimation value E2 is sent to the prediction unit 25.
- the estimation unit 27 reads out the third estimation model 353 from the memory unit 3, inputs the acquired first image G1a into the third estimation model 353, and outputs a muscle mass estimation value E3 (S34: third estimation step).
- the output muscle mass estimation value E3 is sent to the prediction unit 25.
- the first estimation step S32, the second estimation step S33, and the third estimation step S34 correspond to estimation steps.
- the estimation unit 27 may use at least one of the first estimation model 351, the second estimation model 352, and the third estimation model 353 to output at least one of the bone density estimate E1, the bone quality estimate E2, and the muscle mass estimate E3.
- the estimation unit 27 may output a bone mass estimate using a fourth estimation model that outputs information indicating the bone mass of the first subject from the first image G1a.
- the fourth estimation model is generated by machine learning using the third image as an explanatory variable and bone strength information indicating the measurement results of the bone mass of the second subject as a target variable.
- the estimation unit 27 may also output a posture estimation value using a fifth estimation model that outputs information indicating the posture of the first subject from the first image G1a.
- the fifth estimation model is generated by machine learning using the third image as an explanatory variable and bone load information indicating the measurement results of the posture of the second subject as a target variable.
- the posture estimation value may be the thoracic spine kyphosis angle (TKA), the lumbar lordosis angle (LLA), the sacral inclination angle (SIA), or a combined value of these.
- the prediction unit 25 reads out the prediction model 32A from the memory unit 3, inputs the first image G1a, the estimated bone density value E1, the estimated bone quality value E2, and the estimated muscle mass value E3 into the prediction model 32A, and outputs the fracture risk Y (S35: prediction step).
- the fracture risk Y corresponds to prediction information indicating the possibility of a fracture occurring in the bone of the first subject at a fourth time point, which is different from the third time point when the first image G1a was captured.
- the prediction unit 25 may output the fracture risk Y for each case, taking into account whether or not the subject has undergone menopause and/or whether or not the subject has given birth.
- the bone density estimate E1, bone quality estimate E2, and muscle mass estimate E3 output by the estimation unit 27, and the fracture risk Y output by the prediction unit 25 are transmitted to the presentation control unit 26.
- the presentation control unit 26 then presents the bone density estimate E1, bone quality estimate E2, muscle mass estimate E3, and fracture risk Y on the presentation device 60 (S36).
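The S31-S36 flow can be sketched end to end as follows; the model objects are dummy stand-ins that simply return the Figure 13 example values, not the actual trained networks of the disclosure:

```python
# Hypothetical end-to-end sketch of the S31-S36 flow; every callable
# below is a placeholder, not a real trained model.
def run_prediction(image, est_density, est_quality, est_muscle, predictor):
    e1 = est_density(image)            # S32: first estimation step
    e2 = est_quality(image)            # S33: second estimation step
    e3 = est_muscle(image)             # S34: third estimation step
    y = predictor(image, e1, e2, e3)   # S35: prediction step
    return {"E1": e1, "E2": e2, "E3": e3, "fracture_risk": y}  # S36: presented

result = run_prediction(
    image="first_image_G1a",
    est_density=lambda img: 0.985,     # Figure 13 example values
    est_quality=lambda img: 1.123,
    est_muscle=lambda img: 20.54,
    predictor=lambda img, e1, e2, e3: 0.42,
)
print(result)
```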
- the presentation control unit 26 may present support information for supporting the first subject to the presentation device 60.
- the prediction unit 25 outputs support information for supporting the first subject to the presentation control unit 26 from the first image and the first estimated information using a prediction model 32A corresponding to the attribute information of the first subject.
- for example, when the estimated bone density value E1 of the first subject is low, the presentation control unit 26 displays on the presentation device 60 a message encouraging the first subject to take in more calcium, get more sunlight, exercise, and so on.
- if the estimated muscle mass value E3 is lower than the average muscle mass of people whose attribute information, such as age, gender, height, and weight, is the same as or similar to that of the first subject, the presentation control unit 26 displays on the presentation device 60 a message encouraging the first subject to increase the amount of exercise.
- the support information may be output by the prediction unit 25. For example, if the fracture risk Y output by the prediction unit 25 is high, the presentation control unit 26 displays support information on the presentation device 60 urging the first subject to avoid strenuous exercise.
- the presentation control unit 26 may present to the presentation device 60 information indicating the time when a fracture is likely to occur in bone B of the first subject.
- the presentation control unit 26 may present to the presentation device 60 information indicating the probability of a fracture occurring in bone B of the first subject within a predetermined period of time.
- the prediction unit 25 may predict the predicted fracture time when a fracture is likely to occur, or the probability of a fracture occurring in bone B of the first subject within a predetermined period of time, taking into account the fracture risk Y, etc.
- the predicted fracture time may be in years, for example, 5 years or 10 years, or may be in months, for example, 5 years and 6 months or 10 years and 6 months.
- the predetermined period may be in years, for example, 3 years or 5 years, or may be in months, for example, 3 years and 6 months or 5 years and 6 months.
- the presentation control unit 26 may present to the presentation device 60 a message, for example, "Fracture risk Y will be 80% or more in 10 years." Furthermore, in S36, the presentation control unit 26 may display on the presentation device 60 a message stating, "The fracture risk Y within three years is 60%."
- the presentation control unit 26 may present on the presentation device 60 the fracture risk Y at multiple points in time and a graph showing the progression of the fracture risk Y.
- the estimation unit 27 can estimate, as first estimated information, an estimated bone density value E1, an estimated bone quality value E2, and an estimated muscle mass value E3 of the bone B of the first subject from the first image G1a showing the bone B and muscle M of the first subject using three estimation models: a first estimation model 351, a second estimation model 352, and a third estimation model 353.
- the prediction model 32A uses the first image G1a and at least one of the bone density estimate E1, bone quality estimate E2, and muscle mass estimate E3 of bone B to output the fracture risk Y of the first subject as prediction information.
- the prediction unit 25 can accurately predict the fracture risk Y of a fracture occurring in the chest, which is the area captured in the first image G1a.
- a doctor at a medical facility can use the output results of information processing system 1A, such as fracture risk Y, to diagnose the first subject, who is a patient, allowing them to make a more appropriate diagnosis of the patient and provide the patient with more accurate support information. Furthermore, even a doctor who does not specialize in orthopedics can diagnose the patient with an accuracy close to that of an orthopedic surgeon by referring to the output results of information processing system 1A.
- the doctor etc. may propose a treatment plan to increase bone density for the first subject, for example, whose bone density estimate E1 has a large impact on fracture risk Y. Furthermore, the doctor etc. may propose a treatment plan to improve bone quality as support information for the first subject, whose bone quality estimate E2 has a large impact on fracture risk Y. Furthermore, the doctor etc. may propose measures to increase muscle mass related to maintaining posture, or measures to improve posture as support information for the first subject, whose muscle mass estimate E3 has a large impact on fracture risk Y. Furthermore, based on the support information, the doctor etc. can decide whether to recommend a diet or exercise therapy to the first subject, whether to prescribe medication, or the type of medication to use, etc.
- Figure 13 shows the results of the prediction of the fracture risk Y of the first subject by the prediction device 10A and the effects of each countermeasure.
- Figure 13 shows an example in which the estimation unit 27 estimates that the bone density estimate E1 of the first subject is "0.985", the bone quality estimate E2 is "1.123", and the muscle mass estimate E3 is "20.54". Note that the higher the estimated values, the greater the strength of the bone B of the first subject and the greater the muscle mass.
- Figure 13 also shows an example in which the prediction unit 25 estimates that the fracture risk Y of the first subject is "0.42". The higher the value of fracture risk Y, the higher the likelihood of a fracture occurring.
- the estimated bone density value E1, estimated bone quality value E2, estimated muscle mass value E3, and fracture risk Y are quantified and presented on the presentation device 60, allowing doctors and others at medical facilities to communicate more specific diagnosis results to the first subject.
- Figure 13 also shows that when "Measure 1" of increasing bone density by 5% is implemented on the first subject, the fracture risk Y changes by "-12%." It also shows that when "Measure 2" of increasing bone quality by 5% is implemented, the fracture risk Y changes by "-7%," and that when "Measure 3" of increasing muscle mass by 7% is implemented, the fracture risk Y changes by "-10%."
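Reading the quoted figures as relative reductions applied to the baseline fracture risk Y of 0.42 (an assumed interpretation; Figure 13 only lists the percentages), the adjusted risks would work out as:

```python
baseline = 0.42   # fracture risk Y from the Figure 13 example
measures = {"Measure 1 (bone density +5%)": -0.12,
            "Measure 2 (bone quality +5%)": -0.07,
            "Measure 3 (muscle mass +7%)":  -0.10}

# Treat each figure as a relative change applied to the baseline risk
# (an assumed reading of Figure 13, which only lists the percentages).
for name, delta in measures.items():
    print(name, round(baseline * (1 + delta), 4))
```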
- the learning unit 24 of the prediction device 10 generates the prediction model 32.
- the prediction model 32 may be generated by a device other than the prediction device 10.
- the prediction model 32 generated by the other device may be stored in the storage unit 3.
- the prediction model 32 generated by the other device may be received by a communication unit (not shown) via a communication network, and the control unit 2 may store the received prediction model 32 in the storage unit 3. With this configuration, there is no need to store the learning data 33 and the teacher data 34 in the storage unit 3.
- the control unit 2 and the storage unit 3 are provided in the prediction device 10, but the configuration is not limited to this.
- the prediction device 10 may be a cloud-based device installed on the cloud.
- the first image G1 and the second image G2 are transmitted to the prediction device 10 on the cloud via a communications network, and the prediction information predicted by the prediction device 10 is received by the presentation device 60 via the communications network.
- the prediction device 10 may be an on-premise device installed in a medical facility or a company that provides analysis services.
- Although the information processing system 1 of the first embodiment described above predicts the fracture risk Y of a fracture occurring in the bones of the first subject, the prediction target is not limited to this.
- the information processing system 1 may also predict the risk of developing osteoporosis, scoliosis, spinal stenosis, intervertebral disc degeneration, ankylosing spondylitis, spinal cord injury, cartilage damage, osteomyelitis, osteophytes, muscular atrophy, spinal muscular atrophy, osteoarthritis, bone and soft tissue tumors, etc. in the first subject.
- the prediction device 10 outputs the fracture risk Y, which is the possibility of an abnormality occurring in a region shown in the first image G1, as prediction information, but the prediction information is not limited to this.
- the prediction information may also indicate the possibility of an abnormality occurring in a region not shown in the first image G1.
- the prediction unit 25 may use a prediction model 32 to predict the risk Y of a lumbar or femur fracture from a first image G1 showing the chest of a first subject and a second image G2 of a region corresponding to the chest.
- the prediction model 32 is generated by machine learning using a third image showing the chest of a second subject taken at a first time point and a fourth image showing a region corresponding to the chest as explanatory variables, and abnormality information related to a fracture that occurred in the lumbar or femur of the second subject at a second time point as a response variable.
- the first image G1 and the second image G2 are input to the input layer 32a of the prediction model 32.
- the results of the muscle strength measurement of the first subject may be input instead of the second image G2.
- the prediction model 32 may be generated by machine learning using the third image of the second subject taken at the first time point and the results of the muscle strength measurement of the second subject at the first time point as explanatory variables, and abnormality information regarding an abnormality that occurred in the bones of the second subject at the second time point as the objective variable.
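The training setup described above (explanatory variables measured at the first time point, an abnormality observed at the second time point as the objective variable) can be illustrated with a minimal sketch. The record fields (`chest_image_feature`, `muscle_strength`, `fracture_at_t2`) and the plain logistic-regression learner below are hypothetical stand-ins, not the actual inputs or architecture of the prediction model 32.

```python
import math

def build_training_pairs(records):
    """Pair first-time-point measurements (explanatory variables) with
    the fracture outcome observed at the later second time point
    (objective variable)."""
    X, y = [], []
    for r in records:
        X.append([r["chest_image_feature"], r["muscle_strength"]])
        y.append(1 if r["fracture_at_t2"] else 0)
    return X, y

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression model by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_risk(w, b, x):
    """Return a fracture-risk probability in [0, 1] for new measurements."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The point of the sketch is the temporal framing: the label comes from a later time point than the features, which is what lets the fitted model act as a predictor rather than a classifier of current state.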
- the prediction device 10 inputs the first image G1 corrected in S14 of FIG. 4 to the prediction model 32 in S15, but this is not limited to this.
- the prediction device 10 may not perform S13 and S14 of FIG. 4, and may input the first image that has not been corrected in S15 to the prediction model 32.
- the prediction device 10 uses a neural network as the prediction model 32, but this is not limited to this, and other models such as a linear regression model may also be used.
- the prediction model 32 having one AI is stored in the storage unit 3, but this is not limiting, and multiple AIs may be stored in the storage unit 3.
- the storage unit 3 may store a first estimation model that outputs information indicating the bone density and/or bone quality of the first subject, and a second estimation model that outputs information indicating the muscle mass of the first subject.
- the first estimation model is generated by machine learning using the first image G1 of the second subject as an explanatory variable and bone density information indicating the measurement results of the bone density and/or bone quality of the second subject as a dependent variable.
- the second estimation model is generated by machine learning using the second image G2 of the second subject as an explanatory variable and muscle mass information indicating the measurement results of the muscle mass of the second subject as a dependent variable.
- the prediction unit 25 inputs the first image G1 of the first subject into the first estimation model, and outputs a bone density estimate, which is information indicating the bone density of the first subject, and a bone quality estimate, which is information indicating the bone quality of the first subject.
- bone quality refers to a property based on at least one of the statistical properties of bone, the geometric properties of bone, the mechanical properties of bone, and the chemical properties of bone. Bone quality may also include information regarding the attribute information of the first subject.
- Bone quality can be based on at least one of, for example, bone metabolism markers, sex, race, whether or not the patient has undergone menopause, age, cortical bone condition, cancellous bone condition, cancellous bone trabecular condition, disease information, bone evaluation information, medication information, presence or absence of fracture, number of fractures, location of fracture, and fracture history. More specifically, bone quality can be based on at least one of, for example, bone formation markers, bone resorption markers, bone quality markers (e.g., vitamin K level), cortical bone thickness, trabecular density, trabecular orientation, and trabecular bone score.
- the disease information may include, for example, at least one of osteoporosis, rheumatism, osteonecrosis (e.g., femoral head necrosis, etc.), systemic sclerosis, kidney disease, and osteopetrosis.
- the bone assessment information may include information evaluated using a fracture risk assessment tool (FRAX (registered trademark): Fracture Risk Assessment Tool).
- the drug information may include, for example, at least one of the trade name, generic name, dosage, administration period, and administration method (e.g., oral, intravenous injection, intramuscular injection, subcutaneous injection, etc.) for drugs including at least one of drugs that inhibit bone resorption, drugs that promote bone formation, and other drugs (e.g., calcium preparations, vitamin preparations, female hormone preparations, etc.).
- the bone quality may also include, for example, the type of medullary cavity shape.
- the Dorr classification can be used for the medullary cavity shape.
- the medullary cavity shape can be classified as follows using at least one of the thickness of the cortical bone and the shape of the medullary cavity.
- Type A: the cortical bone is thick and the medullary cavity is narrow and thin.
- Type B: intermediate between Type A and Type C, with a medullary cavity that is neither narrow nor wide.
- Type C: the cortical bone is thin and the medullary cavity is wide.
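The three-way classification above can be written as a simple rule on the two measurements it names. The millimetre thresholds in this sketch are illustrative assumptions, not the clinical cut-off values of the Dorr classification.

```python
def dorr_type(cortical_thickness_mm, canal_width_mm,
              thick=6.0, narrow=12.0, thin=4.0, wide=18.0):
    """Classify the medullary cavity shape into Dorr Type A/B/C from
    cortical bone thickness and medullary canal width.
    Threshold values are illustrative, not clinical cut-offs."""
    if cortical_thickness_mm >= thick and canal_width_mm <= narrow:
        return "A"  # thick cortex, narrow canal
    if cortical_thickness_mm <= thin and canal_width_mm >= wide:
        return "C"  # thin cortex, wide canal
    return "B"      # intermediate between A and C
```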
- the bone density estimate may be a value related to bone density.
- the bone density estimate is expressed, for example, by at least one of bone mineral density per unit area (g/cm²), bone mineral density per unit volume (g/cm³), YAM (%), T-score, and Z-score.
- YAM (%) is an abbreviation for "Young Adult Mean" and is sometimes called the young adult average percent.
- the bone density estimate may use an index used in osteoporosis guidelines, such as the "2015 Prevention and Treatment Guidelines of the Japan Osteoporosis Society," or may use an original index.
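The indices listed above have standard formulas; a minimal sketch follows, assuming the young-adult and age-matched reference means and standard deviations are supplied from guideline data.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """T-score: deviation of the measured BMD from the young-adult
    reference mean, in units of the young-adult standard deviation."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Z-score: deviation of the measured BMD from the mean of
    age-matched (and sex-matched) subjects."""
    return (bmd - age_matched_mean) / age_matched_sd

def yam_percent(bmd, young_adult_mean):
    """YAM (%): measured BMD as a percentage of the young-adult mean."""
    return 100.0 * bmd / young_adult_mean
```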
- the prediction unit 25 inputs the second image G2 of the first subject into the second estimation model, thereby outputting a muscle mass estimate, which is information indicating the muscle mass of the first subject. Then, in S16 of FIG. 4, the presentation control unit 26 presents the bone density estimate, bone quality estimate, and muscle mass estimate on the presentation device 60, in addition to the fracture risk Y.
- the first estimation model was generated by machine learning using the first image G1 of the second subject as an explanatory variable and bone density information indicating the measurement results of the second subject's bone density and bone quality as a dependent variable.
- the second estimation model was generated by machine learning using the second image G2 of the second subject as an explanatory variable and muscle mass information indicating the measurement results of the second subject's muscle mass as a dependent variable.
- the bone density of the second subject can be measured using, for example, DXA, ultrasound, MD (Micro Densitometry), or QCT (Quantitative Computed Tomography).
- in a DXA device that measures bone density using DXA, X-rays are irradiated from the front of the subject's lumbar vertebrae or from the front of the subject's proximal femur.
- "front of the lumbar vertebrae" and "front of the proximal femur" refer to the direction that directly faces the imaging site, such as the lumbar vertebrae or proximal femur, and may be on the ventral side or the back side of the subject's body.
- the proximal femur includes, for example, at least one of the neck, trochanter, shaft, and the entire proximal femur (neck, trochanter, and shaft).
- in the MD method, X-rays are irradiated onto the subject's hand.
- the bone quality of the second subject can be measured by calculating the concentration of a bone metabolism marker in the urine or blood of the second subject.
- bone metabolism markers include type I collagen cross-linked N-telopeptide (NTX), type I collagen cross-linked C-telopeptide (CTX), tartrate-resistant acid phosphatase (TRACP-5b), and deoxypyridinoline (DPD).
- the muscle mass of the second subject can be measured by, for example, physical function measurement, measurement using a body composition scale, locomotive syndrome test, sarcopenia diagnosis, center of gravity sway measurement, lower limb muscle strength measurement, standing speed measurement, muscle thickness measurement using ultrasound imaging diagnosis, etc.
- with the information processing system 1 of the above-described alternative embodiment 2, it is possible to output the estimated bone density, bone quality, and muscle mass of the first subject from the first image G1 and second image G2 of the first subject.
- This allows doctors and others at medical facilities to refer to the estimated bone density, bone quality, and muscle mass presented on the presentation device 60 and provide the first subject, who is a patient, with more specific diagnostic results that take into account each estimated value.
- the presentation control unit 26 presents to the presentation device 60 the bone density estimate, the bone quality estimate, and the muscle mass estimate in addition to the fracture risk Y.
- the presentation control unit 26 may also present to the presentation device 60 support information for supporting the first subject.
- the prediction unit 25 outputs support information to support the first subject by comparing the fracture risk Y, estimated bone density, estimated bone quality, and estimated muscle mass with reference information indicating bone density, bone quality, and muscle mass according to the age and/or sex of the first subject.
- the estimated bone density, bone quality, and muscle mass correspond to the estimated information of the first subject.
- for example, if the estimated bone density value is low compared to the average bone density of people of the same or similar age and sex as the first subject, the prediction unit 25 outputs support information to the presentation control unit 26 that encourages the first subject to take in more calcium, get more sunlight, exercise, and so on. Furthermore, if the estimated muscle mass value is low compared to the average muscle mass of people of the same or similar age and sex as the first subject, the prediction unit 25 outputs support information to the presentation control unit 26 that encourages the first subject to increase the amount of exercise.
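The comparison against age- and sex-matched reference values can be sketched as below; the advice strings and the plain less-than comparison are illustrative assumptions, not the support information actually generated by the prediction unit 25.

```python
def support_information(estimates, reference):
    """Compare estimated values with age/sex-matched reference values and
    return lifestyle-advice strings (wording is illustrative)."""
    advice = []
    if estimates["bone_density"] < reference["bone_density"]:
        advice.append("take in more calcium, get more sunlight, and exercise")
    if estimates["muscle_mass"] < reference["muscle_mass"]:
        advice.append("increase the amount of exercise")
    return advice
```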
- the prediction unit 25 may predict attribute information of the first subject by analyzing the brightness of the echo image, which is the second image G2, using the analysis unit 22.
- the attribute information includes at least one of the age, sex, and muscle quality of the first subject.
- the prediction unit 25 outputs the fracture risk Y of the bones in the chest, which is a specific region, but this is not limiting.
- the prediction unit 25 may output the fracture risk for each of multiple regions of the bones of the first subject from the first image G1 and the second image G2 using the prediction model 32.
- the first image G1 shows bones in multiple locations on the first subject.
- the second image G2 shows muscles in multiple locations on the first subject.
- the prediction model 32 is generated by machine learning using the third image, which shows bones in multiple locations on the second subject, and the fourth image, which shows muscles in multiple locations on the second subject, as explanatory variables, and abnormality occurrence information for each location that occurred within a specified period after the third image and/or the fourth image were captured as the objective variable.
- the third image and the fourth image may be captured at different times or at the same time. If the third image and the fourth image are captured at different times, the date on which either image was captured can be used as the reference date. The reference date may also be a date halfway between the dates on which the third image and the fourth image were captured.
- the prediction unit 25 may use the prediction model 32 to output the fracture risk Y, bone mineral density estimate, bone quality estimate, and muscle mass estimate for each of multiple bone regions of the first subject from the first image G1 and the second image G2.
- "By region" may be, for example, divided into regions such as the cervical vertebrae, thoracic vertebrae, and lumbar vertebrae, or may be divided into vertebral bodies such as the lumbar vertebrae L1, L2, L3, and L4.
- the prediction unit 25 may then identify a region of interest that is highly related to fracture from among multiple regions of the bones of the first subject based on the fracture risk Y, the estimated bone density value, the estimated bone quality value, and the estimated muscle mass value, and present the region of interest via the presentation control unit 26. This allows doctors and other medical professionals to provide more appropriate treatment by focusing their examination on the region of interest when diagnosing the first subject.
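Identifying the region of interest can be as simple as selecting the region whose predicted fracture risk is highest. A real implementation might also weigh the bone density, bone quality, and muscle mass estimates; the sketch below is a minimal heuristic with hypothetical region names.

```python
def region_of_interest(per_region):
    """Given per-region outputs of the prediction step, return the region
    most related to fracture: here, simply the one with the highest
    predicted fracture risk."""
    return max(per_region, key=lambda r: per_region[r]["fracture_risk"])
```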
- the learning unit 24 of the prediction device 10A generates the prediction model 32A and the estimation model 35.
- the prediction model 32A and the estimation model 35 may be generated by a device other than the prediction device 10A.
- the prediction model 32A and the estimation model 35 generated by the other device may be stored in the storage unit 3, and the learning unit 24 may be omitted.
- the prediction model 32A and the estimation model 35 generated by the other device may be received by a communication unit (not shown) via a communication network, and the control unit 2 may store the received prediction model 32A and the estimation model 35 in the storage unit 3.
- the prediction model 32A and the estimation model 35 generated by the other device may be recorded on a recording medium such as a USB memory or a DVD, and then the prediction model 32A and the estimation model 35 may be stored in the storage unit 3 via the recording medium.
- the prediction unit 25 may predict the risk Y of a lumbar or femur fracture from a first image G1a showing the chest of a first subject using a prediction model 32A.
- the prediction model 32A is generated by machine learning using a second image showing the chest of a second subject and information about the bones of the second subject as explanatory variables, and abnormality information about a fracture that has occurred in the lumbar or femur of the second subject as a response variable.
- the presentation device 60 presents information indicating the bone density of the first subject's bones, information indicating the bone quality, information indicating the muscle mass, and information regarding the fracture risk Y in numerical format, but this is not limited to this.
- each piece of information may be presented in heat map format.
- the information indicating the bone quality may be a feature quantity obtained by texture analysis of at least a portion of the third image.
- the functions of the prediction devices 10, 10A can be realized by a program that causes a computer to function as the prediction devices 10, 10A, and a program that causes a computer to function as each control block (particularly, the prediction unit 25 and the presentation control unit 26) of the prediction devices 10, 10A.
- the above program may be stored non-transitorily on one or more computer-readable storage media. These storage media may or may not be included in the device. In the latter case, the program may be supplied to the device via any wired or wireless transmission medium.
- An information processing system includes a prediction unit that outputs prediction information using a prediction model based on a first image and first data showing at least a portion of a first subject.
- the prediction model is generated by machine learning using a third image and second data showing at least a portion of a second subject as explanatory variables, and abnormality information regarding an abnormality occurring in the bones of the second subject at a second time point different from a first time point when the third image was captured as a dependent variable.
- the prediction information is information indicating the possibility of an abnormality occurring in the bones of the first subject.
- the first image is an image depicting a predetermined region of the first subject.
- the second image is an image depicting a region of the first subject corresponding to the predetermined region.
- the third image is an image depicting a predetermined region of the second subject.
- the fourth image may be an image depicting a region of the second subject corresponding to the predetermined region.
- the prediction information may be information indicating the possibility of an abnormality occurring in the bone of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
- the body parts may include at least one of the chest, waist, feet, and hands.
- the second image may show one region or multiple regions of the first subject.
- the second image may include at least one of a still image and a video.
- the first image and the second image may include at least one of a plain X-ray image, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, a DXA (Dual Energy X-ray Absorptiometry) image, an echo image, and an image obtained by DES (Dual Energy Subtraction).
- the second image may be a different image type from the first image.
- the fourth image may be a different image type from the third image.
- the prediction information may be information indicating the possibility that the abnormality will occur in the specified area shown in the first image.
- the prediction information may be information indicating the possibility that the abnormality will occur in a region other than the predetermined region that is not captured in the first image.
- the abnormality may be a musculoskeletal disorder.
- the prediction unit may output the bone density and/or bone quality of the first subject from the first image of the first subject using a first estimation model, and output the muscle mass of the first subject from the second image of the first subject using a second estimation model, the first estimation model being generated by machine learning using the third image of the second subject as an explanatory variable and bone information indicating the bone density and/or bone quality of the second subject as an objective variable, and the second estimation model being generated by machine learning using the fourth image of the second subject as an explanatory variable and muscle information indicating the muscle mass of the second subject as an objective variable.
- the prediction model may be generated by machine learning using at least one of the third image, the fourth image, the bone information obtained by inputting the third image into the first estimation model, and the muscle information obtained by inputting the fourth image into the second estimation model as explanatory variables, and the abnormality information as a target variable.
- the information processing system is in any one of aspects 2 to 16 above, further comprising an analysis unit that uses the second image to analyze information including at least one of the amount, thickness, amount of atrophy, and flexibility of muscle and fat of the first subject, and a correction unit that performs a predetermined correction on the first image.
- the analysis unit may identify a soft tissue region by segmenting the second image, the correction unit may perform a correction on the first image to remove the soft tissue region identified by the analysis unit, and the prediction unit may output the prediction information using the prediction model from the first image corrected by the correction unit and the second image.
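The correction described here amounts to masking out the pixels that segmentation labelled as soft tissue before the first image is passed to the prediction model. The sketch below assumes the image and the mask are plain nested lists of matching shape.

```python
def remove_soft_tissue(image, soft_tissue_mask, fill=0):
    """Replace pixels labelled as soft tissue by the segmentation step
    with a fill value, leaving only the bone signal in the corrected
    first image."""
    return [[fill if masked else pixel
             for pixel, masked in zip(row, mask_row)]
            for row, mask_row in zip(image, soft_tissue_mask)]
```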
- the prediction unit outputs support information to support the first subject using at least one of the predicted information, estimated information about the first subject, and reference information
- the estimated information may be at least one of the bone density, bone quality, and muscle mass of the first subject
- the reference information may be at least one of the bone density, bone quality, and muscle mass according to the age and/or sex of the first subject.
- the prediction unit may identify a region of interest that is highly related to the abnormality based on the prediction information and/or the estimation information.
- the prediction information may include an influence degree indicating the degree to which each of the first image and the second image affects the abnormality.
- the prediction information may include information indicating a time when the abnormality is likely to occur in the first subject.
- the information processing system is in accordance with aspect 1 above, further comprising an estimation unit that outputs the first data including first estimated information related to the bones of the first subject from the first image using an estimation model.
- the estimation model is generated by machine learning using the third image as an explanatory variable and the second data including information related to the bones of the second subject as a target variable.
- the information processing system is in accordance with aspect 1 above, further comprising an estimation unit that outputs the first data including multiple pieces of first estimated information related to the bones of the first subject from the first image using multiple estimation models.
- the multiple estimation models are generated by machine learning using the third image as an explanatory variable and the second data including multiple pieces of information related to the bones of the second subject as a target variable.
- the prediction information may be information indicating the possibility of an abnormality occurring in the tissue of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
- the prediction information may be information indicating the possibility that the abnormality will occur in the area shown in the first image.
- the prediction information may be information indicating the possibility that the abnormality will occur in an area not captured in the first image.
- the first image may be a plain X-ray image showing at least a portion of the bones and/or muscles of the first subject
- the third image may be a plain X-ray image showing at least a portion of the bones and/or muscles of the second subject.
- the first image may be a front image or a side image
- the third image may be an image oriented in the same direction as the first image
- the estimation unit may use at least one of a bone strength estimation model generated by machine learning using the third image as an explanatory variable and bone strength information indicating at least one measurement result of the bone mineral density, bone mass, and bone quality of the bones of the second subject as an objective variable, and a bone stress estimation model generated by machine learning using the third image as an explanatory variable and bone stress information indicating at least one measurement result of the muscle mass and posture of the second subject as an objective variable.
- the bone strength estimation model includes a first estimation model that outputs information indicating the bone density of the bone of the first subject from the first image, and a second estimation model that outputs information indicating the bone quality of the first subject from the first image.
- the bone stress estimation model may include a third estimation model that outputs information indicating the muscle mass of the first subject from the first image.
- the first estimated information includes two or more pieces of information from the group consisting of information indicating the bone density of the bone of the first subject output from the first estimation model, information indicating the bone quality of the first subject output from the second estimation model, and information indicating the muscle mass of the first subject output from the third estimation model
- the prediction unit may weight each of the two or more pieces of information based on the strength of the causal relationship with the occurrence of the abnormality and input the weighted information into the prediction model.
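The causal-strength weighting can be sketched as an element-wise scaling of the estimates before they enter the prediction model. The weight values below are placeholders; in practice they would be determined empirically from the strength of each factor's causal relationship with the abnormality.

```python
def weight_features(estimates, causal_weights):
    """Scale each estimated value by the assumed strength of its causal
    relationship with the abnormality, returning a feature vector in a
    deterministic (sorted-key) order for input to the prediction model."""
    return [estimates[name] * causal_weights[name]
            for name in sorted(estimates)]
```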
- the information indicating the bone density of the first subject may be expressed by at least one of bone mineral density per unit area, bone mineral density per unit volume, YAM (Young Adult Mean), T-score, and Z-score.
- the abnormality may be a musculoskeletal disorder.
- the prediction model may be generated by machine learning using the third image and/or information for each bone region of the second subject as explanatory variables and the abnormality information regarding the abnormality that occurred for each bone region of the second subject at the second time point as a target variable, and the prediction information may be information indicating the possibility of the abnormality occurring for each tissue region of the first subject at the fourth time point.
- the prediction device may be any of aspects 25 to 40 above, and may include a presentation control unit that causes the presentation device to present the prediction information.
- the recording medium according to aspect 50 of the present disclosure may be a computer-readable, non-transitory recording medium on which the control program according to aspect 48 above is recorded.
- the prediction model may be generated by machine learning using the third image and the fourth image as explanatory variables and the abnormality information for each of the parts that has occurred within a predetermined period since the third image and/or the fourth image was captured as a target variable, and the prediction unit may use the prediction model to output the prediction information for each of the multiple parts of the bones of the first subject from the first image and the second image.
- the body parts may include at least one of the chest, waist, feet, and hands.
- the second image may show one region or multiple regions of the first subject.
- the second image may include at least one of a still image and a video.
- the first image and the second image may include at least one of a plain X-ray image, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, a DXA (Dual Energy X-ray Absorptiometry) image, an echo image, and an image obtained by DES (Dual Energy Subtraction).
- the second image may be a different image type from the first image
- the fourth image may be a different image type from the third image
- the prediction information may be information indicating the possibility that the abnormality will occur in the specified area shown in the first image.
- the prediction information may be information indicating the possibility of the abnormality occurring in a region other than the predetermined region that is not captured in the first image.
- the abnormality may be a musculoskeletal disorder.
- the prediction unit may output the bone density and/or bone quality of the first subject from the first image of the first subject using a first estimation model, and output the muscle mass of the first subject from the second image of the first subject using a second estimation model, the first estimation model being generated by machine learning using the third image of the second subject as an explanatory variable and bone information indicating the bone density and/or bone quality of the second subject as an objective variable, and the second estimation model being generated by machine learning using the fourth image of the second subject as an explanatory variable and muscle information indicating the muscle mass of the second subject as an objective variable.
- the prediction model may be generated by machine learning using at least one of the third image, the fourth image, the bone information obtained by inputting the third image into the first estimation model, and the muscle information obtained by inputting the fourth image into the second estimation model as explanatory variables, and the abnormality information as a target variable.
- the information processing system is any of aspects A1 to A15 above, further comprising an analysis unit that uses the second image to analyze information including at least one of the amount, thickness, amount of atrophy, and flexibility of muscle and fat of the first subject, and a correction unit that performs a predetermined correction on the first image.
- the analysis unit identifies soft tissue regions by segmenting the second image.
- the correction unit performs correction on the first image to remove the soft tissue regions identified by the analysis unit.
- the prediction unit may output the prediction information using the prediction model from the first image corrected by the correction unit and the second image.
- the second image includes an echo image.
- the prediction unit may predict attribute information of the first subject by analyzing the brightness of the echo image using the analysis unit.
- the attribute information may include at least one of the age, sex, and muscle quality of the first subject.
- the prediction unit outputs support information to support the first subject using at least one of the predicted information, estimated information about the first subject, and reference information.
- the estimated information may be at least one of the bone density, bone quality, and muscle mass of the first subject
- the reference information may be at least one of the bone density, bone quality, and muscle mass according to the age and/or sex of the first subject.
- the prediction unit may identify a region of interest that is highly related to the abnormality based on the prediction information and/or the estimation information.
- An information processing system includes an estimation unit that uses a plurality of estimation models to output a plurality of first estimated information items related to the bones of a first subject from a first image that shows at least a portion of the tissue of the first subject, and a prediction unit that uses a prediction model to output predicted information from the plurality of first estimated information items.
- the plurality of estimation models are generated by machine learning using a second image that shows the tissue of a second subject as an explanatory variable and a plurality of pieces of information related to the bones of the second subject as an objective variable.
- the prediction model is generated by machine learning using the plurality of pieces of information related to the bones of the second subject as explanatory variables, and abnormality information related to an abnormality that occurred in the bones of the second subject at a second time point that is different from the first time point when the second image was captured as an objective variable.
- the predicted information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
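The two-stage pipeline of this aspect (several estimation models each producing one bone-related estimate, and a prediction model mapping those estimates to a risk) can be sketched generically. The models here are placeholder callables, not the trained networks the aspect describes.

```python
def predict_via_estimates(first_image, estimation_models, prediction_model):
    """Two-stage inference: each estimation model turns the first image
    into one bone-related estimate, and the prediction model maps the
    resulting list of estimates to the predicted information."""
    estimates = [model(first_image) for model in estimation_models]
    return prediction_model(estimates)
```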
- the prediction information may be information indicating the possibility of an abnormality occurring in the tissue of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
- the prediction information may be information indicating the possibility that the abnormality will occur in the area shown in the first image.
- the prediction information may be information indicating the possibility that the abnormality will occur in an area not captured in the first image.
- the first image may be a front image or a side image
- the second image may be an image oriented in the same direction as the first image
- the estimation unit may use at least one of: a bone strength estimation model generated by machine learning using the second image as an explanatory variable and bone strength information, indicating a measurement result of at least one of the bone mineral density, bone mass, and bone quality of the bones of the second subject, as an objective variable; and a bone load estimation model generated by machine learning using the second image as an explanatory variable and bone load information, indicating a measurement result of at least one of the muscle mass and posture of the second subject, as an objective variable.
- the bone strength estimation model includes a first estimation model that outputs information indicating the bone mineral density of the bones of the first subject from the first image, and a second estimation model that outputs information indicating the bone quality of the first subject from the first image.
- the bone load estimation model may include a third estimation model that outputs information indicating the muscle mass of the first subject from the first image.
- the first estimated information includes two or more pieces of information from the following: information indicating the bone mineral density of the bone of the first subject output from the first estimation model; information indicating the bone quality of the first subject output from the second estimation model; and information indicating the muscle mass of the first subject output from the third estimation model.
- the prediction unit may weight each of the two or more pieces of information based on the strength of the causal relationship with the occurrence of the abnormality, and input the weighted information into the prediction model.
- the information indicating the bone mineral density of the first subject may be expressed by at least one of bone mineral density per unit area, bone mineral density per unit volume, YAM (Young Adult Mean), T-score, and Z-score.
- the bone strength information is information measured using a method including at least one of DXA (Dual-energy X-ray Absorptiometry), ultrasound, and a method of calculating the concentration of a bone metabolic marker in the urine or blood of the second subject.
- the bone load information may be information indicating the results of measuring at least one of the muscle mass of the second subject and the posture of the second subject.
- the abnormality may be a musculoskeletal disorder.
- the prediction model is generated by machine learning using the second image and/or information for each bone region of the second subject as explanatory variables, and the abnormality information regarding the abnormality that occurred for each bone region of the second subject at the second time point as an objective variable.
- the prediction information may be information indicating the possibility of the abnormality occurring for each tissue region of the first subject at the fourth time point.
- the prediction information may include information indicating a time when the abnormality is likely to occur in the tissue of the first subject.
- the prediction unit may output support information for supporting the first subject from the first image and/or the first estimated information using the prediction model corresponding to attribute information of the first subject.
- the information processing system according to aspect B17 of the present disclosure, in any of aspects B1 to B16 above, may include a presentation control unit that causes a presentation device to present the prediction information.
- a prediction device includes the estimation unit and the prediction unit of the information processing system according to any of aspects B1 to B17 above.
- An information processing method is an information processing method executed by one or more computers, and includes an estimation step of outputting first estimated information regarding the bones of a first subject from a first image that shows at least a portion of the tissue of the first subject using an estimation model, and a prediction step of outputting prediction information from the first image and the first estimated information using a prediction model.
- the estimation model is generated by machine learning using a second image that shows the tissue of a second subject as an explanatory variable and information about the bones of the second subject as an objective variable.
- the prediction model is generated by machine learning using the second image and information about the bones of the second subject as explanatory variables, and abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from the first time point at which the second image was captured as an objective variable.
- the prediction information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
- the prediction model is generated by machine learning using a plurality of pieces of information related to the bones of the second subject as an explanatory variable and abnormality information related to an abnormality that occurred in the bones of the second subject at a second time point different from the first time point when the second image was captured as an objective variable.
- the prediction information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
- the recording medium according to aspect B22 of the present disclosure may be a computer-readable, non-transitory recording medium on which the control program of aspect B21 described above is recorded.
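The aspects above describe a two-stage pipeline: estimation models derive bone-related quantities (for example bone mineral density expressed as a T-score, Z-score, or YAM, bone quality, and muscle mass) from an image, the quantities are weighted by the strength of their causal relationship with the abnormality, and a prediction model outputs the possibility of the abnormality. The following is a minimal, hypothetical sketch of that flow; the standard T-score/Z-score/YAM formulas are used, but the logistic combination, the feature names, and the weights are illustrative stand-ins for the learned prediction model, not part of the disclosure.

```python
import math

def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """T-score: deviation of BMD from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    """Z-score: deviation of BMD from the age-matched reference mean."""
    return (bmd - age_matched_mean) / age_matched_sd

def yam_percent(bmd: float, young_adult_mean: float) -> float:
    """YAM (Young Adult Mean) expressed as a percentage."""
    return 100.0 * bmd / young_adult_mean

def predict_fracture_risk(estimates: dict, weights: dict, bias: float = -1.0) -> float:
    """Weight each estimated value by its assumed causal strength and
    combine the weighted values into a risk in [0, 1] via a logistic
    function (a stand-in for the learned prediction model)."""
    s = bias + sum(weights[k] * v for k, v in estimates.items())
    return 1.0 / (1.0 + math.exp(-s))

# Illustrative use: made-up outputs of first/second/third estimation models.
estimates = {"bmd_t_score": t_score(0.62, 0.72, 0.08),  # below young-adult mean
             "bone_quality": -0.5,
             "muscle_mass": -0.8}
weights = {"bmd_t_score": -0.9, "bone_quality": -0.6, "muscle_mass": -0.4}
risk = predict_fracture_risk(estimates, weights)
print(round(risk, 3))
```

Negative weights encode that lower bone mineral density, bone quality, and muscle mass raise the risk; a trained model would learn these relationships rather than take them as fixed constants.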
Description
This disclosure relates to an information processing system, a prediction device, an information processing method, a control program, and a recording medium.
Techniques for estimating disease-related information from medical images using neural networks are known. For example, Patent Document 1 discloses a configuration for estimating, from X-ray images of a patient, whether the patient has osteoporosis.
An information processing system according to one aspect of the present disclosure includes a prediction unit that outputs prediction information, using a prediction model, from a first image showing at least a portion of a first subject and from first data. The prediction model is generated by machine learning using, as explanatory variables, a third image showing at least a portion of a second subject and second data, and, as an objective variable, abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from the first time point at which the third image was captured. The prediction information is information indicating the possibility of an abnormality occurring in the bones of the first subject.
An information processing method according to one aspect of the present disclosure is an information processing method executed by one or more computers, and includes a prediction step of outputting prediction information, using a prediction model, from a first image showing at least a portion of a first subject and from first data. The prediction model is generated by machine learning using, as explanatory variables, a third image showing at least a portion of a second subject and second data, and, as an objective variable, abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from the first time point at which the third image was captured. The prediction information is information indicating the possibility of an abnormality occurring in the bones of the first subject.
Factors that cause fractures include decreased bone strength and increased load on bones due to muscle atrophy. To accurately estimate the likelihood of a fracture, it is desirable to consider not only bone strength but also the condition of the muscles. Techniques for estimating the occurrence of fractures from medical images therefore leave room for improvement in estimation accuracy.
One aspect of the present disclosure has been made in view of the above problem, and aims to provide an information processing system, an information processing method, a prediction device, a control program, and a recording medium capable of improving the accuracy of predicting information about abnormalities from medical images.
According to one aspect of the present disclosure, it is possible to improve the accuracy of predicting information about abnormalities from medical images.
[Embodiment 1]
An information processing system 1 according to a first embodiment of the present disclosure will be described below with reference to FIGS. 1 to 6.
[Configuration of information processing system]
First, the configuration of the information processing system 1 will be described with reference to Fig. 1. Fig. 1 is a block diagram showing an example of the configuration of the information processing system 1 in embodiment 1. Fig. 1 shows the information processing system 1 including a prediction device 10, an image management device 40, an electronic medical record management device 50, and a presentation device 60. Note that the configuration of the information processing system 1 is not limited to the configuration shown in Fig. 1.
The prediction device 10 is a device that acquires a first image G1 and first data of a first subject, for which information related to an abnormality is to be predicted, and outputs prediction information from the acquired first image G1 and first data using a prediction model 32. In embodiment 1, the first data is, as an example, a second image G2 of the first subject (see Figure 2).
Here, the first subject is, for example, a human, but may also be non-human, for example an animal such as a dog, cat, or horse. The first image G1 need only show at least some of the bones in a predetermined region of the first subject, and the second image G2 need only show the muscles in a region of the first subject corresponding to that predetermined region. The second image G2 may show the same region as the one corresponding to the predetermined region, or a different region of the first subject.
The first image G1 is, for example, a plain X-ray image showing bones of a region including at least one of the head, neck, chest, lower back, temporomandibular joint, spinal facet joints, hip joints, sacroiliac joints, knee joints, ankle joints, feet, toes, shoulder joints, acromioclavicular joints, elbow joints, wrist joints, hands, and fingers of the first subject. The first image G1 may also show regions of the first subject other than those where an abnormality such as a fracture is expected. Note that the plain X-ray image may include, for example, a panoramic X-ray image used in dentistry; a panoramic X-ray image is an image that includes multiple teeth or all of the teeth.
The first image G1 may be, for example, an image captured at a medical facility at a third time point. The first image includes at least one of a frontal image showing a predetermined region from the front (for example, an image obtained by irradiating the region with X-rays in the front-to-back direction) and a lateral image showing the region from the side (for example, an image obtained by irradiating the region with X-rays in the left-to-right direction).
Note that the first image G1 is not limited to a plain X-ray image and may be any medical image containing information about bones, such as a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, an image obtained by the DXA (Dual Energy X-ray Absorptiometry) method, an image obtained by DES (Dual Energy Subtraction), or an ultrasound image. The first image G1 may be an image that includes a phantom or one that does not.
The second image G2 is, for example, an echo image showing muscles in regions corresponding to the head, chest, waist, feet, hands, and other parts of the first subject (see Figure 6). The second image G2 may show only one region of the first subject or multiple regions, and may be a still image or a video.
The second image G2 is likewise not limited to an echo image and may be any medical image containing information about muscles, such as a CT image, an MRI image, an image obtained by the DXA method, an image obtained by DES, or an ultrasound image. In other words, the second image G2 may be of a different type from the first image G1 or of the same type, and may be captured using a medium with a different wavelength from that used for the first image G1.
The second image G2 may be, for example, an image captured at a medical facility at the third time point. The difference between the capture times of the first image G1 and the second image G2 need only be within a predetermined period, such as two weeks, one month, six months, or one year. The third time point may be set based on the date on which the first image G1 was captured or the date on which the second image G2 was captured, and the reference date may be either the later or the earlier of the two capture dates.
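The pairing rule above can be sketched as a small check with the standard library. This is a hypothetical helper, not part of the disclosure: it accepts a pair of capture dates only if they differ by no more than a chosen period, and takes the later date as the reference (one of the options the text mentions).

```python
from datetime import date, timedelta

def within_period(captured_g1: date, captured_g2: date, max_gap: timedelta) -> bool:
    """True if the two capture dates are no more than max_gap apart."""
    return abs(captured_g1 - captured_g2) <= max_gap

def reference_date(captured_g1: date, captured_g2: date) -> date:
    """Use the later of the two capture dates as the reference (third time point)."""
    return max(captured_g1, captured_g2)

g1 = date(2024, 3, 1)   # assumed capture date of the first image G1
g2 = date(2024, 3, 20)  # assumed capture date of the second image G2
print(within_period(g1, g2, timedelta(days=31)))  # within the one-month option
print(reference_date(g1, g2))
```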
The second image G2 also includes at least one of a frontal image showing a predetermined region from the front (for example, an image obtained by irradiating the region with X-rays in the front-to-back direction) and a lateral image showing the region from the side (for example, an image obtained by irradiating the region with X-rays in the left-to-right direction).
If the first image G1 or the second image G2 is a CT image, information about the bone trabeculae based on a three-dimensionally reconstructed image or on a two-dimensionally captured image may be used. Furthermore, at least one of a three-dimensional image, a cross-sectional image perpendicular to the body axis connecting the head and legs (e.g., a horizontal section), and a cross-sectional image parallel to the body axis (e.g., a sagittal or coronal section) may be used.
The prediction information output by the prediction device 10 is information indicating the possibility that an abnormality will occur, at a fourth time point, in the bones of the first subject shown in the first image G1. In this embodiment, the prediction information is a fracture risk indicating the possibility that a fracture will occur in the bones of the first subject at the fourth time point.
A fracture is one example of a musculoskeletal disorder; the fractures assumed here are fragility fractures. Musculoskeletal disorders also include osteoporosis, osteoarthritis, spondylosis deformans, neurological disorders, sarcopenia, and the like.
The fourth time point is a time point different from the third time point, at which the first image G1 was captured. The fourth time point refers to any time point in the future or the past relative to the third time point, and may include multiple time points, such as one year before, and one year, five years, ten years, and thirty years after the third time point.
The image management device 40 is a computer that functions as a server for managing the third and fourth images. The third image is a plain X-ray image showing the bones of a predetermined region of the second subject. The fourth image is an echo image showing the muscles of a region of the second subject corresponding to that predetermined region.
Note that the fourth image may show a region of the second subject different from the region corresponding to the predetermined region. The third and fourth images may be stored in separate image management devices, and the prediction device 10 may be configured to acquire the third and fourth images directly from an imaging device without going through the image management device 40.
The third image is likewise not limited to a plain X-ray image and may be any medical image containing information about bones, such as a CT image, an MRI image, a DXA image, a DES image, or an ultrasound image. The fourth image is not limited to an echo image and may be any medical image containing information about muscles, such as a CT image, an MRI image, a DXA image, a DES image, or an ultrasound image. In other words, the fourth image may be of a different type from the third image or of the same type, and the combination of image types of the third and fourth images may be the same as or different from that of the first image G1 and the second image G2.
The third image may be, for example, an image captured at a medical facility at the first time point. The difference between the capture times of the third and fourth images need only be within a predetermined period, such as two weeks, one month, six months, or one year. The first time point may be set based on the date on which the third image was captured or the date on which the fourth image was captured, and the reference date may be either the later or the earlier of the two capture dates.
The electronic medical record management device 50 is a computer that functions as a server for managing electronic medical record information of the first subject, who has undergone a medical examination or test at a medical facility or the like. The image management device 40 and the electronic medical record management device 50 are connected to the acquisition unit 21 of the prediction device 10.
The electronic medical record information includes attribute information of the first subject. The attribute information includes at least one of the first subject's age, sex, height, weight, muscle quality, race, lifestyle information, medication information, occupational information, blood test information, urine test information, saliva test information, information on current diseases, medical history, medical history of the first subject's family, surgery information, genetic information, childbirth information, menopause information, items of the fracture risk assessment tool (FRAX (registered trademark); Fracture Risk Assessment Tool), and information on estimated menopause based on hormone information.
Childbirth information includes at least one of whether the subject has given birth, the number of children born, and the like. The lifestyle information may include, for example, bedtime, wake-up time, sleep duration, daily amount of exercise, meal contents, meal times, meal duration, and blood glucose level. Meal contents include, for example, at least one of the name of the dish, the ingredients consumed, and the amount consumed, and may be an estimated intake of at least one of calcium, vitamin B, vitamin D, and vitamin K. The blood glucose level may be, for example, a value estimated from parameters acquired by a wearable device. Medication information may include, for example, the name of the medication, the dose taken, and the duration of medication; information on medications taken may include information on any steroids being used. Blood test information may be, for example, information on the results of at least one of a biochemical test, a glucose metabolism test, and an endocrine test.
The presentation device 60 is a device for presenting information output by the prediction device 10, and is a computer used by medical personnel, such as doctors, affiliated with a medical facility. The presentation device 60 is, for example, a personal computer having a liquid crystal display or an organic EL display, a tablet terminal, or a smartphone. Under the control of the presentation control unit 26, the presentation device 60 presents fracture risk and other prediction information. The presentation device 60 may also be a device that prints the prediction information on paper or the like and outputs it.
[Configuration of prediction device]
Next, the configuration of the prediction device 10 will be described in detail with reference to Fig. 1. As shown in Fig. 1, the prediction device 10 includes a control unit 2 and a storage unit 3. The control unit 2 has, for example, a CPU (Central Processing Unit), and manages the operation of the prediction device 10 by comprehensively controlling each unit of the prediction device 10.
The control unit 2 of the prediction device 10 includes an acquisition unit 21, an analysis unit 22, a correction unit 23, a learning unit 24, a prediction unit 25, and a presentation control unit 26. The control unit 2 and the storage unit 3 are electrically connected to each other.
The acquisition unit 21 acquires the first image G1 and the second image G2 of the first subject from the image management device 40, and also acquires the third and fourth images of the second subject from the image management device 40. The acquisition unit 21 may instead acquire the first image G1 and the second image G2 input via an input device (not shown). Furthermore, if attribute information has been added to the acquired first image G1 and second image G2, the acquisition unit 21 may extract that attribute information from them.
Here, the third and fourth images of a plurality of people are stored in the storage unit 3 as learning data 33. Abnormality information regarding the bone abnormalities that occurred in those people at the second time point is stored in the storage unit 3 as teacher data 34.
The analysis unit 22 segments the second image G2, identifying which region, such as bone or muscle, each pixel in the second image G2 belongs to and dividing the image into regions. Segmentation can be performed using, for example, a convolutional neural network (CNN), a fully convolutional network (FCN), U-Net, or V-Net. The analysis unit 22 identifies soft-tissue regions and analyzes information including at least one of the amount, thickness, degree of atrophy, and flexibility of the first subject's muscle and fat.
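The U-Net/FCN segmentation network itself is beyond a short sketch, so the following illustrates only the downstream step the analysis unit performs: given a per-pixel label map produced by such a network, extract each tissue region and derive simple quantities such as muscle area and mean thickness. The label values (0 = background, 1 = bone, 2 = muscle, 3 = fat) and function names are illustrative assumptions, not from the disclosure.

```python
import numpy as np

LABELS = {"background": 0, "bone": 1, "muscle": 2, "fat": 3}  # assumed classes

def region_mask(label_map: np.ndarray, name: str) -> np.ndarray:
    """Boolean mask of the pixels assigned to one tissue class."""
    return label_map == LABELS[name]

def region_area(label_map: np.ndarray, name: str, mm2_per_px: float = 1.0) -> float:
    """Area of a tissue region, scaled by the pixel spacing."""
    return float(region_mask(label_map, name).sum()) * mm2_per_px

def mean_thickness(label_map: np.ndarray, name: str, mm_per_px: float = 1.0) -> float:
    """Mean vertical extent of the region over the columns where it appears."""
    mask = region_mask(label_map, name)
    per_column = mask.sum(axis=0)      # pixels of the class in each column
    cols = per_column[per_column > 0]
    return float(cols.mean()) * mm_per_px if cols.size else 0.0

# Tiny synthetic label map standing in for a segmented echo image.
label_map = np.zeros((4, 4), dtype=int)
label_map[1:3, :] = LABELS["muscle"]   # a 2-pixel-thick muscle band
print(region_area(label_map, "muscle"))     # area in pixels
print(mean_thickness(label_map, "muscle"))  # thickness in pixels per column
```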
The correction unit 23 performs a predetermined correction on the first image G1. Specifically, the correction unit 23 corrects the first image G1 by removing the soft-tissue regions identified by the analysis unit 22. Here, soft tissue refers to tissue other than bone, such as muscle and fat. Note that the third image described above may also be corrected by the correction unit 23 in the same way as the first image G1.
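The soft-tissue removal can be sketched as a masking operation, assuming the analysis unit has already produced a boolean soft-tissue mask: pixels belonging to muscle, fat, and other non-bone tissue are replaced by a fill value so the corrected image retains bone information only. Function and variable names are illustrative.

```python
import numpy as np

def remove_soft_tissue(image: np.ndarray, soft_mask: np.ndarray,
                       fill: float = 0.0) -> np.ndarray:
    """Return a copy of the image with soft-tissue pixels replaced by fill."""
    corrected = image.astype(float).copy()
    corrected[soft_mask] = fill        # suppress muscle/fat pixels
    return corrected

image = np.array([[10.0, 50.0], [80.0, 20.0]])    # toy intensity image
soft = np.array([[True, False], [False, True]])   # assumed soft-tissue pixels
print(remove_soft_tissue(image, soft))
```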
The learning unit 24 performs the learning process that generates the prediction model 32. Note that if a prediction model 32 generated by another device is stored in advance in the storage unit 3, the learning unit 24 may be omitted.
The storage unit 3 is a computer-readable, non-transitory recording medium that stores the control program 31, and is configured with a ROM (Read Only Memory), a RAM (Random Access Memory), and the like.
The control unit 2 controls the prediction device 10 by executing the control program 31. In other words, the control program 31 is a control program for causing a computer to function as the information processing system 1, and in particular to function as the prediction unit 25.
In addition to the control program 31, the storage unit 3 stores the prediction model 32, which is an AI (Artificial Intelligence) model. The prediction model 32 is generated by machine learning using the third image, captured at the first time point, and the second data as explanatory variables, and abnormality information regarding a fracture that occurred in the bones of the second subject at the second time point as an objective variable. The storage unit 3 also stores the above-mentioned learning data 33 and teacher data 34. In embodiment 1, the second data is, as an example, the above-mentioned fourth image.
Note that the prediction model 32 may be generated by machine learning using, as explanatory variables, the third image, the fourth image, bone information obtained by inputting the third image into a first estimation model, and muscle information obtained by inputting the fourth image into a second estimation model, and, as an objective variable, abnormality information regarding a fracture that occurred in the bones of the second subject at the second time point. In that case, the prediction unit 25 may output prediction information using this prediction model from the first image G1, the second image G2, bone information obtained by inputting the first image G1 into the first estimation model, and muscle information obtained by inputting the second image G2 into the second estimation model.
[予測モデルの動作]
次に、予測モデルの動作について、図2を参照して説明する。図2は、実施形態1における予測モデル32の動作を説明するためのブロック図である。予測モデル32は、例えば、畳み込みニューラルネットワーク(CNN:Convolutional Neural Network)である。なお、予測モデル32は、畳み込みニューラルネットワーク以外のニューラルネットワークから構成されてもよい。
[Predictive Model Operation]
Next, the operation of the prediction model will be described with reference to Fig. 2. Fig. 2 is a block diagram for explaining the operation of the prediction model 32 in embodiment 1. The prediction model 32 is, for example, a convolutional neural network (CNN). Note that the prediction model 32 may be configured using a neural network other than a convolutional neural network.
図2に示すように、予測モデル32は、例えば、入力層32aと、隠れ層32bと、出力層32cとを有する。予測モデル32は、第1時点で撮像された第3画像、及び第4画像を説明変数とし、第2時点で第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成されたものである。 As shown in FIG. 2, the prediction model 32 has, for example, an input layer 32a, a hidden layer 32b, and an output layer 32c. The prediction model 32 was generated by machine learning using the third and fourth images captured at the first time point as explanatory variables and abnormality information regarding an abnormality that occurred in the bones of the second subject at the second time point as the objective variable.
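To make the layered structure concrete, the following is a toy, purely illustrative stand-in for the flow through input layer 32a, hidden layer 32b, and output layer 32c: a 1D convolution over concatenated image features, a ReLU hidden activation, and a sigmoid output that yields a risk value in [0, 1]. All function names, weights, and the 1D simplification are assumptions for illustration, not the patented model itself.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (the 'convolutional' part of a CNN)."""
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def relu(xs):
    # Hidden-layer nonlinearity (stand-in for hidden layer 32b)
    return [max(0.0, x) for x in xs]

def sigmoid(x):
    # Output-layer squashing so the result reads as a probability-like risk
    return 1.0 / (1.0 + math.exp(-x))

def predict_fracture_risk(features_g1, features_g2, kernel, out_weights):
    """Concatenate features from both images (input layer 32a), convolve
    and activate (hidden layer 32b), then squash to a fracture risk Y
    in [0, 1] (output layer 32c)."""
    hidden = relu(conv1d(features_g1 + features_g2, kernel))
    score = sum(h * w for h, w in zip(hidden, out_weights))
    return sigmoid(score)
```

In a real implementation the two images would be fed as 2D tensors to stacked convolutional layers, but the data flow (two inputs, shared hidden representation, single risk output) is the same as sketched here.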
第1時点は、第3画像が撮像された時点である。第2時点は、第1時点とは異なる時点である。第2時点は、例えば、第3画像が撮像された第1時点よりも将来又は過去の任意の時点を指している。第2時点は、第1時点から1年前、1年後、5年後、10年後、及び30年後等、複数の時点を含んでいてもよい。また、第1時点と第2時点との間の期間は、第3時点と第4時点との間の期間と同じ長さであってもよいし、異なっていてもよい。また、第3時点と第4時点との間の期間は、第1時点と第2時点との間の期間よりも短くてもよいし、長くてもよい。 The first point in time is the point in time when the third image is captured. The second point in time is a point in time different from the first point in time. The second point in time refers to, for example, any point in time in the future or the past of the first point in time when the third image is captured. The second point in time may include multiple points in time, such as one year before, one year after, five years after, ten years after, and thirty years after the first point in time. Furthermore, the period between the first point in time and the second point in time may be the same length as the period between the third point in time and the fourth point in time, or it may be different. Furthermore, the period between the third point in time and the fourth point in time may be shorter or longer than the period between the first point in time and the second point in time.
なお、第2時点は、上記した第3時点と同じ時点であってもよい。即ち、例えば5年前に撮像された第3画像を用いて、その5年後に発生する骨折のリスクを予測するように予測モデル32に機械学習させてもよい。この場合、第1時点が5年前であり、第2時点及び第3時点が現在であり、第4時点が5年後となる。 The second point in time may be the same as the third point in time described above. That is, for example, a third image taken five years ago may be used to train the prediction model 32 to predict the risk of fracture occurring five years later. In this case, the first point in time is five years ago, the second and third points in time are the present, and the fourth point in time is five years in the future.
予測モデル32では、入力層32aに、第1画像G1及び第2画像G2が入力されることで、第4時点で第1被検体の骨に骨折が発生する可能性を示す骨折リスクYが出力される。 In the prediction model 32, the first image G1 and the second image G2 are input to the input layer 32a, and a fracture risk Y indicating the possibility of a fracture occurring in the bone of the first subject at the fourth time point is output.
なお、予測モデル32は、第4時点で第1被検体の骨に骨粗鬆症が発生する可能性を示す情報を出力してもよい。骨粗鬆症の可能性は、例えば、骨折の有無、骨折の可能性、及び骨密度の変化の少なくとも1つに基づいて分類してもよい。骨粗鬆症の可能性は、「骨粗鬆症なし」、「骨粗鬆症疑いあり」、又は「骨粗鬆症あり」などを含む。より具体的に、骨粗鬆症の可能性として、骨量が低くなる疾患が無く、続発性骨粗鬆症が認められない場合であり、且つ、骨折がある場合または骨折の可能性が高い場合には、原発性骨粗鬆症であることを示してもよい。また、骨粗鬆症の判定としては、第1被検体の骨密度を示す情報である骨密度推定値が示すYAMが80%未満の値であり、且つ測定結果が椎体及び大腿骨近位部以外の骨折を示す場合に、骨粗鬆症であると判定してもよい。また、骨密度推定値が示すYAMが70%以下の値を示す場合に、骨粗鬆症であると判定してもよい。 The prediction model 32 may output information indicating the possibility of osteoporosis occurring in the bones of the first subject at the fourth time point. The possibility of osteoporosis may be classified, for example, based on at least one of the presence or absence of a fracture, the possibility of a fracture, and a change in bone density. The possibility of osteoporosis includes "no osteoporosis," "suspected osteoporosis," or "osteoporosis present." More specifically, the possibility of osteoporosis may be indicated as primary osteoporosis when there is no disease that reduces bone mass, secondary osteoporosis is not observed, and there is a fracture or a high possibility of a fracture. Osteoporosis may also be determined when the YAM indicated by the bone density estimate, which is information indicating the bone density of the first subject, is less than 80% and the measurement results indicate a fracture other than the vertebral body or proximal femur. Osteoporosis may also be determined when the YAM indicated by the bone density estimate is 70% or less.
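The two YAM thresholds described above (less than 80% combined with a fracture outside the vertebral body and proximal femur, or 70% or less on its own) can be sketched as a small decision function. The function and argument names are hypothetical; only the thresholds come from the text.

```python
def classify_osteoporosis(yam_percent, fracture_outside_spine_and_hip):
    """Apply the YAM-based determination described in the text.

    yam_percent: estimated bone density as a percentage of the
        Young Adult Mean (YAM).
    fracture_outside_spine_and_hip: True if the measurement result
        indicates a fracture other than the vertebral body or
        proximal femur.
    """
    # YAM of 70% or less: osteoporosis regardless of fracture status
    if yam_percent <= 70:
        return "osteoporosis"
    # YAM below 80% together with a qualifying fracture: osteoporosis
    if yam_percent < 80 and fracture_outside_spine_and_hip:
        return "osteoporosis"
    return "no osteoporosis"
```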
[学習処理の流れ]
次に、学習部24による学習処理の流れについて、図3を参照して説明する。図3は、予測装置10の学習部24による学習処理の流れの一例を示すフローチャートである。学習部24は、後述する予測部25による予測処理よりも前に、図3に示す学習処理を実行して予測モデル32を記憶部3に記憶させる。
[Learning process flow]
Next, the flow of the learning process by the learning unit 24 will be described with reference to Fig. 3. Fig. 3 is a flowchart showing an example of the flow of the learning process by the learning unit 24 of the prediction device 10. The learning unit 24 executes the learning process shown in Fig. 3 and stores the prediction model 32 in the storage unit 3 before the prediction process by the prediction unit 25, which will be described later.
図3のフローチャートにおいて、まず、学習部24は、取得部21を介して、第2被検体の第3画像を取得する(S1)。ここで、第2被検体は、例えば、複数の人である。なお、第2被検体は、人以外であってもよく、例えば、イヌ、ネコ、及びウマ等の動物であってもよい。第1被検体と同じ種類であってもよいし、異なっていてもよい。また、第2被検体は、複数の人でなくてもよく、同じ人であってもよい。また、第3画像は、第1時点で撮像された第2被検体の所定部位の骨が写る単純X線画像である。なお、第3画像として、同じ人の異なる撮影時期の複数のデータを用いてもよい。 In the flowchart of FIG. 3, first, the learning unit 24 acquires a third image of the second subject via the acquisition unit 21 (S1). Here, the second subject is, for example, multiple people. Note that the second subject may be a non-human, such as an animal such as a dog, cat, or horse. It may be the same species as the first subject, or a different species. Furthermore, the second subject does not have to be multiple people, and may be the same person. Furthermore, the third image is a simple X-ray image taken at a first time point, showing the bones of a specified region of the second subject. Note that multiple pieces of data taken at different times for the same person may be used as the third image.
なお、S1において、学習部24は、電子カルテ管理装置50から各第2被検体の属性情報を取得し、当該属性情報と各第3画像とを紐付けしておくものとする。 In S1, the learning unit 24 acquires attribute information for each second subject from the electronic medical record management device 50 and associates the attribute information with each third image.
S1の後、学習部24は、取得部21を介して、第2被検体の第4画像を取得する(S2)。第4画像は、第1時点で撮像された第2被検体における上記所定部位に対応した部位の筋肉が写るエコー画像であってよい。第3画像は、第4画像が撮影された部位と同じ部位が少なくとも写っていればよい。また、第3画像は、第4画像が撮影された部位と異なる部位が写っていてもよい。なお、S1とS2とは、順序が逆であってもよいし、同じタイミングであってもよい。 After S1, the learning unit 24 acquires a fourth image of the second subject via the acquisition unit 21 (S2). The fourth image may be an echo image of the muscles of the second subject at a location corresponding to the above-mentioned specified location, imaged at the first time point. The third image may at least depict the same location as the location from which the fourth image was taken. The third image may also depict a location different from the location from which the fourth image was taken. Note that S1 and S2 may be performed in the opposite order, or at the same time.
第3画像は、CT画像、MRI画像、DXA法による画像、DESによる画像、及び超音波画像のうち少なくとも1つであってよい。また、第4画像は、CT画像、MRI画像、DXA法による画像、DESによる画像、及び超音波画像のうちの少なくともいずれかであってよい。 The third image may be at least one of a CT image, an MRI image, an image obtained by DXA, an image obtained by DES, and an ultrasound image. The fourth image may be at least one of a CT image, an MRI image, an image obtained by DXA, an image obtained by DES, and an ultrasound image.
S2の後、学習部24は、取得部21を介して、第2時点で第2被検体の骨に発生した異常に関する異常情報を取得する(S3)。ここで、異常は、運動器疾患であってもよく、例えば、骨折である。即ち、第2時点で第2被検体に発生した骨折に関する情報である。なお、異常には、骨折以外にも、骨粗鬆症、変形性関節症、変形性脊椎症、神経障害、サルコペニア等も含まれる。 After S2, the learning unit 24 acquires abnormality information regarding an abnormality that occurred in the bones of the second subject at the second time point via the acquisition unit 21 (S3). Here, the abnormality may be a musculoskeletal disorder, such as a fracture. That is, the information is regarding a fracture that occurred in the second subject at the second time point. Note that abnormalities include, in addition to fractures, osteoporosis, osteoarthritis, spondylosis deformans, nerve disorders, sarcopenia, etc.
S3の後、学習部24は、第3画像及び第4画像を説明変数とし、上記異常情報を目的変数として用いた機械学習により予測モデル32を生成する(S4)。S4の後、学習部24は、生成した予測モデル32を記憶部3に記憶させる(S5)。以上により、学習部24による学習処理が終了する。 After S3, the learning unit 24 generates a prediction model 32 through machine learning using the third and fourth images as explanatory variables and the abnormality information as a target variable (S4). After S4, the learning unit 24 stores the generated prediction model 32 in the storage unit 3 (S5). This completes the learning process by the learning unit 24.
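The pairing of explanatory variables (third image, fourth image) with the target variable (abnormality information) assembled in steps S1 to S3 can be sketched as follows. The record structure and key names are hypothetical illustrations of how the training data might be organized.

```python
def build_training_set(records):
    """Split per-subject records into explanatory variables X
    (third image, fourth image) and target variables y
    (abnormality information at the second time point)."""
    features = [(r["third_image"], r["fourth_image"]) for r in records]
    targets = [r["abnormality"] for r in records]
    return features, targets
```

A model trainer would then fit on `features` and `targets`; the attribute information linked in S1 could be appended to each feature tuple in the same way.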
[予測処理の流れ]
次に、予測装置10による予測処理の流れについて、図4~図6を参照して説明する。図4は、予測装置10による予測処理の流れの一例を示すフローチャートである。図5は、第1被検体の第1画像G1の一例を示す図である。
[Prediction process flow]
Next, the flow of the prediction process by the prediction device 10 will be described with reference to Fig. 4 to Fig. 6. Fig. 4 is a flowchart showing an example of the flow of the prediction process by the prediction device 10. Fig. 5 is a diagram showing an example of a first image G1 of a first subject.
以下、医療施設等において、図5に示すように、第1画像G1として、第1被検体の胸部の骨が写る画像が撮像され、第2画像G2として、第1被検体の背部の筋肉が写るエコー画像が撮像された場合について説明する。 The following describes a case where, as shown in Figure 5, an image showing the chest bones of the first subject is captured as the first image G1, and an echo image showing the back muscles of the first subject is captured as the second image G2, at a medical facility or the like.
図4のフローチャートにおいて、まず、取得部21は、画像管理装置40から、第1被検体の第1画像G1を取得する(S11)。第1画像G1は、図5に示すように、例えば、第1被検体の胸部の骨Bが写る単純X線画像であってもよい。 In the flowchart of FIG. 4, first, the acquisition unit 21 acquires a first image G1 of the first subject from the image management device 40 (S11). The first image G1 may be, for example, a simple X-ray image showing a chest bone B of the first subject, as shown in FIG. 5.
続いて、取得部21は、画像管理装置40から、第1被検体の第2画像G2を取得する(S12)。第2画像G2は、第1被検体の胸部に対応した部位の筋肉が写るエコー画像であってもよい。なお、S11とS12とは、順序が逆であってもよいし、同じタイミングであってもよい。 Next, the acquisition unit 21 acquires a second image G2 of the first subject from the image management device 40 (S12). The second image G2 may be an echo image showing the muscles of the area corresponding to the chest of the first subject. Note that S11 and S12 may be performed in the opposite order, or at the same time.
S12の後、解析部22は、第2画像G2をセグメンテーションして、第1被検体の筋肉及び脂肪に関する情報を解析する(S13)。S13において、解析部22は、第2画像G2をセグメンテーション、即ち、第2画像G2に写る複数の種類の筋肉及び脂肪等を領域毎に分割する。 After S12, the analysis unit 22 segments the second image G2 and analyzes information about the muscles and fat of the first subject (S13). In S13, the analysis unit 22 segments the second image G2, i.e., divides the multiple types of muscles, fat, etc. that appear in the second image G2 into regions.
ここで、図6は、第2画像G2のセグメンテーションの一例を示す図である。図6には、第1被検体の背部のエコー画像が示されている。図6に示す例では、第1被検体の背部が、皮下脂肪F、第1筋肉M1、第2筋肉M2、及び骨Bの各領域に分割されている。セグメンテーションの方法としては、複数種類の筋肉をまとめた筋肉、皮下脂肪F、及び骨Bの各領域に分割してもよい。 Here, FIG. 6 is a diagram showing an example of segmentation of the second image G2. FIG. 6 shows an echo image of the back of the first subject. In the example shown in FIG. 6, the back of the first subject is divided into the regions of subcutaneous fat F, first muscle M1, second muscle M2, and bone B. As a segmentation method, division into the regions of muscle grouping multiple types of muscle, subcutaneous fat F, and bone B may also be used.
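Once the echo image has been segmented into labeled regions as in Fig. 6, per-region areas follow directly from counting labeled pixels. The following sketch assumes the segmentation result is a 2D label map with short labels mirroring Fig. 6 ("F" for subcutaneous fat, "M1"/"M2" for the muscles, "B" for bone); the representation is an assumption, not specified in the text.

```python
from collections import Counter

def region_areas(label_map):
    """label_map: 2D list of region labels per pixel.
    Returns the pixel count (area) of each segmented region."""
    return Counter(label for row in label_map for label in row)
```

Multiplying a region's pixel count by the physical area per pixel would give the fat or muscle area used in the analysis described above.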
解析部22は、例えば、図6の白抜きの矢印で示されるように、第1筋肉M1の厚さを求めることに基づいて、第1筋肉M1の量を解析することができる。また、解析部22は、第1被検体の脂肪の量、筋肉の萎縮量、及び柔軟性等を解析することもできる。例えば、解析部22は、第2画像G2をセグメンテーションして脂肪の領域を計算したり、エコー画像の輝度を求めたりすることにより、第1被検体の脂肪の量を算出できる。また、解析部22は、第1被検体と同じ年代の人の筋肉の厚さの測定値の平均値と、第1被検体の筋肉の厚さとを比較することにより、第1被検体の筋肉の萎縮量を求めることができる。また、解析部22は、第1被検体のエコー画像を動画で撮影し、筋肉の動きを解析することにより、第1被検体の筋肉の柔軟性を求めることができる。 The analysis unit 22 can analyze the amount of the first muscle M1 by determining the thickness of the first muscle M1, for example, as shown by the open arrow in Figure 6. The analysis unit 22 can also analyze the amount of fat, the amount of muscle atrophy, flexibility, etc. of the first subject. For example, the analysis unit 22 can calculate the amount of fat of the first subject by segmenting the second image G2 to calculate the fat area or by determining the brightness of the echo image. The analysis unit 22 can also determine the amount of muscle atrophy of the first subject by comparing the average muscle thickness measurements of people of the same age as the first subject with the muscle thickness of the first subject. The analysis unit 22 can also determine the flexibility of the first subject's muscles by capturing a video of the echo image of the first subject and analyzing the muscle movement.
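The atrophy estimate described above, comparing the subject's muscle thickness with the average for people of the same age, reduces to a simple difference. Function and parameter names are hypothetical, and millimetre units are an assumption.

```python
def muscle_atrophy_mm(subject_thickness_mm, same_age_thicknesses_mm):
    """Estimate muscle atrophy as the shortfall of the subject's
    muscle thickness relative to the same-age cohort average,
    clamped at zero when the subject is at or above average."""
    cohort_mean = sum(same_age_thicknesses_mm) / len(same_age_thicknesses_mm)
    return max(0.0, cohort_mean - subject_thickness_mm)
```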
S13の後、補正部23は、所定の補正として、第1画像G1から、解析部22により特定された軟部組織、即ち骨以外の組織の領域を除く補正を行う(S14)。具体的には、補正部23は、第1画像G1から、解析部22により特定された骨以外の筋肉および/または脂肪等を取り除く。第2画像G2は、第1画像G1に対応した部位であることが好ましい。 After S13, the correction unit 23 performs a predetermined correction to remove the soft tissue identified by the analysis unit 22, i.e., tissue areas other than bone, from the first image G1 (S14). Specifically, the correction unit 23 removes muscle and/or fat, etc., other than bone identified by the analysis unit 22, from the first image G1. It is preferable that the second image G2 is of a region corresponding to the first image G1.
S14の後、予測部25は、記憶部3から予測モデル32を読み出し、補正部23により補正された第1画像G1、及び第2画像G2を予測モデル32の入力層32aに入力して、出力層32cから骨折リスクYを出力する(S15:予測ステップ)。 After S14, the prediction unit 25 reads the prediction model 32 from the memory unit 3, inputs the first image G1 and the second image G2 corrected by the correction unit 23 to the input layer 32a of the prediction model 32, and outputs the fracture risk Y from the output layer 32c (S15: prediction step).
また、S15において、予測部25は、第1画像G1及び第2画像G2から、予測モデル32を用いて、予測情報として第1画像G1及び第2画像G2のそれぞれが第1被検体の骨折に与える影響度合いを示す影響度を出力してもよい。例えば、第1画像G1に写る骨の状態の影響度が70%であり、第2画像G2に写る筋肉の状態の影響度が30%であるといった情報を出力する。当該予測モデル32は、第3画像及び第4画像を説明変数とし、第2時点で第2被検体の骨折に与える影響度合いを示す影響度を目的変数として用いた機械学習により生成される。 Furthermore, in S15, the prediction unit 25 may use a prediction model 32 to output, as prediction information, an influence level indicating the degree of influence that each of the first image G1 and the second image G2 has on the fracture of the first subject from the first image G1 and the second image G2. For example, information may be output indicating that the influence level of the bone condition shown in the first image G1 is 70% and that of the muscle condition shown in the second image G2 is 30%. The prediction model 32 is generated by machine learning using the third image and the fourth image as explanatory variables and the influence level indicating the degree of influence on the fracture of the second subject at the second time point as the objective variable.
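The influence degrees in the 70% / 30% example above can be expressed by normalizing per-image scores so they sum to 100%. The raw scores standing in for the model's per-image attributions are hypothetical.

```python
def influence_percentages(bone_score, muscle_score):
    """Normalize the bone-image (G1) and muscle-image (G2)
    contribution scores into percentages summing to 100."""
    total = bone_score + muscle_score
    return (100.0 * bone_score / total, 100.0 * muscle_score / total)
```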
また、S15において、予測部25は、予測モデル32を用いて、骨折が発生する可能性が高い時期である骨折予測時期を出力してもよい。この場合、予測モデル32は、第3画像及び第4画像を説明変数とし、第2被検体の骨に骨折が発生した時期を目的変数として用いた機械学習により生成される。骨折予測時期は、例えば、5年後、10年後等、年単位であってもよいし、5年6ヵ月後、10年6ヵ月後等、月単位であってもよい。 Furthermore, in S15, the prediction unit 25 may use the prediction model 32 to output a predicted fracture time, which is the time when a fracture is likely to occur. In this case, the prediction model 32 is generated by machine learning using the third and fourth images as explanatory variables and the time when a fracture occurs in the bone of the second subject as the objective variable. The predicted fracture time may be in years, for example, 5 years or 10 years, or may be in months, for example, 5 years and 6 months or 10 years and 6 months.
予測部25により出力された骨折リスクY、影響度、及び骨折予測時期等は、提示制御部26へ送信される。そして、提示制御部26は、骨折リスクY、影響度、及び骨折発生予測時期等のうち少なくとも1つを含む予測情報を提示装置60に提示する(S16)。 The fracture risk Y, impact level, predicted fracture time, etc. output by the prediction unit 25 are transmitted to the presentation control unit 26. The presentation control unit 26 then presents prediction information including at least one of the fracture risk Y, impact level, predicted fracture time, etc. on the presentation device 60 (S16).
S16において、例えば、提示制御部26は、「10年後に骨折リスクYが80%以上になります。」といったメッセージを提示装置60に提示する。また、提示制御部26は、複数の時点での骨折リスクYを示す図を示してもよい。また、提示制御部26は、骨折リスクYの推移を示したグラフを提示装置60に提示してもよい。また、提示制御部26は、推測結果である骨折リスクYと、実際の測定結果とをそれぞれ提示装置60に提示してもよい。以上により、図4に示す予測装置10による予測処理が終了する。 In S16, for example, the presentation control unit 26 presents a message to the presentation device 60 such as, "Fracture risk Y will be 80% or higher in 10 years." The presentation control unit 26 may also display a diagram showing the fracture risk Y at multiple points in time. The presentation control unit 26 may also present a graph showing the progress of the fracture risk Y to the presentation device 60. The presentation control unit 26 may also present the fracture risk Y, which is the estimated result, and the actual measurement result to the presentation device 60. This completes the prediction process by the prediction device 10 shown in FIG. 4.
以上説明した実施形態1における情報処理システム1においては、予測部25により、第1被検体の骨が写る第1画像G1、及び第1被検体の筋肉が写る第2画像G2から、予測モデル32を用いて、予測情報として、第1被検体の骨折リスクY、影響度、及び骨折予測時期等を出力する。即ち、予測部25は、第1被検体の骨に関する情報に加えて、骨を支える筋肉の情報を入力情報として、予測モデル32を用いて骨折リスクY、影響度、及び骨折予測時期等を予測する。 In the information processing system 1 of embodiment 1 described above, the prediction unit 25 uses the prediction model 32 to output the fracture risk Y, impact degree, predicted fracture time, etc. of the first subject as prediction information from the first image G1 showing the bones of the first subject and the second image G2 showing the muscles of the first subject. In other words, the prediction unit 25 uses information about the bones of the first subject as input information, as well as information about the muscles that support the bones, to predict the fracture risk Y, impact degree, predicted fracture time, etc. using the prediction model 32.
上記構成によれば、予測部25により、第1被検体に骨折が発生する骨折リスクY、影響度、及び骨折予測時期等を高精度に予測できる。これにより、例えば、医療施設の医師等は、患者である第1被検体の診断に、骨折リスクY等の情報処理システム1の出力結果を役立てることができ、より適切に患者の診断を行うことができる。更に、例えば整形外科を専門としない医師であっても、情報処理システム1の出力結果を参照することで、整形外科医に近い精度で患者の診断を行うことが可能となる。 With the above configuration, the prediction unit 25 can accurately predict the fracture risk Y, the degree of impact, and the predicted fracture timing of the first subject. This allows, for example, doctors at medical facilities to use the output results of the information processing system 1, such as the fracture risk Y, to diagnose the first subject, who is a patient, and to more appropriately diagnose the patient. Furthermore, even doctors who do not specialize in orthopedics can diagnose patients with an accuracy close to that of an orthopedic surgeon by referring to the output results of the information processing system 1.
また、図4のS14にて、補正部23により、第1画像G1から骨以外の組織の領域を除く補正を行うことによって、予測部25による骨折リスクYの予測精度を向上させることができる。 Furthermore, in S14 of FIG. 4, the correction unit 23 performs correction to remove areas of tissue other than bone from the first image G1, thereby improving the accuracy of prediction of the fracture risk Y by the prediction unit 25.
〔実施形態2〕
次に、本開示の実施形態2に係る情報処理システム1Aついて、図7~図13を参照して説明する。なお、説明の便宜上、上記実施形態1にて説明した部材と同じ機能を有する部材については、同じ符号を付記し、その説明を繰り返さない。
[Embodiment 2]
Next, an information processing system 1A according to a second embodiment of the present disclosure will be described with reference to Figures 7 to 13. For ease of explanation, components having the same functions as those described in the first embodiment will be denoted by the same reference numerals, and their description will not be repeated.
〔情報処理システムの構成〕
実施形態2の情報処理システム1Aの構成について、図7を参照して説明する。図7は、情報処理システム1Aの構成の一例を示すブロック図である。図7には、予測装置10A、画像管理装置40、電子カルテ管理装置50、及び提示装置60を備える情報処理システム1Aが示されているが、情報処理システム1Aの構成は図7に示す構成に限定されない。
[Configuration of information processing system]
The configuration of an information processing system 1A according to the second embodiment will be described with reference to Fig. 7. Fig. 7 is a block diagram showing an example of the configuration of the information processing system 1A. Fig. 7 shows the information processing system 1A including a prediction device 10A, an image management device 40, an electronic medical record management device 50, and a presentation device 60, but the configuration of the information processing system 1A is not limited to the configuration shown in Fig. 7.
予測装置10Aは、予測する対象である第1被検体の第1画像G1aを取得し、取得した第1画像から予測モデル32Aを用いて、予測情報を出力する装置である。 The prediction device 10A is a device that acquires a first image G1a of a first subject, which is the target of prediction, and outputs prediction information from the acquired first image using a prediction model 32A.
ここで、第1被検体は、例えば人である。なお、第1被検体は、人以外であってもよく、例えば、イヌ、ネコ、及びウマ等の動物であってもよい。また、第1画像G1aは、第1被検体の骨又は筋肉のいずれか一方が写っていればよく、また、第1被検体の骨及び筋肉の少なくとも一部が写っていればよい。 Here, the first subject is, for example, a human. However, the first subject may be a non-human, such as an animal such as a dog, cat, or horse. Furthermore, the first image G1a may show either the bones or muscles of the first subject, or may show at least a portion of the bones and muscles of the first subject.
第1画像G1aは、医用画像であってよい。第1画像G1aは、例えば、第1被検体の頭部、頚部、胸部、腰部、顎関節、脊椎椎間関節、股関節、仙腸関節、膝関節、足関節、足部、足趾、肩関節、肩鎖関節、肘関節、手関節、手部、及び手指等のうち少なくともいずれかを含む部位の組織が写る単純X線画像である。組織は、例えば骨及び筋肉である。なお、組織は、骨及び筋肉のいずれか一方であってもよい。単純X線画像は、例えば、歯科用に用いられるパノラマX線画像を含んでいてもよい。パノラマX線画像は、複数の歯、例えば全ての歯等が含まれる画像である。 The first image G1a may be a medical image. The first image G1a is, for example, a plain X-ray image showing tissues of an area including at least one of the head, neck, chest, lower back, temporomandibular joints, spinal intervertebral joints, hip joints, sacroiliac joints, knee joints, ankle joints, feet, toes, shoulder joints, acromioclavicular joints, elbow joints, wrist joints, hands, and fingers of the first subject. The tissues are, for example, bones and muscles. Note that the tissues may be either bones or muscles. The plain X-ray image may include, for example, a panoramic X-ray image used for dentistry. A panoramic X-ray image is an image that includes multiple teeth, for example, all of the teeth.
第1画像G1aは、第3時点において撮像されたものである。第1画像G1aは、第1被検体を正面から撮像した正面像、例えば対象部位に対して前後方向にX線を照射して得られる像、及び側面から撮像した側面像、例えば対象部位に対して左右方向にX線を照射して得られる像のうち少なくともいずれかを含む。第1画像G1aは、例えば、人の胸部を含む胸部X線正面画像、又は人の腰部を含む腰部X線正面画像を用いることができる。胸部X線画像は、例えば、肋骨、鎖骨及び胸骨の少なくとも1つが写る画像である。腰部X線画像は、例えば、腰椎、骨盤および大腿骨の少なくとも1つが写る画像である。第1画像G1aは、胸部及び腰部に限らず、例えば、歯、顎、腕、手、肩関節、膝関節、踵、頭蓋骨、又は、足の骨が写る画像を用いてもよい。 The first image G1a is captured at a third time point. The first image G1a includes at least one of a frontal image captured from the front of the first subject, for example, an image obtained by irradiating the target area with X-rays in the front-to-back direction, and a lateral image captured from the side, for example, an image obtained by irradiating the target area with X-rays in the left-to-right direction. The first image G1a may be, for example, a frontal chest X-ray image including a person's chest, or a frontal lumbar X-ray image including a person's lumbar region. A chest X-ray image is, for example, an image showing at least one of the ribs, clavicle, and sternum. A lumbar X-ray image is, for example, an image showing at least one of the lumbar vertebrae, pelvis, and femur. The first image G1a is not limited to the chest and lumbar region, and may also be an image showing, for example, the teeth, jaw, arm, hand, shoulder joint, knee joint, heel, skull, or foot bones.
第1画像G1aは、単純X線画像に限らず、少なくとも骨及び筋肉のいずれか1つに関する情報が含まれる画像であればよい。他にも、第1画像G1aは、例えば、MRI(Magnetic Resonance Imaging)画像、CT(Computed Tomography)画像、PET(Positron Emission Tomography)画像、及び超音波画像等であってもよい。第1画像G1aがCT画像の場合、3次元的に構築された画像に基づく骨梁に関する情報を用いてもよいし、2次元的に撮影された画像に基づく骨梁に関する情報を用いてよい。第1画像G1aがCT画像の場合、例えば、3次元画像、頭部と脚部とを結ぶ体軸に垂直な方向の断面画像(例えば、水平断)、及び体軸に平行な方向の断面画像(例えば、矢状断又は冠状断など)の少なくとも1つを用いてよい。第1画像G1aは、骨が写っている画像であってもよいし、骨が写っていない画像であってもよい。 The first image G1a is not limited to a simple X-ray image, but may be any image that includes information about at least one of bones and muscles. Alternatively, the first image G1a may be, for example, an MRI (Magnetic Resonance Imaging) image, a CT (Computed Tomography) image, a PET (Positron Emission Tomography) image, or an ultrasound image. If the first image G1a is a CT image, information about the bone trabeculae based on a three-dimensionally constructed image may be used, or information about the bone trabeculae based on a two-dimensionally captured image may be used. If the first image G1a is a CT image, for example, at least one of a three-dimensional image, a cross-sectional image perpendicular to the body axis connecting the head and legs (e.g., horizontal section), and a cross-sectional image parallel to the body axis (e.g., sagittal section or coronal section) may be used. The first image G1a may be an image that shows bones, or an image that does not show bones.
予測装置10Aが出力する予測情報は、第1画像G1aに写る部位に、運動器疾患等の異常が発生する可能性を示す情報である。実施形態2では、予測情報は、第3時点とは異なる時点である第4時点で、第1被検体の骨に骨折が発生する可能性を示す骨折リスクである。なお、骨折は運動器疾患の一例であり、骨折としては脆弱性骨折を想定している。運動器疾患には、骨折以外にも、例えば、骨粗鬆症、変形性関節症、変形性脊椎症、神経障害、サルコペニア等が含まれる。 The prediction information output by the prediction device 10A is information indicating the possibility of an abnormality, such as a musculoskeletal disorder, occurring in the area captured in the first image G1a. In embodiment 2, the prediction information is a fracture risk indicating the possibility of a fracture occurring in the bone of the first subject at a fourth time point, which is different from the third time point. Note that a fracture is an example of a musculoskeletal disorder, and fragility fractures are assumed as fractures. In addition to fractures, musculoskeletal disorders include, for example, osteoporosis, osteoarthritis, spondylosis osteoarthritis, nerve disorders, sarcopenia, etc.
第1画像G1aとしては、例えば、医療施設で撮影された画像を用いてよい。第1画像G1aには、第1被検体において骨折等の異常が予想される部位が写っている。なお、第1画像G1aには、第1被検体において骨折等の異常が予想される部位以外の部位が写っていてもよい。 The first image G1a may be, for example, an image taken at a medical facility. The first image G1a shows an area of the first subject where an abnormality such as a fracture is expected. Note that the first image G1a may also show an area of the first subject other than an area where an abnormality such as a fracture is expected.
第4時点は、第3時点よりも将来又は過去の任意の時点を指している。第4時点は、例えば、第3時点から1年後、5年後、10年後、及び30年後等、複数の時点を含んでいてもよい。 The fourth point in time refers to any point in time in the future or the past of the third point in time. The fourth point in time may include multiple points in time, such as one year, five years, ten years, and thirty years after the third point in time.
画像管理装置40は、医療施設において撮像された第1画像G1a及び第3画像を管理するためのサーバとして機能するコンピュータである。第1画像G1aは、第1被検体の骨及び筋肉の少なくとも一部が写る単純X線画像であってよい。第3画像は、医用画像であってよい。第3画像は、第2被検体の組織が写る単純X線画像であってよい。なお、第1画像G1a及び第3画像は、別々の画像管理装置に保存されていてもよい。第3画像は、第1画像G1aと同じ種類の画像であってもよいし、異なる種類の画像であってもよい。例えば、第1画像G1aが単純X線画像であり、第3画像がCT画像であってもよい。 The image management device 40 is a computer that functions as a server for managing the first image G1a and the third image captured at a medical facility. The first image G1a may be a plain X-ray image showing at least a portion of the bones and muscles of the first subject. The third image may be a medical image. The third image may be a plain X-ray image showing the tissues of the second subject. The first image G1a and the third image may be stored in separate image management devices. The third image may be the same type of image as the first image G1a, or may be a different type of image. For example, the first image G1a may be a plain X-ray image, and the third image may be a CT image.
電子カルテ管理装置50は、医療施設等において診察および/または検査を受けた第1被検体の電子カルテ情報を管理するためのサーバとして機能するコンピュータである。画像管理装置40及び電子カルテ管理装置50は、予測装置10Aの取得部21に接続されている。 The electronic medical record management device 50 is a computer that functions as a server for managing electronic medical record information of a first subject who has undergone a medical examination and/or test at a medical facility, etc. The image management device 40 and the electronic medical record management device 50 are connected to the acquisition unit 21 of the prediction device 10A.
電子カルテ情報には、第1被検体の属性情報が含まれる。属性情報には、第1被検体の年齢、性別、身長、体重、人種、生活習慣に関する情報、服薬情報、職業情報、血液検査情報、尿検査情報、唾液検査情報、有病している疾患に関する情報、既往歴、第1被検体の家族の既往歴、遺伝子情報、出産情報、閉経情報、骨折リスク評価ツール(FRAX(登録商標);Fracture Risk Assessment Tool)の項目、及びホルモン情報に基づいて推定された閉経推定に関する情報等のうち少なくとも1つが含まれる。 The electronic medical record information includes attribute information of the first subject. The attribute information includes at least one of the following: the first subject's age, sex, height, weight, race, information on lifestyle habits, medication information, occupational information, blood test information, urine test information, saliva test information, information on existing diseases, medical history, medical history of the first subject's family, genetic information, childbirth information, menopausal information, items from the Fracture Risk Assessment Tool (FRAX (registered trademark)), and information regarding estimated menopause based on hormone information.
出産情報は、出産の有無、出産人数等の少なくともいずれかを含む。前記生活習慣は、例えば、睡眠時間、起床時間、1日の運動量、食事内容、食事時刻、食事時間、および血糖値等であって良い。食事内容は、例えば、料理名、摂取した食材および摂取量の少なくとも1つを含む。食事内容は、例えば、カルシウム、ビタミンB、ビタミンDおよびビタミンKの少なくとも1つを含む推定摂取量でもよい。血糖値は、例えば、ウェアラブルデバイスで取得したパラメータから推定された指定値を用いてもよい。服薬情報は、例えば、薬剤名、服用している量、服薬している期間等の情報が含まれてよい。服用している薬剤に関する情報は、使用しているステロイド剤に関する情報を含んでいてもよい。血液検査情報は、例えば、生化学検査、糖代謝系検査、内分泌系検査の少なくともいずれかの結果に関する情報であってもよい。なお、情報処理システム1Aでは、電子カルテ管理装置50から第1被検体の属性情報を取得してもよいし、第1画像G1aに属性情報が紐付けられている場合には、第1画像G1aから属性情報を取得してもよい。 The birth information may include at least one of whether or not a subject has given birth, the number of children born, etc. The lifestyle habits may include, for example, sleep duration, wake-up time, daily exercise amount, dietary content, meal times, meal durations, and blood glucose levels. The dietary content may include, for example, at least one of the name of a dish, ingested ingredients, and intake amount. The dietary content may be, for example, an estimated intake of at least one of calcium, vitamin B, vitamin D, and vitamin K. The blood glucose level may be, for example, a specified value estimated from parameters acquired by a wearable device. The medication information may include, for example, information such as the name of the medication, the amount taken, and the duration of medication. The information regarding the medication taken may include information regarding the steroid drug being used. The blood test information may be, for example, information regarding the results of at least one of a biochemical test, a glucose metabolism test, and an endocrine system test. The information processing system 1A may acquire attribute information of the first subject from the electronic medical record management device 50, or, if attribute information is linked to the first image G1a, may acquire the attribute information from the first image G1a.
提示装置60は、予測装置10Aにより出力される情報を提示するための装置である。提示装置60は、例えば、液晶表示ディスプレイ、又は有機ELディスプレイである。提示装置60は、提示制御部26により制御されることにより、骨密度推定値、骨質推定値、筋肉量推定値、及び骨折リスク等の数値、第1被験体への支援情報等を提示する。なお、提示装置60は、予測情報を用紙等に印刷して排出する装置であってもよい。 The presentation device 60 is a device for presenting information output by the prediction device 10A. The presentation device 60 is, for example, a liquid crystal display or an organic EL display. The presentation device 60 is controlled by the presentation control unit 26 to present numerical values such as estimated bone density, estimated bone quality, estimated muscle mass, and fracture risk, as well as support information for the first subject. The presentation device 60 may also be a device that prints the prediction information on paper or the like and outputs it.
[Configuration of prediction device]
Next, the configuration of the prediction device 10A will be described in detail with reference to Fig. 7. As shown in Fig. 7, the prediction device 10A includes a control unit 2 and a storage unit 3. The control unit 2 has, for example, a CPU (Central Processing Unit) and manages the operation of the prediction device 10A by comprehensively controlling each unit of the prediction device 10A. The control unit 2 has an acquisition unit 21, a learning unit 24, a prediction unit 25, an estimation unit 27, and a presentation control unit 26. The control unit 2 and the storage unit 3 are electrically connected to each other.
The acquisition unit 21 acquires, from the image management device 40, a first image G1a showing at least a portion of the bones and muscles of the first subject at the third time point. The acquisition unit 21 also acquires the attribute information of the first subject from the electronic medical record management device 50. Note that the acquisition unit 21 may instead acquire a first image G1a entered through an input device (not shown).
The learning unit 24 controls the learning process that generates the estimation model 35 and the prediction model 32A. The estimation unit 27 outputs first estimated information about the bones of the first subject from the first image G1a, which shows at least a portion of the bones and/or muscles of the first subject, using at least one of a first estimation model 351, a second estimation model 352, and a third estimation model 353, which will be described later.
The first estimated information includes at least one of a bone density estimate, which is information indicating the bone density of the bones of the first subject output from the first estimation model 351; a bone quality estimate, which is information indicating the bone quality of the first subject output from the second estimation model 352; and a muscle mass estimate, which is information indicating the muscle mass of the first subject output from the third estimation model 353.
The storage unit 3 is a computer-readable, non-transitory recording medium that stores a control program 31. The storage unit 3 includes ROM (Read Only Memory), RAM (Random Access Memory), and the like.
The control unit 2 controls the prediction device 10A by executing the control program 31. In other words, the control program 31 is a control program for causing a computer to function as the information processing system 1A, and in particular as the prediction unit 25 and the estimation unit 27.
In addition to the control program 31, the storage unit 3 also stores a prediction model 32A and an estimation model 35. The prediction model 32A and the estimation model 35 incorporate AI (Artificial Intelligence). Specifically, the estimation model 35 includes the AI of at least one of the first estimation model 351, the second estimation model 352, and the third estimation model 353.
The prediction model 32A is generated by machine learning using a third image and information about the bones of a second subject as explanatory variables, and abnormality information about an abnormality that occurred in the bones of the second subject at a second time point as the objective variable.
Here, the third image is an image showing the tissue of the second subject. The second subject may be the same person as the first subject or a different person. The third image may be captured at the same location as the first image G1a or at a different location. Furthermore, the third image may be captured by the same imaging device as the first image G1a or by a different imaging device. The first image G1a and the third image may also be acquired by the same method or by different methods. For example, the first image G1a may be acquired from the electronic medical record management device 50, and the third image from the image management device 40.
Furthermore, the third image may show the same body part as the first image G1a or a different body part. For example, the third image may show at least a part of the chest, as in the first image G1a, or at least a part of the lumbar region, unlike the first image G1a. The third image may also be oriented the same way as the first image G1a or differently; that is, the third image may be a frontal image or a lateral image. The information about the bones of the second subject includes, for example, at least one of the possibility of osteoporosis, bone density, bone mass, bone quality, muscle mass, and the like.
As bone-related information, for example, the possibility of osteoporosis is classified based on at least one of the presence or absence of a fracture, the likelihood of a fracture, and changes in bone density, into categories such as no osteoporosis, suspected osteoporosis, and osteoporosis present. Specifically, the possibility of osteoporosis may indicate primary osteoporosis when there is no disease that reduces bone mass and no secondary osteoporosis is observed, and a fracture is present or highly likely.
As the bone density, a value actually measured at at least one of the hand, lumbar vertebrae, proximal femur, tibia, heel, and arm (radius, etc.) can be used. Bone density can be measured using, for example, single-energy X-ray absorptiometry, dual-energy X-ray absorptiometry, ultrasound, MD (Micro Densitometry), or quantitative CT (Quantitative Computed Tomography). In a DXA device that measures bone density using the DXA method, when the bone density of the lumbar vertebrae is measured, X-rays are irradiated onto the subject's lumbar vertebrae from the front. In the MD method, X-rays are irradiated onto the hand, for example.
Bone density is a value related to the density of bone. Bone density may be expressed by at least one of bone mineral density per unit area [g/cm2], bone mineral density per unit volume [g/cm3], YAM [%], T-score, and Z-score. YAM [%] is an abbreviation for "Young Adult Mean" and is sometimes called the young adult mean percentage. For example, the bone density of a bone may be a value expressed as bone mineral density per unit area [g/cm2] and YAM [%]. The bone density may be an index defined in guidelines or an original index. For bone density, values described in osteoporosis guidelines, such as the Japan Osteoporosis Society's "2015 Guidelines for the Prevention and Treatment of Osteoporosis," can be applied.
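The relationship between the indices above follows their standard definitions: YAM [%] is the subject's bone mineral density as a percentage of the young adult mean, while the T-score and Z-score express the deviation from the young adult mean and the age-matched mean, respectively, in standard-deviation units. A minimal sketch (the numeric reference values are illustrative placeholders, not values from any guideline):

```python
def yam_percent(bmd, young_adult_mean):
    """YAM [%]: the subject's BMD as a percentage of the young adult mean."""
    return 100.0 * bmd / young_adult_mean

def t_score(bmd, young_adult_mean, young_adult_sd):
    """T-score: deviation from the young adult mean, in SD units."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Z-score: deviation from the age-matched mean, in SD units."""
    return (bmd - age_matched_mean) / age_matched_sd

# Illustrative values only (g/cm2): a measured lumbar BMD of 0.80
# against a hypothetical young adult mean of 1.00 with SD 0.12.
print(yam_percent(0.80, 1.00))    # 80.0
print(t_score(0.80, 1.00, 0.12))  # about -1.67
```

Any one of these representations can serve as the bone density estimate described below.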
As the bone mass, information measured by a bone density measuring device such as a DXA device may be used, or information obtained by estimating bone density from an X-ray image using the first estimation model may be used. Information on bone quality may use, but is not limited to, at least one of bone formation markers, bone resorption markers, bone quality markers (for example, vitamin K levels), cortical bone thickness, trabecular density, trabecular orientation, and trabecular bone score. Bone mass is the sum of bone mineral and bone matrix protein. In the present disclosure, bone mass is an index related to bone density, and is the amount of bone tissue in the skeleton.
As the muscle mass, for example, the muscle mass [kg] of each body part measured by a body composition analyzer can be used. Alternatively, the area [cm2] and/or width [cm] of a muscle region imaged by MRI or DXA can be used, the muscle thickness [cm] in an ultrasound image can be used, or a dynamometer measurement [kg] such as back muscle strength and/or grip strength can be used.
The second time point refers to any time point either after or before the first time point at which the third image was captured. The second time point may include multiple time points, such as one year, five years, ten years, and thirty years after the first time point.
If the machine learning period of the prediction model 32A is, for example, five years, the prediction information output by the prediction unit 25 may refer to a point five years ahead, the same as the learning period; to a point closer than the learning period, for example three years ahead; or to a point further than the learning period, for example eight years ahead. The prediction unit 25 may also predict, for example, the time at which the fracture risk will reach 80% or more, or the time at which the YAM will fall to 80% or less. In the former case, the prediction model 32A only needs to have learned the time at which the fracture risk reaches 80% or more; in the latter case, it only needs to have learned the time at which the YAM falls to 80% or less.
The estimation unit 27 uses at least one of the first estimation model 351, the second estimation model 352, and the third estimation model 353 constituting the estimation model 35. The first estimation model 351 and the second estimation model 352 are examples of a bone strength estimation model. The third estimation model 353 is an example of a bone load estimation model. The bone strength estimation model is generated by machine learning using, as the objective variable, bone strength information indicating the measurement result of at least one of the bone density, bone mass, and bone quality of the bones of the second subject.
The first estimation model 351 is generated by machine learning using the third image as an explanatory variable and bone strength information indicating the measurement result of the bone density of the second subject as the objective variable. The first estimation model 351 outputs information indicating the bone density of the bones of the first subject from the first image G1a. Here, the bone strength information includes information regarding bone density and information regarding bone quality. Note that the information regarding bone density and the information regarding bone quality may be handled separately.
DXA (Dual-energy X-ray Absorptiometry) can be used as a method for measuring the bone density of the second subject. A DXA device that measures bone density using the DXA method, when measuring the bone density of the lumbar vertebrae, for example, irradiates the lumbar vertebrae from the front with X-rays, specifically two types of X-rays. The DXA device may also measure the bone density of the lumbar vertebrae by irradiating the measurement area with X-rays from the side. Furthermore, the measurement area only needs to show at least a portion of the chest, proximal femur, knee joint, or the like.
For example, when the bone density of the proximal femur is measured by a DXA device, X-rays are irradiated onto the proximal femur of the second subject from the front. Here, "the front of the proximal femur" means the direction squarely facing the imaging area, such as the proximal femur, and may be either the ventral side or the dorsal side of the body of the second subject. The proximal femur includes, for example, at least one of the neck, the trochanter, the shaft, and the entire proximal femur (neck, trochanter, shaft, etc.).
The bone density of the second subject may instead be measured using an ultrasound method. In a device that measures bone density using ultrasound, for example, ultrasound is applied to the calcaneus to measure the bone density of the calcaneus.
The second estimation model 352 is generated by machine learning using the third image as an explanatory variable and bone strength information indicating the measurement result of the bone quality of the second subject as the objective variable. The second estimation model 352 outputs information indicating the bone quality of the first subject from the first image G1a.
The third estimation model 353 is generated by machine learning using the third image as an explanatory variable and bone load information indicating the measurement result of the muscle mass of the second subject as the objective variable. The third estimation model 353 outputs information indicating the muscle mass of the first subject from the first image G1a. The bone load information is information indicating the result of measuring at least one of the muscle mass and the posture of the second subject. Note that muscle mass and bone load are related: for example, if the mass of muscles involved in maintaining posture, such as the rectus abdominis and/or erector spinae, decreases, the load on the bones increases in order to maintain posture. Here, posture is indicated by, for example, the degree of inclination of the body from a reference state of the second subject, such as standing upright.
Note that the third estimation model 353 may instead be generated by machine learning using the third image as an explanatory variable and a fall risk, which indicates the possibility of the second subject falling, as the objective variable, and may output information indicating the fall risk of the first subject from the first image G1a.
Muscle mass can be measured using at least one of, for example, physical function measurement, measurement by a body composition analyzer, a locomotive syndrome test, sarcopenia diagnosis, center-of-gravity sway measurement, lower limb muscle strength measurement, standing-up speed measurement, and muscle thickness measurement by MRI, DXA, or ultrasound imaging.
[Operation of the estimation model]
Next, the operation of the estimation model 35 will be described with reference to Fig. 8. Fig. 8 is a block diagram for explaining the operation of the estimation model 35. The first estimation model 351, the second estimation model 352, and the third estimation model 353 constituting the estimation model 35 are, for example, convolutional neural networks (CNNs). Note that the estimation model 35 may instead be configured using a neural network other than a convolutional neural network.
As shown in Fig. 8, the first estimation model 351 has, for example, an input layer 351a, a hidden layer 351b, and an output layer 351c. The first estimation model 351 includes first learned parameters obtained using the third image as an explanatory variable and bone strength information indicating the bone density measurement result of the second subject as the objective variable.
In the first estimation model 351, the first image G1a is input to the input layer 351a, and a bone density estimate E1 is output from the output layer 351c. Note that the hidden layer 351b may include, for example, multiple convolutional layers, multiple pooling layers, and a fully connected layer.
The bone density estimate E1 is expressed by at least one of bone mineral density per unit area [g/cm2], bone mineral density per unit volume [g/cm3], YAM (Young Adult Mean), T-score, and Z-score.
The second estimation model 352 has, for example, an input layer 352a, a hidden layer 352b, and an output layer 352c. The second estimation model 352 includes second learned parameters obtained using the third image as an explanatory variable and bone strength information indicating the bone quality measurement result of the second subject as the objective variable.
In the second estimation model 352, the first image G1a is input to the input layer 352a, and a bone quality estimate E2 is output from the output layer 352c. Note that the hidden layer 352b may include, for example, multiple convolutional layers, multiple pooling layers, and a fully connected layer.
The third estimation model 353 has, for example, an input layer 353a, a hidden layer 353b, and an output layer 353c. The third estimation model 353 includes third learned parameters obtained using the third image as an explanatory variable and bone load information indicating the muscle mass measurement result of the second subject as the objective variable. The first, second, and third learned parameters correspond to learned parameters for estimation.
In the third estimation model 353, the first image G1a is input to the input layer 353a, and a muscle mass estimate E3, which is information indicating the muscle mass of the first subject, is output from the output layer 353c. Note that the hidden layer 353b may include, for example, multiple convolutional layers, multiple pooling layers, and a fully connected layer.
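The image-in, scalar-out structure shared by the three estimation models (input layer → convolution → pooling → fully connected layer → estimate) can be sketched as a minimal untrained forward pass. The layer sizes and random weights below are illustrative assumptions, not the models of this disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def estimate(image, kernel, fc_w, fc_b):
    """Input layer -> convolution -> ReLU -> pooling -> fully connected -> scalar."""
    feat = np.maximum(conv2d(image, kernel), 0.0)  # convolutional layer + ReLU
    feat = max_pool(feat)                          # pooling layer
    return float(feat.ravel() @ fc_w + fc_b)       # fully connected output layer

# A dummy 16x16 "radiograph"; the weights are random stand-ins for the
# learned parameters held by models 351, 352, and 353.
image = rng.random((16, 16))
kernel = rng.standard_normal((3, 3))
pooled_len = ((16 - 3 + 1) // 2) ** 2  # 7*7 = 49 features after pooling
fc_w = rng.standard_normal(pooled_len)
e1 = estimate(image, kernel, fc_w, fc_b=0.0)
print(e1)  # a single scalar; with learned weights this would be, e.g., E1
```

With trained parameters, the same forward pass would map the first image G1a to the estimate E1, E2, or E3, depending on which model's parameters are loaded.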
[Operation of the prediction model]
Next, the operation of the prediction model 32A will be described with reference to Fig. 9. Fig. 9 is a block diagram for explaining the operation of the prediction model 32A in embodiment 2. The prediction model 32A is, for example, a convolutional neural network. Note that the prediction model 32A may instead be configured using a neural network other than a convolutional neural network.
As shown in Fig. 9, the prediction model 32A has, for example, an input layer 32a, a hidden layer 32b, and an output layer 32c. The prediction model 32A includes learned prediction parameters obtained using the third image captured at the first time point and the information about the bones of the second subject as explanatory variables, and abnormality information about an abnormality that occurred in the bones of the second subject at the second time point as the objective variable. Note that bone abnormalities may include fractures, bone loss, primary osteoporosis, secondary osteoporosis, osteophyte formation, osteomalacia, bone metastasis of malignant tumors, multiple myeloma, vertebral hemangioma, spinal caries, pyogenic spondylitis, Paget's disease of bone, fibrous dysplasia, ankylosing spondylitis, and the like.
In the prediction model 32A, the first image G1a captured at the third time point, the bone density estimate E1, the bone quality estimate E2, and the muscle mass estimate E3 are input to the input layer 32a, and a fracture risk Y indicating the probability that a fracture will occur in the bones of the first subject at the fourth time point is output. Note that the first image G1a need not be input to the input layer 32a. The bone density estimate E1, the bone quality estimate E2, and the muscle mass estimate E3, which constitute the first estimated information, are an example of first data.
In this case, the bone density estimate E1, the bone quality estimate E2, and the muscle mass estimate E3 input to the input layer 32a may be weighted based on the strength of their causal relationship with the occurrence of fractures. For example, considering that bone strength is influenced more by bone density than by bone quality, the bone density estimate E1 may be weighted more heavily than the bone quality estimate E2. The weighting may follow a predetermined standard such as a guideline, or an original standard.
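Such input weighting can be sketched as scaling the feature vector before it enters the network. The weight values below are arbitrary placeholders chosen only so that E1 outweighs E2, as the text suggests; they are not values from the disclosure:

```python
# Estimates from the estimation models (illustrative values).
e1_bone_density = 82.0   # E1, e.g. YAM [%]
e2_bone_quality = 0.55   # E2, e.g. a normalized bone quality score
e3_muscle_mass = 23.4    # E3, e.g. muscle mass [kg]

# Causal-strength weights: bone density (E1) weighted more heavily
# than bone quality (E2). Placeholder values only.
weights = {"e1": 0.6, "e2": 0.25, "e3": 0.15}

features = [
    weights["e1"] * e1_bone_density,
    weights["e2"] * e2_bone_quality,
    weights["e3"] * e3_muscle_mass,
]
print(features)  # the weighted vector fed to input layer 32a
```

In practice such weights could also be absorbed into the first layer of the network during training rather than applied explicitly.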
The muscle mass estimate E3 is input to the prediction model 32A in consideration of the fact that a person's muscle mass correlates with the strength supporting their bones and affects, for example, the risk of fracture when the person falls.
[Flow of the learning process]
Next, the flow of the learning process performed by the learning unit 24 will be described with reference to Fig. 10. Fig. 10 is a flowchart showing an example of the learning process performed by the learning unit 24. The learning unit 24 executes the learning process before the prediction device 10A performs the prediction process described later, and stores the prediction model 32A and the estimation model 35 in the storage unit 3.
In the flowchart shown in Fig. 10, first, the learning unit 24 acquires third images of the second subjects via the acquisition unit 21 (S21). Here, the second subjects are multiple different people. Each third image is an image of the bones and muscles of a second subject captured at the first time point. Note that in S21, the learning unit 24 acquires the attribute information of each second subject from the electronic medical record management device 50 and links that attribute information to each third image. However, if attribute information has been linked to a third image in advance, the learning unit 24 extracts the attribute information from the third image.
After S21, the learning unit 24 acquires information about the bones of the second subjects via the acquisition unit 21 (S22). The information about the bones of a second subject includes information indicating the measurement results of the bone density and bone quality of the bones of the second subject, and information indicating the measurement result of the muscle mass of the second subject.
After S22, the learning unit 24 generates the estimation model 35 by machine learning using the third images acquired in S21 as explanatory variables and the information about the bones of the second subjects acquired in S22 as objective variables (S23).
In S23, the learning unit 24 generates the first estimation model 351 by machine learning using the third image as an explanatory variable and bone strength information indicating the bone density measurement result of the bones of the second subject as the objective variable. The learning unit 24 may input multiple third images into the first estimation model 351, compare the output bone density estimates with the measured bone density values, and adjust the first estimation model 351 using backpropagation or the like so as to reduce the error between them.
The learning unit 24 likewise generates the second estimation model 352 by machine learning using the third image as an explanatory variable and bone strength information indicating the bone quality measurement result of the second subject as the objective variable. The learning unit 24 then inputs multiple third images into the second estimation model 352, compares the output bone quality estimates with the measured bone quality values, and adjusts the second estimation model 352 using backpropagation or the like so as to reduce the error between them.
The learning unit 24 also generates the third estimation model 353 by machine learning using the third image as an explanatory variable and bone load information indicating the muscle mass measurement result of the second subject as the objective variable. The learning unit 24 then inputs multiple third images into the third estimation model 353, compares the output muscle mass estimates with the measured muscle mass values, and adjusts the third estimation model 353 using backpropagation or the like so as to reduce the error between them.
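The compare-and-adjust loop of S23 (estimate, compare with the measured value, reduce the error) can be sketched with a one-parameter toy model trained by plain gradient descent on the squared error. The toy data, model, and learning rate are illustrative assumptions standing in for the CNN and backpropagation:

```python
# Toy stand-ins: one scalar "image feature" per third image and the
# corresponding measured value (e.g. measured bone density).
features = [1.0, 2.0, 3.0, 4.0]
measured = [2.1, 3.9, 6.2, 7.8]  # roughly measured = 2 * feature

w = 0.0    # the single learnable parameter of the toy model
lr = 0.01  # learning rate (illustrative)

for _ in range(500):
    # Forward pass: estimate = w * feature; loss = mean squared error.
    grad = 0.0
    for x, y in zip(features, measured):
        est = w * x
        grad += 2.0 * (est - y) * x  # d(loss)/dw contribution of this sample
    w -= lr * grad / len(features)   # gradient step reduces the error

print(w)  # converges to 1.99 (= 59.7/30, the least-squares optimum)
```

Backpropagation in the actual models applies this same error-gradient update to every learned parameter of the network rather than to a single weight.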
After S23, the learning unit 24 generates the prediction model 32A by machine learning using the third image captured at the first time point and the information about the bones of the second subject as explanatory variables, and abnormality information about an abnormality that occurred in the bones of the second subject at the second time point as the objective variable (S24). Here, the abnormality information about an abnormality that occurred in the bones is information about fractures. Note that in addition to fractures, the abnormality information may include bone loss, osteophyte formation, bone atrophy, bone sclerosis, and the like.
After S24, the learning unit 24 stores the generated estimation model 35 and prediction model 32A in the storage unit 3 (S25).
[Flow of the prediction process]
Next, the flow of the prediction process performed by the prediction device 10A will be described with reference to Figs. 11 to 13. Fig. 11 is a diagram showing a plain X-ray image of the chest of the first subject. Fig. 12 is a flowchart showing an example of the flow of the prediction process performed by the prediction device 10A.
以下、医療施設等において、図11に示すように、第1画像G1aとして、第1被検体の胸部の正面から単純X線画像が撮像された場合について説明する。図11には、第1被検体の骨B、及び筋肉Mが写されている。なお、単純X線画像においては、骨Bは白く写り、筋肉Mは灰色に写るため、骨Bと筋肉Mとを単純X線画像の色および/または明るさの違いから区別することが可能となっている。これにより、骨B及び筋肉Mの大きさ及び形状等を推定することができる。 The following describes a case where a plain X-ray image is taken from the front of the chest of a first subject as the first image G1a in a medical facility, as shown in Figure 11. Figure 11 shows the bone B and muscle M of the first subject. Note that in the plain X-ray image, bone B appears white and muscle M appears gray, making it possible to distinguish between bone B and muscle M based on differences in color and/or brightness in the plain X-ray image. This makes it possible to estimate the size, shape, etc. of bone B and muscle M.
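A minimal sketch of this brightness-based distinction, using hypothetical intensity thresholds on a toy grayscale array (real bone/muscle segmentation would be considerably more involved than simple thresholding):

```python
import numpy as np

# Toy 8-bit grayscale image: background (dark), muscle (gray), bone (bright).
img = np.array([
    [ 10,  20, 120, 130],
    [ 15, 125, 230, 240],
    [ 20, 130, 235, 245],
    [ 10,  25, 120, 250],
], dtype=np.uint8)

BONE_MIN, MUSCLE_MIN = 200, 80   # hypothetical intensity thresholds

bone_mask = img >= BONE_MIN                           # bone B appears white
muscle_mask = (img >= MUSCLE_MIN) & (img < BONE_MIN)  # muscle M appears gray

bone_area = int(bone_mask.sum())     # rough size estimate in pixels
muscle_area = int(muscle_mask.sum())
```

The pixel counts of each mask give a crude proxy for the size of bone B and muscle M; their outlines give the shape.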
図12に示すフローチャートにおいて、まず、取得部21は、画像管理装置40から、第1被検体の第1画像G1aを取得する(S31)。第1画像G1aは、第1被検体の骨B及び筋肉Mが写る単純X線画像である。 In the flowchart shown in FIG. 12, first, the acquisition unit 21 acquires a first image G1a of the first subject from the image management device 40 (S31). The first image G1a is a plain X-ray image showing the bone B and muscle M of the first subject.
S31の後、推定部27は、記憶部3から第1推定モデル351を読み出し、取得した第1画像G1aを第1推定モデル351に入力して、骨密度推定値E1を出力する(S32:第1推定ステップ)。出力された骨密度推定値E1は、予測部25へ送信される。なお、推定部27は、第1被検体の将来及び過去の複数時点における骨密度推定値E1を出力してもよい。また、推定部27は、第1被検体の過去のある時点から将来のある時点までの骨密度推定値E1の推移を出力してもよい。 After S31, the estimation unit 27 reads out the first estimation model 351 from the storage unit 3, inputs the acquired first image G1a into the first estimation model 351, and outputs the bone mineral density estimate E1 (S32: first estimation step). The output bone mineral density estimate E1 is transmitted to the prediction unit 25. The estimation unit 27 may output the bone mineral density estimate E1 of the first subject at multiple future and past time points. The estimation unit 27 may also output the progression of the bone mineral density estimate E1 of the first subject from a past time point to a future time point.
続いて、推定部27は、記憶部3から第2推定モデル352を読み出し、取得した第1画像G1aを第2推定モデル352に入力して、骨質推定値E2を出力する(S33:第2推定ステップ)。出力された骨質推定値E2は、予測部25へ送信される。 The estimation unit 27 then reads out the second estimation model 352 from the storage unit 3, inputs the acquired first image G1a into the second estimation model 352, and outputs the bone quality estimation value E2 (S33: second estimation step). The output bone quality estimation value E2 is sent to the prediction unit 25.
次に、推定部27は、記憶部3から第3推定モデル353を読み出し、取得した第1画像G1aを第3推定モデル353に入力して、筋肉量推定値E3を出力する(S34:第3推定ステップ)。出力された筋肉量推定値E3は、予測部25へ送信される。ここで、第1推定ステップS32、第2推定ステップS33、及び第3推定ステップS34は、推定ステップに相当する。 Next, the estimation unit 27 reads out the third estimation model 353 from the memory unit 3, inputs the acquired first image G1a into the third estimation model 353, and outputs a muscle mass estimation value E3 (S34: third estimation step). The output muscle mass estimation value E3 is sent to the prediction unit 25. Here, the first estimation step S32, the second estimation step S33, and the third estimation step S34 correspond to estimation steps.
なお、推定部27は、第1推定モデル351、第2推定モデル352、及び第3推定モデル353のうちの少なくともいずれかを用いて、骨密度推定値E1、骨質推定値E2、及び筋肉量推定値E3のうちの少なくともいずれかを出力すればよい。 The estimation unit 27 may use at least one of the first estimation model 351, the second estimation model 352, and the third estimation model 353 to output at least one of the bone density estimate E1, the bone quality estimate E2, and the muscle mass estimate E3.
更に、推定部27は、第1画像G1aから第1被検体の骨量を示す情報を出力する第4推定モデルを用いて、骨量推定値を出力してもよい。第4推定モデルは、第3画像を説明変数とし、第2被検体の骨量の測定結果を示す骨強度情報を目的変数として用いた機械学習により生成される。 Furthermore, the estimation unit 27 may output a bone mass estimate using a fourth estimation model that outputs information indicating the bone mass of the first subject from the first image G1a. The fourth estimation model is generated by machine learning using the third image as an explanatory variable and bone strength information indicating the measurement results of the bone mass of the second subject as a target variable.
また、推定部27は、第1画像G1aから第1被検体の姿勢を示す情報を出力する第5推定モデルを用いて、姿勢推定値を出力してもよい。第5推定モデルは、第3画像を説明変数とし、第2被検体の姿勢の測定結果を示す骨負荷情報を目的変数として用いた機械学習により生成される。姿勢推定値としては、胸椎後弯角(TKA:Thoracic Spine Kyphotic Angle)、腰椎前弯角(LLA:Lumbar Lordosis Angle)、仙骨傾斜角(SIA:Sacral Inclination Angle)、又は、これらを総合した値を用いることができる。 The estimation unit 27 may also output a posture estimation value using a fifth estimation model that outputs information indicating the posture of the first subject from the first image G1a. The fifth estimation model is generated by machine learning using the third image as an explanatory variable and bone load information indicating the measurement results of the posture of the second subject as a target variable. The posture estimation value may be the thoracic spine kyphotic angle (TKA), the lumbar lordosis angle (LLA), the sacral inclination angle (SIA), or a value combining these.
S34の後、予測部25は、記憶部3から予測モデル32Aを読み出し、第1画像G1a、骨密度推定値E1、骨質推定値E2、及び筋肉量推定値E3を予測モデル32Aに入力して、骨折リスクYを出力する(S35:予測ステップ)。骨折リスクYは、第1画像G1aを撮像した第3時点とは異なる時点である第4時点で第1被検体の骨に骨折が発生する可能性を示す予測情報に相当する。なお、予測部25は、閉経の有無、及び/又は、出産の有無を考慮して、各場合の骨折リスクYを出力するようにしてもよい。 After S34, the prediction unit 25 reads out the prediction model 32A from the memory unit 3, inputs the first image G1a, the estimated bone density value E1, the estimated bone quality value E2, and the estimated muscle mass value E3 into the prediction model 32A, and outputs the fracture risk Y (S35: prediction step). The fracture risk Y corresponds to prediction information indicating the possibility of a fracture occurring in the bone of the first subject at a fourth time point, which is different from the third time point when the first image G1a was captured. Note that the prediction unit 25 may output the fracture risk Y for each case, taking into account whether or not the subject has undergone menopause and/or whether or not the subject has given birth.
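As a rough illustration of S35, the step of mapping the image and the three estimates to a single risk value Y in [0, 1] can be sketched with a logistic function. The weights, bias, and the scalar image feature below are hypothetical; prediction model 32A itself is a trained neural network, not this formula.

```python
import math

def fracture_risk(image_feature, e1, e2, e3,
                  w=(0.8, -2.0, -1.5, -0.05), b=2.0):
    """Toy stand-in for prediction model 32A: combines an image-derived
    feature with E1 (bone density), E2 (bone quality), and E3 (muscle
    mass) into a risk Y in [0, 1].  Weights are hypothetical; higher
    density, quality, and muscle mass lower the risk."""
    z = b + w[0]*image_feature + w[1]*e1 + w[2]*e2 + w[3]*e3
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing to [0, 1]

# Estimates taken from the Fig. 13 example; the image feature is made up.
Y = fracture_risk(image_feature=0.3, e1=0.985, e2=1.123, e3=20.54)
```

With weights of this sign, lowering the bone density estimate E1 raises the output Y, matching the intended direction of the risk.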
推定部27により出力された骨密度推定値E1、骨質推定値E2、筋肉量推定値E3、及び、予測部25により出力された骨折リスクYは、提示制御部26へ送信される。そして、提示制御部26は、骨密度推定値E1、骨質推定値E2、筋肉量推定値E3、及び骨折リスクYを、提示装置60に提示する(S36)。 The bone density estimate E1, bone quality estimate E2, and muscle mass estimate E3 output by the estimation unit 27, and the fracture risk Y output by the prediction unit 25 are transmitted to the presentation control unit 26. The presentation control unit 26 then presents the bone density estimate E1, bone quality estimate E2, muscle mass estimate E3, and fracture risk Y on the presentation device 60 (S36).
S36において、提示制御部26は、第1被検体を支援する支援情報を提示装置60に提示してもよい。この場合、予測部25は、第1画像及び第1推定情報から、第1被検体の属性情報に対応した予測モデル32Aを用いて、第1被検体を支援する支援情報を提示制御部26へ出力する。 In S36, the presentation control unit 26 may present support information for the first subject on the presentation device 60. In this case, the prediction unit 25 uses a prediction model 32A corresponding to the attribute information of the first subject to generate support information for the first subject from the first image and the first estimated information, and outputs it to the presentation control unit 26.
上記予測モデル32Aは、第2被検体の属性情報に対応した第3画像と、第2被検体の骨に関する情報である骨密度推定値、骨質推定値、及び筋肉量推定値とを説明変数とし、第2被検体の骨折リスクを低下させるように支援する支援情報を目的変数として用いた機械学習により生成される。 The prediction model 32A is generated by machine learning using a third image corresponding to the attribute information of the second subject and information about the second subject's bones, including an estimated bone density, an estimated bone quality, and an estimated muscle mass, as explanatory variables, and support information that supports reducing the second subject's fracture risk as a target variable.
提示制御部26は、例えば、第1被検体と、年齢及び性別等の属性情報が同じ又は近い人の骨密度の平均値と比較して、骨密度推定値E1が低い場合、第1被検体にカルシウムを多く摂取すること、日光を浴びること、及び運動をすること等を促す旨を提示装置60に提示する。また、第1被検体と、年齢、性別、身長、及び体重等の属性情報が同じ又は近い人の筋肉量の平均値と比較して、筋肉量推定値E3が低い場合、提示制御部26は、運動量を増やすことを促す旨を提示装置60に提示する。 For example, if the estimated bone density value E1 is lower than the average bone density of people with the same or similar attribute information, such as age and gender, as the first subject, the presentation control unit 26 will display on the presentation device 60 a message encouraging the first subject to take in more calcium, get more sunlight, exercise, etc. Furthermore, if the estimated muscle mass value E3 is lower than the average muscle mass of people with the same or similar attribute information, such as age, gender, height, and weight, as the first subject, the presentation control unit 26 will display on the presentation device 60 a message encouraging the first subject to increase the amount of exercise.
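The rule described above, comparing each estimate with the average of an attribute-matched cohort, can be sketched as follows. The cohort averages and the message wording are placeholders:

```python
def support_messages(e1, e3, cohort_bone_density_mean, cohort_muscle_mass_mean):
    """Return advice when an estimate falls below the average for people
    with the same or similar attributes (age, sex, height, weight, ...).
    The averages and messages here are hypothetical placeholders."""
    messages = []
    if e1 < cohort_bone_density_mean:
        messages.append("Increase calcium intake, get sunlight, and exercise.")
    if e3 < cohort_muscle_mass_mean:
        messages.append("Increase the amount of exercise.")
    return messages

# E1 below the (hypothetical) cohort mean, E3 above it.
msgs = support_messages(e1=0.985, e3=20.54,
                        cohort_bone_density_mean=1.05,
                        cohort_muscle_mass_mean=19.0)
```

Here only the bone-density advice is emitted, since the muscle mass estimate is above the cohort mean.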
なお、支援情報は、予測部25により出力されるものであってもよい。例えば、予測部25により出力された骨折リスクYが高い場合には、提示制御部26は、支援情報として、第1被検体に激しい運動を避けるように促す旨を提示装置60に提示する。 The support information may be output by the prediction unit 25. For example, if the fracture risk Y output by the prediction unit 25 is high, the presentation control unit 26 displays support information on the presentation device 60 urging the first subject to avoid strenuous exercise.
また、S36において、提示制御部26は、第1被検体の骨Bに骨折が発生する可能性が高い時期を示す情報を提示装置60に提示してもよい。また、S36において、提示制御部26は、所定期間経過するまでに第1被検体の骨Bに骨折が発生する確率を示す情報を提示装置60に提示してもよい。この場合、予測部25は、骨折リスクY等を考慮して、骨折が発生する可能性が高い骨折予測時期を予測したり、所定期間経過するまでに第1被検体の骨Bに骨折が発生する確率を予測したりすればよい。骨折予測時期は、例えば、5年後、10年後等、年単位でもよいし、5年6ヵ月後、10年6ヵ月後等、月単位でもよい。所定期間は、例えば3年後、5年後等、年単位でもよいし、3年6ヵ月後、5年6ヶ月後等、月単位でもよい。S36において、提示制御部26は、例えば、「10年後に骨折リスクYが80%以上になります。」というメッセージを提示装置60に提示してもよい。また、S36において、提示制御部26は、「3年後までの骨折リスクYは60%です。」というメッセージを提示装置60に提示してもよい。 In addition, in S36, the presentation control unit 26 may present to the presentation device 60 information indicating the time when a fracture is likely to occur in bone B of the first subject. In addition, in S36, the presentation control unit 26 may present to the presentation device 60 information indicating the probability of a fracture occurring in bone B of the first subject within a predetermined period of time. In this case, the prediction unit 25 may predict the predicted fracture time when a fracture is likely to occur, or the probability of a fracture occurring in bone B of the first subject within a predetermined period of time, taking into account the fracture risk Y, etc. The predicted fracture time may be in years, for example, 5 years or 10 years, or may be in months, for example, 5 years and 6 months or 10 years and 6 months. The predetermined period may be in years, for example, 3 years or 5 years, or may be in months, for example, 3 years and 6 months or 5 years and 6 months. In S36, the presentation control unit 26 may present to the presentation device 60 a message, for example, "Fracture risk Y will be 80% or more in 10 years." Furthermore, in S36, the presentation control unit 26 may display on the presentation device 60 a message stating, "The fracture risk Y within three years is 60%."
また、S36において、提示制御部26は、複数の時点での骨折リスクYや、骨折リスクYの推移を示したグラフを提示装置60に提示してもよい。 Furthermore, in S36, the presentation control unit 26 may present on the presentation device 60 the fracture risk Y at multiple points in time and a graph showing the progression of the fracture risk Y.
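Presenting the risk at several future time points could look like the following sketch. The linear growth curve is purely a hypothetical placeholder; in the system described here, the values at each time point would come from prediction model 32A.

```python
def risk_at(years, base_risk=0.42, annual_increase=0.05):
    """Hypothetical projection: risk grows by a fixed increment per
    year, capped at 1.0.  A stand-in for prediction model 32A."""
    return min(1.0, base_risk + annual_increase * years)

# Risk at multiple time points, e.g. for a graph of the progression.
timeline = {y: risk_at(y) for y in (0, 3, 5, 10)}

# A message of the kind shown in S36.
message = f"Fracture risk Y within three years is {risk_at(3):.0%}."
```

The `timeline` dictionary is the kind of data the presentation control unit 26 could plot as a graph of the fracture risk Y over time.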
以上説明した実施形態2における情報処理システム1Aにおいては、推定部27により、第1被検体の骨B及び筋肉Mが写る第1画像G1aから、第1推定モデル351と、第2推定モデル352と、第3推定モデル353との3つの推定モデルを用いて、第1推定情報として、第1被検体の骨Bの骨密度推定値E1と、骨質推定値E2と、筋肉量推定値E3とを推定することができる。 In the information processing system 1A in the second embodiment described above, the estimation unit 27 can estimate, as first estimated information, an estimated bone density value E1, an estimated bone quality value E2, and an estimated muscle mass value E3 of the bone B of the first subject from the first image G1a showing the bone B and muscle M of the first subject using three estimation models: a first estimation model 351, a second estimation model 352, and a third estimation model 353.
予測モデル32Aは、第1画像G1a、骨Bの骨密度推定値E1、骨質推定値E2、及び筋肉量推定値E3のうち少なくともいずれかを用いて、予測情報として、第1被検体の骨折リスクYを出力する。具体的には、予測部25により、第1画像G1aに写る部位である胸部に骨折が発生する骨折リスクYを高精度に予測することができる。 The prediction model 32A uses the first image G1a and at least one of the bone density estimate E1, bone quality estimate E2, and muscle mass estimate E3 of bone B to output the fracture risk Y of the first subject as prediction information. Specifically, the prediction unit 25 can accurately predict the fracture risk Y of a fracture occurring in the chest, which is the area captured in the first image G1a.
これにより、例えば、医療施設の医師は、患者である第1被検体の診断に、骨折リスクY等の情報処理システム1Aの出力結果を役立てることができ、より適切に患者の診断を行えると共に、より的確な支援情報を患者に提供することができる。また、例えば整形外科を専門としない医師であっても、情報処理システム1Aの出力結果を参照することで、整形外科医に近い精度で患者の診断を行うことが可能となる。 As a result, for example, a doctor at a medical facility can use the output results of information processing system 1A, such as fracture risk Y, to diagnose the first subject, who is a patient, allowing them to make a more appropriate diagnosis of the patient and provide the patient with more accurate support information. Furthermore, even a doctor who does not specialize in orthopedics can diagnose the patient with an accuracy close to that of an orthopedic surgeon by referring to the output results of information processing system 1A.
支援情報として、医師等は、例えば骨密度推定値E1の骨折リスクYへの影響度が大きい第1被検体に対して、骨密度を増加させる治療方針を提案する。また、医師等は、骨質推定値E2の骨折リスクYへの影響度が大きい第1被検体に対しては、支援情報として骨質を改善させる治療方針を提案する。また、医師等は、筋肉量推定値E3の骨折リスクYへの影響度が大きい第1被検体に対しては、支援情報として姿勢維持に関わる筋肉量の増加対策、又は姿勢改善の対策を提案する。更に、医師等は、支援情報に基づいて、第1被検体に対して、食事や運動療法を推奨するか、投薬するのか否か、又は使用する薬の種類等を決めることができる。 As support information, the doctor etc. may propose a treatment plan to increase bone density for the first subject, for example, whose bone density estimate E1 has a large impact on fracture risk Y. Furthermore, the doctor etc. may propose a treatment plan to improve bone quality as support information for the first subject, whose bone quality estimate E2 has a large impact on fracture risk Y. Furthermore, the doctor etc. may propose measures to increase muscle mass related to maintaining posture, or measures to improve posture as support information for the first subject, whose muscle mass estimate E3 has a large impact on fracture risk Y. Furthermore, based on the support information, the doctor etc. can decide whether to recommend a diet or exercise therapy to the first subject, whether to prescribe medication, or the type of medication to use, etc.
ここで、図13は、予測装置10Aによる第1被検体の骨折リスクYの予測結果と各対策の効果とを示す図である。図13には、推定部27により、第1被検体の骨密度推定値E1が「0.985」、骨質推定値E2が「1.123」、筋肉量推定値E3が「20.54」であると推定された例が示されている。なお、各推定値が高い程、第1被検体の骨Bの強度が大きく、筋肉量が多いことを示している。また、図13には、予測部25により、第1被検体の骨折リスクYが「0.42」であると推定された例が示されている。骨折リスクYの値が大きい程、骨折が発生する可能性が高いことを示している。 Here, Figure 13 shows the results of the prediction of the fracture risk Y of the first subject by the prediction device 10A and the effects of each countermeasure. Figure 13 shows an example in which the estimation unit 27 estimates that the bone density estimate E1 of the first subject is "0.985", the bone quality estimate E2 is "1.123", and the muscle mass estimate E3 is "20.54". Note that the higher the estimated values, the greater the strength of the bone B of the first subject and the greater the muscle mass. Figure 13 also shows an example in which the prediction unit 25 estimates that the fracture risk Y of the first subject is "0.42". The higher the value of fracture risk Y, the higher the likelihood of a fracture occurring.
このように、提示装置60に、骨密度推定値E1、骨質推定値E2、筋肉量推定値E3、及び骨折リスクYが数値化されて提示されるので、医療施設の医師等は、第1被検体に対して、より具体的な診断結果を伝えることができる。 In this way, the estimated bone density value E1, estimated bone quality value E2, estimated muscle mass value E3, and fracture risk Y are quantified and presented on the presentation device 60, allowing doctors and others at medical facilities to communicate more specific diagnosis results to the first subject.
また、図13には、第1被検体に対して、骨密度を5%増加させるという「対策1」を施した場合、骨折リスクYが「-12%」低くなるという効果が示されている。また、第1被検体に対して、骨質を5%増加させるという「対策2」を施した場合、骨折リスクYが「-7%」低くなるという効果が示されている。また、第1被検体に対して、筋肉量を7%増加させるという「対策3」を施した場合、骨折リスクYが「-10%」低くなるという効果が示されている。 Figure 13 also shows that when "Measure 1" of increasing bone density by 5% is implemented on the first subject, the fracture risk Y is reduced by "-12%." It also shows that when "Measure 2" of increasing bone quality by 5% is implemented on the first subject, the fracture risk Y is reduced by "-7%." It also shows that when "Measure 3" of increasing muscle mass by 7% is implemented on the first subject, the fracture risk Y is reduced by "-10%."
上記した対策1~3に示したように、骨密度推定値E1、骨質推定値E2、及び筋肉量推定値E3の値を適宜変更することによって、骨折リスクYを高くしている要因を特定することができる。この場合、第1被検体の骨密度が骨折リスクYを高くしている要因であることが特定できる。従って、医師等は、第1被検体に対して、骨密度を増加させることを推奨することで、第1被検体の骨折リスクYを効果的に低下させることが期待できる。 As shown in measures 1 to 3 above, by appropriately changing the values of the estimated bone density value E1, estimated bone quality value E2, and estimated muscle mass value E3, it is possible to identify the factors that are increasing the fracture risk Y. In this case, it is possible to identify that the bone density of the first subject is the factor that is increasing the fracture risk Y. Therefore, by recommending that the first subject increase their bone density, doctors and others can expect to effectively reduce the fracture risk Y of the first subject.
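The factor identification in measures 1 to 3 amounts to a one-at-a-time sensitivity analysis: improve each input by the stated percentage, re-evaluate the risk, and pick the input whose change reduces the risk the most. A sketch with a hypothetical linear risk function standing in for prediction model 32A (coefficients chosen so the baseline lands near the 0.42 of Fig. 13):

```python
def risk(e1, e2, e3):
    """Hypothetical stand-in for prediction model 32A: lower bone
    density, bone quality, or muscle mass raises the fracture risk."""
    return max(0.0, min(1.0, 1.3 - 0.5*e1 - 0.2*e2 - 0.008*e3))

base = {"e1": 0.985, "e2": 1.123, "e3": 20.54}   # Fig. 13 estimates
baseline = risk(**base)

# Improve each factor by the measure's percentage and record the effect.
measures = {"e1": 1.05, "e2": 1.05, "e3": 1.07}  # +5%, +5%, +7%
effects = {}
for k, factor in measures.items():
    improved = dict(base, **{k: base[k] * factor})
    effects[k] = risk(**improved) - baseline      # negative = risk reduced

dominant = min(effects, key=effects.get)  # input with the biggest reduction
```

Under these toy coefficients the bone density estimate E1 is the dominant factor, mirroring the conclusion drawn from measures 1 to 3.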
〔その他の実施形態1〕
上記した実施形態1の情報処理システム1では、予測装置10の学習部24が予測モデル32を生成するものとしたが、これに限らず、予測装置10以外の別の装置が予測モデル32を生成してもよい。この場合、別の装置が生成した予測モデル32を記憶部3に記憶させておけばよい。なお、別の装置が生成した予測モデル32は、図示しない通信部が通信ネットワークを介して受信し、制御部2が受信された予測モデル32を記憶部3に記憶させればよい。この構成によれば、記憶部3に、学習用データ33及び教師用データ34を記憶させる必要がない。
Other embodiment 1
In the information processing system 1 of the first embodiment described above, the learning unit 24 of the prediction device 10 generates the prediction model 32. However, this is not limiting, and the prediction model 32 may be generated by a device other than the prediction device 10. In this case, the prediction model 32 generated by the other device may be stored in the storage unit 3. The prediction model 32 generated by the other device may be received by a communication unit (not shown) via a communication network, and the control unit 2 may store the received prediction model 32 in the storage unit 3. With this configuration, there is no need to store the learning data 33 and the teacher data 34 in the storage unit 3.
また、上記した実施形態1の情報処理システム1では、制御部2及び記憶部3が予測装置10に備えられているものとしたが、これに限定されない。予測装置10は、クラウド上に導入されたクラウド型の装置であってもよい。この場合、第1画像G1及び第2画像G2を、通信ネットワークを介してクラウド上の予測装置10に送信し、予測装置10が予測した予測情報を、通信ネットワークを介して提示装置60が受信する。また、予測装置10は、医療施設、又は解析サービスを提供する企業内に設けられたオンプレミス型の装置であってもよい。 Furthermore, in the information processing system 1 of the above-described first embodiment, the control unit 2 and the storage unit 3 are provided in the prediction device 10, but this is not limited to this. The prediction device 10 may be a cloud-based device installed on the cloud. In this case, the first image G1 and the second image G2 are transmitted to the prediction device 10 on the cloud via a communications network, and the prediction information predicted by the prediction device 10 is received by the presentation device 60 via the communications network. Furthermore, the prediction device 10 may be an on-premise device installed in a medical facility or a company that provides analysis services.
また、上記した実施形態1の情報処理システム1では、第1被検体の骨に骨折が発生する骨折リスクYを予測するものとしたが、これに限定されない。情報処理システム1は、第1被検体において、骨粗鬆症、脊柱側弯症、脊柱管狭窄症、椎間板変性症、強直性脊椎炎、脊髄損傷、軟骨損傷、骨髄炎、骨棘、筋萎縮、脊髄性筋萎縮症、変形性関節症、骨軟部腫瘍等が発生するリスクを予測してもよい。 Furthermore, while the information processing system 1 of the first embodiment described above predicts the fracture risk Y of a fracture occurring in the bones of the first subject, this is not limited to this. The information processing system 1 may also predict the risk of developing osteoporosis, scoliosis, spinal stenosis, intervertebral disc degeneration, ankylosing spondylitis, spinal cord injury, cartilage damage, osteomyelitis, osteophytes, muscular atrophy, spinal muscular atrophy, osteoarthritis, bone and soft tissue tumors, etc. in the first subject.
また、上記した実施形態1の情報処理システム1では、予測装置10は、予測情報として、第1画像G1に写る部位に異常が発生する可能性である骨折リスクYを出力するものとしたが、これに限定されない。予測情報は、第1画像G1に写らない部位に異常が発生する可能性を示すものであってもよい。 Furthermore, in the information processing system 1 of the above-described first embodiment, the prediction device 10 outputs the fracture risk Y, which is the possibility of an abnormality occurring in a region shown in the first image G1, as prediction information, but this is not limited to this. The prediction information may also indicate the possibility of an abnormality occurring in a region not shown in the first image G1.
例えば、予測部25は、第1被検体の胸部が写る第1画像G1、及び胸部に対応した部位の第2画像G2から、予測モデル32を用いて、腰椎又は大腿骨の骨折リスクYを予測してもよい。この場合、予測モデル32は、第1時点で撮像された第2被検体の胸部が写る第3画像、及び胸部に対応した部位が写る第4画像を説明変数とし、第2時点で第2被検体の腰椎又は大腿骨に発生した骨折に関する異常情報を目的変数として用いた機械学習により生成される。 For example, the prediction unit 25 may use a prediction model 32 to predict the fracture risk Y of the lumbar vertebrae or femur from a first image G1 showing the chest of a first subject and a second image G2 of a region corresponding to the chest. In this case, the prediction model 32 is generated by machine learning using a third image showing the chest of a second subject taken at a first time point and a fourth image showing a region corresponding to the chest as explanatory variables, and abnormality information related to a fracture that occurred in the lumbar vertebrae or femur of the second subject at a second time point as a target variable.
また、上記した実施形態1の情報処理システム1では、予測モデル32の入力層32aに、第1画像G1と第2画像G2を入力するものとしたが、これに限らず、第2画像G2の代わりに、第1被検体の筋力測定の結果を入力してもよい。この場合、予測モデル32は、第1時点で撮像された第2被検体の第3画像、及び第1時点での第2被検体の筋力測定の結果を説明変数とし、第2時点で第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成されたものであればよい。 Furthermore, in the information processing system 1 of the above-described first embodiment, the first image G1 and the second image G2 are input to the input layer 32a of the prediction model 32. However, this is not limited to this, and the results of the muscle strength measurement of the first subject may be input instead of the second image G2. In this case, the prediction model 32 may be generated by machine learning using the third image of the second subject taken at the first time point and the results of the muscle strength measurement of the second subject at the first time point as explanatory variables, and abnormality information regarding an abnormality that occurred in the bones of the second subject at the second time point as a target variable.
また、上記した実施形態1の情報処理システム1では、予測装置10は、図4のS14にて補正された第1画像G1を、S15にて予測モデル32に入力するものとしたが、これに限定されない。予測装置10は、図4のS13及びS14を行わず、S15にて補正されていない第1画像を予測モデル32に入力してもよい。 Furthermore, in the information processing system 1 of the first embodiment described above, the prediction device 10 inputs the first image G1 corrected in S14 of FIG. 4 to the prediction model 32 in S15, but this is not limited to this. The prediction device 10 may not perform S13 and S14 of FIG. 4, and may input the first image that has not been corrected in S15 to the prediction model 32.
また、上記した実施形態1の情報処理システム1では、予測装置10は、予測モデル32として、ニューラルネットワークを用いるものとしたが、これに限らず、他にも、線形回帰モデル等を用いてもよい。 Furthermore, in the information processing system 1 of the first embodiment described above, the prediction device 10 uses a neural network as the prediction model 32, but this is not limited to this, and other models such as a linear regression model may also be used.
〔その他の実施形態2〕
上記した実施形態1の予測装置10では、1つのAIを有する予測モデル32が記憶部3に記憶されているものとしたが、これに限らず、記憶部3に複数のAIが記憶されていてもよい。例えば、第1被検体の骨密度及び/又は骨質を示す情報を出力する第1推定モデルと、第1被検体の筋肉量を示す情報を出力する第2推定モデルとが、記憶部3に記憶されていてもよい。
Other embodiment 2
In the prediction device 10 of the first embodiment described above, the prediction model 32 having one AI is stored in the storage unit 3, but this is not limiting, and multiple AIs may be stored in the storage unit 3. For example, the storage unit 3 may store a first estimation model that outputs information indicating the bone density and/or bone quality of the first subject, and a second estimation model that outputs information indicating the muscle mass of the first subject.
上記第1推定モデルは、第2被検体の第1画像G1を説明変数とし、第2被検体の骨密度及び/又は骨質の測定結果を示す骨密度情報を目的変数として用いた機械学習により生成されている。また、第2推定モデルは、第2被検体の第2画像G2を説明変数とし、第2被検体の筋肉量の測定結果を示す筋肉量情報を目的変数として用いた機械学習により生成されている。 The first estimation model is generated by machine learning using the first image G1 of the second subject as an explanatory variable and bone density information indicating the measurement results of the bone density and/or bone quality of the second subject as a target variable. The second estimation model is generated by machine learning using the second image G2 of the second subject as an explanatory variable and muscle mass information indicating the measurement results of the muscle mass of the second subject as a target variable.
図4のS15において、予測部25は、第1被検体の第1画像G1を第1推定モデルに入力することで、第1被検体の骨密度を示す情報である骨密度推定値、及び第1被検体の骨質を示す情報である骨質推定値を出力する。 In S15 of FIG. 4, the prediction unit 25 inputs the first image G1 of the first subject into the first estimation model, and outputs a bone density estimate, which is information indicating the bone density of the first subject, and a bone quality estimate, which is information indicating the bone quality of the first subject.
ここで、骨の骨質とは、例えば、骨の統計的な性質、骨の形状的な性質、骨の力学的な性質、及び骨の化学的な性質のうち少なくとも1つに基づく性質である。骨質は、第1被検体の属性情報に関する情報を含んでいてもよい。 Here, bone quality refers to a property based on at least one of the statistical properties of bone, the geometric properties of bone, the mechanical properties of bone, and the chemical properties of bone. Bone quality may also include information regarding the attribute information of the first subject.
骨質は、例えば、骨代謝マーカ、性別、人種、閉経の有無、年齢、皮質骨の状態、海綿骨の状態、海綿骨の骨梁の状態、疾病情報、骨評価情報、薬剤情報、骨折の有無、骨折の数、骨折の場所、及び骨折歴の少なくとも1つに基づくものを用いることができる。より具体的には、骨質は、例えば、骨形成マーカ、骨吸収マーカ、骨質マーカ(例えば、ビタミンKの値)、皮質骨の厚さ、骨梁の密度、骨梁の方向、及び海綿骨構造指標(trabecular bone score)のうち少なくとも1つに基づくものを用いることができる。 Bone quality can be based on at least one of, for example, bone metabolism markers, sex, race, whether or not the patient has undergone menopause, age, cortical bone condition, cancellous bone condition, cancellous bone trabecular condition, disease information, bone evaluation information, medication information, presence or absence of fracture, number of fractures, location of fracture, and fracture history. More specifically, bone quality can be based on at least one of, for example, bone formation markers, bone resorption markers, bone quality markers (e.g., vitamin K level), cortical bone thickness, trabecular density, trabecular orientation, and trabecular bone score.
上記疾病情報には、例えば、骨粗鬆症、リウマチ、骨壊死(例えば、大腿骨頭壊死症等)、全身性硬化症、腎臓病、及び大理石骨病等の少なくとも1つが含まれていてもよい。骨評価情報は、骨折リスク評価ツール(FRAX(登録商標):Fracture Risk Assessment Tool)により評価された情報が含まれていてもよい。薬剤情報には、例えば、骨吸収を抑制する薬剤、骨形成を促進する薬剤、及びその他薬剤(例えば、カルシウム製剤、ビタミン製剤、女性ホルモン製剤等)の少なくとも1つを含む薬剤に関する商品名、一般名、投与量、投与期間、及び投与方法(例えば、経口、静脈内注射、筋肉内注射、皮下注射等)の少なくとも1つが含まれていてもよい。 The disease information may include, for example, at least one of osteoporosis, rheumatism, osteonecrosis (e.g., femoral head necrosis, etc.), systemic sclerosis, kidney disease, and osteopetrosis. The bone assessment information may include information evaluated using a fracture risk assessment tool (FRAX (registered trademark): Fracture Risk Assessment Tool). The drug information may include, for example, at least one of the trade name, generic name, dosage, administration period, and administration method (e.g., oral, intravenous injection, intramuscular injection, subcutaneous injection, etc.) for drugs including at least one of drugs that inhibit bone resorption, drugs that promote bone formation, and other drugs (e.g., calcium preparations, vitamin preparations, female hormone preparations, etc.).
また、骨質として、例えば、髄腔形状のタイプを含めてもよい。髄腔形状は、例えば、Dorr分類を用いることができる。髄腔形状は、例えば、皮質骨の厚み及び髄腔の形状の少なくとも1つを用いて、次のように分類することができる。
・Type A:皮質骨が厚く、髄腔が狭く細いタイプ。
・Type B:TypeAとTypeCの中間であり、髄腔が狭くも広くもないタイプ。
・Type C:皮質骨が薄く、髄腔が広がっているタイプ。
The bone quality may also include, for example, the type of medullary cavity shape. For example, the Dorr classification can be used for the medullary cavity shape. The medullary cavity shape can be classified as follows using at least one of the thickness of the cortical bone and the shape of the medullary cavity.
- Type A: The cortical bone is thick and the medullary cavity is narrow and thin.
- Type B: A type that is between Type A and Type C, with a medullary cavity that is neither narrow nor wide.
- Type C: A type in which the cortical bone is thin and the medullary cavity is wide.
なお、骨密度推定値は、骨の密度に関連する値であってもよい。骨密度推定値は、例えば、単位面積当りの骨ミネラル密度〔g/cm2〕、単位体積当りの骨ミネラル密度〔g/cm3〕、YAM〔%〕、Tスコア、及びZスコアの少なくとも1種類によって表される。YAM〔%〕は、“Young Adult Mean”の略であって、若年成人平均パーセントと呼ばれることがある。骨密度推定値は、骨粗鬆症のガイドライン、例えば、「一般社団法人 日本骨粗鬆症学会 予防と治療ガイドライン2015年版」等に使われる指標を用いてもよいし、独自の指標を用いてもよい。 The bone density estimate may be a value related to bone density. The bone density estimate is expressed, for example, by at least one of bone mineral density per unit area (g/cm2), bone mineral density per unit volume (g/cm3), YAM (%), T-score, and Z-score. YAM (%) is an abbreviation for "Young Adult Mean" and is sometimes called the young adult average percent. The bone density estimate may use an index used in osteoporosis guidelines, such as the "2015 Prevention and Treatment Guidelines of the Japan Osteoporosis Society," or may use an original index.
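The indices listed above are linked by standard definitions: YAM is the BMD expressed as a percentage of the young-adult mean, the T-score is the deviation from the young-adult mean in standard-deviation units, and the Z-score is the deviation from the age-matched mean in standard-deviation units. A sketch with hypothetical reference statistics:

```python
def bmd_indices(bmd, young_adult_mean, young_adult_sd,
                same_age_mean, same_age_sd):
    """Standard definitions: YAM [%] relative to the young-adult mean,
    T-score in young-adult SD units, Z-score in age-matched SD units."""
    yam = 100.0 * bmd / young_adult_mean
    t_score = (bmd - young_adult_mean) / young_adult_sd
    z_score = (bmd - same_age_mean) / same_age_sd
    return yam, t_score, z_score

# Hypothetical reference values for illustration (areal BMD, g/cm2).
yam, t, z = bmd_indices(bmd=0.80,
                        young_adult_mean=1.00, young_adult_sd=0.12,
                        same_age_mean=0.85, same_age_sd=0.12)
```

A BMD of 0.80 against a young-adult mean of 1.00 gives YAM = 80%, i.e. the same underlying measurement expressed on a different scale.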
また、予測部25は、第1被検体の第2画像G2を第2推定モデルに入力することで、第1被検体の筋肉量を示す情報である筋肉量推定値を出力する。そして、図4のS16において、提示制御部26は、骨折リスクYに加えて、骨密度推定値、骨質推定値、及び筋肉量推定値を、提示装置60に提示する。 Furthermore, the prediction unit 25 inputs the second image G2 of the first subject into the second estimation model, thereby outputting a muscle mass estimate, which is information indicating the muscle mass of the first subject. Then, in S16 of FIG. 4, the presentation control unit 26 presents the bone density estimate, bone quality estimate, and muscle mass estimate on the presentation device 60, in addition to the fracture risk Y.
第1推定モデルは、第2被検体の第1画像G1を説明変数とし、第2被検体の骨密度及び骨質の測定結果を示す骨密度情報を目的変数として用いた機械学習により生成されたものである。第2推定モデルは、第2被検体の第2画像G2を説明変数とし、第2被検体の筋肉量の測定結果を示す筋肉量情報を目的変数として用いた機械学習により生成されたものである。 The first estimation model was generated by machine learning using the first image G1 of the second subject as an explanatory variable and bone density information indicating the measurement results of the second subject's bone density and bone quality as a target variable. The second estimation model was generated by machine learning using the second image G2 of the second subject as an explanatory variable and muscle mass information indicating the measurement results of the second subject's muscle mass as a target variable.
なお、第2被検体の骨密度は、例えば、DXA法、超音波法、MD(Micro Densitometry)法、及び定量的CT(Quantitative Computed Tomography)法等を用いて測定できる。DXA法を用いて骨密度を測定するDXA装置では、腰椎の骨密度が測定される場合、被検体の腰椎に対してその正面からX線が照射される。また、DXA装置では、大腿骨近位部の骨密度が測定される場合、被検体の大腿骨近位部に対してその正面からX線が照射される。ここで、「腰椎に対してその正面」及び「大腿骨近位部に対してその正面」とは、腰椎及び大腿骨近位部等の撮影部位に正しく向き合う方向を意図しており、被検体の体の腹側であってもよいし、被検体の背中側であってもよい。なお、大腿骨近位部は、例えば、頚部、転子部、骨幹部、及び全大腿骨近位部(頚部、転子部、及び骨幹部)の少なくとも1つの部位を含む。MD法は、例えば手部にX線が照射される。 The bone density of the second subject can be measured using, for example, the DXA method, the ultrasound method, the MD (Micro Densitometry) method, and the quantitative CT (Quantitative Computed Tomography) method. In a DXA device that measures bone density using the DXA method, when measuring bone density of the lumbar vertebrae, X-rays are irradiated from the front of the subject's lumbar vertebrae. In addition, in a DXA device, when measuring bone density of the proximal femur, X-rays are irradiated from the front of the subject's proximal femur. Here, "front of the lumbar vertebrae" and "front of the proximal femur" refer to the direction that correctly faces the imaging site, such as the lumbar vertebrae and proximal femur, and may be on the ventral side of the subject's body or on the back side of the subject. The proximal femur includes, for example, at least one of the neck, trochanter, shaft, and the entire proximal femur (neck, trochanter, and shaft). In the MD method, for example, X-rays are irradiated onto the hand.
第2被検体の骨質の測定方法としては、第2被検体の尿又は血液中の骨代謝マーカの濃度を算出する方法を用いることができる。骨代謝マーカとしては、例えば、I型コラーゲン架橋N-テロペプチド(NTX)、I型コラーゲン架橋C-テロペプチド(CTX)、酒石酸抵抗性酸ホスファターゼ(TRACP-5b)、デオキシピリジノリン(DPD)等を用いることができる。 The bone quality of the second subject can be measured by calculating the concentration of a bone metabolism marker in the urine or blood of the second subject. Examples of bone metabolism markers that can be used include type I collagen cross-linked N-telopeptide (NTX), type I collagen cross-linked C-telopeptide (CTX), tartrate-resistant acid phosphatase (TRACP-5b), and deoxypyridinoline (DPD).
また、第2被検体の筋肉量の測定としては、例えば、身体機能測定、体組成計による測定、ロコモ度テスト、サルコペニア診断、重心動揺測定、下肢筋力測定、立ち上がり速度測定、超音波画像診断による筋肉の厚み測定等を用いることができる。 Furthermore, the muscle mass of the second subject can be measured by, for example, physical function measurement, measurement using a body composition scale, locomotive syndrome test, sarcopenia diagnosis, center of gravity sway measurement, lower limb muscle strength measurement, standing speed measurement, muscle thickness measurement using ultrasound imaging diagnosis, etc.
上記したその他の実施形態2の情報処理システム1によれば、第1被検体の第1画像G1及び第2画像G2から、第1被検体の骨密度推定値、骨質推定値、及び筋肉量推定値を出力することができる。これにより、医療施設の医師等は、提示装置60に提示された骨密度推定値、骨質推定値、及び筋肉量推定値を参照することで、患者である第1被検体に対して、各推定値を考慮したより具体的な診断結果を提供できる。 According to the information processing system 1 of alternative embodiment 2 described above, it is possible to output the bone density estimate, bone quality estimate, and muscle mass estimate of the first subject from the first image G1 and the second image G2 of the first subject. This allows doctors and other staff at medical facilities to refer to the bone density, bone quality, and muscle mass estimates presented on the presentation device 60 and provide the first subject, who is a patient, with more specific diagnostic results that take each estimate into account.
〔その他の実施形態3〕
上記した情報処理システム1では、図4のS16において、提示制御部26は、骨折リスクYに加えて、骨密度推定値、骨質推定値、及び筋肉量推定値を提示装置60に提示するものとしたが、更に、第1被検体を支援する支援情報を提示装置60に提示してもよい。
Other embodiment 3
In the above-described information processing system 1, in S16 of FIG. 4, the presentation control unit 26 presents to the presentation device 60 the bone density estimate, the bone quality estimate, and the muscle mass estimate in addition to the fracture risk Y. However, the presentation control unit 26 may also present to the presentation device 60 support information for supporting the first subject.
この場合、予測部25は、骨折リスクY、骨密度推定値、骨質推定値、及び筋肉量推定値と、第1被検体の年齢及び/又は性別に応じた骨密度を示す情報、骨質を示す情報、及び筋肉量を示す基準情報とを比較することにより、第1被検体を支援する支援情報を出力すればよい。ここで、骨密度推定値、骨質推定値、及び筋肉量推定値は、第1被検体の推定情報に相当する。 In this case, the prediction unit 25 outputs support information to support the first subject by comparing the fracture risk Y, estimated bone density, estimated bone quality, and estimated muscle mass with reference information indicating bone density, bone quality, and muscle mass according to the age and/or sex of the first subject. Here, the estimated bone density, bone quality, and muscle mass correspond to the estimated information of the first subject.
例えば、予測部25は、第1被検体の年齢及び性別と同じ又は近い人の骨密度の平均値と比較して、骨密度推定値が低い場合、例えば第1被検体にカルシウムを多く摂取すること、日光に浴びること、運動すること等を促す支援情報を提示制御部26へ出力する。また、予測部25は、第1被検体と、年齢及び性別が同じ又は近い人の筋肉量の平均値と比較して、筋肉量推定値が低い場合、運動量を増やすことを促す支援情報を提示制御部26へ出力する。 For example, if the estimated bone density value is low compared to the average bone density of people of the same or similar age and sex as the first subject, the prediction unit 25 outputs support information to the presentation control unit 26 that encourages the first subject to take in more calcium, get more sunlight, exercise, etc. Furthermore, if the estimated muscle mass value is low compared to the average muscle mass of people of the same or similar age and sex as the first subject, the prediction unit 25 outputs support information to the presentation control unit 26 that encourages the first subject to increase the amount of exercise.
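The comparison logic just described can be sketched as follows. The simple "below the age/sex-matched average" rule and the dictionary keys are assumptions made for illustration; the text specifies only the kind of advice that is output.

```python
def support_information(estimates, reference):
    """Compare the first subject's estimated values with age/sex-matched
    reference averages and return lifestyle-advice strings.

    estimates : dict with the subject's estimated values
    reference : dict with average values for people of the same or
                similar age and sex (illustrative keys)
    """
    advice = []
    if estimates["bone_density"] < reference["bone_density"]:
        advice.append("take in more calcium, get sunlight, and exercise")
    if estimates["muscle_mass"] < reference["muscle_mass"]:
        advice.append("increase the amount of exercise")
    return advice
```

The returned strings would then be passed to the presentation control unit 26 for display on the presentation device 60.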
なお、予測部25は、解析部22により、第2画像G2であるエコー画像の輝度を解析することにより、第1被検体の属性情報を予測してもよい。当該属性情報は、第1被検体の年齢、性別、及び筋肉の質のうち少なくともいずれかを含む情報である。 The prediction unit 25 may predict attribute information of the first subject by analyzing the brightness of the echo image, which is the second image G2, using the analysis unit 22. The attribute information includes at least one of the age, sex, and muscle quality of the first subject.
〔その他の実施形態4〕
上記した実施形態1の情報処理システム1では、予測部25は、特定の部位である胸部の骨の骨折リスクYを出力するものとしたが、これに限定されない。予測部25は、第1画像G1及び第2画像G2から、予測モデル32を用いて、第1被検体の骨の複数の部位毎に、骨折リスクを出力してもよい。
Other embodiment 4
In the information processing system 1 of the first embodiment described above, the prediction unit 25 outputs the fracture risk Y of the bones in the chest, which is a specific region, but this is not limiting. The prediction unit 25 may output the fracture risk for each of multiple regions of the bones of the first subject from the first image G1 and the second image G2 using the prediction model 32.
この場合、第1画像G1には、第1被検体の複数の部位の骨が写っている。第2画像G2には、第1被検体の複数の部位の筋肉が写っている。予測モデル32は、第2被検体の複数の部位の骨が写る第3画像、及び第2被検体の複数の部位の筋肉が写る第4画像を説明変数とし、第3画像及び/又は第4画像が撮像されてから所定期間に発生した部位毎の異常発生情報を目的変数として用いた機械学習により生成される。なお、第3画像と第4画像の撮像時期は異なっていてもよいし、同じであってもよい。第3画像と第4画像の撮像時期が異なる場合、どちらかの撮像日を基準日とすればよい。また、第3画像の撮像日と第4画像の撮像日の中間に該当する日を基準日としてもよい。 In this case, the first image G1 shows bones at multiple locations on the first subject, and the second image G2 shows muscles at multiple locations on the first subject. The prediction model 32 is generated by machine learning using, as explanatory variables, the third image, which shows bones at multiple locations on the second subject, and the fourth image, which shows muscles at multiple locations on the second subject, and using, as the objective variable, abnormality occurrence information for each location that occurred within a predetermined period after the third image and/or the fourth image were captured. Note that the third image and the fourth image may be captured at different times or at the same time. When they are captured at different times, the capture date of either image may be used as the reference date, or a date halfway between the capture date of the third image and the capture date of the fourth image may be used as the reference date.
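The reference-date rule described above can be sketched minimally as follows. The function name and the `use_midpoint` switch are assumptions for illustration, not identifiers from the system.

```python
from datetime import date

def reference_date(third_capture, fourth_capture, use_midpoint=True):
    """Choose the reference date from which the predetermined period
    is counted. If the third and fourth images were captured on the
    same day, that day is used; otherwise either one of the capture
    dates or the day halfway between them may serve as the reference."""
    if third_capture == fourth_capture:
        return third_capture
    if not use_midpoint:
        return third_capture  # or fourth_capture, per operator choice
    earlier, later = sorted((third_capture, fourth_capture))
    return earlier + (later - earlier) // 2
```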
上記した構成によれば、医師等は、第1被検体に対して、骨Bの部位毎にそれぞれの骨折リスクYを伝えることができ、よりきめ細やかな診断を行うことができる。なお、予測部25は、第1被検体の骨Bの部位毎の骨折リスクYを組み合わせて、第1被検体の全体の骨の骨折リスクYを予測してもよい。 With the above-described configuration, a doctor or other medical professional can inform the first subject of the fracture risk Y for each part of bone B, enabling a more detailed diagnosis. The prediction unit 25 may also combine the fracture risks Y for each part of bone B of the first subject to predict the fracture risk Y for all of the first subject's bones.
また、予測部25は、第1画像G1及び第2画像G2から、予測モデル32を用いて、第1被検体の骨の複数の部位毎に、骨折リスクY、骨密度推定値、骨質推定値、及び筋肉量推定値を出力してもよい。部位毎とは、例えば、頚椎、胸椎、腰椎等の領域毎に分けられたものであってもよいし、腰椎L1,L2,L3,L4等といった椎体毎に分けられたものであってもよい。 Furthermore, the prediction unit 25 may use the prediction model 32 to output, from the first image G1 and the second image G2, the fracture risk Y, the bone density estimate, the bone quality estimate, and the muscle mass estimate for each of multiple bone regions of the first subject. The regions may, for example, be divided into areas such as the cervical vertebrae, thoracic vertebrae, and lumbar vertebrae, or divided into individual vertebral bodies such as the lumbar vertebrae L1, L2, L3, and L4.
そして、予測部25は、骨折リスクY、骨密度推定値、骨質推定値、及び筋肉量推定値に基づいて、第1被検体の骨の複数の部位の中から、骨折への関連性が高い注目部位を特定して、提示制御部26により、上記注目部位を提示してもよい。これにより、医師等は、第1被検体の診断時に、上記注目部位を重点的に診察することで、より適切な治療を行うことが可能となる。 The prediction unit 25 may then identify a region of interest that is highly related to fracture from among multiple regions of the bones of the first subject based on the fracture risk Y, the estimated bone density value, the estimated bone quality value, and the estimated muscle mass value, and present the region of interest via the presentation control unit 26. This allows doctors and other medical professionals to provide more appropriate treatment by focusing their examination on the region of interest when diagnosing the first subject.
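The selection of a region of interest described above can be sketched minimally as follows. How the fracture risk Y and the three estimated values are combined into a single per-region score is not specified in the text, so this sketch assumes a precomputed score per region; the function and label names are illustrative.

```python
def identify_focus_region(per_region_scores):
    """Pick the region of interest most related to fracture.

    per_region_scores maps a region label (e.g. 'L1'..'L4') to a
    combined score derived from the fracture risk Y and the bone
    density, bone quality, and muscle mass estimates; the region
    with the highest score is returned for presentation."""
    return max(per_region_scores, key=per_region_scores.get)
```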
〔その他の実施形態5〕
上記した実施形態2の情報処理システム1Aでは、予測装置10Aの学習部24が予測モデル32A及び推定モデル35を生成するものとしたが、これに限らず、予測装置10A以外の別の装置が予測モデル32A及び推定モデル35を生成してもよい。この場合、別の装置が生成した予測モデル32A及び推定モデル35を記憶部3に記憶させておけばよく、学習部24はなくてもよい。なお、別の装置が生成した予測モデル32A及び推定モデル35は、図示しない通信部が通信ネットワークを介して受信し、受信した予測モデル32A及び推定モデル35を制御部2が記憶部3に記憶させればよい。あるいは、別の装置が生成した予測モデル32A及び推定モデル35を、USBメモリ又はDVD等の記録媒体に記録した後、当該記録媒体を介して、記憶部3に予測モデル32A及び推定モデル35を記憶させてもよい。
Other embodiment 5
In the information processing system 1A of the second embodiment described above, the learning unit 24 of the prediction device 10A generates the prediction model 32A and the estimation model 35. However, this is not limiting, and the prediction model 32A and the estimation model 35 may be generated by a device other than the prediction device 10A. In this case, the prediction model 32A and the estimation model 35 generated by the other device may be stored in the storage unit 3, and the learning unit 24 may be omitted. Note that the prediction model 32A and the estimation model 35 generated by the other device may be received by a communication unit (not shown) via a communication network, and the control unit 2 may store the received prediction model 32A and the estimation model 35 in the storage unit 3. Alternatively, the prediction model 32A and the estimation model 35 generated by the other device may be recorded on a recording medium such as a USB memory or a DVD, and then the prediction model 32A and the estimation model 35 may be stored in the storage unit 3 via the recording medium.
また、上記した実施形態2の情報処理システム1Aでは、制御部2及び記憶部3が予測装置10Aに備えられているものとしたが、これに限定されない。予測装置10Aは、クラウド上に導入されたクラウド型の装置であってもよい。この場合、第1被検体の将来の骨の状態に関する推定情報を、通信ネットワークを介してクラウド上の予測装置10Aに送信し、予測装置10Aが予測した予測情報を、通信ネットワークを介して提示装置60が受信する。また、予測装置10Aは、医療施設、又は解析サービスを提供する企業内に設けられたオンプレミス型の装置であってもよい。 Furthermore, in the information processing system 1A of the second embodiment described above, the control unit 2 and the storage unit 3 are provided in the prediction device 10A, but this is not limiting. The prediction device 10A may be a cloud-based device deployed on the cloud. In this case, estimated information regarding the future bone condition of the first subject is transmitted to the prediction device 10A on the cloud via a communication network, and the prediction information predicted by the prediction device 10A is received by the presentation device 60 via the communication network. Alternatively, the prediction device 10A may be an on-premise device installed in a medical facility or in a company that provides analysis services.
また、情報処理システム1Aは、第1画像G1及び第3画像を撮像する撮像装置と、予測装置10Aとが一体に構成されていてもよい。この場合、画像管理装置40及び電子カルテ管理装置50は不要となる。 Furthermore, the information processing system 1A may be configured such that the imaging device that captures the first image G1 and the third image and the prediction device 10A are integrated into one unit. In this case, the image management device 40 and the electronic medical record management device 50 are not required.
上記した実施形態2の情報処理システム1Aでは、予測情報は、第1画像G1aに写る部位に異常が発生する可能性を示すものとしたが、これに限定されない。予測情報は、第1画像G1aに写らない部位に異常が発生する可能性を示すものであってもよい。 In the information processing system 1A of the second embodiment described above, the prediction information indicates the possibility of an abnormality occurring in a region shown in the first image G1a, but this is not limiting. The prediction information may instead indicate the possibility of an abnormality occurring in a region not shown in the first image G1a.
例えば、予測部25は、第1被検体の胸部が写る第1画像G1aから、予測モデル32Aを用いて、腰椎又は大腿骨の骨折リスクYを予測してもよい。この場合、予測モデル32Aは、第2被検体の胸部が写る第2画像及び第2被検体の骨に関する情報を説明変数とし、第2被検体の腰椎又は大腿骨に発生した骨折に関する異常情報を目的変数として用いた機械学習により生成される。 For example, the prediction unit 25 may predict the fracture risk Y of the lumbar vertebrae or the femur from the first image G1a showing the chest of the first subject, using the prediction model 32A. In this case, the prediction model 32A is generated by machine learning using a second image showing the chest of the second subject and information about the bones of the second subject as explanatory variables, and abnormality information about a fracture that occurred in the lumbar vertebrae or the femur of the second subject as the objective variable.
上記した実施形態2の情報処理システム1Aでは、提示装置60に、第1被検体の骨の骨密度を示す情報、骨質を示す情報、筋肉量を示す情報、及び骨折リスクYに関する情報が数値形式で提示されるものとしたが、これに限らず、例えば、各情報がヒートマップ形式で提示されてもよい。また、骨質を示す情報は、第3画像の少なくとも一部をテクスチャ解析して得られる特徴量であってもよい。 In the information processing system 1A of the second embodiment described above, information indicating the bone density of the first subject's bones, information indicating bone quality, information indicating muscle mass, and information regarding the fracture risk Y are presented on the presentation device 60 in numerical form, but this is not limiting; for example, each piece of information may be presented in the form of a heat map. Furthermore, the information indicating bone quality may be a feature quantity obtained by texture analysis of at least a portion of the third image.
〔その他の実施形態6〕
上記した実施形態2の情報処理システム1Aでは、予測モデル32A及び推定モデル35として、ニューラルネットワークを用いるものとしたが、これに限らず、他にも、線形回帰モデル等を用いてもよい。
Other embodiment 6
In the information processing system 1A of the second embodiment described above, a neural network is used as the prediction model 32A and the estimation model 35, but this is not limiting, and other models such as a linear regression model may also be used.
上記した実施形態2では、予測部25は、特定の部位である胸部の第1画像G1aを用いて、胸部の骨Bの骨折リスクYを出力するものとしたが、これに限定されない。予測部25は、予測モデル32Aを用いて、第1被検体の骨Bの部位毎の異常を出力するようにしてもよい。ここで、部位毎に異常を出力するとは、例えば、頚椎、胸椎、腰椎等の領域毎、又は、各領域の椎骨(例えば胸椎T1~T12、腰椎L1~L5等)毎に、それぞれ骨折リスクY等を出力することをいう。 In the second embodiment described above, the prediction unit 25 outputs the fracture risk Y of the bones B in the chest, which is a specific region, using the first image G1a of the chest, but this is not limiting. The prediction unit 25 may also use the prediction model 32A to output an abnormality for each region of the bones B of the first subject. Here, outputting an abnormality for each region means, for example, outputting the fracture risk Y and the like for each area, such as the cervical vertebrae, thoracic vertebrae, and lumbar vertebrae, or for each vertebra in each area (for example, thoracic vertebrae T1 to T12 and lumbar vertebrae L1 to L5).
上記予測モデル32Aは、第3画像、及び第2被検体の骨Bの部位毎の情報を説明変数とし、第2時点で第2被検体の骨Bの部位毎に発生した骨折に関する異常情報を目的変数として用いた機械学習により生成されるものである。この構成によれば、医師等は、第1被検体に対して、骨Bの部位毎にそれぞれの骨折リスクYを伝えることができ、よりきめ細やかな診断を行うことができる。なお、予測部25は、第1被検体の骨Bの部位毎の骨折リスクYを組み合わせて、第1被検体の全体の骨の骨折リスクYを予測してもよい。 The prediction model 32A is generated by machine learning using the third image and information about each part of bone B of the second subject as explanatory variables, and abnormality information about fractures that occurred in each part of bone B of the second subject at the second time point as the objective variable. With this configuration, a doctor or other medical professional can inform the first subject of the fracture risk Y for each part of bone B, allowing for a more detailed diagnosis. The prediction unit 25 may also combine the fracture risk Y for each part of bone B of the first subject to predict the fracture risk Y for all bones of the first subject.
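The last sentence above leaves open how the per-region fracture risks Y are combined into a whole-skeleton risk. One plausible rule, assumed here purely for illustration, treats each region as an independent fracture probability and returns the probability that at least one region fractures:

```python
def overall_fracture_risk(region_risks):
    """Combine per-region fracture risks Y (each in [0, 1]) into a
    single whole-skeleton risk, assuming independence between regions:
    the result is the probability that at least one region fractures."""
    p_no_fracture = 1.0
    for risk in region_risks:
        p_no_fracture *= (1.0 - risk)
    return 1.0 - p_no_fracture
```

Other combination rules (a maximum, a weighted sum, or a learned combiner) would fit the text equally well.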
〔ソフトウェアによる実現例〕
予測装置10,10Aの機能は、当該予測装置10,10Aとしてコンピュータを機能させるためのプログラムであって、当該予測装置10,10Aの各制御ブロック(特に、予測部25及び提示制御部26)としてコンピュータを機能させるためのプログラムにより実現することができる。
[Software implementation example]
The functions of the prediction devices 10 and 10A can be realized by a program for causing a computer to function as the prediction devices 10 and 10A, specifically a program for causing a computer to function as each control block (particularly, the prediction unit 25 and the presentation control unit 26) of the prediction devices 10 and 10A.
この場合、上記予測装置10,10Aは、上記プログラムを実行するためのハードウェアとして、少なくとも1つの制御装置(例えばプロセッサ)と少なくとも1つの記憶装置(例えばメモリ)を有するコンピュータを備えている。この制御装置と記憶装置により上記プログラムを実行することにより、上記各実施形態で説明した各機能が実現される。 In this case, the prediction devices 10 and 10A include a computer having at least one control device (e.g., a processor) and at least one storage device (e.g., a memory) as hardware for executing the programs. By executing the programs using this control device and storage device, the functions described in each of the above embodiments are realized.
上記プログラムは、一時的ではなく、コンピュータ読み取り可能な、1または複数の記録媒体に記録されていてもよい。この記録媒体は、上記装置が備えていてもよいし、備えていなくてもよい。後者の場合、上記プログラムは、有線または無線の任意の伝送媒体を介して上記装置に供給されてもよい。 The program may be recorded on one or more non-transitory computer-readable recording media. These recording media may or may not be included in the above device. In the latter case, the program may be supplied to the device via any wired or wireless transmission medium.
また、上記各制御ブロックの機能の一部または全部は、論理回路により実現することも可能である。例えば、上記各制御ブロックとして機能する論理回路が形成された集積回路も本開示の範疇に含まれる。この他にも、例えば量子コンピュータにより上記各制御ブロックの機能を実現することも可能である。 Furthermore, some or all of the functions of each of the above control blocks can be realized by logic circuits. For example, integrated circuits incorporating logic circuits that function as each of the above control blocks are also included in the scope of this disclosure. In addition, the functions of each of the above control blocks can also be realized by, for example, a quantum computer.
以上、本開示に係る発明について、諸図面及び実施例に基づいて説明してきた。しかし、本開示に係る発明は上述した各実施形態に限定されるものではない。すなわち、本開示に係る発明は本開示で示した範囲で種々の変更が可能であり、異なる実施形態にそれぞれ開示された技術的手段を適宜組み合わせて得られる実施形態についても本開示に係る発明の技術的範囲に含まれる。つまり、当業者であれば本開示に基づき種々の変形または修正を行うことが容易であることに注意されたい。また、これらの変形または修正は本開示の範囲に含まれることに留意されたい。 The invention according to the present disclosure has been described above based on various drawings and examples. However, the invention according to the present disclosure is not limited to the above-described embodiments. In other words, the invention according to the present disclosure can be modified in various ways within the scope set forth in this disclosure, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the invention according to the present disclosure. In other words, it should be noted that a person skilled in the art would easily be able to make various modifications or corrections based on this disclosure. It should also be noted that these modifications or corrections are included in the scope of this disclosure.
〔まとめ1〕
本開示の態様1に係る情報処理システムは、第1被検体の少なくとも一部が写る第1画像及び第1データから、予測モデルを用いて予測情報を出力する予測部を備えている。前記予測モデルは、第2被検体の少なくとも一部が写る第3画像及び第2データを説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成される。前記予測情報は、前記第1被検体の骨に異常が発生する可能性を示す情報である。
[Summary 1]
An information processing system according to aspect 1 of the present disclosure includes a prediction unit that outputs prediction information, using a prediction model, from a first image showing at least a portion of a first subject and from first data. The prediction model is generated by machine learning using, as explanatory variables, a third image showing at least a portion of a second subject and second data, and using, as an objective variable, abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from a first time point at which the third image was captured. The prediction information is information indicating the possibility of an abnormality occurring in the bones of the first subject.
本開示の態様2に係る情報処理システムでは、上記態様1において、前記第1データは、第2画像を含むデータである。前記第2データは、第4画像を含むデータであってもよい。 In the information processing system according to aspect 2 of the present disclosure, in aspect 1 above, the first data is data including a second image. The second data may also be data including a fourth image.
本開示の態様3に係る情報処理システムでは、上記態様2において、前記第1画像は、前記第1被検体の所定部位が写る画像である。前記第2画像は、前記第1被検体の前記所定部位に対応した部位が写る画像である。前記第3画像は、前記第2被検体の所定部位が写る画像である。前記第4画像は、前記第2被検体の前記所定部位に対応した部位が写る画像であってもよい。 In the information processing system according to aspect 3 of the present disclosure, in aspect 2 above, the first image is an image depicting a predetermined region of the first subject. The second image is an image depicting a region of the first subject corresponding to the predetermined region. The third image is an image depicting a predetermined region of the second subject. The fourth image may be an image depicting a region of the second subject corresponding to the predetermined region.
本開示の態様4に係る情報処理システムでは、上記態様1から3のいずれかにおいて、前記予測情報は、前記第1画像が撮像された第3時点とは異なる時点である第4時点に、前記第1被検体の前記骨に異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect 4 of the present disclosure, in any of aspects 1 to 3 above, the prediction information may be information indicating the possibility of an abnormality occurring in the bone of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
本開示の態様5に係る情報処理システムは、上記態様2から4のいずれかにおいて、前記第1画像には、前記第1被検体の複数の部位の骨が写り、前記第2画像には、前記第1被検体の複数の部位の筋肉が写り、前記第3画像には、前記第2被検体の複数の部位の骨が写り、前記第4画像には、前記第2被検体の複数の部位の筋肉が写っていてもよい。 In the information processing system according to aspect 5 of the present disclosure, in any one of aspects 2 to 4 above, the first image may show bones at multiple locations on the first subject, the second image may show muscles at multiple locations on the first subject, the third image may show bones at multiple locations on the second subject, and the fourth image may show muscles at multiple locations on the second subject.
本開示の態様6に係る情報処理システムでは、上記態様2から5のいずれかにおいて、前記予測モデルは、前記第3画像及び前記第4画像を説明変数とし、前記第3画像及び/又は前記第4画像が撮像されてから所定期間に発生した前記部位毎の前記異常情報を目的変数として用いた機械学習により生成され、前記予測部は、前記第1画像及び前記第2画像から、前記予測モデルを用いて、前記第1被検体の骨の前記複数の部位毎の前記予測情報を出力してもよい。 In the information processing system according to aspect 6 of the present disclosure, in any of aspects 2 to 5 above, the prediction model may be generated by machine learning using the third image and the fourth image as explanatory variables and the abnormality information for each of the parts that has occurred within a predetermined period since the third image and/or the fourth image was captured as a target variable, and the prediction unit may use the prediction model to output the prediction information for each of the multiple parts of the bones of the first subject from the first image and the second image.
本開示の態様7に係る情報処理システムでは、上記態様5または6において、前記部位は、胸部、腰部、足部、及び手部のうち少なくともいずれかを含んでいてもよい。 In the information processing system according to aspect 7 of the present disclosure, in aspect 5 or 6 above, the body parts may include at least one of the chest, waist, feet, and hands.
本開示の態様8に係る情報処理システムでは、上記態様2から7のいずれかにおいて、前記第2画像は、前記第1被検体の1つの部位、又は、複数の部位が写っていてもよい。 In the information processing system according to aspect 8 of the present disclosure, in any of aspects 2 to 7 above, the second image may show one region or multiple regions of the first subject.
本開示の態様9に係る情報処理システムでは、上記態様2から8のいずれかにおいて、前記第2画像は、静止画、及び動画のうち少なくともいずれかを含んでいてもよい。 In the information processing system according to aspect 9 of the present disclosure, in any of aspects 2 to 8 above, the second image may include at least one of a still image and a video.
本開示の態様10に係る情報処理システムでは、上記態様2から9のいずれかにおいて、前記第1画像及び前記第2画像は、単純X線画像、CT(Computed Tomography)画像、MRI(Magnetic Resonance Imaging)画像、DXA(Dual Energy X-ray Absorptiometry)画像、エコー画像、及びDES(Dual Energy Subtraction)による画像のうち少なくともいずれかを含んでいてもよい。 In the information processing system according to aspect 10 of the present disclosure, in any of aspects 2 to 9 above, the first image and the second image may include at least one of a plain X-ray image, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, a DXA (Dual Energy X-ray Absorptiometry) image, an echo image, and an image obtained by DES (Dual Energy Subtraction).
本開示の態様11に係る情報処理システムでは、上記態様2から10のいずれかにおいて、前記第2画像は、前記第1画像とは画像の種類が異なる。前記第4画像は、前記第3画像とは画像の種類が異なっていてもよい。 In the information processing system according to aspect 11 of the present disclosure, in any of aspects 2 to 10 above, the second image may be a different image type from the first image. The fourth image may be a different image type from the third image.
本開示の態様12に係る情報処理システムでは、上記態様3から11において、前記予測情報は、前記第1画像に写る前記所定部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect 12 of the present disclosure, in aspects 3 to 11 above, the prediction information may be information indicating the possibility that the abnormality will occur in the specified area shown in the first image.
本開示の態様13に係る情報処理システムでは、上記態様3から12のいずれかにおいて、前記予測情報は、前記第1画像に写らない前記所定部位とは異なる部位に前記異常が発生する可能性を示す情報である構成としてもよい。 In the information processing system according to aspect 13 of the present disclosure, in any of aspects 3 to 12 above, the prediction information may be information indicating the possibility that the abnormality will occur in a region other than the predetermined region that is not captured in the first image.
本開示の態様14に係る情報処理システムでは、上記態様2から13のいずれかにおいて、前記異常は、運動器疾患であってもよい。 In the information processing system according to aspect 14 of the present disclosure, in any of aspects 2 to 13 above, the abnormality may be a musculoskeletal disorder.
本開示の態様15に係る情報処理システムでは、上記態様2から14のいずれかにおいて、前記予測部は、前記第1被検体の前記第1画像から、第1推定モデルを用いて、前記第1被検体の骨密度及び/又は骨質を出力し、前記第1被検体の前記第2画像から、第2推定モデルを用いて、前記第1被検体の筋肉量を出力し、前記第1推定モデルは、前記第2被検体の前記第3画像を説明変数とし、前記第2被検体の骨密度及び/又は骨質を示す骨情報を目的変数として用いた機械学習により生成され、前記第2推定モデルは、前記第2被検体の前記第4画像を説明変数とし、前記第2被検体の筋肉量を示す筋肉情報を目的変数として用いた機械学習により生成されてもよい。 In an information processing system according to aspect 15 of the present disclosure, in any of aspects 2 to 14 above, the prediction unit may output the bone density and/or bone quality of the first subject from the first image of the first subject using a first estimation model, and output the muscle mass of the first subject from the second image of the first subject using a second estimation model, the first estimation model being generated by machine learning using the third image of the second subject as an explanatory variable and bone information indicating the bone density and/or bone quality of the second subject as an objective variable, and the second estimation model being generated by machine learning using the fourth image of the second subject as an explanatory variable and muscle information indicating the muscle mass of the second subject as an objective variable.
本開示の態様16に係る情報処理システムでは、上記態様15において、前記予測モデルは、前記第3画像、前記第4画像、前記第3画像を前記第1推定モデルに入力して得られる前記骨情報、及び前記第4画像を前記第2推定モデルに入力して得られる前記筋肉情報のうち少なくともいずれかを説明変数とし、前記異常情報を目的変数として用いた機械学習により生成されてもよい。 In the information processing system according to aspect 16 of the present disclosure, in aspect 15 above, the prediction model may be generated by machine learning using at least one of the third image, the fourth image, the bone information obtained by inputting the third image into the first estimation model, and the muscle information obtained by inputting the fourth image into the second estimation model as explanatory variables, and the abnormality information as a target variable.
本開示の態様17に係る情報処理システムは、上記態様2から16のいずれかにおいて、前記第2画像を用いて、前記第1被検体の筋肉及び脂肪の量、厚さ、萎縮量、及び柔軟性のうち少なくともいずれかを含む情報を解析する解析部と、前記第1画像に所定の補正を行う補正部と、を更に備えている。前記解析部は、前記第2画像をセグメンテーションすることにより、軟部組織の領域を特定し、前記補正部は、前記第1画像から、前記解析部により特定された前記軟部組織の領域を除く補正を行い、前記予測部は、前記補正部により補正された前記第1画像、及び前記第2画像から、前記予測モデルを用いて、前記予測情報を出力してもよい。 An information processing system according to aspect 17 of the present disclosure, in any of aspects 2 to 16 above, further includes an analysis unit that analyzes, using the second image, information including at least one of the amount, thickness, degree of atrophy, and flexibility of the muscle and fat of the first subject, and a correction unit that performs a predetermined correction on the first image. The analysis unit may identify a soft tissue region by segmenting the second image, the correction unit may correct the first image by removing the soft tissue region identified by the analysis unit, and the prediction unit may output the prediction information, using the prediction model, from the first image corrected by the correction unit and the second image.
本開示の態様18に係る情報処理システムでは、上記態様17において、前記第2画像は、エコー画像を含み、前記予測部は、前記解析部により前記エコー画像の輝度を解析することにより、前記第1被検体の属性情報を予測してもよい。 In the information processing system according to aspect 18 of the present disclosure, in accordance with aspect 17 above, the second image may include an echo image, and the prediction unit may predict attribute information of the first subject by analyzing the brightness of the echo image using the analysis unit.
本開示の態様19に係る情報処理システムでは、上記態様18において、前記属性情報は、前記第1被検体の年齢、性別、及び筋肉の質のうち少なくともいずれかを含む情報であってもよい。 In the information processing system according to aspect 19 of the present disclosure, in aspect 18 above, the attribute information may include at least one of the age, sex, and muscle quality of the first subject.
本開示の態様20に係る情報処理システムでは、上記態様2から19のいずれかにおいて、前記予測部は、前記予測情報、前記第1被検体の推定情報、及び基準情報のうち少なくとも1つを用いて、前記第1被検体を支援する支援情報を出力し、前記推定情報は、前記第1被検体の骨密度、骨質、及び筋肉量のうち少なくとも1つであり、前記基準情報は、前記第1被検体の年齢及び/又は性別に応じた骨密度、骨質、及び筋肉量のうち少なくとも1つであってもよい。 In the information processing system according to aspect 20 of the present disclosure, in any of aspects 2 to 19 above, the prediction unit outputs support information to support the first subject using at least one of the predicted information, estimated information about the first subject, and reference information, and the estimated information may be at least one of the bone density, bone quality, and muscle mass of the first subject, and the reference information may be at least one of the bone density, bone quality, and muscle mass according to the age and/or sex of the first subject.
本開示の態様21に係る情報処理システムでは、上記態様20において、前記予測部は、前記予測情報及び/又は前記推定情報に基づいて、前記異常への関連性が高い注目部位を特定してもよい。 In the information processing system according to aspect 21 of the present disclosure, in aspect 20 above, the prediction unit may identify a region of interest that is highly related to the abnormality based on the prediction information and/or the estimation information.
本開示の態様22に係る情報処理システムでは、上記態様2から21のいずれかにおいて、前記予測情報は、前記第1画像及び前記第2画像のそれぞれが前記異常に与える影響度合いを示す影響度を含んでいてもよい。 In an information processing system according to aspect 22 of the present disclosure, in any of aspects 2 to 21 above, the prediction information may include an influence degree indicating the degree to which each of the first image and the second image affects the abnormality.
本開示の態様23に係る情報処理システムでは、上記態様1から22のいずれかにおいて、前記予測情報は、前記第1被検体に前記異常が発生する可能性が高い時期を示す情報を含んでいてもよい。 In the information processing system according to aspect 23 of the present disclosure, in any of aspects 1 to 22 above, the prediction information may include information indicating a time when the abnormality is likely to occur in the first subject.
本開示の態様24に係る情報処理システムは、上記態様1から23のいずれかにおいて、前記予測情報を提示装置に提示させる提示制御部を備えていてもよい。 An information processing system according to aspect 24 of the present disclosure, in any of aspects 1 to 23 above, may include a presentation control unit that causes a presentation device to present the prediction information.
本開示の態様25に係る情報処理システムは、上記態様1において、前記第1画像から、推定モデルを用いて前記第1被検体の骨に関する第1推定情報を含む前記第1データを出力する推定部を更に備えている。前記推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する情報を含む前記第2データを目的変数として用いた機械学習により生成される。 An information processing system according to aspect 25 of the present disclosure, in aspect 1 above, further includes an estimation unit that outputs, from the first image, the first data including first estimated information related to the bones of the first subject, using an estimation model. The estimation model is generated by machine learning using the third image as an explanatory variable and the second data including information related to the bones of the second subject as an objective variable.
本開示の態様26に係る情報処理システムは、上記態様1において、前記第1画像から、複数の推定モデルを用いて前記第1被検体の骨に関する複数の第1推定情報を含む前記第1データを出力する推定部を更に備えている。前記複数の推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する複数の情報を含む前記第2データを目的変数として用いた機械学習により生成される。 An information processing system according to aspect 26 of the present disclosure, in aspect 1 above, further includes an estimation unit that outputs, from the first image, the first data including a plurality of pieces of first estimated information related to the bones of the first subject, using a plurality of estimation models. The plurality of estimation models are generated by machine learning using the third image as an explanatory variable and the second data including a plurality of pieces of information related to the bones of the second subject as an objective variable.
本開示の態様27に係る情報処理システムでは、上記態様26において、前記予測情報は、前記第1画像を撮像した第3時点とは異なる時点である第4時点で前記第1被検体の組織に異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect 27 of the present disclosure, in aspect 26 above, the prediction information may be information indicating the possibility of an abnormality occurring in the tissue of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
本開示の態様28に係る情報処理システムでは、上記態様25から27のいずれかにおいて、前記予測情報は、前記第1画像に写る部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect 28 of the present disclosure, in any of aspects 25 to 27 above, the prediction information may be information indicating the possibility that the abnormality will occur in the area shown in the first image.
本開示の態様29に係る情報処理システムでは、上記態様25から27のいずれかにおいて、前記予測情報は、前記第1画像に写らない部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect 29 of the present disclosure, in any of aspects 25 to 27 above, the prediction information may be information indicating the possibility that the abnormality will occur in an area not captured in the first image.
本開示の態様30に係る情報処理システムでは、上記態様25から29のいずれかにおいて、前記第1画像は、前記第1被検体の骨及び/又は筋肉の少なくとも一部が写る単純X線画像であり、前記第3画像は、前記第2被検体の骨及び/又は筋肉の少なくとも一部が写る単純X線画像であってもよい。 In the information processing system according to aspect 30 of the present disclosure, in any of aspects 25 to 29 above, the first image may be a plain X-ray image showing at least a portion of the bones and/or muscles of the first subject, and the third image may be a plain X-ray image showing at least a portion of the bones and/or muscles of the second subject.
本開示の態様31に係る情報処理システムでは、上記態様25から30のいずれかにおいて、前記第1画像は、正面像又は側面像であり、前記第3画像は、前記第1画像と同じ向きの像であってもよい。 In the information processing system according to aspect 31 of the present disclosure, in any of aspects 25 to 30 above, the first image may be a front image or a side image, and the third image may be an image oriented in the same direction as the first image.
本開示の態様32に係る情報処理システムでは、上記態様25から31のいずれかにおいて、前記推定部は、前記第3画像を説明変数とし、前記第2被検体の骨の骨密度、骨量、及び骨質の少なくとも1つの測定結果を示す骨強度情報を目的変数として用いた機械学習により生成された骨強度推定モデルと、前記第3画像を説明変数とし、前記第2被検体の筋肉量及び姿勢の少なくとも1つの測定結果を示す骨負荷情報を目的変数として用いた機械学習により生成された骨負荷推定モデルと、のうちの少なくともいずれかを用いてもよい。 In the information processing system according to aspect 32 of the present disclosure, in any of aspects 25 to 31 above, the estimation unit may use at least one of a bone strength estimation model generated by machine learning using the third image as an explanatory variable and bone strength information indicating at least one measurement result of the bone mineral density, bone mass, and bone quality of the bones of the second subject as an objective variable, and a bone load estimation model generated by machine learning using the third image as an explanatory variable and bone load information indicating at least one measurement result of the muscle mass and posture of the second subject as an objective variable.
本開示の態様33に係る情報処理システムは、上記態様32において、前記骨強度推定モデルは、前記第1画像から前記第1被検体の骨の骨密度を示す情報を出力する第1推定モデルと、前記第1画像から前記第1被検体の骨質を示す情報を出力する第2推定モデルと、を含む。前記骨負荷推定モデルは、前記第1画像から前記第1被検体の筋肉量を示す情報を出力する第3推定モデルを含んでもよい。 In the information processing system according to aspect 33 of the present disclosure, in the above-mentioned aspect 32, the bone strength estimation model includes a first estimation model that outputs information indicating the bone density of the bone of the first subject from the first image, and a second estimation model that outputs information indicating the bone quality of the first subject from the first image. The bone load estimation model may include a third estimation model that outputs information indicating the muscle mass of the first subject from the first image.
本開示の態様34に係る情報処理システムは、上記態様33において、前記第1推定情報は、前記第1推定モデルから出力された前記第1被検体の前記骨の骨密度を示す情報、前記第2推定モデルから出力された前記第1被検体の前記骨質を示す情報、及び前記第3推定モデルから出力された前記第1被検体の前記筋肉量を示す情報、のうちの2以上の情報を含み、前記予測部は、前記2以上の情報のそれぞれに、前記異常の発生との因果関係の強さに基づく重み付けを施して、前記予測モデルに入力してもよい。 In the information processing system according to aspect 34 of the present disclosure, in the above aspect 33, the first estimated information includes two or more pieces of information from the group consisting of information indicating the bone density of the bone of the first subject output from the first estimation model, information indicating the bone quality of the first subject output from the second estimation model, and information indicating the muscle mass of the first subject output from the third estimation model, and the prediction unit may weight each of the two or more pieces of information based on the strength of the causal relationship with the occurrence of the abnormality and input the weighted information into the prediction model.
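The weighting of aspect 34 — scaling each estimated quantity by the strength of its causal relationship with the abnormality before it enters the prediction model — can be illustrated as follows. The feature names and weight values are hypothetical placeholders; the disclosure does not prescribe particular weights.

```python
# First estimated information from the three estimation models of aspect 33
# (values are illustrative).
estimates = {"bone_density": 0.82, "bone_quality": 0.61, "muscle_mass": 0.74}

# Hypothetical weights reflecting the assumed strength of each quantity's
# causal relationship with the occurrence of the abnormality.
causal_weights = {"bone_density": 0.5, "bone_quality": 0.3, "muscle_mass": 0.2}

def weighted_inputs(est: dict, w: dict) -> list:
    """Scale each estimate by its causal weight, in a fixed key order."""
    return [est[k] * w[k] for k in sorted(est)]

model_input = weighted_inputs(estimates, causal_weights)
# model_input is then fed to the prediction model in place of the raw estimates.
```

A design note: applying fixed, interpretable weights before the model input is one simple realization; the same effect could equally be absorbed into the prediction model's learned parameters.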
本開示の態様35に係る情報処理システムでは、上記態様33において、前記第1被検体の前記骨密度を示す情報は、単位面積当りの骨ミネラル密度、単位体積当りの骨ミネラル密度、YAM(Young Adult Mean)、Tスコア、及びZスコアのうち少なくとも1つにより表されてもよい。 In the information processing system according to aspect 35 of the present disclosure, in aspect 33 above, the information indicating the bone density of the first subject may be expressed by at least one of bone mineral density per unit area, bone mineral density per unit volume, YAM (Young Adult Mean), T-score, and Z-score.
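The representations named in aspect 35 are standard derivations from a measured bone mineral density (BMD): YAM expresses BMD as a percentage of the young-adult mean, the T-score as standard deviations from the young-adult mean, and the Z-score as standard deviations from the age- and sex-matched mean. The reference values below are placeholders, not normative data.

```python
def yam_percent(bmd: float, young_adult_mean: float) -> float:
    """BMD as a percentage of the Young Adult Mean (YAM)."""
    return 100.0 * bmd / young_adult_mean

def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """Standard deviations of BMD from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    """Standard deviations of BMD from the age- and sex-matched mean."""
    return (bmd - age_matched_mean) / age_matched_sd

# Example with placeholder reference values (areal BMD in g/cm^2):
bmd = 0.90
yam = yam_percent(bmd, young_adult_mean=1.00)   # about 90 (%)
t = t_score(bmd, 1.00, 0.12)                    # negative: below young-adult mean
z = z_score(bmd, 0.95, 0.12)                    # negative: below age-matched mean
```

For context, a T-score of −2.5 or below is the WHO criterion commonly used to define osteoporosis, which is why these scores are natural outputs for the first estimation model.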
本開示の態様36に係る情報処理システムでは、上記態様32において、前記骨強度情報は、DXA(Dual-energy X-ray Absorptiometry)法、超音波法、及び前記第2被検体の尿又は血液中の骨代謝マーカの濃度を算出する方法のうち少なくともいずれかを含む方法を用いて測定された情報であり、前記骨負荷情報は、前記第2被検体の筋肉量、及び前記第2被検体の姿勢のうち少なくともいずれかを測定した結果を示す情報であってもよい。 In the information processing system according to aspect 36 of the present disclosure, in the above aspect 32, the bone strength information is information measured using a method including at least one of DXA (Dual-energy X-ray Absorptiometry), ultrasound, and a method of calculating the concentration of a bone metabolic marker in the urine or blood of the second subject, and the bone load information may be information indicating the results of measuring at least one of the muscle mass of the second subject and the posture of the second subject.
本開示の態様37に係る情報処理システムは、上記態様25から36のいずれかにおいて、前記異常は、運動器疾患であってもよい。 In the information processing system according to aspect 37 of the present disclosure, in any of aspects 25 to 36 above, the abnormality may be a musculoskeletal disorder.
本開示の態様38に係る情報処理システムでは、上記態様27において、前記予測モデルは、前記第3画像及び/又は前記第2被検体の前記骨の部位毎の情報を説明変数とし、前記第2時点で前記第2被検体の前記骨の部位毎に発生した前記異常に関する前記異常情報を目的変数として用いた機械学習により生成され、前記予測情報は、前記第4時点で前記第1被検体の前記組織の部位毎の前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect 38 of the present disclosure, in aspect 27 above, the prediction model may be generated by machine learning using the third image and/or information for each bone region of the second subject as explanatory variables and the abnormality information regarding the abnormality that occurred for each bone region of the second subject at the second time point as a target variable, and the prediction information may be information indicating the possibility of the abnormality occurring for each tissue region of the first subject at the fourth time point.
本開示の態様39に係る情報処理システムでは、上記態様27において、前記予測情報は、前記第1被検体の前記組織に前記異常が発生する可能性が高い時期を示す情報を含んでいてもよい。 In the information processing system according to aspect 39 of the present disclosure, in aspect 27 above, the prediction information may include information indicating a time when the abnormality is likely to occur in the tissue of the first subject.
本開示の態様40に係る情報処理システムでは、上記態様25から39のいずれかにおいて、前記予測部は、前記第1画像及び/又は前記第1推定情報から、前記第1被検体の属性情報に対応した前記予測モデルを用いて、前記第1被検体を支援する支援情報を出力してもよい。 In the information processing system according to aspect 40 of the present disclosure, in any of aspects 25 to 39 above, the prediction unit may output support information for supporting the first subject from the first image and/or the first estimated information using the prediction model corresponding to attribute information of the first subject.
本開示の態様41に係る予測装置は、上記態様25から40のいずれかにおいて、前記予測情報を提示装置に提示させる提示制御部を備えていてもよい。 In any of aspects 25 to 40 above, the prediction device according to aspect 41 of the present disclosure may include a presentation control unit that causes a presentation device to present the prediction information.
本開示の態様42に係る予測装置は、上記態様1から24のいずれかの情報処理システムにおける前記予測部を備えている。 A prediction device according to aspect 42 of the present disclosure includes the prediction unit in the information processing system of any one of aspects 1 to 24 above.
本開示の態様43に係る予測装置は、上記態様25から41のいずれかの情報処理システムにおける前記推定部及び前記予測部を備えている。 A prediction device according to aspect 43 of the present disclosure includes the estimation unit and the prediction unit in the information processing system of any one of aspects 25 to 41 above.
本開示の態様44に係る情報処理方法は、1または複数のコンピュータが実行する情報処理方法であって、第1被検体の少なくとも一部が写る第1画像及び第1データから、予測モデルを用いて予測情報を出力する予測ステップを含む。前記予測モデルは、第2被検体の少なくとも一部が写る第3画像及び第2データを説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成される。前記予測情報は、前記第1被検体の骨に異常が発生する可能性を示す情報である。 An information processing method according to aspect 44 of the present disclosure is an information processing method executed by one or more computers, and includes a prediction step of outputting prediction information, using a prediction model, from first data and a first image showing at least a portion of a first subject. The prediction model is generated by machine learning using, as explanatory variables, second data and a third image showing at least a portion of a second subject, and using, as a target variable, abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point different from the first time point at which the third image was captured. The prediction information is information indicating the possibility of an abnormality occurring in the bones of the first subject.
本開示の態様45に係る情報処理方法では、上記態様44において、前記第1データは、第2画像を含むデータである。前記第2データは、第4画像を含むデータであってもよい。 In the information processing method according to aspect 45 of the present disclosure, in aspect 44 above, the first data is data including a second image. The second data may also be data including a fourth image.
本開示の態様46に係る情報処理方法は、上記態様44において、前記第1画像から、推定モデルを用いて前記第1被検体の骨に関する第1推定情報を含む前記第1データを出力する推定ステップを更に含む。前記推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する情報を含む前記第2データを目的変数として用いた機械学習により生成されてもよい。 The information processing method according to aspect 46 of the present disclosure is in accordance with aspect 44 above, further comprising an estimation step of outputting, from the first image, the first data including first estimated information related to the bones of the first subject using an estimation model. The estimation model may be generated by machine learning using the third image as an explanatory variable and the second data including information related to the bones of the second subject as a target variable.
本開示の態様47に係る情報処理方法は、上記態様44において、前記第1画像から、複数の推定モデルを用いて前記第1被検体の骨に関する複数の第1推定情報を含む前記第1データを出力する推定ステップを更に含む。前記複数の推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する複数の情報を含む前記第2データを目的変数として用いた機械学習によりそれぞれ生成され、前記予測モデルは、前記第2データを説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の前記骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成されてもよい。 The information processing method according to aspect 47 of the present disclosure, in aspect 44 above, further includes an estimation step of outputting, from the first image, the first data including multiple pieces of first estimated information related to the bones of the first subject using multiple estimation models. The multiple estimation models may each be generated by machine learning using the third image as an explanatory variable and the second data, including multiple pieces of information related to the bones of the second subject, as a target variable, and the prediction model may be generated by machine learning using the second data as an explanatory variable and, as a target variable, abnormality information related to an abnormality that occurred in the bones of the second subject at a second time point different from the first time point when the third image was captured.
本開示の態様48に係る制御プログラムは、上記態様1から24のいずれかの情報処理システムとしてコンピュータを機能させるための制御プログラムであって、前記予測部として前記コンピュータを機能させるための制御プログラムであってもよい。 The control program according to aspect 48 of the present disclosure may be a control program for causing a computer to function as any of the information processing systems of aspects 1 to 24 above, and may be a control program for causing the computer to function as the prediction unit.
本開示の態様49に係る制御プログラムは、上記態様25から41のいずれかの情報処理システムとしてコンピュータを機能させるための制御プログラムであって、前記推定部、及び前記予測部として前記コンピュータを機能させるための制御プログラムであってもよい。 The control program according to aspect 49 of the present disclosure may be a control program for causing a computer to function as any of the information processing systems of aspects 25 to 41 above, and may be a control program for causing the computer to function as the estimation unit and the prediction unit.
本開示の態様50に係る記録媒体は、上記態様48の制御プログラムを記録したコンピュータ読み取り可能な非一時的な記録媒体であってもよい。 The recording medium according to aspect 50 of the present disclosure may be a computer-readable, non-transitory recording medium on which the control program according to aspect 48 above is recorded.
本開示の態様51に係る記録媒体は、上記態様49の制御プログラムを記録したコンピュータ読み取り可能な非一時的な記録媒体であってもよい。 The recording medium according to aspect 51 of the present disclosure may be a computer-readable, non-transitory recording medium on which the control program according to aspect 49 above is recorded.
〔まとめ2〕
本開示の態様A1に係る情報処理システムは、第1被検体の少なくとも一部が写る第1画像及び第2画像から、予測モデルを用いて予測情報を出力する予測部を備え、前記予測モデルは、第2被検体の少なくとも一部が写る第3画像及び第4画像を説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成され、前記予測情報は、前記第1被検体の骨に異常が発生する可能性を示す情報である。
[Summary 2]
An information processing system according to aspect A1 of the present disclosure includes a prediction unit that uses a prediction model to output prediction information from a first image and a second image that show at least a portion of a first subject, the prediction model being generated by machine learning using, as explanatory variables, a third image and a fourth image that show at least a portion of a second subject, and using, as a target variable, abnormality information relating to an abnormality that occurred in the bones of the second subject at a second time point different from a first time point when the third image was captured, and the prediction information being information indicating the possibility of an abnormality occurring in the bones of the first subject.
本開示の態様A2に係る情報処理システムでは、上記態様A1において、前記第1画像は、前記第1被検体の所定部位が写る画像であり、前記第2画像は、前記第1被検体の前記所定部位に対応した部位が写る画像であり、前記第3画像は、前記第2被検体の所定部位が写る画像であり、前記第4画像は、前記第2被検体の前記所定部位に対応した部位が写る画像であってもよい。 In the information processing system according to aspect A2 of the present disclosure, in aspect A1 above, the first image may be an image depicting a predetermined region of the first subject, the second image may be an image depicting a region of the first subject corresponding to the predetermined region, the third image may be an image depicting a predetermined region of the second subject, and the fourth image may be an image depicting a region of the second subject corresponding to the predetermined region.
本開示の態様A3に係る情報処理システムは、上記態様A1またはA2において、前記予測情報は、前記第1画像が撮像された第3時点とは異なる時点である第4時点に、前記第1被検体の前記骨に異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect A3 of the present disclosure, in aspect A1 or A2 above, the prediction information may be information indicating the possibility of an abnormality occurring in the bone of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
本開示の態様A4に係る情報処理システムは、上記態様A1からA3のいずれかにおいて、前記第1画像には、前記第1被検体の複数の部位の骨が写り、前記第2画像には、前記第1被検体の複数の部位の筋肉が写り、前記第3画像には、前記第2被検体の複数の部位の骨が写り、前記第4画像には、前記第2被検体の複数の部位の筋肉が写っていてもよい。 In the information processing system according to aspect A4 of the present disclosure, in any of aspects A1 to A3 above, the first image may show bones at multiple locations on the first subject, the second image may show muscles at multiple locations on the first subject, the third image may show bones at multiple locations on the second subject, and the fourth image may show muscles at multiple locations on the second subject.
本開示の態様A5に係る情報処理システムでは、上記態様A4において、前記予測モデルは、前記第3画像及び前記第4画像を説明変数とし、前記第3画像及び/又は前記第4画像が撮像されてから所定期間に発生した前記部位毎の前記異常情報を目的変数として用いた機械学習により生成され、前記予測部は、前記第1画像及び前記第2画像から、前記予測モデルを用いて、前記第1被検体の骨の前記複数の部位毎の前記予測情報を出力してもよい。 In the information processing system according to aspect A5 of the present disclosure, in aspect A4 above, the prediction model may be generated by machine learning using the third image and the fourth image as explanatory variables and the abnormality information for each of the parts that has occurred within a predetermined period since the third image and/or the fourth image was captured as a target variable, and the prediction unit may use the prediction model to output the prediction information for each of the multiple parts of the bones of the first subject from the first image and the second image.
本開示の態様A6に係る情報処理システムでは、上記態様A4またはA5において、前記部位は、胸部、腰部、足部、及び手部のうち少なくともいずれかを含んでもよい。 In the information processing system according to aspect A6 of the present disclosure, in aspect A4 or A5 above, the body parts may include at least one of the chest, waist, feet, and hands.
本開示の態様A7に係る情報処理システムでは、上記態様A1からA6のいずれかにおいて、前記第2画像は、前記第1被検体の1つの部位、又は、複数の部位が写っていてもよい。 In the information processing system according to aspect A7 of the present disclosure, in any of aspects A1 to A6 above, the second image may show one region or multiple regions of the first subject.
本開示の態様A8に係る情報処理システムでは、上記態様A1からA7のいずれかにおいて、前記第2画像は、静止画、及び動画のうち少なくともいずれかを含んでいてもよい。 In the information processing system according to aspect A8 of the present disclosure, in any of aspects A1 to A7 above, the second image may include at least one of a still image and a video.
本開示の態様A9に係る情報処理システムでは、上記態様A1からA8のいずれかにおいて、前記第1画像及び前記第2画像は、単純X線画像、CT(Computed Tomography)画像、MRI(Magnetic Resonance Imaging)画像、DXA(Dual Energy X-ray Absorptiometry)画像、エコー画像、及びDES(Dual Energy Subtraction)による画像のうち少なくともいずれかを含んでいてもよい。 In the information processing system according to aspect A9 of the present disclosure, in any of aspects A1 to A8 above, the first image and the second image may include at least one of a plain X-ray image, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, a DXA (Dual Energy X-ray Absorptiometry) image, an echo image, and an image obtained by DES (Dual Energy Subtraction).
本開示の態様A10に係る情報処理システムでは、上記態様A1からA9のいずれかにおいて、前記第2画像は、前記第1画像とは画像の種類が異なり、前記第4画像は、前記第3画像とは画像の種類が異なっていてもよい。 In the information processing system according to aspect A10 of the present disclosure, in any of aspects A1 to A9 above, the second image may be a different image type from the first image, and the fourth image may be a different image type from the third image.
本開示の態様A11に係る情報処理システムは、上記態様A2において、前記予測情報は、前記第1画像に写る前記所定部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect A11 of the present disclosure, in aspect A2 above, the prediction information may be information indicating the possibility that the abnormality will occur in the specified area shown in the first image.
本開示の態様A12に係る情報処理システムでは、上記態様A2において、前記予測情報は、前記第1画像に写らない前記所定部位とは異なる部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect A12 of the present disclosure, in aspect A2 above, the prediction information may be information indicating the possibility of the abnormality occurring in a region other than the predetermined region that is not captured in the first image.
本開示の態様A13に係る情報処理システムでは、上記態様A1からA12のいずれかにおいて、前記異常は、運動器疾患であってもよい。 In the information processing system according to aspect A13 of the present disclosure, in any of aspects A1 to A12 above, the abnormality may be a musculoskeletal disorder.
本開示の態様A14に係る情報処理システムでは、上記態様A1からA13のいずれかにおいて、前記予測部は、前記第1被検体の前記第1画像から、第1推定モデルを用いて、前記第1被検体の骨密度及び/又は骨質を出力し、前記第1被検体の前記第2画像から、第2推定モデルを用いて、前記第1被検体の筋肉量を出力し、前記第1推定モデルは、前記第2被検体の前記第3画像を説明変数とし、前記第2被検体の骨密度及び/又は骨質を示す骨情報を目的変数として用いた機械学習により生成され、前記第2推定モデルは、前記第2被検体の前記第4画像を説明変数とし、前記第2被検体の筋肉量を示す筋肉情報を目的変数として用いた機械学習により生成されてもよい。 In the information processing system according to aspect A14 of the present disclosure, in any of aspects A1 to A13 above, the prediction unit may output the bone density and/or bone quality of the first subject from the first image of the first subject using a first estimation model, and output the muscle mass of the first subject from the second image of the first subject using a second estimation model, the first estimation model being generated by machine learning using the third image of the second subject as an explanatory variable and bone information indicating the bone density and/or bone quality of the second subject as an objective variable, and the second estimation model being generated by machine learning using the fourth image of the second subject as an explanatory variable and muscle information indicating the muscle mass of the second subject as an objective variable.
本開示の態様A15に係る情報処理システムは、上記態様A1からA14のいずれかにおいて、前記予測モデルは、前記第3画像、前記第4画像、前記第3画像を前記第1推定モデルに入力して得られる前記骨情報、及び前記第4画像を前記第2推定モデルに入力して得られる前記筋肉情報のうち少なくともいずれかを説明変数とし、前記異常情報を目的変数として用いた機械学習により生成されてもよい。 In the information processing system according to aspect A15 of the present disclosure, in any of aspects A1 to A14 above, the prediction model may be generated by machine learning using at least one of the third image, the fourth image, the bone information obtained by inputting the third image into the first estimation model, and the muscle information obtained by inputting the fourth image into the second estimation model as explanatory variables, and the abnormality information as a target variable.
本開示の態様A16に係る情報処理システムは、上記態様A1からA15のいずれかにおいて、前記第2画像を用いて、前記第1被検体の筋肉及び脂肪の量、厚さ、萎縮量、及び柔軟性のうち少なくともいずれかを含む情報を解析する解析部と、前記第1画像に所定の補正を行う補正部と、を更に備える。前記解析部は、前記第2画像をセグメンテーションすることにより、軟部組織の領域を特定する。前記補正部は、前記第1画像から、前記解析部により特定された前記軟部組織の領域を除く補正を行う。前記予測部は、前記補正部により補正された前記第1画像、及び前記第2画像から、前記予測モデルを用いて、前記予測情報を出力してもよい。 The information processing system according to aspect A16 of the present disclosure is any of aspects A1 to A15 above, further comprising an analysis unit that uses the second image to analyze information including at least one of the amount, thickness, amount of atrophy, and flexibility of muscle and fat of the first subject, and a correction unit that performs a predetermined correction on the first image. The analysis unit identifies soft tissue regions by segmenting the second image. The correction unit performs correction on the first image to remove the soft tissue regions identified by the analysis unit. The prediction unit may output the prediction information using the prediction model from the first image corrected by the correction unit and the second image.
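As a non-limiting illustration of the correction in aspect A16, the soft-tissue region identified by segmenting the second image can be removed from the first image as a mask operation. In practice the mask would come from a learned segmentation model; the fixed mask and image sizes below are placeholder assumptions.

```python
import numpy as np

first_image = np.ones((4, 4))                   # stand-in for the first image
soft_tissue_mask = np.zeros((4, 4), dtype=bool)  # stand-in for the analysis unit's
soft_tissue_mask[:, :2] = True                   # segmentation: left half = soft tissue

def remove_soft_tissue(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Correction unit: exclude the segmented soft-tissue region from the image."""
    corrected = image.copy()   # leave the original image untouched
    corrected[mask] = 0.0      # zero out the region identified by the analysis unit
    return corrected

corrected = remove_soft_tissue(first_image, soft_tissue_mask)
# The prediction unit would then receive `corrected` together with the second image.
```

The choice of zeroing out the masked pixels is only one possible realization of "removing" the region; subtraction of an estimated soft-tissue contribution would serve the same role.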
本開示の態様A17に係る情報処理システムでは、上記態様A16において、前記第2画像は、エコー画像を含む。前記予測部は、前記解析部により前記エコー画像の輝度を解析することにより、前記第1被検体の属性情報を予測してもよい。 In the information processing system according to aspect A17 of the present disclosure, in the above-mentioned aspect A16, the second image includes an echo image. The prediction unit may predict attribute information of the first subject by analyzing the brightness of the echo image using the analysis unit.
本開示の態様A18に係る情報処理システムでは、上記態様A17において、前記属性情報は、前記第1被検体の年齢、性別、及び筋肉の質のうち少なくともいずれかを含む情報であってもよい。 In the information processing system according to aspect A18 of the present disclosure, in aspect A17 above, the attribute information may include at least one of the age, sex, and muscle quality of the first subject.
本開示の態様A19に係る情報処理システムでは、上記態様A1からA18のいずれかにおいて、前記予測部は、前記予測情報、前記第1被検体の推定情報、及び基準情報のうち少なくとも1つを用いて、前記第1被検体を支援する支援情報を出力する。前記推定情報は、前記第1被検体の骨密度、骨質、及び筋肉量のうち少なくとも1つであり、前記基準情報は、前記第1被検体の年齢及び/又は性別に応じた骨密度、骨質、及び筋肉量のうち少なくとも1つであってもよい。 In the information processing system according to aspect A19 of the present disclosure, in any of aspects A1 to A18 above, the prediction unit outputs support information to support the first subject using at least one of the prediction information, estimated information about the first subject, and reference information. The estimated information may be at least one of the bone density, bone quality, and muscle mass of the first subject, and the reference information may be at least one of the bone density, bone quality, and muscle mass according to the age and/or sex of the first subject.
本開示の態様A20に係る情報処理システムでは、上記態様A19において、前記予測部は、前記予測情報及び/又は前記推定情報に基づいて、前記異常への関連性が高い注目部位を特定してもよい。 In the information processing system according to aspect A20 of the present disclosure, in aspect A19 above, the prediction unit may identify a region of interest that is highly related to the abnormality based on the prediction information and/or the estimation information.
本開示の態様A21に係る情報処理システムは、上記態様A1からA20のいずれかにおいて、前記予測情報は、前記第1画像及び前記第2画像のそれぞれが前記異常に与える影響度合いを示す影響度を含んでもよい。 In the information processing system according to aspect A21 of the present disclosure, in any of aspects A1 to A20 above, the prediction information may include an influence degree indicating the degree to which each of the first image and the second image affects the abnormality.
本開示の態様A22に係る情報処理システムは、上記態様A1からA21のいずれかにおいて、前記予測情報は、前記第1被検体に前記異常が発生する可能性が高い時期を示す情報を含んでもよい。 In the information processing system according to aspect A22 of the present disclosure, in any of aspects A1 to A21 above, the prediction information may include information indicating a time when the abnormality is likely to occur in the first subject.
本開示の態様A23に係る情報処理システムは、上記態様A1からA22のいずれかにおいて、前記予測情報を提示装置に提示させる提示制御部を備えてもよい。 The information processing system according to aspect A23 of the present disclosure, in any of aspects A1 to A22 above, may include a presentation control unit that causes a presentation device to present the prediction information.
本開示の態様A24に係る予測装置は、上記態様A1からA23のいずれかの情報処理システムにおける前記予測部を備える。 A prediction device according to aspect A24 of the present disclosure includes the prediction unit in the information processing system of any one of aspects A1 to A23 above.
本開示の態様A25に係る情報処理方法は、1または複数のコンピュータが実行する情報処理方法であって、第1被検体の少なくとも一部が写る第1画像及び第2画像から、予測モデルを用いて予測情報を出力する予測ステップを含み、前記予測モデルは、第2被検体の少なくとも一部が写る第3画像及び第4画像を説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成され、前記予測情報は、前記第1被検体の骨に異常が発生する可能性を示す情報である。 An information processing method according to aspect A25 of the present disclosure is an information processing method executed by one or more computers, and includes a prediction step of outputting prediction information using a prediction model from a first image and a second image that show at least a portion of a first subject, wherein the prediction model is generated by machine learning using a third image and a fourth image that show at least a portion of a second subject as explanatory variables and abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point that is different from the first time point when the third image was captured as a target variable, and the prediction information is information that indicates the possibility of an abnormality occurring in the bones of the first subject.
本開示の態様A26に係る制御プログラムは、上記態様A1からA23のいずれかの情報処理システムとしてコンピュータを機能させるための制御プログラムであって、前記予測部として前記コンピュータを機能させるための制御プログラムであってもよい。 The control program according to aspect A26 of the present disclosure may be a control program for causing a computer to function as any of the information processing systems of aspects A1 to A23 described above, and may be a control program for causing the computer to function as the prediction unit.
本開示の態様A27に係る記録媒体は、上記態様A26の制御プログラムを記録したコンピュータ読み取り可能な非一時的な記録媒体であってもよい。 The recording medium according to aspect A27 of the present disclosure may be a computer-readable, non-transitory recording medium on which the control program of aspect A26 above is recorded.
〔まとめ3〕
本開示の態様B1に係る情報処理システムは、第1被検体の組織の少なくとも一部が写る第1画像から、推定モデルを用いて前記第1被検体の骨に関する第1推定情報を出力する推定部と、前記第1画像及び前記第1推定情報から、予測モデルを用いて予測情報を出力する予測部と、を備えている。前記推定モデルは、第2被検体の組織が写る第2画像を説明変数とし、前記第2被検体の骨に関する情報を目的変数として用いた機械学習により生成される。前記予測モデルは、前記第2画像及び前記第2被検体の骨に関する情報を説明変数とし、前記第2画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の前記骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成される。前記予測情報は、前記第1被検体の組織に異常が発生する可能性を示す情報である。
[Summary 3]
An information processing system according to aspect B1 of the present disclosure includes an estimation unit that outputs first estimated information regarding a bone of a first subject, using an estimation model, from a first image capturing at least a portion of the tissue of the first subject, and a prediction unit that outputs prediction information from the first image and the first estimated information using a prediction model. The estimation model is generated by machine learning using a second image capturing the tissue of a second subject as an explanatory variable and information about the bone of the second subject as a target variable. The prediction model is generated by machine learning using the second image and the information about the bone of the second subject as explanatory variables and abnormality information regarding an abnormality that occurred in the bone of the second subject at a second time point different from the first time point when the second image was captured as a target variable. The prediction information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
本開示の態様B2に係る情報処理システムは、第1被検体の組織の少なくとも一部が写る第1画像から、複数の推定モデルを用いて前記第1被検体の骨に関する複数の第1推定情報を出力する推定部と、前記複数の第1推定情報から、予測モデルを用いて予測情報を出力する予測部と、を備えている。前記複数の推定モデルは、第2被検体の組織が写る第2画像を説明変数とし、前記第2被検体の骨に関する複数の情報を目的変数として用いた機械学習により生成される。前記予測モデルは、前記第2被検体の骨に関する複数の情報を説明変数とし、前記第2画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の前記骨に発生した異常に関する異常情報を目的変数として用いた機械学習によりそれぞれ生成される。前記予測情報は、前記第1被検体の組織に異常が発生する可能性を示す情報である。 An information processing system according to aspect B2 of the present disclosure includes an estimation unit that uses a plurality of estimation models to output a plurality of pieces of first estimated information related to the bones of a first subject from a first image that shows at least a portion of the tissue of the first subject, and a prediction unit that uses a prediction model to output prediction information from the plurality of pieces of first estimated information. The plurality of estimation models are generated by machine learning using a second image that shows the tissue of a second subject as an explanatory variable and a plurality of pieces of information related to the bones of the second subject as a target variable. The prediction models are each generated by machine learning using a plurality of pieces of information related to the bones of the second subject as explanatory variables and abnormality information related to an abnormality that occurred in the bones of the second subject at a second time point that is different from the first time point when the second image was captured as a target variable. The prediction information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
本開示の態様B3に係る情報処理システムでは、上記態様B1またはB2において、前記予測情報は、前記第1画像を撮像した第3時点とは異なる時点である第4時点で前記第1被検体の前記組織に異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect B3 of the present disclosure, in aspect B1 or B2 above, the prediction information may be information indicating the possibility of an abnormality occurring in the tissue of the first subject at a fourth time point that is different from the third time point at which the first image was captured.
本発明の態様B4に係る情報処理システムでは、上記態様B1からB3のいずれかにおいて、前記予測情報は、前記第1画像に写る部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect B4 of the present invention, in any of aspects B1 to B3 above, the prediction information may be information indicating the possibility that the abnormality will occur in the area shown in the first image.
本発明の態様B5に係る情報処理システムでは、上記態様B1からB3のいずれかにおいて、前記予測情報は、前記第1画像に写らない部位に前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect B5 of the present invention, in any of aspects B1 to B3 above, the prediction information may be information indicating the possibility that the abnormality will occur in an area not captured in the first image.
本開示の態様B6に係る情報処理システムでは、上記態様B1からB5のいずれかにおいて、前記第1画像は、前記第1被検体の骨及び/又は筋肉の少なくとも一部が写る単純X線画像であり、前記第2画像は、前記第2被検体の骨及び/又は筋肉の少なくとも一部が写る単純X線画像であってもよい。 In the information processing system according to aspect B6 of the present disclosure, in any of aspects B1 to B5 above, the first image may be a plain X-ray image showing at least a portion of the bones and/or muscles of the first subject, and the second image may be a plain X-ray image showing at least a portion of the bones and/or muscles of the second subject.
本開示の態様B7に係る情報処理システムでは、上記態様B1からB6のいずれかにおいて、前記第1画像は、正面像又は側面像であり、前記第2画像は、前記第1画像と同じ向きの像であってもよい。 In the information processing system according to aspect B7 of the present disclosure, in any of aspects B1 to B6 above, the first image may be a front image or a side image, and the second image may be an image oriented in the same direction as the first image.
本開示の態様B8に係る情報処理システムでは、上記態様B1からB7のいずれかにおいて、前記推定部は、前記第2画像を説明変数とし、前記第2被検体の骨の骨密度、骨量、及び骨質の少なくとも1つの測定結果を示す骨強度情報を目的変数として用いた機械学習により生成された骨強度推定モデルと、前記第2画像を説明変数とし、前記第2被検体の筋肉量及び姿勢の少なくとも1つの測定結果を示す骨負荷情報を目的変数として用いた機械学習により生成された骨負荷推定モデルと、のうちの少なくともいずれかを用いてもよい。 In the information processing system according to aspect B8 of the present disclosure, in any of aspects B1 to B7 above, the estimation unit may use at least one of a bone strength estimation model generated by machine learning using the second image as an explanatory variable and bone strength information indicating at least one measurement result of the bone mineral density, bone mass, and bone quality of the bones of the second subject as an objective variable, and a bone load estimation model generated by machine learning using the second image as an explanatory variable and bone load information indicating at least one measurement result of the muscle mass and posture of the second subject as an objective variable.
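The explanatory-variable/objective-variable pairing used to build these models can be illustrated with a minimal least-squares sketch. The synthetic data and ordinary least squares below are stand-ins for the second images, the measured bone strength information, and whatever learning method is actually used; nothing here is the disclosed training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Second images, reduced to feature vectors (explanatory variable), paired
# with measured labels such as DXA bone density (objective variable).
X = rng.normal(size=(50, 4))
true_w = np.array([0.8, -0.3, 0.5, 0.1])
y = X @ true_w + 0.01 * rng.normal(size=50)

# Ordinary least squares stands in for training the bone strength
# estimation model on (second image, bone strength information) pairs.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted model then outputs bone strength information for a new
# first image that has no measured label.
new_image = np.array([0.2, 0.1, -0.4, 0.3])
estimated_strength = float(new_image @ w)
```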
本開示の態様B9に係る情報処理システムでは、上記態様B8において、前記骨強度推定モデルは、前記第1画像から前記第1被検体の骨の骨密度を示す情報を出力する第1推定モデルと、前記第1画像から前記第1被検体の骨質を示す情報を出力する第2推定モデルと、を含む。前記骨負荷推定モデルは、前記第1画像から前記第1被検体の筋肉量を示す情報を出力する第3推定モデルを含んでもよい。 In the information processing system according to aspect B9 of the present disclosure, in the above-mentioned aspect B8, the bone strength estimation model includes a first estimation model that outputs information indicating the bone density of the bone of the first subject from the first image, and a second estimation model that outputs information indicating the bone quality of the first subject from the first image. The bone load estimation model may include a third estimation model that outputs information indicating the muscle mass of the first subject from the first image.
本開示の態様B10に係る情報処理システムでは、上記態様B9において、前記第1推定情報は、前記第1推定モデルから出力された前記第1被検体の前記骨の骨密度を示す情報、前記第2推定モデルから出力された前記第1被検体の前記骨質を示す情報、及び前記第3推定モデルから出力された前記第1被検体の前記筋肉量を示す情報、のうちの2以上の情報を含む。前記予測部は、前記2以上の情報のそれぞれに、前記異常の発生との因果関係の強さに基づく重み付けを施して、前記予測モデルに入力してもよい。 In the information processing system according to aspect B10 of the present disclosure, in the above aspect B9, the first estimated information includes two or more pieces of information from the following: information indicating the bone mineral density of the bone of the first subject output from the first estimation model; information indicating the bone quality of the first subject output from the second estimation model; and information indicating the muscle mass of the first subject output from the third estimation model. The prediction unit may weight each of the two or more pieces of information based on the strength of the causal relationship with the occurrence of the abnormality, and input the weighted information into the prediction model.
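The weighting described in aspect B10 can be sketched as a simple element-wise scaling applied before the prediction model sees the estimates; the weight values below are illustrative assumptions, not disclosed causal strengths.

```python
import numpy as np

def weight_estimates(estimates, causal_weights):
    """Scale each estimate by the (normalised) strength of its causal
    relationship with the occurrence of the abnormality (aspect B10)."""
    w = np.asarray(causal_weights, dtype=float)
    w = w / w.sum()                 # normalise so the weights sum to 1
    return np.asarray(estimates, dtype=float) * w

# Stand-in estimates: bone density, bone quality, muscle mass.
# Bone density is (arbitrarily) assumed to be the most strongly causal here.
weighted = weight_estimates([0.42, 0.15, 0.30], [3.0, 2.0, 1.0])
# The weighted vector is what would be fed to the prediction model.
```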
本開示の態様B11に係る情報処理システムでは、上記態様B9において、前記第1被検体の前記骨密度を示す情報は、単位面積当りの骨ミネラル密度、単位体積当りの骨ミネラル密度、YAM(Young Adult Mean)、Tスコア、及びZスコアのうち少なくとも1つにより表されてもよい。 In the information processing system according to aspect B11 of the present disclosure, in aspect B9 above, the information indicating the bone density of the first subject may be expressed by at least one of bone mineral density per unit area, bone mineral density per unit volume, YAM (Young Adult Mean), T-score, and Z-score.
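The quantities listed in aspect B11 have standard densitometry definitions: the T-score compares BMD against a young-adult reference population, the Z-score against an age- and sex-matched one, and YAM expresses BMD as a percentage of the young-adult mean. A short sketch (the reference means and SDs below are illustrative, not clinical values):

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    # Standard deviations from the young-adult mean BMD (WHO definition).
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    # Standard deviations from the age- and sex-matched mean BMD.
    return (bmd - age_matched_mean) / age_matched_sd

def percent_yam(bmd, young_adult_mean):
    # BMD as a percentage of the Young Adult Mean, as used in Japanese
    # osteoporosis guidelines.
    return 100.0 * bmd / young_adult_mean

# Illustrative reference values (g/cm^2), not clinical constants.
t = t_score(0.80, 1.00, 0.12)        # about -1.67
yam = percent_yam(0.80, 1.00)        # about 80.0
```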
本開示の態様B12に係る情報処理システムでは、上記態様B8において、前記骨強度情報は、DXA(Dual-energy X-ray Absorptiometry)法、超音波法、及び前記第2被検体の尿又は血液中の骨代謝マーカの濃度を算出する方法のうち少なくともいずれかを含む方法を用いて測定された情報である。前記骨負荷情報は、前記第2被検体の筋肉量、及び前記第2被検体の姿勢のうち少なくともいずれかを測定した結果を示す情報であってもよい。 In the information processing system according to aspect B12 of the present disclosure, in aspect B8 above, the bone strength information is information measured using a method including at least one of DXA (Dual-energy X-ray Absorptiometry), ultrasound, and a method of calculating the concentration of a bone metabolic marker in the urine or blood of the second subject. The bone load information may be information indicating the results of measuring at least one of the muscle mass of the second subject and the posture of the second subject.
本開示の態様B13に係る情報処理システムでは、上記態様B1からB12のいずれかにおいて、前記異常は、運動器疾患であってもよい。 In the information processing system according to aspect B13 of the present disclosure, in any of aspects B1 to B12 above, the abnormality may be a musculoskeletal disorder.
本開示の態様B14に係る情報処理システムは、上記態様B3からB13のいずれかにおいて、前記予測モデルは、前記第2画像及び/又は前記第2被検体の前記骨の部位毎の情報を説明変数とし、前記第2時点で前記第2被検体の前記骨の部位毎に発生した前記異常に関する前記異常情報を目的変数として用いた機械学習により生成される。前記予測情報は、前記第4時点で前記第1被検体の前記組織の部位毎の前記異常が発生する可能性を示す情報であってもよい。 In the information processing system according to aspect B14 of the present disclosure, in any of aspects B3 to B13 above, the prediction model is generated by machine learning using the second image and/or information for each bone region of the second subject as explanatory variables, and the abnormality information regarding the abnormality that occurred for each bone region of the second subject at the second time point as an objective variable. The prediction information may be information indicating the possibility of the abnormality occurring for each tissue region of the first subject at the fourth time point.
本開示の態様B15に係る情報処理システムでは、上記態様B1からB14のいずれかにおいて、前記予測情報は、前記第1被検体の前記組織に前記異常が発生する可能性が高い時期を示す情報を含んでもよい。 In the information processing system according to aspect B15 of the present disclosure, in any of aspects B1 to B14 above, the prediction information may include information indicating a time when the abnormality is likely to occur in the tissue of the first subject.
本開示の態様B16に係る情報処理システムでは、上記態様B1からB15のいずれかにおいて、前記予測部は、前記第1画像及び/又は前記第1推定情報から、前記第1被検体の属性情報に対応した前記予測モデルを用いて、前記第1被検体を支援する支援情報を出力してもよい。 In the information processing system according to aspect B16 of the present disclosure, in any of aspects B1 to B15 above, the prediction unit may output support information for supporting the first subject from the first image and/or the first estimated information using the prediction model corresponding to attribute information of the first subject.
本開示の態様B17に係る情報処理システムは、上記態様B1からB16のいずれかにおいて、前記予測情報を提示装置に提示させる提示制御部を備えてもよい。 The information processing system according to aspect B17 of the present disclosure, in any of aspects B1 to B16 above, may further include a presentation control unit that causes a presentation device to present the prediction information.
本開示の態様B18に係る予測装置は、上記態様B1からB17のいずれかにおいて、前記情報処理システムにおける前記推定部及び前記予測部を備える。 A prediction device according to aspect B18 of the present disclosure includes the estimation unit and the prediction unit of the information processing system according to any of aspects B1 to B17 above.
本開示の態様B19に係る情報処理方法は、1または複数のコンピュータが実行する情報処理方法であって、第1被検体の組織の少なくとも一部が写る第1画像から、推定モデルを用いて前記第1被検体の骨に関する第1推定情報を出力する推定ステップと、前記第1画像及び前記第1推定情報から、予測モデルを用いて予測情報を出力する予測ステップと、を含む。前記推定モデルは、第2被検体の組織が写る第2画像を説明変数とし、前記第2被検体の骨に関する情報を目的変数として用いた機械学習により生成される。前記予測モデルは、前記第2画像及び前記第2被検体の骨に関する情報を説明変数とし、前記第2画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の前記骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成される。前記予測情報は、前記第1被検体の組織に異常が発生する可能性を示す情報である。 An information processing method according to aspect B19 of the present disclosure is an information processing method executed by one or more computers, and includes an estimation step of outputting first estimated information regarding the bones of a first subject from a first image that shows at least a portion of the tissue of the first subject using an estimation model, and a prediction step of outputting predicted information from the first image and the first estimated information using a prediction model. The estimation model is generated by machine learning using a second image that shows the tissue of a second subject as an explanatory variable and information about the bones of the second subject as an objective variable. The prediction model is generated by machine learning using the second image and information about the bones of the second subject as explanatory variables and abnormality information regarding an abnormality that occurred in the bones of the second subject at a second time point that is different from the first time point when the second image was captured as an objective variable. The predicted information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
本開示の態様B20に係る情報処理方法は、1または複数のコンピュータが実行する情報処理方法であって、第1被検体の組織の少なくとも一部が写る第1画像から、複数の推定モデルを用いて前記第1被検体の骨に関する複数の第1推定情報を出力する推定ステップと、前記複数の第1推定情報から、予測モデルを用いて予測情報を出力する予測ステップと、を含む。前記複数の推定モデルは、第2被検体の組織が写る第2画像を説明変数とし、前記第2被検体の骨に関する複数の情報を目的変数として用いた機械学習によりそれぞれ生成される。前記予測モデルは、前記第2被検体の骨に関する複数の情報を説明変数とし、前記第2画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の前記骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成される。前記予測情報は、前記第1被検体の組織に異常が発生する可能性を示す情報である。 An information processing method according to aspect B20 of the present disclosure is an information processing method executed by one or more computers, and includes an estimation step of outputting, from a first image showing at least a portion of the tissue of a first subject, a plurality of first estimated information items related to the bones of the first subject using a plurality of estimation models, and a prediction step of outputting predicted information from the plurality of first estimated information items using a prediction model. The plurality of estimation models are each generated by machine learning using a second image showing the tissue of a second subject as an explanatory variable and a plurality of pieces of information related to the bones of the second subject as an objective variable. The prediction model is generated by machine learning using a plurality of pieces of information related to the bones of the second subject as an explanatory variable and abnormality information related to an abnormality that occurred in the bones of the second subject at a second time point different from the first time point when the second image was captured as an objective variable. The predicted information is information indicating the possibility of an abnormality occurring in the tissue of the first subject.
本開示の態様B21に係る制御プログラムは、上記態様B1からB17のいずれかの情報処理システムとしてコンピュータを機能させるための制御プログラムであって、前記推定部、及び前記予測部として前記コンピュータを機能させるための制御プログラムであってもよい。 The control program according to aspect B21 of the present disclosure is a control program for causing a computer to function as the information processing system according to any of aspects B1 to B17 above, and may cause the computer to function as the estimation unit and the prediction unit.
本開示の態様B22に係る記録媒体は、上記態様B21の制御プログラムを記録したコンピュータ読み取り可能な非一時的な記録媒体であってもよい。 The recording medium according to aspect B22 of the present disclosure may be a computer-readable, non-transitory recording medium on which the control program of aspect B21 described above is recorded.
1、1A 情報処理システム
2 制御部
3 記憶部
10、10A 予測装置
21 取得部
22 解析部
23 補正部
24 学習部
25 予測部
26 提示制御部
27 推定部
31 制御プログラム
32、32A 予測モデル
35 推定モデル
60 提示装置
351 第1推定モデル
352 第2推定モデル
353 第3推定モデル
E1 骨密度推定値
E2 骨質推定値
E3 筋肉量推定値
G1、G1a 第1画像
G2 第2画像
REFERENCE SIGNS LIST 1, 1A Information processing system 2 Control unit 3 Storage unit 10, 10A Prediction device 21 Acquisition unit 22 Analysis unit 23 Correction unit 24 Learning unit 25 Prediction unit 26 Presentation control unit 27 Estimation unit 31 Control program 32, 32A Prediction model 35 Estimation model 60 Presentation device 351 First estimation model 352 Second estimation model 353 Third estimation model E1 Bone density estimated value E2 Bone quality estimated value E3 Muscle mass estimated value G1, G1a First image G2 Second image
Claims (51)
前記予測モデルは、第2被検体の少なくとも一部が写る第3画像及び第2データを説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成され、
前記予測情報は、前記第1被検体の骨に異常が発生する可能性を示す情報である、
情報処理システム。 a prediction unit that outputs prediction information using a prediction model from a first image in which at least a part of the first subject is captured and the first data;
the prediction model is generated by machine learning using a third image in which at least a part of a second subject is captured and the second data as explanatory variables, and abnormality information regarding an abnormality occurring in a bone of the second subject at a second time point that is different from a first time point at which the third image is captured as an objective variable;
the prediction information is information indicating a possibility that an abnormality will occur in the bone of the first subject.
An information processing system.
前記第2データは、第4画像を含むデータである、
請求項1に記載の情報処理システム。 the first data is data including a second image,
the second data is data including a fourth image;
The information processing system according to claim 1 .
前記第2画像は、前記第1被検体の前記所定部位に対応した部位が写る画像であり、
前記第3画像は、前記第2被検体の所定部位が写る画像であり、
前記第4画像は、前記第2被検体の前記所定部位に対応した部位が写る画像である、
請求項2に記載の情報処理システム。 the first image is an image of a predetermined region of the first subject,
the second image is an image showing a region of the first subject corresponding to the predetermined region,
the third image is an image of a predetermined region of the second subject,
the fourth image is an image showing a region of the second subject corresponding to the predetermined region;
The information processing system according to claim 2 .
請求項2または3に記載の情報処理システム。 the prediction information is information indicating a possibility that an abnormality will occur in the bone of the first subject at a fourth time point that is different from a third time point at which the first image is captured.
4. The information processing system according to claim 2 or 3.
前記第2画像には、前記第1被検体の複数の部位の筋肉が写り、
前記第3画像には、前記第2被検体の複数の部位の骨が写り、
前記第4画像には、前記第2被検体の複数の部位の筋肉が写る、
請求項2から4のいずれか1項に記載の情報処理システム。 the first image shows bones at a plurality of sites of the first subject;
the second image shows muscles at a plurality of sites of the first subject;
the third image shows bones at a plurality of sites of the second subject;
the fourth image shows muscles at a plurality of sites of the second subject;
The information processing system according to any one of claims 2 to 4.
前記第3画像及び前記第4画像を説明変数とし、前記第3画像及び/又は前記第4画像が撮像されてから所定期間に発生した前記部位毎の前記異常情報を目的変数として用いた機械学習により生成され、
前記予測部は、
前記第1画像及び前記第2画像から、前記予測モデルを用いて、前記第1被検体の骨の前記複数の部位毎の前記予測情報を出力する、
請求項5に記載の情報処理システム。 The predictive model is
the third image and the fourth image are used as explanatory variables, and the abnormality information for each of the parts that has occurred within a predetermined period since the third image and/or the fourth image was captured is used as an objective variable; and
The prediction unit
outputting the prediction information for each of the plurality of bone regions of the first subject from the first image and the second image using the prediction model;
The information processing system according to claim 5 .
請求項5または6に記載の情報処理システム。 The part includes at least one of a chest, a waist, a foot, and a hand.
7. The information processing system according to claim 5 or 6.
請求項2から7のいずれか1項に記載の情報処理システム。 The second image shows one or more parts of the first subject.
The information processing system according to any one of claims 2 to 7.
請求項2から8のいずれか1項に記載の情報処理システム。 The second image includes at least one of a still image and a video image.
The information processing system according to any one of claims 2 to 8.
請求項2から9のいずれか1項に記載の情報処理システム。 The first image and the second image include at least one of a plain X-ray image, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, a DXA (Dual Energy X-ray Absorptiometry) image, an echo image, and an image by DES (Dual Energy Subtraction),
The information processing system according to any one of claims 2 to 9.
前記第4画像は、前記第3画像とは画像の種類が異なる、
請求項2から10のいずれか1項に記載の情報処理システム。 the second image is a different image type from the first image,
the fourth image is a different image type from the third image;
The information processing system according to any one of claims 2 to 10.
請求項2から13のいずれか1項に記載の情報処理システム。 The abnormality is a musculoskeletal disorder.
The information processing system according to any one of claims 2 to 13.
前記第1被検体の前記第1画像から、第1推定モデルを用いて、前記第1被検体の骨密度及び/又は骨質を出力し、
前記第1被検体の前記第2画像から、第2推定モデルを用いて、前記第1被検体の筋肉量を出力し、
前記第1推定モデルは、前記第2被検体の前記第3画像を説明変数とし、前記第2被検体の骨密度及び/又は骨質を示す骨情報を目的変数として用いた機械学習により生成され、
前記第2推定モデルは、前記第2被検体の前記第4画像を説明変数とし、前記第2被検体の筋肉量を示す筋肉情報を目的変数として用いた機械学習により生成される、
請求項2から14のいずれか1項に記載の情報処理システム。 The prediction unit
outputting a bone mineral density and/or a bone quality of the first subject from the first image of the first subject using a first estimation model;
outputting a muscle mass of the first subject from the second image of the first subject using a second estimation model;
the first estimation model is generated by machine learning using the third image of the second subject as an explanatory variable and bone information indicating a bone density and/or a bone quality of the second subject as an objective variable;
the second estimation model is generated by machine learning using the fourth image of the second subject as an explanatory variable and muscle information indicating a muscle mass of the second subject as an objective variable.
15. The information processing system according to any one of claims 2 to 14.
前記第3画像、前記第4画像、前記第3画像を前記第1推定モデルに入力して得られる前記骨情報、及び前記第4画像を前記第2推定モデルに入力して得られる前記筋肉情報のうち少なくともいずれかを説明変数とし、前記異常情報を目的変数として用いた機械学習により生成される、
請求項15に記載の情報処理システム。 The predictive model is
generated by machine learning using at least one of the third image, the fourth image, the bone information obtained by inputting the third image into the first estimation model, and the muscle information obtained by inputting the fourth image into the second estimation model as explanatory variables, and the abnormality information as an objective variable;
16. The information processing system according to claim 15.
前記第1画像に所定の補正を行う補正部と、を更に備え、
前記解析部は、前記第2画像をセグメンテーションすることにより、軟部組織の領域を特定し、
前記補正部は、前記第1画像から、前記解析部により特定された前記軟部組織の領域を除く補正を行い、
前記予測部は、前記補正部により補正された前記第1画像、及び前記第2画像から、前記予測モデルを用いて、前記予測情報を出力する、
請求項2から16のいずれか1項に記載の情報処理システム。 an analysis unit that analyzes information including at least one of the amount, thickness, amount of atrophy, and flexibility of muscle and fat of the first subject using the second image;
a correction unit that performs a predetermined correction on the first image,
The analysis unit identifies a soft tissue region by segmenting the second image;
the correction unit performs correction to remove the soft tissue region identified by the analysis unit from the first image;
the prediction unit outputs the prediction information using the prediction model from the first image corrected by the correction unit and the second image.
17. The information processing system according to any one of claims 2 to 16.
前記予測部は、前記解析部により前記エコー画像の輝度を解析することにより、前記第1被検体の属性情報を予測する、
請求項17に記載の情報処理システム。 the second image includes an echo image;
the prediction unit predicts attribute information of the first subject by analyzing brightness of the echo image using the analysis unit.
18. The information processing system according to claim 17.
請求項18に記載の情報処理システム。 The attribute information is information including at least one of the age, sex, and muscle quality of the first subject.
19. The information processing system according to claim 18.
前記予測情報、前記第1被検体の推定情報、及び基準情報のうち少なくとも1つを用いて、前記第1被検体を支援する支援情報を出力し、
前記推定情報は、前記第1被検体の骨密度、骨質、及び筋肉量のうち少なくとも1つであり、
前記基準情報は、前記第1被検体の年齢及び/又は性別に応じた骨密度、骨質、及び筋肉量のうち少なくとも1つである、
請求項2から19のいずれか1項に記載の情報処理システム。 The prediction unit
outputting support information for supporting the first subject using at least one of the predicted information, the estimated information of the first subject, and the reference information;
the estimated information is at least one of bone mineral density, bone quality, and muscle mass of the first subject;
The reference information is at least one of bone mineral density, bone quality, and muscle mass according to the age and/or sex of the first subject.
20. The information processing system according to any one of claims 2 to 19.
請求項20に記載の情報処理システム。 the prediction unit identifies a region of interest that is highly related to the abnormality based on the prediction information and/or the estimation information.
21. The information processing system according to claim 20.
請求項2から21のいずれか1項に記載の情報処理システム。 the prediction information includes an influence degree indicating a degree of influence that each of the first image and the second image has on the abnormality,
22. The information processing system according to any one of claims 2 to 21.
請求項1から22のいずれか1項に記載の情報処理システム。 the prediction information includes information indicating a time when the abnormality is likely to occur in the first subject;
23. The information processing system according to any one of claims 1 to 22.
請求項1から23のいずれか1項に記載の情報処理システム。 a presentation control unit that causes a presentation device to present the prediction information;
24. The information processing system according to any one of claims 1 to 23.
前記推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する情報を含む前記第2データを目的変数として用いた機械学習により生成される、
請求項1に記載の情報処理システム。 an estimation unit that outputs the first data including first estimated information on bones of the first subject from the first image using an estimation model;
the estimation model is generated by machine learning using the third image as an explanatory variable and the second data including information about bones of the second subject as an objective variable.
The information processing system according to claim 1 .
前記複数の推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する複数の情報を含む前記第2データを目的変数として用いた機械学習により生成される、
請求項1に記載の情報処理システム。 an estimation unit that outputs the first data including a plurality of first estimated information related to bones of the first subject from the first image using a plurality of estimation models;
the plurality of estimation models are generated by machine learning using the third image as an explanatory variable and the second data including a plurality of pieces of information related to bones of the second subject as an objective variable.
The information processing system according to claim 1 .
請求項26に記載の情報処理システム。 the prediction information is information indicating a possibility that an abnormality will occur in the tissue of the first subject at a fourth time point that is different from a third time point at which the first image is captured.
27. The information processing system according to claim 26.
請求項25から27のいずれか1項に記載の情報処理システム。 the prediction information is information indicating a possibility that the abnormality will occur in the part shown in the first image.
28. An information processing system according to any one of claims 25 to 27.
請求項25から27のいずれか1項に記載の情報処理システム。 the prediction information is information indicating a possibility that the abnormality will occur in a part not captured in the first image.
28. An information processing system according to any one of claims 25 to 27.
前記第3画像は、前記第2被検体の骨及び/又は筋肉の少なくとも一部が写る単純X線画像である、
請求項25から29のいずれか1項に記載の情報処理システム。 the first image is a plain X-ray image showing at least a part of a bone and/or a muscle of the first subject;
the third image is a plain X-ray image showing at least a part of a bone and/or a muscle of the second subject;
30. An information processing system according to any one of claims 25 to 29.
前記第3画像は、前記第1画像と同じ向きの像である、
請求項25から30のいずれか1項に記載の情報処理システム。 the first image is a front image or a side image,
The third image is an image in the same orientation as the first image.
31. The information processing system according to any one of claims 25 to 30.
前記第3画像を説明変数とし、前記第2被検体の骨の骨密度、骨量、及び骨質の少なくとも1つの測定結果を示す骨強度情報を目的変数として用いた機械学習により生成された骨強度推定モデルと、
前記第3画像を説明変数とし、前記第2被検体の筋肉量及び姿勢の少なくとも1つの測定結果を示す骨負荷情報を目的変数として用いた機械学習により生成された骨負荷推定モデルと、
のうちの少なくともいずれかを用いる、
請求項25から31のいずれか1項に記載の情報処理システム。 The estimation unit
a bone strength estimation model generated by machine learning using the third image as an explanatory variable and bone strength information indicating at least one measurement result of bone mineral density, bone mass, and bone quality of the second subject as an objective variable;
a bone load estimation model generated by machine learning using the third image as an explanatory variable and bone load information indicating at least one measurement result of the muscle mass and posture of the second subject as an objective variable;
At least one of the following is used:
32. The information processing system according to any one of claims 25 to 31.
前記骨負荷推定モデルは、前記第1画像から前記第1被検体の筋肉量を示す情報を出力する第3推定モデルを含む、
請求項32に記載の情報処理システム。 the bone strength estimation model includes a first estimation model that outputs information indicating a bone density of the bone of the first subject from the first image, and a second estimation model that outputs information indicating a bone quality of the first subject from the first image;
the bone load estimation model includes a third estimation model that outputs information indicating a muscle mass of the first subject from the first image.
33. The information processing system according to claim 32.
前記予測部は、
前記2以上の情報のそれぞれに、前記異常の発生との因果関係の強さに基づく重み付けを施して、前記予測モデルに入力する、
請求項33に記載の情報処理システム。 the first estimation information includes two or more pieces of information among information indicating the bone density of the bone of the first subject output from the first estimation model, information indicating the bone quality of the first subject output from the second estimation model, and information indicating the muscle mass of the first subject output from the third estimation model;
The prediction unit
weighting each of the two or more pieces of information based on the strength of a causal relationship with the occurrence of the abnormality, and inputting the weighted pieces of information into the prediction model;
34. The information processing system of claim 33.
請求項33に記載の情報処理システム。 The information indicating the bone mineral density of the first subject is expressed by at least one of bone mineral density per unit area, bone mineral density per unit volume, YAM (Young Adult Mean), T-score, and Z-score.
34. The information processing system of claim 33.
前記骨負荷情報は、前記第2被検体の筋肉量、及び前記第2被検体の姿勢のうち少なくともいずれかを測定した結果を示す情報である、
請求項32に記載の情報処理システム。 the bone strength information is information measured using a method including at least one of a DXA (Dual-energy X-ray Absorptiometry) method, an ultrasound method, and a method for calculating a concentration of a bone metabolic marker in the urine or blood of the second subject;
the bone load information is information indicating a result of measuring at least one of a muscle mass of the second subject and a posture of the second subject;
33. The information processing system according to claim 32.
請求項25から36のいずれか1項に記載の情報処理システム。 The abnormality is a musculoskeletal disorder.
37. An information processing system according to any one of claims 25 to 36.
前記予測情報は、前記第4時点で前記第1被検体の前記組織の部位毎の前記異常が発生する可能性を示す情報である、
請求項27に記載の情報処理システム。 the prediction model is generated by machine learning using the third image and/or information for each bone site of the second subject as an explanatory variable, and the abnormality information regarding the abnormality occurring for each bone site of the second subject at the second time point as a target variable;
the prediction information is information indicating a possibility of the abnormality occurring for each site of the tissue of the first subject at the fourth time point;
28. The information processing system according to claim 27.
請求項27に記載の情報処理システム。 the prediction information includes information indicating a time when the abnormality is likely to occur in the tissue of the first subject.
28. The information processing system according to claim 27.
請求項25から39のいずれか1項に記載の情報処理システム。 the prediction unit outputs support information for supporting the first subject from the first image and/or the first estimated information by using the prediction model corresponding to attribute information of the first subject.
40. An information processing system according to any one of claims 25 to 39.
請求項25から40のいずれか1項に記載の情報処理システム。 a presentation control unit that causes a presentation device to present the prediction information;
41. The information processing system according to any one of claims 25 to 40.
第1被検体の少なくとも一部が写る第1画像及び第1データから、予測モデルを用いて予測情報を出力する予測ステップを含み、
前記予測モデルは、第2被検体の少なくとも一部が写る第3画像及び第2データを説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成され、
前記予測情報は、前記第1被検体の骨に異常が発生する可能性を示す情報である、
情報処理方法。 An information processing method executed by one or more computers, comprising:
a prediction step of outputting prediction information using a prediction model from a first image showing at least a part of the first subject and the first data;
the prediction model is generated by machine learning using a third image in which at least a part of a second subject is captured and the second data as explanatory variables, and abnormality information regarding an abnormality occurring in a bone of the second subject at a second time point that is different from a first time point at which the third image is captured as an objective variable;
the prediction information is information indicating a possibility that an abnormality will occur in the bone of the first subject.
An information processing method.
前記第2データは、第4画像を含むデータである、
請求項44に記載の情報処理方法。 the first data is data including a second image,
the second data is data including a fourth image;
45. The information processing method according to claim 44.
前記推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する情報を含む前記第2データを目的変数として用いた機械学習により生成される、
請求項44に記載の情報処理方法。 an estimation step of outputting the first data including first estimated information about bones of the first subject from the first image using an estimation model;
the estimation model is generated by machine learning using the third image as an explanatory variable and the second data including information about bones of the second subject as an objective variable.
45. The information processing method according to claim 44.
前記複数の推定モデルは、前記第3画像を説明変数とし、前記第2被検体の骨に関する複数の情報を含む前記第2データを目的変数として用いた機械学習によりそれぞれ生成され、
前記予測モデルは、前記第2データを説明変数とし、前記第3画像が撮像された第1時点とは異なる時点である第2時点で前記第2被検体の前記骨に発生した異常に関する異常情報を目的変数として用いた機械学習により生成される、
請求項44に記載の情報処理方法。 an estimation step of outputting the first data including a plurality of first estimated information related to bones of the first subject from the first image using a plurality of estimation models;
the plurality of estimation models are generated by machine learning using the third image as an explanatory variable and the second data including a plurality of pieces of information related to bones of the second subject as an objective variable;
the prediction model is generated by machine learning using the second data as an explanatory variable and abnormality information regarding an abnormality occurring in the bone of the second subject at a second time point that is different from the first time point at which the third image was captured as an objective variable.
45. The information processing method according to claim 44.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024056885 | 2024-03-29 | | |
| JP2024-056885 | 2024-03-29 | | |
| JP2024-065603 | 2024-04-15 | | |
| JP2024065603 | 2024-04-15 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025206282A1 (en) | 2025-10-02 |
Family
ID=97219143
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2025/012706 (WO2025206282A1, pending) | Information processing system, prediction device, information processing method, control program, and recording medium | 2024-03-29 | 2025-03-28 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025206282A1 (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2018198791A (en) * | 2017-05-26 | 2018-12-20 | Global Health Co., Ltd. | Information processing device |
| WO2020166561A1 (en) * | 2019-02-14 | 2020-08-20 | FUJIFILM Corporation | Bone fracture risk evaluation value acquisition device, operation method therefor, and bone fracture risk evaluation value acquisition program |
| JP2021002339A (en) * | 2019-06-21 | 2021-01-07 | Straxcorp Pty Ltd | Method and system for machine learning classification based on structure or material segmentation in image |
| US20210015421A1 (en) * | 2019-07-16 | 2021-01-21 | 16 Bit Inc. | Systems and Methods for Approximating Bone Mineral Density and Fracture Risk using Single Energy X-Rays |
| JP2021065317A (en) * | 2019-10-18 | 2021-04-30 | FUJIFILM Corporation | Information processing device, information processing method, and information processing program |
| JP2022122131A (en) * | 2021-02-09 | 2022-08-22 | FUJIFILM Corporation | Musculoskeletal disease prediction device, method, and program; learning device, method, and program; and trained neural network |
| JP2022140050A (en) * | 2021-03-12 | 2022-09-26 | FUJIFILM Corporation | Estimation device, method, and program |
| JP2022163614A (en) * | 2021-04-14 | 2022-10-26 | FUJIFILM Corporation | Estimation device, method, and program |
| WO2023054287A1 (en) * | 2021-10-01 | 2023-04-06 | FUJIFILM Corporation | Bone disease prediction device, method, and program; learning device, method, and program; and trained neural network |
| WO2023224022A1 (en) * | 2022-05-20 | 2023-11-23 | Osaka University | Program, information processing method, and information processing device |
| JP2024003774A (en) * | 2022-06-20 | 2024-01-15 | 威久 山本 | Methods for estimating and predicting osteoporotic fractures, outputting a fracture score, generating a learning model, estimating risk factors for osteoporotic fractures, creating a graph, and creating a learning data set; learning model; program; and information processing device |
- 2025-03-28: WO application PCT/JP2025/012706 filed (published as WO2025206282A1); status: active, Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7157425B2 (en) | Estimation device, system and estimation method | |
| WO2025206282A1 (en) | Information processing system, prediction device, information processing method, control program, and recording medium | |
| US12511748B2 (en) | Estimation apparatus, estimation system, and computer-readable non-transitory medium storing estimation program | |
| WO2024181507A1 (en) | Information processing system, terminal device, method for controlling information processing system, control program, and recording medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25776739; Country of ref document: EP; Kind code of ref document: A1 |