WO2025009146A1 - Processing device, processing program, processing method, and processing system - Google Patents
Processing device, processing program, processing method, and processing system
- Publication number
- WO2025009146A1 (application PCT/JP2023/025095)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- judgment
- image
- possibility
- processor
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/045—Control thereof
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/24—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
Definitions
- the present disclosure aims to provide, in various embodiments, a processing device, processing program, processing method, or processing system for determining the possibility of contracting a specified disease using a judgment image of a subject.
- a processing device including at least one processor, the at least one processor being configured to perform processing to acquire one or more judgment images of a subject via a camera for capturing an image of the subject of a user, acquire at least one of medical history information and attribute information of the user, judge the possibility of the subject having a predetermined disease based on a trained judgment model for judging the possibility of the subject having a predetermined disease and at least one of the judgment image, the medical history information, and the attribute information, and determine the reliability of the possibility of the subject having a predetermined disease based on at least one of the judgment image, the medical history information, and the attribute information.
- a processing program that, when executed by at least one processor, causes the at least one processor to function in the following manner: acquire one or more judgment images of a subject via a camera for capturing an image of the subject of a user; acquire at least one of medical history information and attribute information of the user; judge the possibility of the subject having a predetermined disease based on a trained judgment model for judging the possibility of the subject having a predetermined disease and at least one of the judgment image, the medical history information, and the attribute information; and determine the reliability of the possibility of the subject having a predetermined disease based on at least one of the judgment image, the medical history information, and the attribute information.
- a processing method executed by at least one processor, including the steps of: acquiring one or more judgment images of a subject via a camera for capturing an image of the subject of a user; acquiring at least one of medical history information and attribute information of the user; judging the possibility of a predetermined disease based on a trained judgment model for judging the possibility of the disease and at least one of the judgment image, the medical history information, and the attribute information; and determining the reliability of the possibility of the disease based on at least one of the judgment image, the medical history information, and the attribute information.
- a processing system including: an image capturing device equipped with a camera for capturing an image of a subject of the user; and a processing device connected to the image capturing device via a wired or wireless network.
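For illustration only, the processing flow shared by the device, program, and method above can be sketched in Python as follows. All names (camera, user, judgment_model, estimate_reliability) are hypothetical; the disclosure does not prescribe any particular implementation.

```python
# Minimal sketch of the claimed processing flow; all names are
# hypothetical and the disclosure does not prescribe an implementation.

def judge_subject(camera, user, judgment_model, estimate_reliability):
    # (1) Acquire one or more judgment images of the subject via the camera.
    judgment_images = [camera.capture() for _ in range(3)]

    # (2) Acquire at least one of the user's medical interview (history)
    #     information and attribute information.
    interview, attributes = user.interview_info, user.attribute_info

    # (3) Judge the possibility of the predetermined disease with a
    #     trained judgment model (e.g., output a positivity rate).
    possibility = judgment_model.predict(judgment_images, interview, attributes)

    # (4) Determine the reliability of that possibility from the same inputs.
    reliability = estimate_reliability(judgment_images, interview, attributes)
    return possibility, reliability
```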
- FIG. 1 is a diagram showing a state in which a processing system 1 according to an embodiment of the present disclosure is in use.
- FIG. 2 is a diagram showing a usage state of the imaging device 200 according to an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of a processing system 1 according to one embodiment of the present disclosure.
- FIG. 4 is a block diagram showing a configuration of the processing device 100 according to an embodiment of the present disclosure.
- FIG. 5 is a block diagram showing the configuration of the imaging device 200 and the terminal device 300 according to an embodiment of the present disclosure.
- FIG. 6A is a diagram conceptually illustrating an image management table stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 6B is a diagram conceptually illustrating a user table stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 7A is a schematic diagram of distribution information of accumulated data stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 7B is a schematic diagram of distribution information of accumulated data stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 7C is a schematic diagram of distribution information of accumulated data stored in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 8 is a diagram showing a processing sequence executed between the processing device 100, the imaging device 200, and the terminal device 300 according to an embodiment of the present disclosure.
- FIG. 9 is a diagram showing a processing flow executed in the imaging device 200 according to an embodiment of the present disclosure.
- FIG. 10 is a diagram showing a process flow executed in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 11 is a diagram showing a process flow executed in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 12 is a diagram illustrating a process flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 13 is a diagram showing a process flow executed in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 14 is a diagram illustrating a processing flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 15 is a diagram illustrating a process flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 16 is a diagram illustrating a processing flow for generating a trained model according to an embodiment of the present disclosure.
- FIG. 17 is a diagram showing a process flow executed in the processing device 100 according to an embodiment of the present disclosure.
- FIG. 18 is a schematic diagram of a reliability determination process according to an embodiment of the present disclosure.
- FIG. 19 is a schematic diagram of a reliability determination process according to an embodiment of the present disclosure.
- the processing system 1 according to the present disclosure is mainly used to photograph the inside of the oral cavity of a user to obtain a subject image.
- the processing system 1 is used to photograph the back of the throat of the oral cavity, specifically the pharynx. Therefore, hereinafter, the processing system 1 according to the present disclosure will be mainly described when used to photograph the pharynx.
- the pharynx is only an example of a photographed part, and the processing system 1 according to the present disclosure can be suitably used for other parts of the oral cavity, such as the tonsils.
- the photographed subject image is not limited to an internal image of the oral cavity, and various parts of the user may be photographed depending on the disease to be diagnosed.
- the processing system 1 is used to determine the possibility of contracting a specific disease from a subject image obtained by photographing a subject including at least the pharyngeal region of the oral cavity of a user, and to diagnose or assist in the diagnosis of the specific disease.
- Influenza is an example of a disease that can be determined by the processing system 1.
- the possibility of contracting influenza is diagnosed by examining the pharynx and tonsils of the user, and by determining the presence or absence of findings such as follicles in the pharyngeal region.
- by using the processing system 1 to determine the possibility of contracting influenza and output the result, it is possible to diagnose influenza or assist in its diagnosis.
- the determination of the possibility of contracting influenza is one example.
- the processing system 1 can be suitably used for any disease that produces intraoral findings when contracted.
- intraoral findings are not limited to those that are discovered by a doctor or the like and whose existence is medically known.
- findings that can be recognized by a person other than a doctor, or differences that can be detected by artificial intelligence or image recognition technology, can also be suitably handled by the processing system 1.
- diseases include influenza, as well as infectious diseases such as streptococcal infection, adenovirus infection, EB virus infection, mycoplasma infection, hand, foot and mouth disease, herpangina, and candidiasis; diseases that present with vascular or mucosal disorders such as arteriosclerosis, diabetes, and hypertension; tumors such as tongue cancer and pharyngeal cancer; and periodontal diseases such as dental caries, gingivitis, and periodontitis.
- the processing system 1 determines the reliability of the determined possibility of disease.
- the reliability may be indicated by a numerical value, or may be indicated by words divided into multiple levels, such as high, medium, and low.
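As a minimal sketch of how a numeric reliability might be mapped to such levels (the 0.8 and 0.5 thresholds are illustrative assumptions, not values from the disclosure):

```python
def reliability_level(score: float) -> str:
    """Map a numeric reliability in [0.0, 1.0] to a coarse level.
    The 0.8 / 0.5 thresholds are illustrative assumptions."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"
```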
- the user who is the subject of imaging by the imaging device 200 may be any human being, such as a patient, a test subject, a person undergoing diagnosis, or a healthy individual.
- the operator who holds the imaging device 200 and performs the imaging operation is not limited to medical professionals such as doctors, nurses, and laboratory technicians, and may be anyone, including the user himself or herself.
- the processing system 1 according to the present disclosure is typically expected to be used in medical institutions. However, it is not limited to such use, and it may be used anywhere, such as the user's home, school, or workplace.
- examples of subjects include at least a portion of the user's oral cavity.
- the disease to be determined may also be any disease that manifests itself in the oral cavity.
- the following will describe a case in which the subject includes the pharynx or the area around the pharynx, and the disease to be determined is influenza.
- the subject image and the judgment image may be one or more videos or one or more still images.
- a through image is captured by the camera, and the captured through image is displayed on the display 203.
- one or more still images are captured by the camera, and the captured image is displayed on the display 203.
- when the shooting button is pressed, video shooting starts, and the images captured by the camera during that time are displayed on the display 203; when the shooting button is pressed again, video shooting ends.
- the subject image does not mean only a specific image among these, but may include all images captured by the camera.
- the judgment image simply means an image used for judging the possibility of affliction with a certain disease, and does not necessarily have to undergo the specific processing described below.
- the subject image captured by the imaging device 200 can be used as the judgment image as it is.
- FIG. 1 is a diagram showing a state of use of a processing system 1 according to an embodiment of the present disclosure.
- an operator attaches an auxiliary tool 500 to the tip of an imaging device 200 so as to cover it, and inserts the imaging device 200 together with the auxiliary tool 500 into the oral cavity 710 of the user.
- an operator (which may be the user 700 himself or herself, or may be someone different from the user 700) attaches the auxiliary tool 500 to the tip of the imaging device 200 so as to cover it.
- the operator inserts the imaging device 200 to which the auxiliary tool 500 is attached into the oral cavity 710.
- the tip of the auxiliary tool 500 passes through the incisors 711 and is inserted to the vicinity of the soft palate 713.
- the imaging device 200 is similarly inserted to the vicinity of the soft palate 713.
- the tongue 714 is pushed downward by the auxiliary tool 500 (which functions as a tongue depressor), restricting the movement of the tongue 714. This allows the operator to ensure a good field of view for the imaging device 200, enabling good imaging of the pharynx 715 located in front of the imaging device 200.
- the captured subject image (typically an image including the pharynx 715) is transmitted from the imaging device 200 to the processing device 100, which is communicatively connected via a wired or wireless network.
- the processor of the processing device 100 that receives the subject image executes a program stored in the memory to select a judgment image from the subject images and to judge the possibility of suffering from a specified disease.
- the result is then transmitted to the terminal device 300 and output to a display or the like via the output interface of the terminal device 300.
- FIG. 2 is a diagram showing a state in which the imaging device 200 according to an embodiment of the present disclosure is used. Specifically, FIG. 2 is a diagram showing a state in which the imaging device 200 is held by an operator 600.
- the imaging device 200 consists of, in order from the side inserted into the oral cavity, a main body 201, a grip 202, and a display 203.
- the main body 201 and the grip 202 are formed in a substantially columnar shape of a predetermined length along the insertion direction H into the oral cavity.
- the display 203 is disposed on the opposite side of the grip 202 from the main body 201 side.
- the imaging device 200 is formed in a substantially columnar shape as a whole and is held by the operator 600 in a manner similar to holding a pencil.
- since the display panel of the display 203 faces the operator 600 during use, the operator can easily handle the imaging device 200 while checking the captured subject image in real time.
- the shooting button 220 is located on the upper surface side of the grip, so that the operator 600, while holding the device, can easily press it with the index finger or the like.
- FIG. 3 is a schematic diagram of the processing system 1 according to an embodiment of the present disclosure.
- the processing system 1 includes a processing device 100, an imaging device 200 communicably connected to the processing device 100 via a wired or wireless network, and a terminal device 300 communicably connected to the processing device 100 via a wired or wireless network.
- the processing device 100 receives and processes a subject image captured by the imaging device 200 based on an operation input by an operator accepted by the imaging device 200 or the terminal device 300.
- the processing device 100 also determines the possibility of the subject having a predetermined disease based on the received subject image, interview information, and findings information, and transmits the result to the terminal device 300.
- the tip of the imaging device 200 is inserted into the user's oral cavity to capture images of the inside of the oral cavity, particularly the pharynx.
- the specific imaging process will be described later.
- the captured subject image is sent to the processing device 100 via a wired or wireless network.
- the terminal device 300 is used to input the subject information, interview information, diagnostic information, etc., required for processing in the processing device 100, and to receive and output from the processing device 100 relevant information related to the imaging device 200 and the results of determining the possibility of contracting a specified disease.
- the processing system 1 can further include a mounting table 400 as necessary.
- the mounting table 400 can stably mount the imaging device 200.
- the mounting table 400 can also be connected to a power source via a wired cable, allowing power to be supplied to the imaging device 200 from the power supply terminal of the mounting table 400 through the power supply port of the imaging device 200.
- FIG. 4 is a block diagram showing the configuration of a processing device 100 according to an embodiment of the present disclosure.
- the processing device 100 includes a memory 112, a processor 111, and a communication interface 113. These components are electrically connected to each other via control lines and data lines. Note that the processing device 100 does not need to include all of the components shown in FIG. 4, and it is possible to omit some components or add other components.
- a typical example of such a processing device 100 is a server device.
- another server device or another database device can also be connected to the processing device 100 and integrated with it.
- the processor 111 of the processing device 100 functions as a control unit that controls other components of the processing system 1 based on a program stored in the memory 112. Based on the program stored in the memory 112, the processor 111 stores the subject image received from the imaging device 200 in the memory 112 and processes the stored subject image.
- the processor 111 executes the following processes based on the program stored in the memory 112: "acquiring one or more judgment images of the subject via the camera 211 for photographing the subject image of the user", "acquiring at least one of the user's medical interview information and attribute information", "determining the possibility of a predetermined disease based on a trained judgment model for determining the possibility of the predetermined disease and at least one of the judgment image, the medical interview information, and the attribute information", "determining the reliability of the possibility of the disease based on at least one of the judgment image, the medical interview information, and the attribute information", etc.
- the processor 111 is mainly composed of one or more CPUs, but may be appropriately combined with a GPU, FPGA, etc.
- the memory 112 is composed of a RAM, a ROM, a non-volatile memory, a HDD, etc., and functions as a storage unit.
- the memory 112 stores instructions and commands for various controls of the processing system 1 according to this embodiment as a program.
- the memory 112 stores programs for the processor 111 to execute, such as "processing for acquiring one or more judgment images of the subject via the camera 211 for capturing an image of the subject of the user," "processing for acquiring at least one of the user's medical interview information and attribute information," "processing for judging the possibility of a specific disease based on a trained judgment model for judging the possibility of the specific disease and at least one of the judgment image, medical interview information, and attribute information," and "processing for determining the reliability of the possibility of the disease based on at least one of the judgment image, medical interview information, and attribute information."
- the memory 112 also stores an image management table for managing the subject images captured by the camera 211 of the imaging device 200 and the images derived from them, a user table storing each user's attribute information, medical interview information, judgment results, etc., and distribution information of the learning data.
- the memory 112 also stores various trained models, such as a trained judgment image selection model used to select a judgment image from a subject image, and a trained judgment model for judging the possibility of disease from a judgment image.
- a first trained judgment model (hereinafter also referred to as a trained positivity rate judgment model) that judges the first possibility (first positivity rate) of the disease
- a second trained judgment model (hereinafter also referred to as a trained positivity rate judgment model) that judges the second possibility (second positivity rate) of the disease
- a third trained judgment model that judges the third possibility (third positivity rate) of the disease are stored.
- the types and quantities of the trained judgment models are not limited to the above, and can be adjusted as appropriate as long as the possibility of the disease can be judged.
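A minimal sketch of how the memory 112 might hold these models, assuming (hypothetically) that each trained model is serialized to a file; the storage format, file paths, and role names are assumptions, since the disclosure does not specify them:

```python
import pickle

# Hypothetical file layout; the disclosure does not specify how the
# trained models are serialized or keyed.
MODEL_PATHS = {
    "judgment_image_selection": "models/selection.pkl",  # picks judgment images
    "first_judgment": "models/positive_rate_1.pkl",      # image -> first rate
    "second_judgment": "models/positive_rate_2.pkl",     # features + interview -> second rate
    "third_judgment": "models/positive_rate_3.pkl",      # first rate + interview -> third rate
}

def load_trained_models() -> dict:
    """Load every trained model into a role-keyed dictionary."""
    models = {}
    for role, path in MODEL_PATHS.items():
        with open(path, "rb") as f:
            models[role] = pickle.load(f)
    return models
```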
- the communication interface 113 functions as a communication unit for transmitting and receiving information to and from the terminal device 300, the image capture device 200, and/or other devices.
- Examples of the communication interface 113 include connectors for wired communication such as USB and SCSI, wireless communication transmitting and receiving devices such as wireless LAN, Bluetooth (registered trademark), infrared, and LTE, and various connection terminals for printed circuit boards and flexible circuit boards.
- the imaging device 200 includes a camera 211, a light source 212, a processor 213, a memory 214, a display panel 215, an input interface 210, and a communication interface 216.
- the terminal device 300 includes a processor 311, a memory 312, an input interface 313, an output interface 314, and a communication interface 315. These components are electrically connected to each other via control lines and data lines.
- the processing system 1 does not need to include all of the components shown in FIG. 5, and it is possible to omit some of the components or add other components.
- the processing system 1 can include a battery for driving each component.
- the imaging device 200 and the terminal device 300 only need to be connected to each other so that they can communicate with each other via a wired or wireless network, and it is not necessary for the two to be configured so that they can communicate with each other directly.
- the processor 311 functions as a control unit that controls the other components of the processing system 1 based on a processing program stored in the memory 312. Based on the processing program stored in the memory 312, the processor 311 inputs user information, medical interview information, findings information, etc., and outputs the diagnosis results of the possibility of disease received from the processing device 100, etc.
- the processor 311 executes the following processes based on the processing program stored in the memory 312: "accepting user information related to the user input by the operator or user via the input interface 313", "transmitting the accepted user information to the processing device 100 via the communication interface 315", "accepting the input of interview information and findings information of the subject by the operator or user via the input interface 313", "transmitting the accepted interview information and findings information together with the user information to the processing device 100 via the communication interface 315", "selecting a subject via the input interface 313 and transmitting a request to the processing device 100 via the communication interface 315 to determine the possibility of the selected subject having a predetermined disease", "receiving from the processing device 100 via the communication interface 315 a subject image and a determination result indicating the possibility of the user having a predetermined disease determined based on the subject image, and outputting the determination result and the subject image together via the output interface 314", etc.
- the processor 311 is mainly composed of one or more CPUs, but may be appropriately combined with a GPU, FPGA, etc.
- the memory 312 is composed of RAM, ROM, non-volatile memory, HDD, etc., and functions as a storage unit. The memory 312 stores instructions and commands for various controls of the processing system 1 according to this embodiment as a program. Specifically, the memory 312 stores programs for the processor 311 to execute, such as "a process of accepting input of user information related to the user by the operator or the user himself via the input interface 313," "a process of transmitting accepted user information to the processing device 100 via the communication interface 315," "a process of accepting input of interview information and findings information of the subject by the operator or user via the input interface 313," "a process of transmitting the accepted interview information and findings information together with the user information to the processing device 100 via the communication interface 315," "a process of selecting a subject via the input interface 313 and transmitting a request for determining the possibility of the selected subject having a specified disease to the processing device 100 via the communication interface 315," and "a process of receiving from the processing device 100, via the communication interface 315, a determination result indicating the possibility of the user having a specified disease determined based on the subject image, and outputting it together with the subject image via the output interface 314."
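A rough sketch of the terminal-side exchange described above, assuming a hypothetical HTTP interface; the disclosure only requires a wired or wireless connection and defines no endpoints, so the URL, routes, and field names below are invented for illustration:

```python
import requests

BASE_URL = "http://processing-device.example/api"  # hypothetical endpoint

def register_and_judge(user_info: dict, interview: dict, findings: dict) -> dict:
    # Register the user (S11 / T11 in FIG. 8) and obtain a user ID.
    resp = requests.post(f"{BASE_URL}/users", json=user_info, timeout=10)
    user_id = resp.json()["user_id"]

    # Send interview and findings information tied to that user (S12-S13).
    requests.post(f"{BASE_URL}/users/{user_id}/interview",
                  json={"interview": interview, "findings": findings}, timeout=10)

    # Request a judgment and receive the result to output on the display.
    result = requests.post(f"{BASE_URL}/users/{user_id}/judgment", timeout=30).json()
    print(result["positivity_rate"], result["reliability"])
    return result
```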
- the input interface 313 functions as an input unit that accepts instructions input by the operator to the terminal device 300.
- Examples of the input interface 313 include a "confirmation button" for making various selections, a "back/cancel button" for returning to the previous screen or canceling an input confirmation operation, a cross key button for moving a pointer output to the output interface 314, an on/off key for turning the power of the terminal device 300 on and off, and physical key buttons such as character input key buttons for inputting various characters.
- the input interface 313 may also use a touch panel that is superimposed on the display functioning as the output interface 314 and has an input coordinate system corresponding to the display coordinate system of the display.
- icons corresponding to the physical keys are displayed on the display, and the operator inputs instructions via the touch panel to select each icon.
- the method of detecting the user's instruction input by the touch panel may be any method, such as a capacitance type or a resistive film type.
- the input interface 313 does not always need to be physically provided on the terminal device 300, and may be connected as necessary via a wired or wireless network.
- the output interface 314 functions as an output unit for outputting information such as the judgment result received from the processing device 100.
- An example of the output interface 314 is a display configured as a liquid crystal panel, an organic EL display, a plasma display, or the like.
- the terminal device 300 itself does not necessarily need to be equipped with a display.
- an interface for connecting to a display or the like that can be connected to the terminal device 300 via a wired or wireless network can also function as the output interface 314 that outputs display data to the display or the like.
- the communication interface 315 functions as a communication unit for transmitting and receiving subject information, medical interview information, subject images, findings information, etc., to and from the processing device 100 connected via a wired or wireless network.
- Examples of the communication interface 315 include various types of connectors for wired communication such as USB and SCSI, wireless communication transmitting and receiving devices such as wireless LAN, Bluetooth (registered trademark), infrared, and LTE, and various connection terminals for printed circuit boards and flexible circuit boards.
- the camera 211 functions as an imaging unit that detects light reflected from the oral cavity, which is the subject, and generates a subject image.
- the camera 211 is equipped with, as an example, a CMOS image sensor, a lens system and a drive system for achieving the desired function.
- the image sensor is not limited to a CMOS image sensor, and other sensors such as a CCD image sensor can also be used.
- the camera 211 can have an autofocus function, and it is preferable that the focus is set at a specific position in front of the lens.
- the camera 211 can have a zoom function, and is preferably set to capture an image at an appropriate magnification depending on the size of the pharynx or influenza follicles.
- the sensor constituting the camera 211 is subjected to a predetermined correction at the time of shipment.
- the imaging device is attached to an imaging jig with a built-in color chart for correction imaging, and a predetermined image is captured. The average pixel value of each color panel of the color chart in the captured image is then measured, and color correction is performed so that each average pixel value matches a reference value.
- the reference value may be the average value measured with the cameras used when training the trained judgment model or when evaluating its performance (clinical trials).
- the color correction may be performed by processing using an algorithm, and more specifically, an image quality correction parameter that minimizes the mean square error of the average pixel value of each color panel may be calculated by optimization.
- information related to these corrections may be stored in the memory 214.
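One conceivable realization of this optimization, sketched as an affine color transform fitted by least squares with NumPy; the affine form and all numeric values are assumptions for illustration:

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine color transform mapping the measured average RGB of
    each color-chart panel onto the reference values, minimizing the mean
    squared error (one way to realize the optimization described above;
    the affine form is an assumption).

    measured, reference: (N, 3) arrays of per-panel average RGB values.
    Returns M (3x3) and offset b such that corrected = rgb @ M + b.
    """
    X = np.hstack([measured, np.ones((len(measured), 1))])  # add bias column
    params, *_ = np.linalg.lstsq(X, reference, rcond=None)  # least squares
    return params[:3], params[3]

# Example: panel averages measured at shipment vs. reference values from
# the cameras used for model training (all values are made up).
measured = np.array([[200.0, 180.0, 170.0], [90.0, 85.0, 80.0],
                     [150.0, 60.0, 55.0], [60.0, 120.0, 150.0]])
reference = np.array([[210.0, 190.0, 180.0], [95.0, 90.0, 85.0],
                      [160.0, 65.0, 60.0], [65.0, 125.0, 155.0]])
M, b = fit_color_correction(measured, reference)
corrected = measured @ M + b  # should now approximate `reference`
```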
- the following correction process may be added or may be performed instead of the color chart correction.
- the imaging device is attached to a pharynx model for correction imaging, and a predetermined image is captured. The output value of the trained judgment model for the captured image is then calculated, and color correction is performed so that the output value matches a reference value.
- the reference value may be the average value of the output values of the pharyngeal model photographed by the camera used when training the trained judgment model or when evaluating the performance (clinical trial).
- the color correction may be performed by algorithmic processing; more specifically, image quality correction parameters that minimize the mean squared error of the average pixel value of each color panel and the squared error of the output value may be calculated by optimization.
- the above-mentioned predetermined correction may be performed again at various times after shipment, depending on how the imaging device 200 is used. For example, it may be performed automatically every time the imaging device 200 is used, or automatically or manually during regular inspection (calibration). This makes it possible to correct changes in the color tone and brightness of the image due to aging and deterioration of the sensor, etc., and to perform examinations with the same image quality as when the trained judgment model described below was trained.
- the light source 212 is driven by instructions from the processor 213 and functions as a light source unit for irradiating light into the oral cavity.
- the light source 212 includes one or more light sources.
- the light source 212 is composed of one or more LEDs, and light having a predetermined frequency band is irradiated from each LED in the direction of the oral cavity.
- the light source 212 uses light having a desired band from the ultraviolet light band, visible light band, and infrared light band, or a combination of these. When determining the possibility of influenza infection in the processing device 100, it is preferable to use white light in the visible light band.
- the light source 212 is subjected to a predetermined correction at the time of shipment.
- the imaging device 200 is set against an integrating sphere, the light emitted from the light guide tube is measured by an illuminometer, and the total luminous flux (brightness) of the light source 212 is obtained. The drive current of the light source 212 is then adjusted so that the brightness value matches a reference value.
- the reference value may be the average value measured for the devices used when training the trained judgment model or when evaluating its performance (clinical trials).
- the adjustment may be performed by processing using an algorithm, or may be performed so that the brightness becomes a predetermined value calculated from the relationship between the brightness and the current value.
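A minimal sketch of such an adjustment, assuming the roughly linear brightness-current relationship the text suggests; all measurement values are invented for illustration:

```python
def target_current(reference_lux: float, samples) -> float:
    """Estimate the drive current that yields the reference luminous flux,
    assuming (as suggested above) a roughly linear brightness-current
    relationship. `samples` is a list of (current_mA, measured_lux) pairs
    from the integrating-sphere measurement; values are hypothetical."""
    (i1, l1), (i2, l2) = samples[0], samples[-1]
    slope = (l2 - l1) / (i2 - i1)               # lux per mA
    return i1 + (reference_lux - l1) / slope    # solve the linear relation

# e.g. two measurement points at 20 mA and 60 mA:
print(target_current(850.0, [(20.0, 400.0), (60.0, 1200.0)]))  # -> 42.5 mA
```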
- information related to these corrections may be stored in the memory 214.
- the above-mentioned predetermined correction may be performed again depending on the timing of use of the imaging device 200 after shipment. For example, it may be performed automatically every time the imaging device 200 is used, or it may be performed automatically or manually during regular inspection (calibration). This makes it possible to correct changes in the color tone and brightness of the image due to aging and deterioration of the LEDs, etc., and makes it possible to perform inspections with the same image quality as when the trained judgment model, etc., described below, was trained.
- the processor 213 functions as a control unit that controls other components of the image capturing device 200 based on a program stored in the memory 214. Based on a program stored in the memory 214, the processor 213 controls the driving of the camera 211 and the driving of the light source 212, and controls the storage of the subject image captured by the camera 211 in the memory 214. The processor 213 also controls the output of the subject image and user information stored in the memory 214 to the display 203 and the transmission to the processing device 100. The processor 213 also controls the output of information related to the correction of the camera 211 and the light source 212 stored in the memory 214 to the display 203 and the transmission to the processing device 100. The processor 213 is mainly composed of one or more CPUs, but may be combined with other processors as appropriate.
- the memory 214 is composed of RAM, ROM, non-volatile memory, HDD, etc., and functions as a storage unit.
- the memory 214 stores instructions and commands for various controls of the image capture device 200 as a program.
- the memory 214 also stores subject images captured by the camera 211, various information about the user, and the like. It also stores information related to the correction of the camera 211 and the light source 212 described above.
- the display panel 215 is provided on the display 203 and functions as a display unit for displaying the subject image captured by the imaging device 200.
- the display panel 215 is configured from a liquid crystal panel, but is not limited to a liquid crystal panel and may be configured from an organic EL display, a plasma display, etc.
- the input interface 210 functions as an input unit that accepts user input of instructions to the processing device 100 and the image capture device 200.
- Examples of the input interface 210 include a "shooting button" for instructing the imaging device 200 to start and end recording, a "power button" for turning the imaging device 200 on and off, a "confirmation button" for making various selections, a "back/cancel button" for returning to the previous screen or canceling an input confirmation operation, and physical key buttons such as a cross key button for moving icons displayed on the display panel 215. Note that these various buttons and keys may be physically provided, or may be displayed as icons on the display panel 215 and made selectable via a touch panel or the like superimposed on the display panel 215 as the input interface 210.
- the method of detecting the user's instruction input using the touch panel may be any method, such as a capacitance type or a resistive film type.
- the communication interface 216 functions as a communication unit for transmitting and receiving information to and from the processing device 100 and/or other devices.
- Examples of the communication interface 216 include connectors for wired communication such as USB and SCSI, wireless communication transmitting and receiving devices such as wireless LAN, Bluetooth (registered trademark), infrared, and LTE, and various connection terminals for printed circuit boards and flexible circuit boards.
- FIG. 6A is a diagram conceptually illustrating an image management table stored in the processing device 100 according to an embodiment of the present disclosure.
- the information stored in the image management table is updated and stored as necessary according to the progress of processing by the processor 111 of the processing device 100.
- the image management table stores subject image information, candidate information, judgment image information, feature information, score information, etc., in association with user ID information.
- User ID information is information specific to each user for identifying each user. User ID information is generated each time a new user is registered by the operator.
- Subject image information is information for identifying a subject image captured by the operator for each user.
- a subject image is one or more images including a subject captured by the camera of the image capture device 200, and is stored in the memory 112 by receiving it from the image capture device 200.
- Candidate information is information for identifying an image that is a candidate for selecting a judgment image from one or more subject images.
- "Judgment image information" is information for identifying a judgment image used to determine the possibility of influenza infection. Such a judgment image is selected, based on similarity, from the candidate images identified by the candidate information.
- "Feature information" is information obtained by inputting a judgment image into a feature extractor (FIG. 13, S613) described later; for example, information on the average color tone and average brightness of the judgment image.
- "Score information" is numerical information assigned by a trained judgment image selection model, described later, during the screening process of selecting a judgment image from the subject images.
- information for identifying each image is stored as the subject image information, candidate information, judgment image information, feature information, and score information.
- the information for identifying each image is typically identification information for identifying each image, but it may also be information indicating the storage location of each image or the image data of each image itself.
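For illustration, one row of the image management table could be modeled as follows; the field names and types are assumptions based on the description above, not a structure given in the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ImageManagementRecord:
    """One row of the image management table of FIG. 6A. Field names and
    types are assumptions; the table stores identifiers (or storage
    locations, or the image data itself) keyed by user ID."""
    user_id: str                                          # e.g. "U1"
    subject_images: list = field(default_factory=list)   # captured image IDs
    candidates: list = field(default_factory=list)       # candidate image IDs
    judgment_images: list = field(default_factory=list)  # selected for judgment
    features: dict = field(default_factory=dict)         # e.g. avg color tone / brightness
    scores: dict = field(default_factory=dict)           # selection-model scores per image

table = {"U1": ImageManagementRecord(user_id="U1")}
table["U1"].subject_images.append("I1")
```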
- FIG. 6B is a conceptual diagram showing a user table stored in a processing device 100 according to an embodiment of the present disclosure.
- the information stored in the user table is updated and stored as needed in accordance with the progress of processing by the processor 111 of the processing device 100.
- the user table stores attribute information, medical interview information, two-dimensional code information, judgment result information, first judgment information, second judgment information, third judgment information, reliability, tag information, etc. in association with user ID information.
- User ID information is information unique to each user and is used to identify each user. User ID information is generated each time a new user is registered by the operator.
- attribute information is information input by, for example, the operator or user, etc., and is information related to the individual user, such as the user's name, gender, age, address, etc.
- Medical interview information is information input by, for example, the operator or user, etc., and is information used as a reference for diagnosis by a doctor, etc., such as the user's medical history and symptoms.
- interview information examples include patient background such as weight, allergies, and underlying diseases, body temperature, peak body temperature from onset, time elapsed from onset, heart rate, pulse rate, oxygen saturation, blood pressure, medication status, contact with other influenza patients, presence or absence of subjective symptoms and physical findings such as joint pain, muscle pain, headache, fatigue, loss of appetite, chills, sweating, cough, sore throat, runny nose/nasal congestion, tonsillitis, digestive symptoms, rash on hands and feet, redness or white coating on the pharynx, swollen tonsils, history of tonsillectomy, strawberry tongue, swelling of anterior cervical lymph nodes accompanied by tenderness, history of influenza vaccination, and time of vaccination.
- the "two-dimensional code information" is information for identifying a recording medium in which at least one of user ID information, information for identifying it, attribute information, interview information, and combinations thereof is recorded. Such a recording medium does not need to be a two-dimensional code. Instead of two-dimensional codes, various things can be used, such as one-dimensional barcodes, other multidimensional codes, text information such as specific numbers or letters, image information, etc.
- the "determination result information" is information that indicates the determination result of the possibility of influenza infection based on the determination image.
- the determination result information is the determination result by the ensemble processing (S622 in FIG. 13) described later, and is the overall result of determining the possibility of infection using the results of multiple determination processes.
- One example of such determination result information is the positivity rate for influenza.
- the determination result does not need to be a specific numerical value, and may be in any form, such as a classification according to the level of the positivity rate, or a classification indicating whether the result is positive or negative.
- the "first judgment information” is information indicating the judgment result of the possibility of influenza infection based on the judgment image.
- the first judgment information is the output result from the classifier described later (S616 in FIG. 13).
- the first judgment information is also information used in the ensemble processing described later. That is, the first judgment information is information indicating the first positive rate (first possibility).
- the "second judgment information” is information indicating the judgment result of the possibility of influenza infection based on the feature amount of the judgment image, the medical interview information, and the attribute information.
- the second judgment information is the output result (S619 in FIG. 13) of the trained positive rate judgment model (S618 in FIG. 13) described later.
- the second judgment information is also information used in the ensemble processing described later.
- the second judgment information is information indicating the second positive rate (second possibility).
- the "third judgment information” is information indicating the judgment result of the possibility of influenza infection based on the first positive rate, the medical interview information, and the attribute information.
- the third judgment information is the output result (S621 in FIG. 13) of the trained positive rate judgment model (S620 in FIG. 13) described below.
- the third judgment information is also information used in the ensemble processing described below. In other words, the third judgment information is information indicating the third positive rate (third possibility).
- "Reliability information" is information that indicates the reliability of the judgment result.
- the reliability information is the results of the first judgment and the second judgment (S813 and S815 in FIG. 17) described below.
- the reliability serves as a guideline for determining whether or not various pieces of information used to output the judgment result should be used for re-learning each trained judgment model.
- the reliability is an index that indicates the likelihood of the judgment result.
- Tag information is information indicating that the reliability of the judgment result is low and that the associated data will not be used as retraining data for the trained judgment models. In the case shown in FIG. 6B, a check mark is added as a tag to the user ID "U1", so the information linked to "U1" (the top-row entries of the image management table in FIG. 6A and of the user table in FIG. 6B) will not be used as retraining data.
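As a sketch only: the ensemble of S622 could combine the three positivity rates, and the tag information could filter retraining data, roughly as follows. The weighted average and the "tagged" field are assumptions; the disclosure does not fix the ensemble method or the table schema.

```python
def ensemble_positivity(first: float, second: float, third: float,
                        weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Combine the first/second/third positivity rates into the overall
    judgment result (S622 in FIG. 13). A weighted average is one plausible
    realization, assumed here for illustration."""
    w1, w2, w3 = weights
    return w1 * first + w2 * second + w3 * third

def retraining_rows(user_table: dict) -> dict:
    """Drop rows tagged as low reliability so they are not used as
    retraining data (cf. the tag on user ID "U1" in FIG. 6B)."""
    return {uid: row for uid, row in user_table.items()
            if not row.get("tagged", False)}
```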
- the attribute information and medical interview information do not need to be input by the user or operator each time, but may be received, for example, from an electronic medical record device or other terminal device connected via a wired or wireless network. They may also be acquired by analyzing the subject image captured by the imaging device 200. Furthermore, although not specifically shown in Figures 6A and 6B, it is also possible to store in the memory 112 information on the current epidemic of infectious diseases that are the subject of diagnosis or assistance in diagnosis, such as influenza, as well as external factor information such as the results of other users' judgments about these infectious diseases and their disease status.
- each piece of information stored in Figures 6A and 6B does not necessarily have to be stored in the memory 112 of the processing device 100, but may be stored in a database device connected to the processing device 100 via a wired or wireless communication network, and may be read from the database device as the processing progresses.
- FIGS. 7A to 7C are schematic diagrams of distribution information of accumulated data stored in a processing device 100 according to an embodiment of the present disclosure. Specifically, FIGS. 7A to 7C show a convex hull that is generated to encompass multiple pieces of accumulated data.
- the horizontal and vertical axes represent the training data (accumulated data) used to train the second trained judgment model (the trained positive rate judgment model of S618).
- the horizontal axis represents the medical interview information of the training data
- the vertical axis represents the image features of the training data.
- each point plotted in FIG. 7A is determined from the medical interview information and image features associated with one user ID.
- a convex hull C1 is then formed to include all of the multiple points plotted based on the medical interview information and image features.
- the shape of the convex hull C1 is a triangle as an example, but it may have other shapes depending on the number of points to be plotted and the training data used for the vertical and horizontal axes.
- the horizontal axis represents the input to the second trained judgment model
- the vertical axis represents the output.
- Each point plotted in Figure 7B is determined from input/output data associated with one user ID.
- a convex hull C2 is then formed to include all of the multiple points plotted based on the input/output data of the second trained judgment model.
- the shape of the convex hull C2 is a pentagon as an example, but it can also be other shapes depending on the number of points to be plotted and the data used for the vertical and horizontal axes.
- the output (second positive rate) of the second trained judgment model is plotted on the first axis (X axis)
- the output (first positive rate) of the first trained judgment model (the feature extractor related to S613 and the classifier related to S615) is plotted on the second axis (Y axis)
- the input (feature values of the medical interview information and the image) of the second trained judgment model is plotted on the third axis (Z axis).
- the third axis is data (e.g., numerical values) determined from the feature values of the medical interview information and the image.
- each point plotted in FIG. 7C is determined from input/output data associated with one user ID.
- a convex hull C3 is formed so as to include all of the multiple points plotted based on the input/output data.
- the shape of the convex hull C3 is a triangular pyramid as an example, but it may have other shapes depending on the number of points to be plotted and the learning data used for the vertical and horizontal axes.
- the memory 112 may store not only the distribution information related to the accumulated data as described above (FIGS. 7A to 7C), but also the learning data of other trained models, distribution information related to input/output data, and data at the time of performance evaluation (clinical trial).
- the output of the first trained judgment model may be the output of the convolution layer (feature vector) instead of the positive rate.
- distribution information of the input (image features, average color tone, brightness, etc.) of the first trained judgment model and distribution information related to the image score that is the output of the trained judgment image selection model may be stored. Even in these cases, a convex hull of a predetermined shape is formed from the multiple plotted points. That is, in the present disclosure, the distribution information of the accumulated data includes the learning data of each learning model and that formed by appropriately selecting the input/output data, and convex hulls of various shapes are formed in each distribution information.
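One conceivable way to use such convex hulls in the reliability determination is to test whether a new sample falls inside the hull of the accumulated data; a sample inside the hull lies within the region covered by the training data. The SciPy-based inside test and all data values below are illustrative assumptions, not the disclosure's criterion.

```python
import numpy as np
from scipy.spatial import Delaunay

def inside_convex_hull(accumulated, query) -> bool:
    """Check whether a new sample falls inside the convex hull formed by
    the accumulated data (e.g., interview-information vs. image-feature
    points as in FIG. 7A)."""
    hull = Delaunay(np.asarray(accumulated))          # triangulates the hull interior
    return hull.find_simplex(np.asarray(query)) >= 0  # -1 means outside

# Illustrative 2-D data: x = interview information, y = image feature.
accumulated = np.array([[0.1, 0.2], [0.9, 0.1], [0.5, 0.95], [0.4, 0.3]])
print(inside_convex_hull(accumulated, [0.5, 0.4]))   # True  -> inside the hull
print(inside_convex_hull(accumulated, [0.0, 0.99]))  # False -> outside the hull
```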
- FIG. 8 is a diagram showing a processing sequence executed between the processing device 100, the imaging device 200, and the terminal device 300 according to an embodiment of the present disclosure. Specifically, FIG. 8 shows the processing sequence executed from when information on the user who is the subject of judgment is input on the terminal device 300 and a subject image is captured by the imaging device 200, until the judgment result (possibility of having the disease) and the determination result (reliability) are output from the processing device 100.
- the terminal device 300 stores subject information of a subject who is to be judged to be possibly afflicted with a predetermined disease, based on the operator's input received at the input interface 313 (S11).
- the subject information may include attribute information of the subject.
- the terminal device 300 stores interview information of the subject, based on the operator's input received at the input interface 313 (S12).
- the terminal device 300 stores diagnosis information of the subject, based on the operator's input received at the input interface 313 (S13).
- the terminal device 300 transmits the stored subject information, interview information, and diagnosis information to the processing device 100 via the communication interface 315 as subject-related information (T11).
- these processes may be performed after the processes in the imaging device 200 described later, or may be performed simultaneously, and the order is not limited. Also, any information may be input for S11 to S13.
- upon receiving an input from the operator via the input interface 210 (e.g., a power button), the image capturing device 200 starts the camera 211, etc. (S21). Then, based on the operator's instruction input received by the input interface 210, the image capturing device 200 selects the user information of the user to be photographed from the subject information, received from the processing device 100, of subjects not yet photographed (S22). Next, the image capturing device 200 determines whether the auxiliary tool 500 is attached, and if it is not yet attached, outputs an attachment display via the display panel 215 to encourage its attachment (S23). Note that this display is merely an example, and the attachment may be encouraged by other means such as sound, flashing light, or vibration.
- the image capturing device 200 captures a subject image based on the operator's instruction input received by the input interface 210 (S25).
- the imaging device 200 stores the photographed subject image in the memory 214 in association with the user ID information, and outputs the photographed subject image on the display panel 215 of the display (S26).
- when the imaging device 200 receives an input from the operator indicating the end of photographing via the input interface 210, it transmits the stored subject image (T21) to the processing device 100 via the communication interface 216, in association with the user ID information.
- when the processing device 100 receives the subject-related information and the subject image via the communication interface 113, it stores them in the memory 112 and registers them in the image management table based on the user ID information.
- the processing device 100 selects a judgment image to be used to judge the possibility of influenza infection from the stored subject images (S31).
- the processing device 100 executes a judgment process of the possibility of influenza infection using the selected judgment image and the subject-related information (S32).
- the processing device 100 stores the judgment result obtained in association with the user ID information in the user table, and also stores other information related to the judgment result (feature information, score information, first judgment information, second judgment information, third judgment information, etc.) in the memory 112 and outputs it via the communication interface 113 (S33). After that, the processing device 100 executes a judgment process of the reliability of the obtained judgment result (S34). Then, when the judgment result of the reliability is obtained, the processing device 100 stores it in the user table and outputs it via the communication interface 113 (S35). This completes the processing sequence.
- FIG. 9 is a diagram showing a processing flow executed in the imaging device 200 according to an embodiment of the present disclosure. Specifically, FIG. 9 shows a processing flow executed at a predetermined cycle for the processing related to S21 to S26 in FIG. 8. The processing flow is mainly performed by the processor 213 of the imaging device 200 reading and executing a program stored in the memory 214.
- the processor 213 determines whether or not an input from the operator has been accepted via the input interface 210 (e.g., a power button) (S211). At this time, if the processor 213 determines that an input from the operator has not been accepted, the processing flow ends.
- if the processor 213 determines that an input by the operator has been accepted, it starts up the photographing device 200 (S211). Then, when the processor 213 receives, from the processing device 100 via the communication interface 216, subject information of subjects whose subject images have not yet been photographed, it outputs that subject information as a list on the display panel 215. The processor 213 accepts the selection of the subject information of the subject to be photographed from the list via the input interface 210 (S212).
- the processor 213 starts the camera 211 and determines whether the auxiliary tool 500 is normally attached to the photographing device 200, and if it is determined that the auxiliary tool 500 is not normally attached, outputs an attachment display to encourage the user to attach the auxiliary tool 500 (S213). Then, the processor 213 determines whether the auxiliary tool 500 is attached at a predetermined interval while the auxiliary tool 500 attachment display screen is output, and if it is determined that the auxiliary tool 500 is normally attached, proceeds to the next process. After determining that the auxiliary tool 500 is normally attached, when the operator's operation to start shooting is accepted via the input interface 210 (e.g., the shooting button), the processor 213 controls the camera 211 to start shooting the subject image of the subject (S214).
- This subject image is captured by continuously shooting a certain number of images (e.g., 30) at a certain interval once the shooting button is pressed.
- when the processor 213 has finished capturing the subject image, it stores the captured subject image in the memory 214 in association with the read user ID information. The processor 213 then outputs the stored subject image on the display panel 215 (S215).
- the operator can remove the photographing device 200 together with the auxiliary tool 500 from the oral cavity, check the subject image output on the display panel 215, and input an instruction to retake the image if the desired image has not been obtained. Accordingly, the processor 213 determines whether or not an instruction to retake the image has been received from the operator via the input interface 210 (S216). If a retake instruction has been received, the processor 213 displays the standby screen of S215 again, making it possible to photograph the subject image once more.
- if no retake instruction has been received, the processor 213 transmits the subject image stored in the memory 214, together with the associated user ID information, to the processing device 100 via the communication interface 216 (S217). This ends the processing flow.
- FIG. 10 is a diagram showing a process flow executed in the processing device 100 according to an embodiment of the present disclosure. Specifically, Fig. 10 is a diagram showing a process flow executed for the processes related to S31 to S33 in Fig. 8. The process flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- when the processor 111 receives a subject image and the associated user ID information from the image capture device 200, it stores the image in the memory 112 and registers it in the image management table (S311). Then, when the processor 111 receives, via the communication interface 113, the user ID information or the corresponding attribute information (e.g., name) accepted by the terminal device 300, it selects a user to be judged for the possibility of contracting influenza (S312). At this time, if multiple pieces of user ID information and corresponding subject images have been received from the image capture device 200, the multiple pieces of user ID information or the corresponding attribute information may be output so that one of the users can be selected.
- when the processor 111 selects a user, it reads the attribute information associated with the user ID information of that user from the user table in the memory 112 (S313). Similarly, the processor 111 reads the medical interview information associated with the user ID information of the user to be judged from the user table in the memory 112 (S314).
- the processor 111 reads out the subject image associated with the user ID information of the selected user from the memory 112, and executes a process of selecting a judgment image to be used in judging the possibility of having influenza (S315: details of this selection process will be described later). The processor 111 then executes a process of judging the possibility of having influenza based on the selected judgment image (S316: details of this judgment process will be described later).
- the processor 111 stores the judgment result in the user table in association with the user ID information, and outputs the judgment result via the communication interface 113 (S317). This ends the processing flow.
- FIG. 11 is a diagram showing a processing flow executed in the processing device 100 according to an embodiment of the present disclosure. Specifically, FIG. 11 is a diagram showing details of the judgment image selection process executed in S315 of FIG. 10. This processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- the processor 111 reads out from the memory 112 a subject image associated with the user ID information of the selected user (S411). Next, the processor 111 selects an image that is a candidate for a judgment image from among the read out subject images (S412). As an example, this selection is performed using a learned judgment image selection model.
- FIG. 12 is a diagram showing a process flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 12 is a diagram showing a process flow for generating a trained judgment image selection model used in S412 of FIG. 11. This process flow may be executed by the processor 111 of the processing device 100, or may be executed by a processor of another processing device.
- the processor executes a step of acquiring a subject image of a subject including at least a part of the pharynx as a learning subject image (S511).
- the processor executes a processing step of assigning label information to the acquired learning subject image indicating whether the image can be used as a judgment image (S512).
- the processor then executes a step of storing the assigned label information in association with the learning subject image (S513).
- the label assignment process and label information storage process may be performed by having a human determine beforehand whether each learning subject image qualifies as a judgment image, with the processor storing the result in association with the learning subject image, or by having the processor analyze whether the image qualifies as a judgment image using a known image analysis process and store the result in association with the learning subject image.
- the label information is assigned based on viewpoints such as whether at least a part of the oral cavity, which is the subject, is captured, and whether the image quality is degraded by camera shake, defocus, cloudiness, or the like.
- the processor 111 executes a step of performing machine learning of the selection pattern of the judgment image using them (S514).
- the machine learning is performed by providing a set of the training subject images and the label information to a neural network composed of a combination of neurons, and repeating learning while adjusting the parameters of each neuron so that the output of the neural network is the same as the label information.
- a step of acquiring a trained judgment image selection model (e.g., neural network and parameters) is executed (S515).
- the acquired trained judgment image selection model may be stored in the memory 112 of the processing device 100 or in another processing device connected to the processing device 100 via a wired or wireless network.
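- as a non-limiting illustration, the training step of S514 might look like the following minimal sketch, assuming PyTorch, a small binary classifier, and dummy stand-ins for the labeled learning subject images; the architecture, image size, and hyper-parameters are illustrative assumptions, not the configuration of the present disclosure.

```python
import torch
import torch.nn as nn

# A small CNN that scores whether a subject image is usable as a judgment image.
class SelectionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)  # raw logit; sigmoid is applied inside the loss

    def forward(self, x):
        return self.head(self.features(x))

model = SelectionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# Dummy stand-ins for the learning subject images and their labels (S513).
images = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

# Parameters are adjusted so the output approaches the label information (S514).
for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```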
- the processor 111 inputs the subject image read out in S411 to the learned judgment image selection model, thereby acquiring as output a candidate image that is a candidate for the judgment image.
- This makes it possible to select an image that shows at least a partial area of the oral cavity of the subject and has good image quality, with no camera shake, defocus, subject motion blur, improper exposure, or cloudiness. Furthermore, images with good image quality can be selected consistently, regardless of the operator's photographing skill.
- the processor 111 registers the acquired candidate image that is a candidate for the judgment image in the image management table. At this time, the processor 111 registers not only the image in question, but also the feature quantities and score of the judgment image in the image management table.
- the processor 111 executes a process of selecting a judgment image from the selected candidate images based on similarity (S413). Specifically, the processor 111 compares the obtained candidate images to calculate the similarity between each of the candidate images. The processor 111 then selects a candidate image that is determined to have a low similarity to the other candidate images as the judgment image.
- the similarity between such candidate images is calculated by a method using local features in each candidate image (Bag-of-Keypoints method), a method using Earth Mover's Distance (EMD), a method using Support Vector Machine (SVM), a method using Hamming distance, a method using cosine similarity, or the like.
- the processor 111 registers the candidate image selected based on the similarity as a judgment image in the image management table (S414).
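- as a minimal sketch of the similarity-based selection of S413, the following assumes each candidate image has already been reduced to a feature vector and uses cosine similarity, one of the measures listed above; the threshold is an illustrative assumption.

```python
import numpy as np

def select_judgment_images(features, threshold=0.9):
    """Keep candidates whose cosine similarity to every already-kept
    candidate stays below the threshold, dropping near-duplicates."""
    kept_idx, kept_vecs = [], []
    for i, f in enumerate(features):
        f = f / np.linalg.norm(f)  # normalize so the dot product is cosine similarity
        if all(float(f @ g) < threshold for g in kept_vecs):
            kept_idx.append(i)
            kept_vecs.append(f)
    return kept_idx  # indices of the candidates selected as judgment images

candidates = [np.random.rand(128) for _ in range(10)]
judgment_indices = select_judgment_images(candidates)
```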
- the subject image, the candidate image, and the judgment image may each be one or more.
- when multiple judgment images are used, the judgment accuracy can be improved compared to the case where only one judgment image is used.
- each time a subject image is photographed, the photographed subject image may be sent to the processing device 100, where the candidate images and judgment images are then selected; alternatively, the candidate images and judgment images may be selected in the photographing device 200. In either case, photographing can be terminated when a predetermined number of judgment images (for example, about five) have been obtained.
- FIG. 13 is a diagram showing a processing flow executed in the processing device 100 according to an embodiment of the present disclosure. Specifically, FIG. 13 is a diagram showing details of the process for determining the possibility of influenza, which is executed in S316 of FIG. 10. This processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- the processor 111 obtains a determination result by performing ensemble processing of the first positivity rate (first possibility of infection), the second positivity rate (second possibility of infection), and the third positivity rate, each of which is obtained using a different method.
- the processor 111 reads out from the memory 112 a judgment image associated with the user ID information of the user to be judged (S611).
- the judgment image is the image selected in S315 of FIG. 10.
- the processor 111 then performs a predetermined pre-processing (S612) on the judgment image that has been read out.
- Such pre-processing is selected according to purposes such as high definition, region extraction, noise removal, edge enhancement, image correction, and image conversion, and includes: filter processing (band-pass filters including high-pass and low-pass filters, averaging filters, Gaussian filters, Gabor filters, Canny filters, Sobel filters, Laplacian filters, median filters, and bilateral filters); blood vessel extraction processing using Hessian matrices; segmentation processing of specific regions (e.g., follicles) using machine learning; trimming processing for the segmented region; haze removal processing; super-resolution processing; and combinations thereof.
- the processor 111 may also perform color tone conversion processing for each subject image based on information related to the correction of the camera 211 and the light source 212 transmitted from the image capturing device 200. This makes it possible to adjust the color tone, etc., between the judgment image and the image of the learning data of the trained model, thereby making it possible to further improve the judgment accuracy of each trained model.
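- a minimal sketch of such pre-processing, assuming OpenCV and picking only a small, illustrative subset of the filters listed above (median denoising, Gaussian smoothing, Canny edge detection), might look as follows.

```python
import cv2

def preprocess(image_bgr):
    denoised = cv2.medianBlur(image_bgr, 5)           # noise removal
    smoothed = cv2.GaussianBlur(denoised, (5, 5), 0)  # smoothing
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # edge enhancement
    return smoothed, edges

# Example usage (path is a hypothetical placeholder):
# judgment_image = cv2.imread("judgment_image.png")
# smoothed, edges = preprocess(judgment_image)
```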
- the processor 111 then provides the preprocessed judgment image as an input to a feature extractor (S613), and obtains the image features of the judgment image as an output (S614).
- the processor 111 then provides the features of the obtained judgment image as an input to a classifier (S615), and obtains a first positive rate indicating a first possibility of influenza infection as an output (S616).
- the feature extractor can obtain a predetermined number of features, such as the presence or absence of follicles and redness in the judgment image, as vectors. As an example, feature vectors are extracted from the judgment image, and these are stored as features of the judgment image.
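- a minimal sketch of S613 to S616, assuming a convolutional feature extractor followed by a small classifier head in PyTorch; the layer sizes and image resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)  # yields a fixed-length feature vector per judgment image (S614)

classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

judgment_image = torch.randn(1, 3, 224, 224)  # stands in for the preprocessed image of S612
features = feature_extractor(judgment_image)  # image features (S614)
logit = classifier(features)                  # classifier output (S615)
first_positivity_rate = torch.sigmoid(logit)  # first possibility of infection (S616)
```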
- FIG. 14 is a diagram showing a process flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 14 is a diagram showing a process flow for generating a trained positivity rate judgment model including the feature extractor of S613 and the classifier of S615 in FIG. 13. This process flow may be executed by the processor 111 of the processing device 100, or may be executed by a processor of another processing device.
- the processor executes a step of acquiring an image of a subject including at least a part of the pharynx, which has been pre-processed in the same manner as in S612 of FIG. 13, as a judgment image for learning (S711).
- the processor executes a processing step of assigning a correct answer label to the user who is the subject of the acquired judgment image for learning, which has been assigned in advance based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation and culture test, etc. (S712).
- the processor executes a step of storing the assigned correct answer label information as judgment result information in association with the judgment image for learning (S713).
- the processor executes a step of performing machine learning of the positive rate judgment pattern using them (S714).
- the machine learning is performed by providing a pair of the learning judgment images and the correct label information to a feature extractor composed of a convolutional neural network and a classifier composed of a neural network, and repeating learning while adjusting the parameters of each neuron so that the output from the classifier is the same as the correct label information.
- a step of acquiring a trained positive rate judgment model is executed (S715).
- the acquired trained positive rate judgment model may be stored in the memory 112 of the processing device 100 or in another processing device connected to the processing device 100 via a wired or wireless network.
- the processor 111 inputs the judgment image preprocessed in S612 into the learned positivity rate judgment model, thereby obtaining as output the feature amount of the judgment image (S614) and a first positivity rate (S616) indicating the first possibility of contracting influenza, and stores them in the memory 112 in association with the user ID information.
- the processor 111 reads out from the memory 112 at least one of the medical interview information and attribute information associated with the user ID information of the user to be judged (S617).
- the processor 111 also reads out from the memory 112 the feature amount of the judgment image calculated in S614 and stored in the memory 112 in association with the user ID information.
- the processor 111 then provides the read out medical interview information and/or attribute information and the feature amount of the judgment image as inputs to the trained positive rate judgment model (S618), and acquires as an output a second positive rate indicating a second possibility of contracting influenza (S619).
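- a minimal sketch of S617 to S619, assuming the image feature vector is concatenated with numerically encoded medical interview and attribute items before a small neural network; the encoding, item choice, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

image_features = torch.randn(1, 64)           # feature amount of the judgment image (S614)
interview = torch.tensor([[38.5, 1.0, 0.0]])  # e.g. body temperature, headache, cough (encoded)
attributes = torch.tensor([[34.0, 1.0]])      # e.g. age, sex (encoded)

second_model = nn.Sequential(
    nn.Linear(64 + 3 + 2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

x = torch.cat([image_features, interview, attributes], dim=1)
second_positivity_rate = second_model(x)      # second possibility of infection (S619)
```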
- FIG. 15 is a diagram showing a process flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 15 is a diagram showing a process flow for generating the trained positivity rate judgment model used in S618 of FIG. 13. This process flow may be executed by the processor 111 of the processing device 100, or may be executed by a processor of another processing device.
- the processor executes a step of acquiring, as learning features, feature amounts extracted from a judgment image obtained by applying pre-processing similar to S612 of FIG. 13 to an image of a subject that includes at least a part of the pharynx (S721).
- the processor also executes a step of acquiring medical interview information and attribute information that have been stored in advance in association with user ID information of the user who is the subject of the judgment image (S721).
- the processor executes a processing step of assigning a correct answer label to the user who is the subject of the judgment image, which has been assigned in advance based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation and culture test, etc. (S722).
- the processor then executes a step of storing the assigned correct answer label information as judgment result information in association with the learning features of the judgment image, as well as the medical interview information and attribute information (S723).
- the processor executes a step of performing machine learning of a positive rate judgment pattern using them (S724).
- the machine learning is performed by providing these sets of information to a neural network that combines neurons, and repeating learning while adjusting the parameters of each neuron so that the output from the neural network is the same as the correct label information.
- a step of acquiring a trained positive rate judgment model is executed (S725).
- the acquired trained positive rate judgment model may be stored in the memory 112 of the processing device 100 or in another processing device connected to the processing device 100 via a wired or wireless network.
- the processor 111 inputs the feature amount of the judgment image read out in S614 and at least one of the medical interview information and attribute information read out in S617 to the trained positivity rate judgment model, thereby obtaining as output a second positivity rate (S619) indicating a second possibility of influenza infection, and stores this in the memory 112 in association with the user ID information.
- the processor 111 reads out from the memory 112 at least one of the medical interview information and attribute information associated with the user ID information of the user to be judged (S617).
- the processor 111 also reads out from the memory 112 the first positivity rate calculated in S616 and stored in the memory 112 in association with the user ID information.
- the processor 111 then provides the read out medical interview information and/or attribute information and the first positivity rate as inputs to the trained positivity rate judgment model (S620), and acquires as an output a third positivity rate indicating a third possibility of contracting influenza (S621).
- FIG. 16 is a diagram showing a process flow for generating a trained model according to an embodiment of the present disclosure. Specifically, FIG. 16 is a diagram showing a process flow for generating the trained positivity rate judgment model used in S620 of FIG. 13. This process flow may be executed by the processor 111 of the processing device 100, or may be executed by a processor of another processing device.
- the processor executes a step of acquiring first positivity rate information, which is obtained by inputting, into a trained positivity rate judgment model including a feature extractor (S613 of FIG. 13) and a classifier (S615 of FIG. 13), a judgment image produced by applying preprocessing similar to S612 of FIG. 13 to an image of a subject including at least a part of the pharynx (S731).
- the processor also executes a step of acquiring interview information and attribute information previously stored in association with user ID information of the user who is the subject of the judgment image (S731).
- the processor executes a processing step of assigning a correct answer label to the user who is the subject of the judgment image based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation and culture test, etc. (S732). Then, the processor executes a step of storing the assigned correct answer label information as judgment result information in association with the first positivity information, and the interview information and attribute information (S733).
- the processor executes a step of performing machine learning of a positivity rate determination pattern using them (S734).
- the machine learning is performed by providing these sets of information to a neural network that combines neurons, and repeating learning while adjusting the parameters of each neuron so that the output from the neural network is the same as the correct label information.
- a step of acquiring a trained positivity rate determination model is executed (S735).
- the acquired trained positivity rate determination model may be stored in the memory 112 of the processing device 100 or in another processing device connected to the processing device 100 via a wired or wireless network.
- the processor 111 inputs the first positivity rate information read in S616 and at least one of the medical interview information and attribute information read in S617 to the trained positivity rate determination model, thereby obtaining as output a third positivity rate (S621) indicating a third possibility of influenza infection, and stores it in the memory 112 in association with the user ID information.
- the processor 111 reads each positivity rate from the memory 112 and performs ensemble processing (S622).
- the obtained first positivity rate, second positivity rate, and third positivity rate are input to a ridge regression model, and the ensemble result of each positivity rate is obtained as a determination result of the possibility of contracting influenza (S623).
- the ridge regression model used in S622 is generated by machine learning performed by the processor 111 of the processing device 100 or by a processor of another processing device. Specifically, the processor acquires the first positive rate, the second positive rate, and the third positive rate from the learning judgment images. The processor also assigns, in advance, a correct answer label to the user who is the subject of the learning judgment images, based on the results of a rapid influenza test by immunochromatography, a PCR test, a virus isolation and culture test, or the like. The processor then provides sets of the positive rates and the corresponding correct answer labels to the ridge regression model, and repeats learning while adjusting the parameters given to each positive rate so that the output of the ridge regression model is the same as the correct answer label information. As a result, a ridge regression model used for ensemble processing is obtained and stored in the memory 112 of the processing device 100 or in another processing device connected to the processing device 100 via a wired or wireless network.
- for the ensemble processing, any method may be used, such as a process for obtaining the average value of the positive rates, a process for obtaining the maximum value, a process for obtaining the minimum value, a process for weighted addition, or a process using other machine learning methods such as bagging, boosting, stacking, lasso regression, and linear regression.
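- a minimal sketch of the ridge regression ensemble of S622 and S623, assuming scikit-learn; the training rates and labels below are dummy placeholders for the learning data described above.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Each training row holds (first, second, third) positivity rates; each label
# is the test-confirmed outcome (0 = negative, 1 = positive).
X_train = np.array([[0.8, 0.7, 0.9],
                    [0.2, 0.3, 0.1],
                    [0.6, 0.5, 0.7],
                    [0.1, 0.2, 0.2]])
y_train = np.array([1, 0, 1, 0])

ensemble = Ridge(alpha=1.0).fit(X_train, y_train)

rates = np.array([[0.75, 0.65, 0.80]])     # rates obtained for the user being judged
judgment_result = ensemble.predict(rates)  # ensembled possibility of infection (S623)
```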
- the processor 111 stores the determination result thus obtained in the user table in the memory 112 in association with the user ID information (S624). This ends the processing flow.
- ensemble processing is performed on the first positive rate, the second positive rate, and the third positive rate to obtain a final judgment result.
- each positive rate may be used as the final judgment result as is, or ensemble processing may be performed using any two positive rates to obtain the final judgment result.
- other positive rates obtained by other methods may be further added and ensemble processing may be performed to obtain the final judgment result.
- the obtained judgment result is output via the communication interface 113, but it is also possible to output only the final judgment result, or to output each positive rate together.
- FIG. 17 is a diagram showing a processing flow executed in the processing device 100 according to an embodiment of the present disclosure. Specifically, Fig. 17 is a diagram showing a processing flow executed for the processing related to S34 to S35 in Fig. 8. The processing flow is mainly performed by the processor 111 of the processing device 100 reading and executing a program stored in the memory 112.
- the processor 111 reads data from the memory 112 that will be used to determine the reliability of the determination result, which is the possibility of contracting a specified disease (S811). For example, the processor 111 reads the first positive rate obtained in S616 of FIG. 13 and the second positive rate obtained in S618 from the memory 112 as data to be used in the first determination described below. More specifically, the processor 111 reads the first positive rate stored as the first determination information stored in the user table in S32 of FIG. 8 and the second positive rate stored as the second determination information. The processor 111 also reads the interview information read in S617 of FIG. 13, the image feature amount in S614, and the distribution information of the accumulated data shown in FIG. 7A as data to be used in the second determination described below.
- the processor 111 performs a first judgment process for judging the reliability of the judgment results (positive rate) in S32 and S33 of FIG. 8 (S812). Specifically, the processor 111 compares the first positive rate, which is the output of the first trained judgment model, with the second positive rate, which is the output of the second trained judgment model, and calculates the difference, which is the comparison result. The processor 111 then determines the reliability according to the calculated difference.
- the reliability may be a numerical value determined according to the calculated difference.
- the processor 111 determines whether the judgment result (reliability) from the first judgment process in S812 is lower than a predetermined standard (S813). For example, the processor 111 determines whether the reliability calculated numerically is lower than a predetermined preset threshold. The threshold may be determined based on past judgment results, a doctor's opinion on the past judgment results, etc. If the judgment result from the first judgment process is lower than the predetermined standard (S813: Yes), the judgment result of the positive rate is deemed suspicious, and the second judgment process and related processes are skipped.
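- a minimal sketch of the first judgment process of S812 and S813, assuming the reliability is derived from the absolute difference between the two positivity rates; the mapping from difference to reliability and the threshold are illustrative assumptions.

```python
def first_reliability(first_rate: float, second_rate: float, threshold: float = 0.7):
    diff = abs(first_rate - second_rate)       # comparison result of S812
    reliability = 1.0 - diff                   # smaller disagreement -> higher reliability
    meets_standard = reliability >= threshold  # S813: compare against a preset standard
    return reliability, meets_standard

reliability, ok = first_reliability(0.82, 0.74)  # ok is False when the rates disagree strongly
```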
- the processor 111 performs a second judgment process to judge the reliability of the judgment results (positive rate) in S32 and S33 of FIG. 8 (S814). Specifically, the processor 111 judges where the point determined by the read-out medical interview information and image feature amount is located in the distribution information of the accumulated data shown in FIG. 7A. That is, as shown in FIG. 18, the point determined by the read-out medical interview information and image feature amount is identified as point α1 or point α2.
- FIG. 18 is a schematic diagram of the reliability judgment process according to one embodiment of the present disclosure.
- the processor 111 judges whether the judgment result (position of the point) by the second judgment process of S814 satisfies a predetermined criterion (S815).
- the predetermined criterion is, for example, whether the point determined in S814 is located inside the convex hull C1 shown in FIG. 18. If the judgment result by the second judgment process satisfies the predetermined criterion (S815: Yes), the information input to the second trained judgment model is deemed appropriate, the reliability of the positive rate judgment result is deemed high, and the process proceeds to S818. In FIG. 18, if point α1 is determined as the judgment result of S814, the reliability of the positive rate judgment result is deemed high, and the process proceeds to S818.
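- a minimal sketch of the convex hull criterion of S814 and S815, assuming the accumulated data points and the query point live in the same two-dimensional feature space; scipy's Delaunay triangulation provides a point-in-convex-hull test.

```python
import numpy as np
from scipy.spatial import Delaunay

accumulated = np.random.rand(50, 2)     # e.g. (medical interview value, image feature)
hull = Delaunay(accumulated)            # triangulates the region bounded by the convex hull C1

point = np.array([0.4, 0.6])            # point determined in S814
inside = hull.find_simplex(point) >= 0  # find_simplex returns -1 for points outside the hull
# inside == True corresponds to a point such as α1: reliability is deemed high (S815)
```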
- if the judgment result by the second judgment process does not satisfy the predetermined criterion (S815: No), the processor 111 determines that the judgment result of the positive rate is suspicious, decides not to use the information related to the judgment result for additional learning of the trained judgment model, and attaches tag information indicating that it will not be used (S816). Conversely, if the judgment result from the second judgment process meets the predetermined standard (S815: Yes), the information input to the second trained judgment model is deemed appropriate, the judgment result of the positive rate is also deemed appropriate, and the tag information is not attached.
- the processor 111 outputs the result of the determination regarding the reliability via the communication interface 113 (S817).
- when the reliability is determined to be low, the processor 111 may determine that the result of the positive rate determination is inappropriate, display on the screen that the reliability of the determination result is low, and notify the user, operator, doctor, etc. More specifically, the processor 111 may display that there may be an input error in the medical interview information and prompt the user, operator, doctor, etc. to check the medical interview information.
- the processor 111 may not only simply display that the reliability is low, but may also display the difference in the first determination process and information related to the second determination process as additional information.
- the processor 111 may display that a retest using the same method or an additional test using another method (rapid influenza test using immunochromatography or PCR test) is recommended, and prompt the user, operator, doctor, etc. to take additional measures.
- the processor 111 stores the judgment result of the first judgment process in the memory 112, and also stores the judgment result of the second judgment process if it is performed. Specifically, the processor 111 stores these judgment results in the reliability information shown in FIG. 6B.
- in the above, the processor 111 performs the first judgment process to judge the reliability of the judgment result, and performs the second judgment process according to the judgment result of the first judgment process; however, the present disclosure is not limited to this.
- the processor 111 may perform the second judgment process regardless of the judgment result of the first judgment process. That is, the processor 111 may comprehensively judge the reliability of the judgment result based on the judgment results of the first judgment process and the second judgment process. In this case, the processor 111 may score each of the judgment results of the first judgment process and the second judgment process, and determine the reliability according to the calculated score.
- in the above, the processor 111 judges whether the judgment result (position of the point) by the second judgment process is located inside the convex hull C1 shown in FIG. 18; however, the present disclosure is not limited to this.
- the process of S815 may be performed by a method as shown in FIG. 19.
- FIG. 19 is a schematic diagram of the reliability judgment process according to one embodiment of the present disclosure. As shown in FIG. 19, the distance L1 between the position-determined point α1 and the convex hull C1 may be calculated, and if the distance L1 is equal to or greater than a predetermined value, the reliability of the judgment result may be judged to be low.
- conversely, if the distance L1 is less than the predetermined value, the processor 111 does not judge that the reliability of the judgment result is low, and makes the related information available as data for further learning.
- the calculated distance from the convex hull may be stored in the memory 112 together with the judgment result of the second judgment process.
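- a minimal sketch of the distance-based variant of FIG. 19, assuming scipy's ConvexHull; the maximum signed facet distance is used here as a simple stand-in for the distance L1, and the threshold is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

accumulated = np.random.rand(50, 2)
hull = ConvexHull(accumulated)

def distance_outside(point, hull):
    # equations holds [normal | offset] per facet; normal @ x + offset <= 0 inside the hull
    d = hull.equations[:, :-1] @ point + hull.equations[:, -1]
    return max(float(d.max()), 0.0)  # 0.0 for points inside the hull

L1 = distance_outside(np.array([1.4, 1.6]), hull)
reliability_low = L1 >= 0.5          # FIG. 19 criterion with an assumed predetermined value
```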
- although the processor 111 uses the distribution information of the learning data as shown in FIG. 7A and FIG. 18 as the distribution information of the accumulated data used in the second judgment process, the present disclosure is not limited to this.
- the processor 111 may use the distribution information of the accumulated data as shown in FIG. 7B or FIG. 7C instead of the distribution information of the learning data as shown in FIG. 7A and FIG. 18.
- in that case, the processor 111 selects the input of the second trained judgment model (medical interview information, attribute information) and the output of the second trained judgment model (second judgment information) as the data to be read in S811.
- alternatively, the processor 111 selects the output of the first trained judgment model (first judgment information) as the data to be read in S811, in addition to the input and output of the second trained judgment model.
- the reliability judgment process was performed using the input/output data of the first and second trained judgment models and the accumulated data, but the input/output data of the third trained judgment model may also be used.
- the input/output data of the first, second, and third trained judgment models and the accumulated data may be appropriately selected to determine the combination to be used for the reliability judgment.
- when K trained judgment models are used, the reliability judgment process may be performed by comparing the output data among the K models.
- processor 111 may read and use from memory 112 the training data of other trained models and distribution information related to input/output data, rather than the distribution information related to the accumulated data shown in Figures 7A to 7C.
- processor 111 may use distribution information of the output (feature vector) of the convolutional layer, distribution information of the input (image features, average hue, luminance, etc.) of the first trained judgment model, and distribution information related to the image score that is the output of the trained judgment image selection model, to perform the processes of S814 and S815.
- the predetermined criteria in S815 may differ depending on the distribution information of the accumulated data used.
- processor 111 may use multiple pieces of distribution information of the accumulated data in S814 and S816. For example, processor 111 may use the two pieces of distribution information of the accumulated data in FIG. 7A and FIG. 7B, execute the process of S814 for each piece of distribution information, and determine whether the position of the point in each piece of distribution information is within the convex hull or the distance from the convex hull is less than or equal to a predetermined value. In such a case, processor 111 may determine that the reliability is low if the judgment result in any one of the judgments does not satisfy the predetermined criterion, or may determine that the reliability is low if multiple judgment results do not satisfy the predetermined criterion.
- the processor 111 may perform either the first judgment process or the second judgment process and judge the reliability of the judgment result. In other words, the required reliability of the judgment result, the load for judging the reliability, etc. may be taken into consideration, and an optimal judgment process or combination of judgment processes may be performed.
- the processor 111 may also execute the second judgment process using reference data other than the accumulated data as described above.
- when the reliability is determined to be low, the processor 111 may delete the information related to the judgment result or refrain from storing it in the memory 112. Alternatively, when the reliability is determined to be low, the processor 111 may store the information related to the judgment result in a storage device, such as a memory, separate from the re-learning data. This prevents the information from being stored as re-learning data, reducing the amount of stored data while preventing a decrease in the accuracy of re-learning.
- in the above, the reliability of the judgment result is merely determined to be high or low; however, the processor 111 may also determine the possibility of an input error in the information used in the judgment process of the possibility of morbidity and notify the user of that determination.
- the processor 111 may determine the possibility of an input error in the medical interview information.
- an input error in the medical interview information refers to an input error in each element constituting the medical interview information (e.g., body temperature, headache, etc.), and not an input error in the medical interview information as a whole.
- the processor 111 can determine an input error in the information used in the judgment process of the possibility of morbidity for each input element.
- for example, in the multidimensional space of the medical interview information, the processor 111 sets point a to represent the input medical interview information, sets point b as the point on the surface of the convex hull of the learning data distribution at the shortest distance from point a, and computes the difference vector a - b from point a and point b. The processor 111 may then determine that a vector element (medical interview item) with a large difference is likely to be an input error. Here, the processor 111 may take into account the differences in units between the medical interview items when evaluating the difference.
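- a minimal sketch of this per-item check, assuming point b has already been found and that a per-item scale absorbs the unit differences mentioned above; the item names, values, and scales are illustrative assumptions.

```python
import numpy as np

item_names = ["body_temperature", "headache", "cough"]
scale = np.array([4.0, 1.0, 1.0])  # per-item scaling to absorb unit differences

a = np.array([41.5, 1.0, 0.0])     # input medical interview information (point a)
b = np.array([38.2, 1.0, 0.0])     # nearest point on the learning-data hull surface (point b)

diff = np.abs(a - b) / scale                # unit-adjusted difference vector a - b
suspect = item_names[int(np.argmax(diff))]  # the item most likely to be an input error
```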
- in FIG. 13, a case has been described in which information indicating the possibility of influenza is output using at least one of medical interview information and attribute information.
- external factor information related to influenza may be used to output information indicating the possibility of influenza.
- Such external factor information includes the results of judgments made on other users, the results of diagnoses by doctors, and influenza epidemic information in the area to which the user belongs.
- the processor 111 obtains such external factor information from other processing devices via the communication interface 113, and provides the external factor information as input to the trained positive rate judgment model, making it possible to obtain a positive rate that takes the external factor information into consideration.
- the medical interview information and attribute information are input in advance by the operator or user, or are received from an electronic medical record device or the like connected to a wired or wireless network.
- this information may be obtained from a captured subject image.
- the attribute information and medical interview information associated with the learning subject image are given as correct answer labels to the learning subject image, and a learned information estimation model is obtained by machine learning these pairs using a neural network.
- the processor 111 then gives the subject image as an input to the learned information estimation model, thereby obtaining the desired medical interview information and attribute information.
- Examples of such medical interview information and attribute information include gender, age, degree of pharyngeal redness, degree of tonsillar swelling, and the presence or absence of white fur. This saves the operator the trouble of inputting the medical interview information and attribute information.
- follicles that appear in the pharynx are a characteristic sign of influenza, and are also confirmed by visual inspection in doctors' diagnoses. Therefore, a labeling process is performed on the learning subject image by a doctor's operation input to the learning subject image. Then, position information (shape information) of the labels in the learning subject image is obtained as learning position information, and a trained region extraction model is obtained by machine learning the set of the learning subject image and the labeled learning position information using a neural network.
- the processor 111 provides the subject image as an input to the trained region extraction model, thereby outputting position information (shape information) of the region of interest (i.e., follicles).
- the processor 111 then stores the obtained position information (shape information) of the follicles as medical interview information.
- as an example, the processor 111 receives, as the subject image, a video shot over a predetermined period. The processor 111 then extracts each of the RGB color components from each frame constituting the received video to obtain the luminance of the G (green) component. The processor 111 generates a luminance waveform of the G component in the video from the luminance of the G component of each obtained frame, and estimates the heart rate from the peak values. Note that this method utilizes the fact that hemoglobin in blood absorbs green light; the heart rate may naturally be estimated by other methods. The processor 111 then stores the heart rate estimated in this manner as medical interview information.
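- a minimal sketch of this heart-rate estimation, assuming the video has been decoded into BGR frames (e.g., via OpenCV) at a known frame rate; the peak-detection parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_heart_rate(frames_bgr, fps=30.0):
    # Mean G-channel luminance per frame forms the waveform described above.
    g = np.array([frame[:, :, 1].mean() for frame in frames_bgr])
    g = g - g.mean()                              # remove the DC component
    peaks, _ = find_peaks(g, distance=fps * 0.4)  # minimum 0.4 s between beats (~150 bpm cap)
    duration_s = len(g) / fps
    return 60.0 * len(peaks) / duration_s         # estimated beats per minute

# Dummy frames stand in for a video read with cv2.VideoCapture.
frames = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(150)]
bpm = estimate_heart_rate(frames)
```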
- processor 111 may read the judgment image from memory 112 and provide the read judgment image as input to the feature extractor without preprocessing. Also, even when preprocessing is performed, processor 111 may provide both the preprocessed judgment image and the non-preprocessed judgment image as input to the feature extractor.
- judgment images that have been preprocessed similarly to S612 in FIG. 13 are used as training data.
- judgment images that have not been preprocessed may be used as training data.
- the trained models described in FIGS. 12 and 14 to 16, etc. may use methods based on neural networks, such as convolutional neural networks, multi-layer perceptrons (MLP), LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), GNN (Graph Neural Network), and Transformer; methods using gradient boosting decision trees (GBDT), such as LightGBM (Light Gradient Boosting Machine), XGBoost, and CatBoost; and learners such as ridge regression, logistic regression, support vector regression (SVR), the nearest neighbor method, decision trees, regression trees, and random forests.
- the processing device 100 selects a judgment image, performs judgment processing, and outputs the judgment result, and the photographing device 200 photographs a subject image.
- these various processes can be appropriately distributed and processed by the processing device 100, the photographing device 200, etc.
- a server device is given as an example of the processing device 100, but this is not the only example.
- Other examples of the processing device 100 include various terminal devices such as smartphones, tablets, and laptop PCs, electronic medical record devices, and devices for controlling other medical devices.
- a typical example of the terminal device 300 is a tablet, but other examples include smartphones, laptop PCs, electronic medical record devices, and devices for controlling other medical devices.
- in the above, a subject image captured by the imaging device 200 is transmitted to the processing device 100, and the judgment process is performed in the processing device 100.
- at this time, information related to correction of the camera 211 and the light source 212 may be transmitted together with the subject image.
- the processing device 100 may use the information related to the correction to perform a color conversion process of the subject image. This makes it possible to adjust the color tone, etc., between the judgment image and the image of the learning data of the trained model, and further improve the judgment accuracy of each trained model.
- a roughly cylindrical imaging device 200 is used to capture the subject image.
- it is also possible to use the terminal device 300 as the imaging device and capture the subject image using a camera provided in the terminal device 300.
- the camera is not inserted into the oral cavity near the pharynx, but is placed outside the incisors (outside the body) to capture the inside of the oral cavity.
- processes and procedures described in this specification can be realized not only by those explicitly described in the embodiments, but also by software, hardware, or a combination of these. Specifically, the processes and procedures described in this specification are realized by implementing logic equivalent to the processes in media such as integrated circuits, volatile memory, non-volatile memory, magnetic disks, optical storage, etc. In addition, the processes and procedures described in this specification can be implemented as computer programs and executed by various computers including processing devices and server devices.
- Reference Signs List 1 Processing system 100 Processing device 200 Photographing device 300 Terminal device 400 Placement stand 500 Auxiliary tool 600 Operator 700 User
Abstract
The problem addressed by the present invention is to use a judgment image of a subject to determine the possibility that the subject is suffering from a prescribed disease. The solution according to the present invention consists in: acquiring one or more judgment images of a subject via a user camera for capturing images of the subject; acquiring medical interview information and/or attribute information concerning the user, and determining the possibility that the subject is suffering from the prescribed disease on the basis of a trained judgment model for determining the possibility that the subject is suffering from the predetermined disease, and/or the judgment images, and/or the medical interview information, and/or the attribute information; and determining the reliability of the possibility that the subject is suffering from the disease on the basis of the judgment images, and/or the medical interview information, and/or the attribute information.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2023/025095 WO2025009146A1 (fr) | 2023-07-06 | 2023-07-06 | Dispositif de traitement, programme de traitement, procédé de traitement et système de traitement |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2023/025095 WO2025009146A1 (fr) | 2023-07-06 | 2023-07-06 | Dispositif de traitement, programme de traitement, procédé de traitement et système de traitement |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025009146A1 true WO2025009146A1 (fr) | 2025-01-09 |
Family
ID=94171728
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/025095 Pending WO2025009146A1 (fr) | 2023-07-06 | 2023-07-06 | Dispositif de traitement, programme de traitement, procédé de traitement et système de traitement |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025009146A1 (fr) |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2023276810A1 (fr) * | 2021-06-29 | 2023-01-05 | 富士フイルム株式会社 | Dispositif, méthode et programme de création de marqueur de maladie, dispositif d'apprentissage et modèle de détection de maladie |
| WO2023073844A1 (fr) * | 2021-10-27 | 2023-05-04 | アイリス株式会社 | Dispositif de traitement, programme de traitement et procédé de traitement |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23944394; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2025530926; Country of ref document: JP; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 2025530926; Country of ref document: JP |