
WO2025158641A1 - Image processing device, image processing method, and storage medium - Google Patents

Image processing device, image processing method, and storage medium

Info

Publication number
WO2025158641A1
Authority
WO
WIPO (PCT)
Prior art keywords
lesion
image
group
interest
evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/002356
Other languages
English (en)
Japanese (ja)
Inventor
和浩 渡邉
雄治 岩舘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to PCT/JP2024/002356 priority Critical patent/WO2025158641A1/fr
Publication of WO2025158641A1 publication Critical patent/WO2025158641A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 - Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 - Computed tomography [CT]

Definitions

  • This disclosure relates to the technical fields of image processing devices, image processing methods, and storage media that perform processing related to the evaluation of lesions using medical images.
  • Patent Document 1 discloses a diagnostic aid that calculates feature values from past and present image data obtained by CT and detects suspected lung cancer from the feature values.
  • one of the objectives of this disclosure is to provide an image processing device, an image processing method, and a storage medium that are capable of accurately evaluating lesions.
  • One aspect of the image processing device includes: an image-of-interest extraction means for extracting an image of interest to be analyzed for a lesion from a second group of medical images of the patient, based on a comparison between a first group of medical images of the patient to which annotation information related to a first lesion has been added and the second group of medical images of the patient, generated after the first group; and a lesion evaluation means for evaluating the first lesion in the second medical image group based on the image of interest.
  • One aspect of the image processing method is an image processing method in which a computer extracts an image of interest to be analyzed for the lesion from the second group of medical images based on a comparison between a first group of medical images of the patient, to which annotation information related to the first lesion has been added, and a second group of medical images of the patient generated after the first group, and evaluates the first lesion in the second medical image group based on the image of interest.
  • One aspect of the storage medium is a storage medium storing a program that causes a computer to extract an image of interest to be analyzed for the lesion from the second group of medical images based on a comparison between a first group of medical images of the patient, to which annotation information related to the first lesion has been added, and a second group of medical images of the patient generated after the first group, and to evaluate the first lesion in the second medical image group based on the image of interest.
  • FIG. 1 shows the schematic configuration of a lesion evaluation system.
  • FIG. 2 is an example of functional blocks of a processor of the image processing device.
  • FIG. 3 is a diagram in which a group of pre-examination images and a group of target examination images of a certain patient are arranged in association with the patient's imaged regions.
  • FIG. 4 is a diagram showing an outline of the process executed by a first lesion evaluation unit when a pair of an image of interest and a corresponding annotated image is given.
  • FIG. 5 shows an example of displaying information based on evaluations of a first lesion and a second lesion.
  • FIG. 6 is a flowchart illustrating an example of an outline of processing executed by the image processing device.
  • FIG. 7 shows the schematic configuration of a lesion evaluation system in a modified example.
  • FIG. 8 is a block diagram of an image processing device according to a second embodiment.
  • FIG. 9 is a flowchart illustrating an example of a procedure of processing executed by the image processing device.
  • (System Configuration) Fig. 1 shows the schematic configuration of a lesion evaluation system 100.
  • the lesion evaluation system 100 shown in Fig. 1 evaluates the lesion (condition) of a patient undergoing treatment for a disease such as cancer by comparing it with past conditions, and presents the evaluation results to a medical professional such as a doctor as information indicating the effectiveness of the treatment, etc.
  • the lesion evaluation system 100 mainly comprises an image processing device 1, a display device 3, and an input device 4.
  • The image processing device 1 evaluates the patient's condition based on information about the patient's most recent examination, which is the subject of lesion evaluation, and information about an examination of the patient conducted before that examination (also referred to as a "pre-examination"). Specifically, the image processing device 1 evaluates lesions detected in the pre-examination (also referred to as "first lesions") and lesions not detected in the pre-examination (i.e., lesions other than the first lesions, also referred to as "second lesions"). Examples of evaluations of first lesions include RECIST (Response Evaluation Criteria in Solid Tumors) evaluations, drug efficacy evaluations, and evaluations of the effectiveness of treatment for patients currently undergoing treatment. The image processing device 1 then performs display control to display the evaluation results of the first lesion and the evaluation results of the second lesion on the display device 3, and performs various processes based on user input signals received from the input device 4.
  • the display device 3 performs a predetermined display based on a display signal supplied from the image processing device 1.
  • Examples of the display device 3 include displays such as CRTs (Cathode Ray Tubes) and LCDs (Liquid Crystal Displays), as well as projectors.
  • the input device 4 generates a user input signal based on operations by a user of the image processing device 1, such as a doctor.
  • Examples of the input device 4 include buttons, keyboards, pointing devices such as mice, touch panels, remote controllers, voice input devices, and any other user interface.
  • FIG. 1 also shows an example of the hardware configuration of the image processing device 1.
  • the image processing device 1 mainly includes a processor 11, a memory 12, and an interface 13. These elements are connected via a data bus 19.
  • Processor 11 performs predetermined processing by executing programs stored in memory 12.
  • Processor 11 is a processor such as a CPU (Central Processing Unit), GPU (Graphics Processing Unit), or TPU (Tensor Processing Unit).
  • Processor 11 may be composed of multiple processors.
  • Processor 11 is an example of a computer.
  • Memory 12 is composed of various memories used as working memory, such as RAM (Random Access Memory) and ROM (Read Only Memory), and of non-volatile memory that stores information necessary for the processing of image processing device 1.
  • memory 12 may include an external storage device such as a hard disk connected to or built into image processing device 1, or may include a storage medium such as a removable flash memory.
  • Memory 12 stores programs and other information necessary for image processing device 1 to execute each process in this embodiment.
  • Memory 12 stores first lesion detection model information D1, second lesion detection model information D2, and examination information D3.
  • First lesion detection model information D1 is model information including parameters necessary to construct a first lesion detection model, which is a model used to detect a first lesion.
  • Second lesion detection model information D2 is model information including various parameters necessary to construct a second lesion detection model, which is a model used to detect a second lesion.
  • the first lesion detection model and second lesion detection model are, for example, machine learning models (including statistical models; the same applies below), and the parameters necessary for these models are stored in first lesion detection model information D1 and second lesion detection model information D2, respectively.
  • first lesion detection model information D1 includes parameters such as the layer structure, the neuron structure of each layer, the number and filter size of filters in each layer, and the weights of each element of each filter.
  • second lesion detection model information D2 includes parameters such as the layer structure, the neuron structure of each layer, the number and size of filters in each layer, and the weight of each element of each filter. Details of the first lesion detection model information D1 and the second lesion detection model information D2 will be described later.
  • Examination information D3 is information obtained by examining a patient, and includes pre-examination information D31 and target examination information D32. Examination information D3 is associated with, for example, a patient ID for identifying the patient.
  • the pre-examination information D31 is information relating to a pre-examination of a patient.
  • the pre-examination information D31 includes a group of images of the patient generated by an image generation device during the pre-examination (also referred to as the "pre-examination image group") and metadata associated with an image in the pre-examination image group in which a lesion (i.e., a first lesion) is detected.
  • the image generation device described above is a device that generates images (slices) of the inside of a living body while changing the imaging position; examples of such devices include CT and MRI scanners.
  • the pre-examination image group is an example of a "first medical image group.”
  • Metadata is data about the diagnosis results for the first lesion that is attached to an image showing the first lesion, and is generated, for example, by annotation by the doctor in charge of the pre-examination.
  • the metadata includes at least area information that indicates the area (range) of the first lesion within the image.
  • the metadata may also include the name of the disease corresponding to the lesion, the condition, and any other diagnostic results.
  • images in the pre-examination image group that have metadata attached will also be referred to as "annotated images.”
  • the above-mentioned annotation is, for example, a task performed by a doctor in charge of a pre-examination, who refers to a group of pre-examination images displayed on a display or the like, identifies images in which a lesion appears, and then inputs on a computer the designation of the lesion area within the image and other information related to the diagnosis results.
  • the metadata may also be data based on the diagnosis results obtained by applying CAD (Computer Aided Diagnosis).
  • the metadata is an example of "annotation information related to the first lesion.”
  • the target examination information D32 is data obtained from an examination to be evaluated that was conducted after the pre-examination (e.g., the most recent examination), and includes at least a group of patient images generated by an image generation device.
  • an examination to be evaluated that was conducted after the pre-examination will also be referred to as the "target examination”
  • the group of patient images included in the target examination information D32 will also be referred to as the "target examination image group.”
  • the target examination image group is an example of a "second medical image group.”
  • the interface 13 acts as an interface between the image processing device 1 and external devices.
  • the interface 13 is electrically connected to the display device 3 and the input device 4.
  • the interface 13 may be a communications interface such as a network adapter for wired or wireless communication with external devices, or may be a hardware interface compliant with USB (Universal Serial Bus) or SATA (Serial AT Attachment).
  • the interface 13 may also act as an interface with external devices such as the display device 3 and input device 4 via a communications network such as the Internet.
  • the configuration of the lesion evaluation system 100 shown in Figure 1 is an example, and various modifications may be made.
  • the image processing device 1 may be configured as an integrated unit with at least one of the display device 3 and the input device 4.
  • the image processing device 1 may include an audio output device that outputs information by voice.
  • the image processing device 1 may be configured from multiple devices.
  • the image processing device 1 may receive the target examination images from the above-mentioned image generation device that generates the patient images.
  • the first lesion detection model is a machine-learned model that, when an image generated by an image generation device is input, outputs an inference result regarding the lesion area in the input image.
  • In other words, the first lesion detection model is a model that has learned the relationship between an image input to the first lesion detection model and the lesion area in that image.
  • the lesion detection model may be a model (including a statistical model; the same applies below) that includes an architecture adopted in any machine learning method, such as a neural network or a support vector machine.
  • Representative models of such neural networks include, for example, Fully Convolutional Network, SegNet, U-Net, V-Net, Feature Pyramid Network, Mask R-CNN, and DeepLab.
  • the first lesion detection model is trained in advance based on a pair of an input image that conforms to the input format of the lesion detection model and correct answer data (in the example above, a correct answer confidence map or bounding box) that indicates the correct inference result that should be output by the lesion detection model when that input image is input.
  • a first example of an inference result output by the first lesion detection model is a confidence map (including a mask image that represents the lesion area using two values) that indicates the confidence that each unit area of the input image is a lesion area.
  • the unit area may be an area of one pixel, an area of multiple pixels, or an area smaller than one pixel (a sub-pixel area).
  • the image processing device 1 determines that a connected area of unit areas (which may be limited to those of a predetermined size or larger) whose confidence level is equal to or greater than a predetermined threshold is a detected lesion area.
  • the above-mentioned predetermined threshold is a threshold for detecting lesion areas (also called a "detection threshold") and corresponds to a hyperparameter set by the user.
  • a second example of an inference result output by the first lesion detection model is a set of a bounding box indicating the extent of the lesion area in the input image and the reliability (certainty) that the area identified by the bounding box is a lesion area.
  • the image processing device 1 determines that a bounding box with a reliability equal to or greater than a predetermined threshold is a detected lesion area.
  • the above-mentioned predetermined threshold is a detection threshold for detecting lesion areas, and corresponds to a hyperparameter set by the user.
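  • As a concrete illustration, the following is a minimal Python sketch of the two inference-result formats described above (confidence map and bounding boxes with reliability); it assumes NumPy and SciPy are available, and the function names, threshold values, and size filter are illustrative rather than taken from this disclosure.

```python
# Minimal sketch of the two inference-result formats described above.
# All names and threshold values are illustrative, not from the patent.
import numpy as np
from scipy import ndimage

def lesions_from_confidence_map(conf_map, detection_threshold=0.5, min_size=10):
    """Threshold a per-unit-area confidence map and keep connected regions."""
    mask = conf_map >= detection_threshold            # unit areas above threshold
    labeled, num_regions = ndimage.label(mask)        # connected-component labeling
    regions = []
    for idx in range(1, num_regions + 1):
        component = labeled == idx
        if component.sum() >= min_size:               # optional size filter
            regions.append(component)
    return regions

def lesions_from_bounding_boxes(boxes, scores, detection_threshold=0.5):
    """Keep only bounding boxes whose reliability meets the detection threshold."""
    return [box for box, score in zip(boxes, scores) if score >= detection_threshold]
```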
  • the first lesion detection model may be multiple models each created to be specialized for a specific body part (e.g., organ) of the patient being imaged.
  • model information for multiple first lesion detection models specialized for each body part is stored in first lesion detection model information D1.
  • (Second Lesion Detection Model) When an image generated by the image generation device is input, the second lesion detection model outputs an inference result regarding the lesion area in the input image.
  • the second lesion detection model is a model that has learned the relationship between the image input to the second lesion detection model and the lesion area in the input image.
  • The second lesion detection model may be a model that includes an architecture adopted in any machine learning method, such as a neural network or a support vector machine.
  • the first lesion detection model and the second lesion detection model may be the same model.
  • In this case, lesion detection model information for a single model that serves as both the first lesion detection model and the second lesion detection model is pre-stored in memory 12 as the first lesion detection model information D1 and the second lesion detection model information D2.
  • the first lesion detection model and the second lesion detection model may include a feature extraction model that extracts features from an image, or may be models separate from the feature extraction model. In the latter case, the first lesion detection model and the second lesion detection model are each machine-learned models that output the above-mentioned inference results when the feature values (tensors with a predetermined number of dimensions) output by the feature extraction model to which an image is input are input.
  • (Functional Block) Fig. 2 is an example of functional blocks of the processor 11 of the image processing device 1.
  • the processor 11 of the image processing device 1 includes an acquisition unit 30, an image-of-interest extraction unit 31, a first lesion evaluation unit 32, a second lesion evaluation unit 33, and a display control unit 34. Note that in FIG. 2, blocks between which data is exchanged are connected by solid lines, but the combination of blocks between which data is exchanged is not limited to this. The same applies to other functional block diagrams described below.
  • the acquisition unit 30 acquires the pre-examination information D31 and target examination information D32 contained in the patient's examination information D3 via the interface 13.
  • the acquisition unit 30 then supplies the acquired pre-examination information D31 and target examination information D32 to the image of interest extraction unit 31 and the second lesion evaluation unit 33, respectively.
  • the acquisition unit 30 identifies a patient to be evaluated in response to a user input signal received from the input device 4, and acquires the pre-examination information D31 and target examination information D32 corresponding to the identified patient.
  • the image of interest extraction unit 31 extracts an image (also referred to as an "image of interest") that is to be analyzed for the first lesion from the group of target examination images included in the target examination information D32, and supplies the image of interest, as well as the annotated image and metadata from the group of pre-examination images that correspond to the image of interest, to the first lesion evaluation unit 32. Details of the processing performed by the image of interest extraction unit 31 will be described later. In other words, the image of interest is an image that should be noted in the evaluation of the first lesion.
  • the first lesion evaluation unit 32 evaluates the first lesion in the group of target examination images based on the image of interest, the annotated image corresponding to the image of interest, and the metadata. In this case, the first lesion evaluation unit 32 first identifies the lesion area in the image of interest using a first lesion detection model constructed with reference to the first lesion detection model information D1. The first lesion evaluation unit 32 then identifies the presence or absence of a lesion area in the image of interest that is the same as the first lesion indicated by the metadata of the annotated image, and the size of the lesion area if present, and evaluates the change in the first lesion (e.g., disappearance, reduction, stability, progression).
  • the first lesion evaluation unit 32 evaluates the change in the first lesion that occurred between the pre-examination and the target examination.
  • the first lesion evaluation unit 32 supplies the evaluation result for the first lesion to the display control unit 34. Details of the process of generating the evaluation result for the first lesion by the first lesion evaluation unit 32 will be described later.
  • the second lesion evaluation unit 33 evaluates a second lesion not indicated in the metadata based on the pre-examination information D31 and target examination information D32 supplied from the acquisition unit 30. In this case, the second lesion evaluation unit 33 identifies a lesion area in each image of the target examination image group using a second lesion detection model constructed with reference to the second lesion detection model information D2, and identifies, among the identified lesion areas, a lesion area corresponding to a second lesion not indicated in the metadata of the pre-examination information D31. At this time, the second lesion evaluation unit 33 determines whether the detected lesion area is a second lesion using the image-by-image matching results between the pre-examination image group and the target examination image group, as described below.
  • the second lesion detected by the second lesion evaluation unit 33 is either a lesion that was already present at the time of the pre-examination but was overlooked without being detected and is therefore not recorded in the metadata (also referred to as an "overlooked lesion"), or a lesion that did not exist at the time of the pre-examination but has newly developed by the time of the target examination (also referred to as a "new lesion").
  • new lesions include lesions that have metastasized from a first lesion.
  • the second lesion evaluation unit 33 supplies the evaluation results regarding the second lesion to the display control unit 34.
  • the display control unit 34 generates display information based on the evaluation results for the first lesion supplied from the first lesion evaluation unit 32, the evaluation results for the second lesion supplied from the second lesion evaluation unit 33, and the examination information D3. The display control unit 34 then supplies the generated display information to the display device 3, causing the display device 3 to display information based on the evaluation results for the first lesion and/or second lesion.
  • the image processing device 1 in this embodiment performs evaluation of a first lesion detected in a pre-examination and evaluation of a second lesion, such as a new lesion, without requiring a doctor to perform an image diagnosis. This allows the image processing device 1 to suitably support diagnosis and clinical trials.
  • each of the components of the acquisition unit 30, image-of-interest extraction unit 31, first lesion evaluation unit 32, second lesion evaluation unit 33, and display control unit 34 can be realized, for example, by the processor 11 executing a program.
  • each component may be realized by recording the necessary programs on any non-volatile storage medium and installing them as needed.
  • at least some of these components are not limited to being realized by software programs, but may also be realized by any combination of hardware, firmware, and software.
  • at least some of these components may be realized using a user-programmable integrated circuit, such as an FPGA (Field-Programmable Gate Array) or a microcontroller. In this case, a program consisting of the above components may be realized using this integrated circuit.
  • each component may be configured using an ASSP (Application Specific Standard Product), an ASIC (Application Specific Integrated Circuit), or a quantum processor (quantum computer control chip).
  • the image of interest extraction unit 31 performs matching (i.e., association) on an image-by-image basis between the pre-examination image group and the target examination image group, and extracts, as an image of interest, at least the image in the target examination image group that is associated with the annotated image.
  • Figure 3 shows a group of pre-examination images and a group of target examination images for a patient, arranged in correspondence with the patient's body parts to be imaged.
  • the group of pre-examination images includes at least one annotated image Ip1.
  • the image of interest extraction unit 31 determines the similarity of each image between the pre-examination image group and the target examination image group, and generates a matching result in which the most similar images are matched.
  • In the example of Fig. 3, the annotated image Ip1 is most similar to (i.e., matches) image It2 in the target examination image group, so the image of interest extraction unit 31 extracts image It2, together with images It1 and It3 that depict the patient's body parts immediately before and after it, as images of interest.
  • the image of interest extraction unit 31 calculates a similarity index for each image in the pre-examination image group that represents the degree of similarity between the image and each image in the target examination image group, and identifies matching images based on the similarity index.
  • the image of interest extraction unit 31 may calculate the similarity index by directly comparing the images, or may extract image features and calculate the similarity index based on the extracted features.
  • the image-of-interest extraction unit 31 may acquire feature amounts output from the feature extraction model (or feature extraction layer) when each image is input to the above-mentioned feature extraction model (or feature extraction layer of the first lesion detection model or the second lesion detection model) used for lesion detection.
  • the image-of-interest extraction unit 31 may perform matching on an image-by-image basis between the pre-examination image group and the target examination image group using any time-series data matching method, not limited to the above-mentioned example.
  • the image of interest extraction unit 31 may determine only image It2 that matches the annotated image Ip1 as the image of interest, or may determine two or more images before and after image It2 that matches the annotated image Ip1 as the images of interest.
  • the image-of-interest extraction unit 31 determines the image of interest based on matching between the group of pre-examination images and the group of target examination images.
  • the image processing device 1 performs the detection process for the first lesion only on the image of interest, which reduces false detection of the first lesion and improves detection accuracy compared to when the detection process for the first lesion is performed on all images in the group of target examination images.
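  • The following is one hedged way such image-by-image matching could be realized, assuming each image has already been reduced to a feature vector (for example, by the feature extraction model mentioned above); cosine similarity is used here as one possible similarity index, and all names are illustrative rather than taken from this disclosure.

```python
# Hedged sketch of image-by-image matching between the two image groups.
# Assumes per-image feature vectors; cosine similarity is one possible index.
import numpy as np

def match_image_groups(pre_feats, target_feats):
    """pre_feats: (N, D); target_feats: (M, D). For each pre-examination
    image, return the index of the most similar target-examination image."""
    a = pre_feats / (np.linalg.norm(pre_feats, axis=1, keepdims=True) + 1e-12)
    b = target_feats / (np.linalg.norm(target_feats, axis=1, keepdims=True) + 1e-12)
    similarity = a @ b.T                    # (N, M) cosine-similarity matrix
    return similarity.argmax(axis=1)

def images_of_interest(matches, annotated_idx, num_target_images, margin=1):
    """Extract the matched image and its neighbors (It1-It3 in the Fig. 3 example)."""
    center = int(matches[annotated_idx])
    low = max(0, center - margin)
    high = min(num_target_images - 1, center + margin)
    return list(range(low, high + 1))
```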
  • the first lesion evaluation unit 32 evaluates the first lesion based on the lesion area indicated by the metadata of the annotated image and the lesion area detected, using the first lesion detection model, from the image of interest that matches the annotated image. In the evaluation of the first lesion, the first lesion evaluation unit 32 determines the presence or absence of a lesion area in the image of interest that represents the first lesion indicated by the metadata, and the size of that lesion area if present. Note that in the RECIST evaluation performed when the disease to be evaluated is cancer, it is necessary to identify the lesion detected in the pre-examination (i.e., the first lesion) in the images generated in the target examination and measure the size of the identified lesion.
  • RECIST evaluation is an evaluation standard used to determine whether treatment for solid cancer is effective. Before treatment begins, the size of the tumor (target lesion) is measured using diagnostic imaging such as CT, and the condition during treatment is classified into the following four states.
  • CR: Complete Response
  • PR: Partial Response
  • SD: Stable Disease
  • PD: Progressive Disease
  • a complete response is when all target lesions disappear during the course of treatment, or in the case of lymph nodes, when their short axis shrinks to less than 10 mm.
  • a partial response is when the target lesion shrinks by 30% or more since before the start of treatment.
  • Stable disease is a state between partial response and progressive disease. Progressive disease is when the target lesion grows by 20% or more from its smallest size during the course of treatment, with the diameter of the target lesion also increasing by 5 mm or more in absolute terms.
  • RECIST evaluation requires identifying the target lesion in images taken before and after treatment, and then measuring and comparing its size.
  • RECIST evaluation is used to define the response rate as the proportion of cases achieving a complete or partial response, and to evaluate the time from complete response, partial response, or stable disease until progression as progression-free survival.
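  • The four categories summarized above could be encoded as in the sketch below. Note that actual RECIST 1.1 evaluation operates on the sum of diameters over all target lesions with nadir tracking, so this single-measurement version is a simplification, and all names and values are assumptions for illustration.

```python
# Simplified, single-lesion encoding of the RECIST categories summarized above.
# Real RECIST 1.1 evaluation uses sums of diameters over all target lesions.
def recist_category(baseline_mm, nadir_mm, current_mm, lymph_node=False):
    if current_mm == 0 or (lymph_node and current_mm < 10):
        return "CR"  # complete response (lymph nodes: short axis < 10 mm)
    if current_mm <= 0.7 * baseline_mm:
        return "PR"  # partial response: >= 30% shrinkage from baseline
    if current_mm >= 1.2 * nadir_mm and (current_mm - nadir_mm) >= 5:
        return "PD"  # progression: >= 20% growth from smallest size, >= 5 mm absolute
    return "SD"      # stable disease: between PR and PD
```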
  • Figure 4 shows an overview of the processing performed by the first lesion evaluation unit 32 when a pair of an image of interest and a corresponding annotated image is given.
  • The annotated image has metadata attached to it indicating that the area within the bounding box 80 is the lesion area.
  • First, the first lesion evaluation unit 32 aligns the patient region 78 shown in the annotated image with the patient region 79 shown in the image of interest (i.e., matches the positions of the patient regions). To do so, the first lesion evaluation unit 32 translates at least one of the patient region 78 and the patient region 79 within its image so that the positions of the two regions coincide, for example so that their degree of overlap is maximized. An index indicating the degree of overlap between arbitrary regions, such as IoU (Intersection over Union), may be used.
  • In the example of Fig. 4, the first lesion evaluation unit 32 moves the patient region 79 within the image of interest. If the image sizes of the image of interest and the annotated image differ, the first lesion evaluation unit 32 may change the image size of at least one of them to normalize the image sizes.
  • the first lesion evaluation unit 32 inputs the image of interest into the first lesion detection model constructed by referencing the first lesion detection model information D1, and detects a lesion area within the image of interest based on the inference results output by the first lesion detection model.
  • the first lesion evaluation unit 32 detects the areas within bounding boxes 81 to 83 as lesion areas.
  • the first lesion evaluation unit 32 then determines whether a lesion area detected in the image of interest (also referred to as a "detected lesion area") is identical to the lesion area in the annotated image indicated by the metadata (also referred to as the "annotated lesion area"). In this case, the first lesion evaluation unit 32 recognizes, for example, a detected lesion area whose degree of overlap with the annotated lesion area (e.g., IoU) is equal to or greater than a predetermined degree as the lesion area of the first lesion.
  • the above-mentioned predetermined degree is set to a default value stored in the memory 12, for example.
  • the first lesion evaluation unit 32 determines that the bounding box 80 and the bounding box 82 indicate the same lesion area of the first lesion because the bounding box 82 of the image of interest overlaps the bounding box 80 of the annotated image by a predetermined degree or more.
  • If no detected lesion area is identical to the annotated lesion area, the first lesion evaluation unit 32 determines that the first lesion has disappeared as a result of treatment after the pre-examination. In this case, for example, if there is an annotated lesion area whose degree of overlap with every detected lesion area is less than the predetermined degree, the first lesion evaluation unit 32 determines that the first lesion corresponding to that annotated lesion area has disappeared.
  • In the example of Fig. 4, the first lesion evaluation unit 32 compares the sizes of bounding box 80 and bounding box 82 and performs the RECIST evaluation, etc. On the other hand, the first lesion evaluation unit 32 ignores the lesion detection results for bounding boxes 81 and 83 of the image of interest, which are located away from bounding box 80 of the annotated image (i.e., their degree of overlap is less than the predetermined degree), and does not use them in the evaluation of the first lesion.
  • In this way, the first lesion evaluation unit 32 aligns the image of interest with the annotated image, and evaluates the first lesion based on the detected lesion area that overlaps the annotated lesion area. This allows the first lesion evaluation unit 32 to appropriately identify, in the image of interest, the first lesion detected in the pre-examination, and to accurately evaluate the first lesion. Furthermore, this method can appropriately identify the lesion area of the first lesion even when the first lesion detection model detects multiple lesion areas, as in the example shown in Figure 4, so the detection threshold set in the first lesion detection model can be lowered (i.e., the sensitivity of the first lesion detection model can be increased). As a result, it is possible to prevent the first lesion from being overlooked in the image of interest and to improve the detection accuracy of the first lesion.
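  • A minimal sketch of this overlap-based identity check follows, assuming axis-aligned bounding boxes in (x1, y1, x2, y2) form after the alignment described above; the helper names and the default threshold are illustrative assumptions.

```python
# Minimal sketch of the IoU-based identity check of Fig. 4, assuming the
# patient regions have already been aligned. Names/thresholds are illustrative.
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def identify_first_lesion(annotated_box, detected_boxes, overlap_threshold=0.5):
    """Return the detected box judged identical to the annotated lesion area
    (e.g., box 82 vs. box 80), or None if the first lesion has disappeared."""
    scored = [(iou(annotated_box, box), box) for box in detected_boxes]
    best_iou, best_box = max(scored, key=lambda t: t[0], default=(0.0, None))
    return best_box if best_iou >= overlap_threshold else None
```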
  • the second lesion evaluation unit 33 performs lesion detection on each image of the target examination image group obtained in the target examination, and determines whether the detected lesion area is a second lesion different from the first lesion based on the metadata of the pre-examination information D31 and the matching results on an image-by-image basis between the pre-examination image group and the target examination image group.
  • In a first example, the second lesion evaluation unit 33 performs evaluation of the second lesion based on the results of applying the second lesion detection model to each image in the target examination image group.
  • the second lesion evaluation unit 33 first sequentially selects images from the group of target examination images as processing targets, inputs the selected images (also simply referred to as "target examination images") into a second lesion detection model constructed based on the second lesion detection model information D2, and detects a lesion area in the target examination image based on the inference results output by the second lesion detection model. If a lesion area is detected in the target examination image, the second lesion evaluation unit 33 identifies an image from the group of pre-examination images (also simply referred to as a "corresponding pre-examination image”) that is associated with the target examination image through matching by the image-of-interest extraction unit 31.
  • the second lesion evaluation unit 33 then references the metadata attached to the corresponding pre-examination image to determine whether the detected lesion area is a second lesion. If there is no metadata attached to the corresponding pre-examination image, or if the annotated lesion area indicated by the metadata differs from the lesion area detected in the target examination image, the second lesion evaluation unit 33 determines that the lesion area detected in the target examination image is likely to be a second lesion. Note that the identity of the lesion area detected in the target examination image and the annotated lesion area can be determined using, for example, the same method as the method used by the first lesion evaluation unit 32 to determine whether they are the same lesion (see Figure 4).
  • In a second example, the second lesion evaluation unit 33 performs evaluation of the second lesion based on a comparison of the results of applying the second lesion detection model to each image in the target examination image group and to each image in the pre-examination image group.
  • the second lesion evaluation unit 33 inputs target examination images selected in order from the group of target examination images into the second lesion detection model, and detects lesion areas in the target examination images based on the inference results output by the second lesion detection model.
  • the second lesion evaluation unit 33 also inputs corresponding pre-examination images associated with the target examination images into the second lesion detection model, and detects lesion areas in the corresponding pre-examination images based on the inference results output by the second lesion detection model.
  • the second lesion evaluation unit 33 determines whether the lesion area detected in the target examination image is likely to be a second lesion based on whether the lesion area detected in the target examination image corresponds to the lesion area indicated by the metadata and the lesion area detected in the corresponding pre-examination image (i.e., whether they are the same lesion). For example, if there is a lesion area in the corresponding pre-examination image that is the same lesion as the lesion area detected in the target examination image, but the metadata does not indicate this lesion area, the second lesion evaluation unit 33 determines that the lesion area detected in the target examination image is likely to be an overlooked lesion (i.e., a second lesion).
  • Alternatively, in this case, the second lesion evaluation unit 33 may determine that the detected area is a region prone to erroneous detection. In another example, if there is no lesion area in the corresponding pre-examination image that is the same lesion as the lesion area detected in the target examination image, the second lesion evaluation unit 33 determines that the lesion area detected in the target examination image is likely to be a new lesion (i.e., a second lesion), regardless of the metadata.
  • If the second lesion evaluation unit 33 determines that the lesion area detected in the target examination image is the same lesion as both the lesion area in the corresponding pre-examination image and the lesion area indicated by the metadata, it determines that the lesion area detected in the target examination image is the first lesion.
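  • The decision logic described above could be summarized as in the following sketch, where same_lesion stands for an identity check such as the IoU-based comparison sketched earlier; the category labels and function names are illustrative assumptions.

```python
# Hedged sketch of the second-lesion decision logic described above.
# `same_lesion(a, b)` is assumed to be an identity check such as the
# IoU-based comparison sketched earlier.
def classify_detected_area(detected_box, pre_exam_boxes, annotated_boxes, same_lesion):
    """Classify a lesion area detected in a target examination image.
    pre_exam_boxes: areas the model detects in the corresponding pre-examination
    image; annotated_boxes: areas indicated by the pre-examination metadata."""
    in_pre_exam = any(same_lesion(detected_box, b) for b in pre_exam_boxes)
    in_metadata = any(same_lesion(detected_box, b) for b in annotated_boxes)
    if in_pre_exam and in_metadata:
        return "first lesion"       # same lesion as the annotated one
    if in_pre_exam:
        return "overlooked lesion"  # present before, absent from metadata
                                    # (alternatively: a false-detection-prone region)
    return "new lesion"             # absent at the pre-examination
```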
  • the second lesion evaluation unit 33 supplies the evaluation results for the second lesion to the display control unit 34, along with information generated in conjunction with the evaluation of the second lesion.
  • For example, the second lesion evaluation unit 33 supplies to the display control unit 34 information on the lesion areas detected by the second lesion detection model, information on the reliability (certainty) of those lesion areas, information on the lesion areas determined to possibly be second lesions, etc.
  • (Display Example) Fig. 5 shows a display example of information based on the evaluations of the first lesion and the second lesion.
  • the display control unit 34 generates display information based on the information supplied from the first lesion evaluation unit 32 and the second lesion evaluation unit 33, the examination information D3, etc., and transmits the generated display information to the display device 3, thereby causing the display device 3 to display the display screen shown in Fig. 5.
  • the display control unit 34 of the image processing device 1 provides an image display area 70, an image selection area 71, a display switch selection area 72, a lesion evaluation result display area 73, and a warning display area 74 on the display screen.
  • the display control unit 34 displays, in the image display area 70, an arbitrary image selected by the user from the group of target examination images based on operation on the input device 4. In the example of FIG. 5, the display control unit 34 displays the 23rd image out of the group of 135 target examination images. Furthermore, in accordance with the selection made in the display switch selection area 72 (described below), the display control unit 34 displays, on the image, a bounding box 91 indicating the area of the lesion of interest corresponding to the first lesion, a bounding box 92 indicating an area that may be an overlooked lesion, and a bounding box 93 indicating an area that may be a new lesion.
  • the display control unit 34 also displays a score (here, a value range of 0 to 100) indicating the reliability (certainty) that the area is a lesion area, in association with each of the bounding boxes 91 to 93.
  • the display control unit 34 displays the bounding box 91 indicating the area of the lesion of interest and the score based on the inference results output by the first lesion detection model supplied from the first lesion evaluation unit 32.
  • the display control unit 34 displays a bounding box 92 indicating an area that may be an overlooked lesion and a score, and a bounding box 93 indicating an area that may be a new lesion and a score, based on the inference results output by the second lesion detection model supplied from the second lesion evaluation unit 33.
  • the display control unit 34 may highlight, for example, by blinking, the bounding boxes (bounding boxes 92 and 93 in the example of Figure 5) relating to second lesions whose scores are equal to or greater than a predetermined threshold.
  • the predetermined threshold is pre-stored in the memory 12 or the like. This allows the display control unit 34 to effectively make the viewer aware of the presence of a second lesion that is highly likely to exist.
  • the display control unit 34 also displays, on an image-by-image basis, a portion of the target examination image group corresponding to a region close to the image displayed in the image display area 70 as thumbnail images in the image selection area 71.
  • the display control unit 34 may also display all thumbnail images of the target examination image group in the image selection area 71.
  • the display control unit 34 may also sequentially switch the images displayed in the image display area 70 when it detects a selection operation in the image selection area 71 or a predetermined operation such as a scrolling operation from the input device 4.
  • the display control unit 34 may also highlight thumbnail images of images with warnings in the image selection area 71. In the example of Figure 5, since warnings exist for the 23rd and 26th images, the display control unit 34 highlights these images with a border effect.
  • An image with a warning is, for example, an image for which warning information is displayed in the warning display area 74 when selected as an image to be displayed in the image display area 70.
  • the display control unit 34 displays, in the display switch selection area 72, check boxes for selecting whether or not to display various types of information.
  • In the example of Fig. 5, the display control unit 34 provides a first check box CH1 for selecting whether or not to display lesions of interest, a second check box CH2 for selecting whether or not to display overlooked lesions, a third check box CH3 for selecting whether or not to display new lesions, and a fourth check box CH4 for selecting whether or not to display pre-examination results.
  • In accordance with the selections made via the first to third check boxes CH1 to CH3, the display control unit 34 displays bounding boxes 91 to 93 and their corresponding scores on the image in the image display area 70.
  • When the fourth check box CH4 is selected, the display control unit 34 references the pre-examination information D31 and superimposes, on the image display area 70, area information of the lesion area indicated by the metadata attached to the image in the pre-examination image group that corresponds to the image currently displayed in the image display area 70.
  • the above-mentioned area information may be an image of the lesion area indicated by the metadata cut out from the image in the pre-examination image group, or may be a bounding box indicating the lesion area, etc.
  • the display control unit 34 also displays the RECIST evaluation result (here, "PD") produced by the first lesion evaluation unit 32 in the lesion evaluation result display area 73. Furthermore, since the first lesion evaluation unit 32 calculated that the size of the first lesion indicated by the bounding box 91 associated with the RECIST evaluation had increased by x%, the display control unit 34 displays the calculation result, "x% increase," in the lesion evaluation result display area 73.
  • the display control unit 34 also displays the stage of the first lesion (here, "3") indicated by the bounding box 91 in the lesion evaluation result display area 73.
  • the display control unit 34 recognizes the stage of the lesion based on the inference result of the first lesion detection model regarding the first lesion indicated by the bounding box 91.
  • the display control unit 34 may input an image including the image area within the bounding box 91 to a machine learning model other than the first lesion detection model and the second lesion detection model, and recognize the stage of the lesion based on the inference result output by the machine learning model.
  • the above-mentioned machine learning model is a model trained by machine learning to output an inference result regarding the stage of the lesion when an image including a lesion area is input, and the trained model information is pre-stored in memory 12, etc.
  • When predetermined criteria are met, the display control unit 34 displays warning information corresponding to the satisfied criteria in the warning display area 74.
  • In the example of Fig. 5, since a lesion area that may be a new lesion has been detected, the display control unit 34 displays a warning message indicating the possibility of a new lesion in the warning display area 74.
  • the predetermined criteria for outputting a warning are not limited to the presence of a lesion area that may be a new lesion, but may be any criteria that indicate a dangerous condition.
  • the display control unit 34 may display warning information in the warning display area 74 that warns that the first lesion is growing.
  • the display example shown in Figure 5 allows the viewer (the patient's doctor) to easily understand the evaluation results for the first lesion and the evaluation results for the second lesion (overlooked lesion, new lesion). Therefore, the image processing device 1 can effectively support the viewer, the doctor, in making decisions such as determining a treatment plan for the patient.
  • the information based on the evaluation results for the first and second lesions described above is an example of information that supports decision-making.
  • the display example in FIG. 5 is merely an example, and various modifications may be applied.
  • the display control unit 34 may determine the display mode of the corresponding bounding box (e.g., color, line type, and/or line thickness, etc.) based on the reliability score.
  • Instead of using bounding boxes, the display control unit 34 may display the areas of the lesion of interest, the possible overlooked lesion, and the possible new lesion in any visually distinguishable manner (e.g., color-coded display).
  • FIG. 6 is an example of a flowchart showing an outline of the processing executed by the image processing device 1.
  • the image processing device 1 acquires the pre-examination information D31 and the target examination information D32 for the target patient (step S11).
  • In this case, the image processing device 1 may receive, via the input device 4, user input specifying the target patient, user input specifying the pre-examination information D31 and the target examination information D32 to be used, and the like.
  • the image processing device 1 then sequentially or in parallel executes processes corresponding to steps S12 to S15 and a process corresponding to step S16.
  • The image processing device 1 first extracts an image of interest corresponding to the annotated image from the target examination image group included in the target examination information D32 (step S12).
  • In this case, the image processing device 1, for example, performs image-by-image matching between the pre-examination image group and the target examination image group, and extracts, as the image of interest, the image (and its adjacent images) in the target examination image group that corresponds to the annotated image included in the pre-examination image group.
  • Next, the image processing device 1 aligns the position of the subject in the image of interest with the annotated image (step S13). For example, the image processing device 1 translates the subject in the image of interest so that the subject overlaps most closely between the annotated image and the image of interest.
  • the image processing device 1 detects the lesion area in the image of interest (step S14). In this case, the image processing device 1 inputs the image of interest into a first lesion detection model constructed with reference to the first lesion detection model information D1, and detects the lesion area in the image of interest based on the inference result output by the first lesion detection model.
  • the image processing device 1 evaluates the first lesion in the group of target examination images (step S15). In this case, the image processing device 1 compares the lesion area indicated by the metadata in the annotated image with the lesion area detected in step S14, and considers lesion areas that overlap to a predetermined degree or more to be the same lesion and measures changes in size, etc.
  • the image processing device 1 performs an evaluation of second lesions (new lesions, overlooked lesions) using each image in the group of target examination images (step S16).
  • the image processing device 1 detects lesion areas corresponding to second lesions different from the first lesion registered in the pre-examination information D31 from the group of target examination images, based on the second lesion detection model constructed with reference to the second lesion detection model information D2, the group of target examination images, and the pre-examination information D31.
  • the image processing device 1 displays information based on the evaluation results from step S15 and step S16 on the display device 3 (step S17).
  • Note that the image processing device 1 need not perform the evaluation of the second lesion using the second lesion detection model.
  • the image processing device 1 may perform display in step S17 based on the evaluation results obtained in step S15 without performing processing equivalent to step S16. Furthermore, the image processing device 1 may perform evaluation of the second lesion based on the inference results of the first lesion detection model. For example, the image processing device 1 generates an evaluation result that identifies a lesion area (area within bounding boxes 81 and 83 in FIG. 4) that is determined to be a lesion area different from the first lesion, among the lesion areas detected by the first lesion detection model, as a lesion area that may be the second lesion.
  • In another modified example, the image processing device 1 may perform the evaluation of the second lesion only when the evaluation of the first lesion is equal to or worse than a predetermined criterion (i.e., when the first lesion is progressing).
  • the predetermined criterion is a predetermined criterion indicating the possibility of the occurrence of a new lesion (metastasis), such as "progression" in the RECIST evaluation.
  • In this case, in step S15 of Fig. 6, the image processing device 1 evaluates the first lesion based on each image of interest, and if there is an image of interest whose evaluation falls below the predetermined criterion, it executes step S16 and then step S17. On the other hand, if there is no image of interest whose evaluation falls below the predetermined criterion, the image processing device 1 executes step S17 without executing step S16.
  • the image processing device 1 performs evaluation of the second lesion only when there is a possibility of a new lesion occurring, thereby effectively reducing the processing load.
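  • This load-reducing branch could look like the following sketch, in which the criterion for running step S16 is illustratively taken to be a RECIST result of "PD" (such as a string returned by the category sketch above); all callables and names are assumptions.

```python
# Sketch of the modified flow: run step S16 (second-lesion evaluation) only
# when the first-lesion evaluation indicates possible progression/metastasis.
# The "PD" criterion and all callables are illustrative assumptions.
def evaluate_patient(images_of_interest, evaluate_first_lesion,
                     evaluate_second_lesion, display_results):
    first_results = [evaluate_first_lesion(img) for img in images_of_interest]  # step S15
    second_results = None
    if any(result == "PD" for result in first_results):  # criterion met
        second_results = evaluate_second_lesion()        # step S16
    display_results(first_results, second_results)       # step S17
```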
  • At least one of the first lesion detection model information D1, the second lesion detection model information D2, the preliminary examination information D31, and the target examination information D32 may be stored in a storage device separate from the image processing device 1.
  • Fig. 7 is a schematic diagram of a lesion evaluation system 100A in a modified example.
  • Lesion evaluation system 100A includes a server device 2 that stores at least one of first lesion detection model information D1, second lesion detection model information D2, pre-examination information D31, and target examination information D32.
  • Lesion evaluation system 100A also includes multiple image processing devices 1 (1A, 1B, ...) that are capable of data communication with server device 2 via a network.
  • each image processing device 1 references at least one of the first lesion detection model information D1, second lesion detection model information D2, pre-examination information D31, and target examination information D32 via the network.
  • the interface 13 of each image processing device 1 includes a communications interface such as a network adapter for communications.
  • each image processing device 1 can reference the first lesion detection model information D1, second lesion detection model information D2, pre-examination information D31, and target examination information D32 to suitably execute processing related to lesion evaluation.
  • the server device 2 may instead execute at least some of the processing executed by each functional block of the processor 11 of the image processing device 1 shown in FIG. 2.
  • (Second Embodiment) Fig. 8 is a block diagram of an image processing device 1X according to a second embodiment.
  • the image processing device 1X includes an image-of-interest extraction means 31X and a lesion evaluation means 32X.
  • the image processing device 1X may be composed of a plurality of devices.
  • the image-of-interest extraction means 31X extracts an image of interest from the second group of medical images that is to be subjected to lesion-related analysis, based on a comparison between a first group of medical images of the patient that have annotation information related to the first lesion and a second group of medical images of the patient that were generated after the first group of medical images.
  • the image-of-interest extraction means 31X can be, for example, the image-of-interest extraction unit 31 in the first embodiment (including modified examples, the same applies below).
  • the lesion evaluation means 32X performs an evaluation of the first lesion in the second medical image group based on the image of interest.
  • the lesion evaluation means 32X can be, for example, the first lesion evaluation unit 32 in the first embodiment.
  • FIG. 9 is an example of a flowchart showing the processing steps executed by the image processing device 1X.
  • The image-of-interest extraction means 31X compares a first group of medical images of a patient, which have annotation information related to the first lesion, with a second group of medical images of the patient that were generated after the first group of medical images, and extracts from the second group of medical images an image of interest to be used for lesion analysis (step S21).
  • The lesion evaluation means 32X evaluates the first lesion in the second group of medical images based on the image of interest (step S22).
  • In this way, the image processing device 1X can suitably extract from the second medical image group, on the basis of the images themselves, an image of interest for evaluating the first lesion to which annotation information was added in the first medical image group, and can efficiently and accurately evaluate the first lesion in the second medical image group. A minimal sketch of this two-step pipeline follows.
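  • The following sketch of steps S21 and S22 rests on stated assumptions: correspondence between the two image groups is found here with a simple normalized cross-correlation, and the detector is any callable returning a binary mask; the embodiment does not fix either choice.

    import numpy as np

    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        # Normalized cross-correlation between two equally sized slices.
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    def extract_images_of_interest(first_group, second_group, annotated_idx):
        # S21: for each annotated slice of the first group, pick the most
        # similar slice of the second group as an image of interest.
        return [int(np.argmax([ncc(first_group[i], s) for s in second_group]))
                for i in annotated_idx]

    def evaluate_first_lesion(second_group, interest_idx, detect):
        # S22: run any detector returning a binary lesion mask on each image
        # of interest and report the detected lesion area (in pixels).
        return {j: int(detect(second_group[j]).sum()) for j in interest_idx}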
  • Non-transitory computer-readable media include various types of tangible storage media.
  • Examples of non-transitory computer-readable media include magnetic storage media (e.g., flexible disks, magnetic tapes, and hard disk drives), magneto-optical storage media (e.g., magneto-optical disks), CD-ROMs, CD-Rs, CD-R/Ws, and semiconductor memories (e.g., mask ROMs, programmable ROMs (PROMs), erasable PROMs (EPROMs), flash ROMs, and random access memories (RAMs)).
  • Programs may also be supplied to computers via various types of transitory computer-readable media.
  • Examples of transitory computer-readable media include electrical signals, optical signals, and electromagnetic waves.
  • Transitory computer-readable media can supply programs to computers via wired communication paths, such as electrical wires and optical fibers, or via wireless communication paths.
  • [Appendix 1] An image processing device having:
  • [Appendix 2] The image processing device described in Appendix 1, wherein the image-of-interest extraction means matches the first medical image group with the second medical image group on an image-by-image basis, and extracts, as the image of interest, at least an image from the second medical image group that corresponds to an image to which the annotation information is attached.
  • [Appendix 6] The image processing device described in Appendix 5, wherein the lesion evaluation means acquires the detection result based on the image of interest and a machine learning model, and the machine learning model is a model that has been machine-learned to determine the relationship between an image input to the machine learning model and a lesion area present in the image.
  • [Appendix 7] The image processing device described in Appendix 5, wherein the lesion evaluation means aligns the area of the patient in the image to which the annotation information is added with the area of the patient in the image of interest, and determines whether the first lesion is present in the image of interest based on the degree of overlap, after the alignment, between the lesion area detected from the image of interest and the area of the first lesion indicated by the annotation information. (A minimal sketch of this overlap determination follows the appendices below.)
  • [Appendix 8] The image processing device described in Appendix 1, further comprising a display control means for displaying information based on the results of the evaluation, together with the image of interest, on a display device as information to support decision-making.
  • [Appendix 9] The image processing device described in Appendix 8, wherein the results of the evaluation include at least one of a result of a RECIST evaluation and a growth rate of the first lesion, and the display control means performs display control to display information regarding at least one of the RECIST evaluation result and the growth rate on the display device.
  • [Appendix 10] The image processing device described in Appendix 8, wherein the display control means displays warning information regarding the first lesion based on the result of the evaluation.
  • [Appendix 11] The image processing device described in Appendix 1, further comprising a second lesion evaluation means for performing an evaluation of a second lesion not indicated by the annotation information based on the annotation information and the second group of medical images.
  • [Appendix 12] The image processing device described in Appendix 11, wherein the second lesion evaluation means performs an evaluation of the second lesion based on the detection results of the lesion area in each image of the second medical image group and the detection results of the lesion area in the images of the first medical image group corresponding to each image of the second medical image group.
  • [Appendix 13] The image processing device described in Appendix 12, wherein the lesion evaluation means acquires, based on a machine learning model, a detection result of a lesion area in each image of the second medical image group and a detection result of a lesion area in an image of the first medical image group corresponding to each image of the second medical image group.
  • [Appendix 14] The image processing device described in Appendix 12, further comprising a display control means for performing display control to display information based on the results of the evaluation regarding the second lesion on a display device together with images of the second medical image group in which the second lesion is detected.
  • [Appendix 15] The image processing device described in Appendix 14, wherein the display control means highlights the area in which the second lesion is detected on the images of the second medical image group.
  • [Appendix 16] The image processing device described in Appendix 14, wherein the display control means displays warning information regarding the second lesion based on the result of the evaluation regarding the second lesion.
  • [Appendix 17] An image processing method in which a computer: extracts an image of interest to be analyzed for the lesion from the second group of medical images based on a comparison between a first group of medical images of the patient, to which annotation information related to the first lesion has been added, and a second group of medical images of the patient, which have been generated after the first group of medical images; and performs an evaluation of the first lesion in the second medical image group based on the image of interest.
  • [Appendix 18] A storage medium storing a program that causes a computer to execute processing to: extract an image of interest to be analyzed for the lesion from the second group of medical images based on a comparison between a first group of medical images of the patient, to which annotation information related to the first lesion has been added, and a second group of medical images of the patient, which have been generated after the first group of medical images; and perform an evaluation of the first lesion in the second medical image group based on the image of interest.
  • [Appendix 19] The image processing method described in Appendix 17, wherein the first medical image group and the second medical image group are matched on an image-by-image basis, and at least an image from the second medical image group that corresponds to an image to which the annotation information is added is extracted as the image of interest.
  • [Appendix 20] The image processing method described in Appendix 17, wherein, when the first lesion is present in the image of interest, a change in the first lesion is evaluated based on the size of the lesion area of the first lesion in the image to which the annotation information is added and the size of the lesion area of the first lesion in the image of interest. (A growth-rate sketch follows the appendices below.)
  • [Appendix 21]
  • [Appendix 22] The image processing method described in Appendix 20, wherein it is determined whether the first lesion is present in the image of interest based on the annotation information and a detection result of a lesion area in the image of interest.
  • [Appendix 23] The image processing method described in Appendix 22, wherein the detection result is acquired based on the image of interest and a machine learning model, and the machine learning model is a model that has been machine-learned to determine the relationship between an image input to the machine learning model and a lesion area present in the image.
  • [Appendix 24] The image processing method described in Appendix 22, wherein the area of the patient in the image to which the annotation information is added is aligned with the area of the patient in the image of interest, and whether the first lesion is present in the image of interest is determined based on the degree of overlap, after the alignment, between the lesion area detected from the image of interest and the area of the first lesion indicated by the annotation information.
  • [Appendix 25] The image processing method described in Appendix 17, wherein display control is performed to display information based on the results of the evaluation on a display device, together with the image of interest, as information to support decision-making.
  • [Appendix 26] The image processing method described in Appendix 25, wherein the results of the evaluation include at least one of a result of a RECIST evaluation and a growth rate of the first lesion, and display control is performed to display information regarding at least one of the RECIST evaluation result and the growth rate on the display device.
  • [Appendix 27] The image processing method described in Appendix 25, wherein warning information regarding the first lesion is displayed based on the results of the evaluation.
  • [Appendix 28] The image processing method described in Appendix 17, further comprising evaluating a second lesion not indicated by the annotation information based on the annotation information and the second group of medical images.
  • [Appendix 29] The image processing method described in Appendix 28, wherein an evaluation of the second lesion is performed based on the detection results of the lesion area in each image of the second medical image group and the detection results of the lesion area in the images of the first medical image group corresponding to each image of the second medical image group.
  • [Appendix 30] The image processing method described in Appendix 29, wherein a detection result of a lesion area in each image of the second medical image group and a detection result of a lesion area in an image of the first medical image group corresponding to each image of the second medical image group are acquired based on a machine learning model.
  • [Appendix 31] The image processing method described in Appendix 28, wherein display control is performed to display information based on the results of the evaluation regarding the second lesion on a display device together with images of the second medical image group in which the second lesion is detected.
  • [Appendix 32] The image processing method described in Appendix 31, wherein the area in which the second lesion is detected is highlighted on the images of the second medical image group.
  • [Appendix 33] The image processing method described in Appendix 31, wherein warning information regarding the second lesion is displayed based on the result of the evaluation regarding the second lesion.
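  • The overlap determination and size-based change evaluation recited above (Appendices 7, 20, and 24) can be illustrated with the following sketch; the Dice coefficient, the 0.3 threshold, and pixel counts as lesion "size" are illustrative assumptions, and the alignment step is omitted for brevity.

    import numpy as np

    def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        # Degree of overlap between the annotated lesion area and the lesion
        # area detected in the image of interest (Dice coefficient).
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-8)

    def first_lesion_present(annotated_mask, detected_mask, thresh=0.3):
        # Appendix 7 / 24: after alignment (omitted here), the first lesion
        # is judged present when the overlap exceeds a threshold; 0.3 is an
        # arbitrary illustrative value.
        return dice(annotated_mask, detected_mask) > thresh

    def growth_rate(annotated_mask, detected_mask) -> float:
        # Appendix 20: change in the first lesion evaluated from the sizes
        # of the two lesion areas (pixel counts used as "size").
        old_size = float(annotated_mask.sum())
        return (float(detected_mask.sum()) - old_size) / max(old_size, 1.0)

  In practice the two slices would first be registered so that the annotated mask and the detected mask share a coordinate system.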

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

An image processing device (1X) comprises an image-of-interest extraction means (31X) and a lesion evaluation means (32X). The image-of-interest extraction means (31X) extracts, from a second group of medical images, an image of interest that is subject to lesion-related analysis, the extraction being performed on the basis of a comparison between a first group of medical images that is associated with a patient and to which annotation information concerning a first lesion is attached, and the second group of medical images, which is associated with the patient and was generated after the first group of medical images. On the basis of the image of interest, the lesion evaluation means (32X) performs an evaluation relating to the first lesion in the second group of medical images.
PCT/JP2024/002356 2024-01-26 2024-01-26 Image processing device, image processing method, and storage medium Pending WO2025158641A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/002356 WO2025158641A1 (fr) Image processing device, image processing method, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/002356 WO2025158641A1 (fr) Image processing device, image processing method, and storage medium

Publications (1)

Publication Number Publication Date
WO2025158641A1 2025-07-31

Family

Family ID: 96545034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/002356 Pending WO2025158641A1 (fr) Image processing device, image processing method, and storage medium

Country Status (1)

Country Link
WO (1) WO2025158641A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005027978A * 2003-07-09 2005-02-03 Fujitsu Ltd Medical information system
JP2015029885A * 2013-08-07 2015-02-16 Panasonic Corporation Slice image display device
JP2016202722A * 2015-04-27 2016-12-08 Konica Minolta, Inc. Medical image display device and program
WO2018230409A1 * 2017-06-15 2018-12-20 Canon Inc. Information processing device, information processing method, and program
JP2021133164A * 2020-02-28 2021-09-13 Canon Medical Systems Corporation Medical information processing device and medical information processing program
JP2023113518A * 2022-02-03 2023-08-16 FUJIFILM Corporation Information processing device, information processing method, and program

Similar Documents

Publication Publication Date Title
US11915822B2 (en) Medical image reading assistant apparatus and method for adjusting threshold of diagnostic assistant information based on follow-up examination
KR102382872B1 Medical image reading support apparatus and method for providing representative images based on a medical artificial neural network
JP5100285B2 Medical diagnosis support device, control method therefor, program, and storage medium
CN109069014B (zh) 用于估计在冠状动脉中的健康管腔直径和狭窄定量的系统和方法
US12008748B2 (en) Method for classifying fundus image of subject and device using same
KR102676569B1 Medical image analysis apparatus and method, and medical image visualization apparatus and method
KR102289277B1 Medical image reading support apparatus and method for generating evaluation scores for a plurality of medical image reading algorithms
US20210407077A1 (en) Information processing device and model generation method
US20230172451A1 (en) Medical image visualization apparatus and method for diagnosis of aorta
KR20240147617A Method and apparatus for evaluating regional airway wall thickness change using deep-learning-based airway measurement and image registration
US11928817B2 (en) Method for filtering normal medical image, method for interpreting medical image, and computing device implementing the methods
KR20230097646A Artificial-intelligence-based gastric endoscopy image diagnosis support system and method for improving detection rates of gastrointestinal disease and gastric cancer
CN116848588A Automatic labeling of health condition features in medical images
JP3928978B1 Medical image processing device, medical image processing method, and program
KR20200114228A Method and system for predicting isocitrate dehydrogenase genotype mutation using a recurrent neural network
Danala et al. Applying quantitative radiographic image markers to predict clinical complications after aneurysmal subarachnoid hemorrhage: A pilot study
EP4170671A1 Method and system for analyzing medical images
JP2012143368A Medical image display device and program
KR102536369B1 Artificial-intelligence-based gastric endoscopy image diagnosis support system and method
WO2025158641A1 (fr) Image processing device, image processing method, and storage medium
KR102446929B1 Aortic dissection visualization apparatus and method
JP7626626B2 Image interpretation support device and method of operating the same
JP7433901B2 Learning device and learning method
JP2011229707A Medical image display device and program
Wang Using Artificial Intelligence to Improve Diagnosis of Unruptured Intracranial Aneurysms

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24920363

Country of ref document: EP

Kind code of ref document: A1