CN111899850A - Medical image information processing method, display method and readable storage medium - Google Patents
- Publication number: CN111899850A
- Application number: CN202010806496.7A
- Authority
- CN
- China
- Prior art keywords
- image
- medical image
- target area
- layer
- information processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The present disclosure relates to an information processing method for a medical image, including: extracting image features of a selected region of a selected layer image in the medical image; identifying, based on the image features of the target area of the selected layer, a target area of another layer image in the medical image arranged adjacent to the selected layer image; and acquiring information of the medical image according to the identified target area. The display method of the medical image includes: updating the display mode of the target area in response to the target areas identified in the multi-layer images of the medical image, the display mode including rendering. Through the embodiments of the disclosure, the segmentation result of a deep learning algorithm model can be supplemented, efficient modification by the user is assisted, and lesions can be quantified and evaluated more accurately.
Description
Technical Field
The present disclosure relates to the field of intelligent computer-aided medical diagnosis information technology, and in particular to an information processing method for medical images, a display method for medical images, and a computer-readable storage medium.
Background
In the prior art there are methods and devices that identify lesions through a deep learning algorithm model, but they do not allow the identified lesions to be edited. In addition, the output display of lesion analysis results and evaluation results requires manual redefinition, which limits diagnostic efficiency.
Disclosure of Invention
The present disclosure is intended to provide an information processing method for medical images, a display method for medical images, and a computer-readable storage medium, which can supplement the segmentation result of a deep learning algorithm model, assist the user in efficient modification, and quantify and evaluate lesions more accurately.
According to one aspect of the present disclosure, there is provided an information processing method of a medical image, including:
extracting image features of a selected region of a selected layer image in the medical image;
identifying, based on the image features of the target area of the selected layer, a target area of another layer image in the medical image arranged adjacent to the selected layer image;
and acquiring the information of the medical image according to the identified target area.
In some embodiments, the manner of selecting the selected region includes:
determining a delineation line in response to an operation of an operator on the selected layer image;
and taking the image portion enclosed by the delineation line as the selected region.
In some embodiments, wherein the determining a delineation line in response to an operation of an operator on the selected layer image comprises:
capturing click actions and/or movement actions of an operation body;
determining or modifying a delineation line in response to a click action and/or a movement action of the operation body.
In some embodiments, wherein the determining a delineation line in response to an operation of an operator on the selected layer image comprises:
capturing a moving track of an operation body;
and determining a delineation line based on the moving track of the operation body and different contrasts in the selected layer image.
In some embodiments, the selecting of the selected region includes:
acquiring image parameters of different areas in the selected layer image;
and determining a delineation line based on the absolute value of each image parameter or the relative value of the image parameters of the adjacent area.
In some embodiments, wherein the extracting image features of the selected region of the selected slice image in the medical image comprises: extracting texture features of the selected region;
the identifying a target area of an image of another layer in the medical image arranged close to the selected layer image based on image features of the target area of the selected layer includes: and identifying the target area according to the texture features of the selected area, wherein the texture features of the target area are matched with the texture features of the selected area.
In some embodiments, wherein the identifying the target region of the other layer images in the medical image arranged close to the selected layer image comprises: identifying target areas of at least two layers of the other layer images;
the obtaining of the information of the medical image according to the identified target area comprises:
acquiring image parameters of target areas of at least two layers of images of other layers;
and obtaining information of the interested region in the medical image based on the image parameters.
In some embodiments, wherein the medical image comprises a CT chest image, the information of the medical image comprises information of a lesion contained in the CT chest image.
According to one aspect of the present disclosure, a method for displaying medical images is provided, including:
updating the display mode of the target area in response to the identified target areas of the multi-layer images in the medical image, wherein the display mode comprises rendering.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the information processing method of a medical image described above; or
the display method of medical images described above.
The information processing method of a medical image, the display method of a medical image, and the computer-readable storage medium of the various embodiments of the present disclosure extract image features of a selected region of a selected layer image in the medical image; identify, based on the image features of the target area of the selected layer, a target area of another layer image in the medical image arranged adjacent to the selected layer image; and acquire the information of the medical image according to the identified target area. Lesion segmentation can thus be performed by a deep learning algorithm model and, on that basis, an auxiliary tool based on manual or semi-automatic edge modification is added to supplement the segmentation result, assisting the user in modifying lesions efficiently and in quantifying and evaluating them more accurately. Multiple edge-modification modes are provided, including manual delineation, semi-automatic delineation by threshold segmentation, semi-automatic delineation by lasso, and automatic search for similar lesions in adjacent layers based on the lesion features delineated on a single layer; these modes can be freely combined. Meanwhile, based on the modified delineation results, the lesion analysis results and the pneumonia evaluation results, including those for novel coronavirus pneumonia (coronavirus disease 2019, COVID-19), are adjusted.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may designate like components in different views. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of like components. The drawings illustrate various embodiments generally, by way of example and not by way of limitation, and together with the description and claims, serve to explain the disclosed embodiments.
Fig. 1 shows a flowchart of an information processing method of a medical image according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating one manner of delineation in one embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating another manner of delineation in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating yet another way to draw in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating multi-layer image-based information acquisition according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of the updated display interface in a medical image display method according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described below clearly and completely with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments that a person skilled in the art can derive from the described embodiments without any inventive step fall within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components have been omitted.
The medical images to which the present disclosure relates may be three-dimensional medical images of the human body and its parts or organs obtained by various medical imaging devices, for example a three-dimensional image obtained by a computed tomography (CT) scan, or a three-dimensional image reconstructed from two-dimensional CT slice images obtained by a CT scan; the disclosure is not limited thereto. A two-dimensional slice image is a two-dimensional sequential digital tomographic image of the human body and its parts or organs acquired by a medical imaging apparatus, for example by a computed tomography (CT) device, a magnetic resonance imaging (MRI) device, a positron emission tomography (PET) device, or an ultrasound device; again, the disclosure is not limited thereto. A two-dimensional slice image may also refer to a two-dimensional image obtained by extracting features of a three-dimensional medical image and reconstructing it.
While the embodiments of the present disclosure are described with CT images as the primary illustrative example, it should be understood that DICOM images can present three-dimensional images of organs fully and in detail, with the three-dimensional image as the basic building block. The sagittal plane is a plane that divides the human body into left and right parts; any such left-right section is a sagittal plane, the section with equal left and right halves is called the median sagittal plane, and the corresponding image may be defined as a sagittal view. The coronal plane divides the body into front and rear parts: the plane passing through the vertical axis and the horizontal axis, and all planes parallel to it, are called coronal planes, and the corresponding image may be defined as a coronal view. Perpendicular to the sagittal and coronal planes is the transverse (axial) plane.

In the analysis and diagnosis of CT medical images, the parts, lesions, foreign bodies, space-occupying structures, and so on to be analyzed and diagnosed are objects of clinical analysis and diagnostic significance. In machine vision and image processing, the region to be processed is outlined on the processed image in the form of a box, circle, ellipse, irregular polygon, etc., and is called a region of interest (ROI). Operators and functions are commonly used in machine vision software such as Halcon, OpenCV, and Matlab to obtain the ROI, after which the image is processed further. All regions of interest of clinical analysis and diagnostic significance fit the application scenarios of the embodiments of the present disclosure.

In the embodiments of the present disclosure, a nodule, such as a lung nodule, is taken as an example of an object of interest contained in a chest CT image. In chest CT imaging, a lung nodule refers to a focal, roughly ellipsoidal, solid or sub-solid shadow of increased density with a diameter of less than 3 cm; nodules smaller than 5 mm in diameter are termed micro-nodules, and those of 5-10 mm small nodules. Lung nodules may be benign lesions, or malignant or borderline lesions. Currently, a chest image can be acquired by CT, and AI and similar tools can assist the diagnosis of lung nodules possibly present in it. Some diagnostic information display interfaces give information about detected lung nodules, such as benignity/malignancy, volume, long and short diameters, density, doubling time, and CT value range. The feature information of a lung nodule may include volume, long/short diameter, and density information obtainable from the chest image, and may also be characterized simply and intuitively based on nodule classification rules and classification results, so that the corresponding feature information can be displayed rapidly to support clinical diagnosis of the lung nodule.
In the clinical interpretation of chest CT images, the proportion of the lesion volume in the whole lung volume is an index of the degree of lung involvement. The severity of a lung infection is related not only to lesion volume but also to lesion density, and density differences grade the severity of lung lesions differently. The CT value of normal lung tissue is roughly -950 HU to -700 HU. Diseased lung often presents as ground-glass opacities and consolidation, whose increased densities are roughly -600 HU to -200 HU and -100 HU to 100 HU respectively: -600 HU to -200 HU corresponds to ground-glass opacity; -100 HU to 100 HU corresponds to consolidation; and -200 HU to -100 HU is the transition toward consolidation (it can be counted as the solid component, i.e., consolidation can be taken as -200 HU to 100 HU). Ground-glass opacities and consolidations of different densities indicate different stages of progression and different severities of pneumonia. Adjusting lesion analysis results and pneumonia evaluation results is of great significance to public health safety and the public interest when accurate interpretation is required, for example for novel coronavirus pneumonia (coronavirus disease 2019, COVID-19).
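To make these indices concrete, the following is a minimal sketch (in Python with NumPy) of how the involvement and density grading described above could be computed; the array names `ct`, `lung_mask`, and `lesion_mask` are illustrative assumptions, while the HU thresholds are the ones quoted in this paragraph.

```python
import numpy as np

def lung_involvement(ct, lung_mask, lesion_mask):
    """Sketch: lesion share of the lung volume plus density grading by HU range."""
    # Proportion of the lesion in the whole lung volume (degree of involvement).
    volume_fraction = lesion_mask.sum() / lung_mask.sum()
    lesion_hu = ct[lesion_mask]
    # Ground-glass opacity: roughly -600 HU to -200 HU.
    ggo_share = ((lesion_hu >= -600) & (lesion_hu < -200)).mean()
    # Solid component, consolidation included: roughly -200 HU to 100 HU.
    solid_share = ((lesion_hu >= -200) & (lesion_hu <= 100)).mean()
    return {"volume_fraction": volume_fraction,
            "ggo_share": ggo_share,
            "solid_share": solid_share}
```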
As one aspect, as shown in fig. 1, an embodiment of the present disclosure provides an information processing method of a medical image, including:
S101: extracting image features of a selected region of a selected layer image in the medical image;
S102: identifying, based on the image features of the target area of the selected layer, a target area of another layer image in the medical image arranged adjacent to the selected layer image;
S103: acquiring the information of the medical image according to the identified target area.
One of the inventive concepts of the present disclosure is to determine the medical information contained in a medical image through a linkage mechanism between multi-layer images along a certain dimension of the image, such as multi-layer cross-sectional, sagittal, or coronal images. Where multiple layers all show the same lesion, the embodiments of the present disclosure can determine the adjacent layer images showing that lesion from a single determined layer image, thereby providing an accurate information platform for obtaining lesion information.
In various embodiments, the other layer image arranged adjacent to the selected layer image may be understood as at least one other layer image located next to the selected layer image, e.g., at least one other layer image on one side of the selected layer image and/or at least one on the other side. In multi-layer embodiments, the other layer images identified may be continuous multi-layer images, or multi-layer images at intervals determined randomly or in a preset manner; of course, the identification of other layer images must remain meaningful for clinical diagnosis and interpretation. One skilled in the art can appreciate the broad inventive concepts of the present disclosure from embodiments that identify a target area of one or two other layer images adjacent to the selected layer image in a medical image.
In some embodiments, in order to at least overcome the defect that the identified target area cannot be edited, or to remove the constraint of manual redefinition, the embodiments of the present disclosure add an auxiliary tool based on manual or semi-automatic edge modification on top of lesion segmentation by the deep learning algorithm model, supplement the model's segmentation result, assist the physician in efficient modification, and quantify and evaluate lesions more accurately.
As shown in fig. 2, in embodiments of the present disclosure the manner of selecting the selected region may include, but is not limited to, the following steps:
determining a delineation line in response to an operation of an operator on the selected layer image;
and taking the image portion enclosed by the delineation line as the selected region.
AI automatic delineation can improve on manual operation, but its performance may be imperfect; professional manual interpretation remains the gold standard for evaluating imaging results. If the professional accepts the AI result, it replaces the default manual delineation; if the professional considers the AI result inaccurate, it can still be modified manually. For target areas with irregular edges, such as lesions whose irregular contours the AI delineation cannot capture accurately, the disclosed embodiments provide a manual delineation solution to this problem. The operation body of the embodiments of the present disclosure may be a mouse cursor, a stylus, a finger, and the like. For example, the mouse cursor is clicked and moved in the medical image: the diagnostic system captures the click action of the mouse to determine a tracing point on the edge being outlined, and as the mouse moves, the system captures its movement to form the delineation line between the tracing point and the current mouse position. The diagnostic system judges in real time whether the action of the mouse is legal, that is, whether the path determined by the click and movement actions of the operation body meets a preset condition. Generally, this process may be defined as "determining whether the path is legal", and the legality condition at least requires that the path contain no intersection; concretely, delineation lines must not intersect and/or tracing points must not lie on delineation lines. The delineation area enclosed by the delineation line on the medical image is extracted and provided to the user as the selected region. In a preferred embodiment, a configuration menu may provide an interactive object for the user to set the density of tracing points; for example, the tracing-point density of a lesion marking may be configured (denser, medium, sparser) in the settings.
When it is determined that the path determined by the click and movement actions of the operation body would intersect itself, some embodiments of the present disclosure may prompt the user in at least one of the following ways: changing the path color; changing the path line type; or stopping the capture of the click and movement actions of the operation body and outputting prompt information.
As a preferred scheme of intelligent interaction, based on the number of click actions of the operation body so far, the path between the tracing point of the first click action and the tracing point of the last click action is completed automatically in response to the operation of the operation body, so as to close and define the target area of the embodiment of the present disclosure.
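As an illustration of the legality check described above, here is a minimal sketch (Python) of the no-intersection condition on a path stored as a list of (x, y) tracing points; the function names and the cross-product segment test are assumptions of the example, not the patented implementation.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly cross each other."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def path_is_legal(points, closed=False):
    """A delineation path is legal when no two non-adjacent segments cross."""
    pts = points + [points[0]] if closed else points
    segs = list(zip(pts[:-1], pts[1:]))
    n = len(segs)
    for i in range(n):
        for j in range(i + 2, n):
            if closed and i == 0 and j == n - 1:
                continue  # the first and last segments share the closing vertex
            if segments_intersect(*segs[i], *segs[j]):
                return False
    return True
```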
In some embodiments, the present disclosure enables the operation body to modify a delineation line; here, the determining of a delineation line in response to an operation on the selected layer image comprises:
capturing click actions and/or movement actions of an operation body;
determining or modifying a delineation line in response to a click action and/or a movement action of the operation body.
Specifically, tracing points are presented on a delineation line based on the click action of the operation body, and the position of a tracing point is moved by the operation body to modify the shape of the delineation line; alternatively, the position of the whole delineation line is moved by the operation body. In the normal state the delineation line is shown in a first color; when the mouse moves onto the delineation line, it enters a clickable state, may be displayed in a second color, and its tracing points are displayed. Moving the mouse to a tracing point and dragging moves the tracing point and modifies the shape of the edge marking. When moving a tracing point would make the marking illegal, for example by making delineation lines intersect or placing a tracing point on a delineation line, the operation is invalid, the tracing point returns to its original position, and corresponding prompt information is given.
To achieve semi-automatic delineation, as shown in fig. 3, in some embodiments of the present disclosure the determining of a delineation line in response to an operation on the selected layer image comprises:
capturing a moving track of an operation body;
and determining a delineation line based on the moving track of the operation body and different contrasts in the selected layer image.
Specifically, semi-automatic delineation can be performed with a lasso, with steps such as the following: click the lasso tool icon provided by the diagnostic system; press and drag the mouse near the boundary between areas of different contrast in the image, and the lasso tool automatically snaps the selection onto that boundary; when the mouse returns to the starting point, the lasso forms a closed loop, and releasing the mouse yields a closed selected region. With a lasso tool, regions of uniform contrast can be selected with relative ease.
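As an illustration of such boundary snapping, the sketch below uses the IntelligentScissorsMB tool from OpenCV's segmentation module (present in opencv-python 4.5.1 and later); the Canny thresholds, the seed/target coordinates, and the 8-bit windowing step are assumptions of the example, not values prescribed by this disclosure.

```python
import cv2
import numpy as np

# Assume `ct` is a 2D array of HU values; window and rescale it to 8-bit,
# since the scissors tool works on ordinary grayscale/color images.
ct_8u = np.clip((ct + 1000.0) / 2000.0 * 255.0, 0, 255).astype(np.uint8)

tool = cv2.segmentation.IntelligentScissorsMB()
tool.setEdgeFeatureCannyParameters(32, 100)   # assumed edge-detector thresholds
tool.setGradientMagnitudeMaxLimit(200.0)
tool.applyImage(ct_8u)

tool.buildMap((120, 80))                 # seed point: where the drag starts
contour = tool.getContour((180, 150))    # path snapped to the strongest boundary
```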
To achieve delineation by thresholding, as shown in fig. 4, in some embodiments of the present disclosure the manner of selecting the selected region comprises:
acquiring image parameters of different areas in the selected layer image;
and determining a delineation line based on the absolute value of each image parameter or the relative value of the image parameters of the adjacent area.
Specifically, for chest CT images, threshold segmentation can delimit the lesion edge by CT value in two main ways, sketched in code below:
1. Based on the absolute value of each image parameter: for example, a CT value interval is set and defined as a certain region type, such as -200 HU to 100 HU; as described above, a lesion within this CT interval is clinically a solid (consolidated) lesion;
2. Based on the relative values of the image parameters of adjacent areas: for example, a contrast between adjacent CT values is set, and if the difference between adjacent CT values is larger than a certain value, the location is considered the boundary between the lesion and lung tissue. In combination with the foregoing, an adjacent-area CT difference of more than 200 HU can clinically be considered the junction between a solid lesion and lung tissue, or a ground-glass boundary.
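A minimal sketch of both strategies follows, assuming `ct` is a 2D NumPy array of CT values in HU for the selected layer; the -200 HU to 100 HU interval and the 200 HU step are the example values from this passage.

```python
import numpy as np

def absolute_threshold(ct, lo=-200, hi=100):
    """Mask of pixels inside a fixed HU interval (here: solid lesion)."""
    return (ct >= lo) & (ct <= hi)

def relative_threshold(ct, min_step=200):
    """Mask of boundary pixels where adjacent CT values differ sharply."""
    gy = np.abs(np.diff(ct, axis=0, prepend=ct[:1, :]))
    gx = np.abs(np.diff(ct, axis=1, prepend=ct[:, :1]))
    return (gx > min_step) | (gy > min_step)
```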
As a basis for adding the manual or semi-automatic edge-modification auxiliary tool that supplements the result of deep learning model segmentation, the process of extracting image features in the embodiments of the present disclosure may be as follows: the extracting of image features of a selected region of a selected layer image in a medical image comprises extracting texture features of the selected region.
The texture feature contained in each layer image of the medical image is a global image feature that describes the surface properties of the scene corresponding to the image or image region. Unlike color features, texture features are not based on individual pixels; they require statistical computation over regions containing multiple pixels. As statistical features, texture features have rotational invariance and are robust to noise. When processing chest CT images, exploiting texture features is often an effective approach for texture images with large differences in coarseness, density, and so on.
Texture feature extraction in the embodiments of the present disclosure may be implemented, for example, by the following approaches (a code sketch follows the list):
1. histogram feature
The CT histogram provides much information about the image; typical histogram features are the maximum, minimum, mean, median, range, variance, entropy, etc. The CT histogram feature method is simple to compute, has translation and rotation invariance, and is insensitive to the exact spatial distribution of pixel values;
2. Gray-level co-occurrence matrix
Starting from the gray levels of the CT image's pixels, the probability that pairs of pixels at a given distance and gray level occur together is counted, typically in four directions spaced 45° apart. The gray-level co-occurrence matrix reflects comprehensive information about the direction, adjacency interval, and variation amplitude of image gray levels, and is used to analyze the image elements and arrangement structure of the lesion and lung tissue in the CT image;
3. local Binary Pattern (LBP)
With LBP, each pixel in the target area is compared with its neighboring pixels and the result is stored as a binary number; that is, the relationship between the local neighborhood points and the central point is expressed in binary bits, and the bits of all neighborhood points together describe the pattern of the local structure. LBP is robust to image gray-level changes caused by illumination variation and the like, is simple to compute, and can also be used for real-time detection;
4. Autocorrelation function method
Characteristic parameters such as the coarseness and directionality of the texture are extracted by computing the energy spectrum function of the CT image. For regular texture images, surface detection exploits the peaks and valleys of the autocorrelation function;
5. signal processing method
The image is treated as a two-dimensionally distributed signal, so the texture can be analyzed from the perspective of signal filter design. The texture is transferred to a transform domain by some linear transform or filter, and a corresponding energy criterion is then applied to extract texture features; examples include Fourier transforms, Gabor filters, wavelet transforms, and Laws texture energy measures.
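The sketch below illustrates two of the listed extractors using scikit-image, whose graycomatrix/graycoprops and local_binary_pattern functions implement GLCM and LBP features; treating the ROI as an 8-bit grayscale array and the particular property set are assumptions of the example.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def glcm_features(roi):
    """Co-occurrence statistics in four directions spaced 45 degrees apart."""
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "contrast", "homogeneity", "correlation")}

def lbp_histogram(roi, points=8, radius=1):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(roi, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()
```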
Based on this extraction of texture features, in the embodiments of the present disclosure, the identifying of the target areas of the other layer images arranged adjacent to the selected layer image, based on the image features of the target area of the selected layer image, comprises: identifying the target area according to the texture features of the selected region, wherein the texture features of the target area match the texture features of the selected region.
Specifically, a target-area search model can be constructed, and lesions with the same local features can be searched for in adjacent layers using radiomics descriptions of the local lesion features, including texture features such as first-order features, shape features (2D), the gray-level co-occurrence matrix, the gray-level run-length matrix, the gray-level size-zone matrix, the neighboring gray-tone difference matrix, the gray-level dependence matrix, and the like. Texture feature description methods include the following (a matching sketch follows the list):
1. Statistical methods: for example, texture feature analysis with the gray-level co-occurrence matrix (GLCM), whose four key features are energy, inertia, entropy, and correlation. Another typical statistical method extracts texture features from the autocorrelation function of the image (i.e., the image's energy spectrum function): characteristic parameters such as the coarseness and directionality of the texture are extracted by computing the energy spectrum function;
2. Geometric methods: these build on the theory that a complex texture can be composed of simple texture elements repeatedly arranged in some regular form, e.g., the Voronoi checkerboard feature method and structural methods;
3. Model-based methods: based on a structural model of the image, the model parameters are used as texture features. Typical methods are random field models, such as the Markov random field (MRF) model and the Gibbs random field model;
4. Signal processing methods: the extraction and matching of texture features mainly involve the gray-level co-occurrence matrix, Tamura texture features, autoregressive texture models, wavelet transforms, and the like.
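By way of illustration, the sketch below matches the selected region's texture signature against candidate regions on an adjacent layer by feature-vector distance; texture_vector is a placeholder for whichever radiomics features are in use (e.g., the GLCM/LBP features sketched above), and all names and the 0.5 distance cutoff are assumptions of the example.

```python
import numpy as np

def texture_vector(roi):
    # Placeholder: concatenate whatever radiomics features are in use.
    return np.array([roi.mean(), roi.std(), np.median(roi)])

def best_match(selected_roi, candidate_rois, max_dist=0.5):
    """Return the index of the candidate whose texture matches best, or None."""
    ref = texture_vector(selected_roi)
    ref = ref / (np.linalg.norm(ref) + 1e-8)
    best, best_d = None, max_dist
    for idx, cand in enumerate(candidate_rois):
        v = texture_vector(cand)
        v = v / (np.linalg.norm(v) + 1e-8)
        d = np.linalg.norm(ref - v)       # small distance = matched texture
        if d < best_d:
            best, best_d = idx, d
    return best
```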
In order to obtain medical image information more accurately, the identifying a target region of other layer images in the medical image arranged close to the selected layer image according to embodiments of the present disclosure includes: identifying target areas of at least two layers of the other layer images;
the obtaining of the information of the medical image according to the identified target area comprises:
acquiring image parameters of target areas of at least two layers of images of other layers;
and obtaining information of the interested region in the medical image based on the image parameters.
Specifically, taking the processing of cross-sectional images as an example, with the selected layer as the i-th layer, image features (for example, texture features) of the selected region in the i-th layer are extracted, and target areas of other layer images in the group of cross-sectional images, including the (i+j)-th and (i-j)-th layers with j ≥ 1, are identified. Taking the identification of the target areas of at least two other layer images adjacent to the selected layer image as an example: when j = 1, the previous and next layer images adjacent to the i-th layer are identified, as shown in fig. 5. By automatically computing the target areas of the upper and lower layers, information of a region of interest (ROI) in the medical image is obtained, including lesion information, lung tissue information, and ground-glass information.
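The layer-propagation logic can be sketched as follows, assuming `slices` is the list of 2D layer images and `find_region(slice_img, ref_features)` returns the matched region on a layer or None (both names, and the stop-on-first-miss rule, are illustrative assumptions):

```python
def propagate(slices, i, ref_features, find_region, max_span=None):
    """Walk outward from layer i (i±1, i±2, ...) while the texture still matches."""
    matched = {i}
    for step in (+1, -1):                 # search below and above the i-th layer
        j = i + step
        while 0 <= j < len(slices):
            if max_span is not None and abs(j - i) > max_span:
                break
            if find_region(slices[j], ref_features) is None:
                break                     # adjacent layer no longer matches
            matched.add(j)
            j += step
    return sorted(matched)
```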
As one aspect, an embodiment of the present disclosure provides a method for displaying a medical image, including:
updating the display mode of the target area in response to the identified target areas of the multi-layer images in the medical image, wherein the display mode comprises rendering.
As a medical image display solution, this embodiment aims to provide the user with accurate image diagnosis information through an interface display.
Specifically, in combination with the foregoing, the target areas of the multi-layer images in the medical image identified by the display method of this embodiment may be obtained by:
extracting image features of a selected region of a selected layer image in the medical image;
identifying, based on the image features of the target area of the selected layer, target areas of other layer images in the medical image arranged adjacent to the selected layer image.
Further, as shown in fig. 6, the updated display, including rendering, is intended to provide the user with more accurate medical image information after the update, such as lesion information, a CT histogram, and pneumonia evaluation information including that for novel coronavirus pneumonia (coronavirus disease 2019, COVID-19). The lesion information may be obtained by the following step: acquiring the information of the medical image according to the identified target area.
Based on the common knowledge of those skilled in the art, the display method of the present disclosure may also be embodied as a display device comprising a display unit and a processor configured to:
in response to the identified target area of the multilayer image in the medical image, updating a display mode of the target area, wherein the display mode comprises rendering;
wherein: identifying a target region of a multi-slice image in a medical image, comprising:
extracting image features of a selected region of a selected layer image in the medical image;
identifying, based on the image features of the target area of the selected layer, target areas of other layer images in the medical image arranged adjacent to the selected layer image.
In particular, one of the inventive concepts of the present disclosure is to extract image features of a selected region of a selected layer image in the medical image; to identify, based on the image features of the target area of the selected layer, a target area of another layer image arranged adjacent to the selected layer image; and to acquire the information of the medical image according to the identified target area. Lesion segmentation can thus be performed by a deep learning algorithm model and, on that basis, an auxiliary tool based on manual or semi-automatic edge modification supplements the segmentation result, assisting the user in modifying lesions efficiently and in quantifying and evaluating them more accurately. Multiple edge-modification modes are provided, including manual delineation, semi-automatic delineation by threshold segmentation, semi-automatic delineation by lasso, and automatic search for similar lesions in adjacent layers based on the lesion features delineated on a single layer; these modes can be freely combined. Meanwhile, based on the modified delineation results, the lesion analysis results and the pneumonia evaluation results, including those for novel coronavirus pneumonia (coronavirus disease 2019, COVID-19), are adjusted.
As one aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the information processing method of a medical image described above, including at least:
extracting image features of a selected region of a selected layer image in the medical image;
identifying, based on the image features of the target area of the selected layer, a target area of another layer image in the medical image arranged adjacent to the selected layer image;
and acquiring the information of the medical image according to the identified target area.
As one aspect of the present disclosure, there is further provided a computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the display method of medical images described above, including at least:
in response to the identified target area of the multilayer image in the medical image, updating a display mode of the target area, wherein the display mode comprises rendering;
wherein: identifying a target region of a multi-slice image in a medical image, comprising:
extracting image features of a selected region of a selected layer image in the medical image;
identifying, based on the image features of the target area of the selected layer, target areas of other layer images in the medical image arranged adjacent to the selected layer image.
In some embodiments, the processor executing the computer-executable instructions may be a processing device including one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), or a system on a chip (SoC).
In some embodiments, the computer-readable storage medium may be a memory, such as a read-only memory (ROM), a random-access memory (RAM), a phase-change random-access memory (PRAM), a static random-access memory (SRAM), a dynamic random-access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), another type of random-access memory, a flash disk or other form of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a tape cartridge or other magnetic storage device, or any other non-transitory medium that can be used to store information or instructions accessible by a computer device.
In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules that collectively implement the method for displaying medical images according to any one of the present disclosure.
The present disclosure describes various operations or functions that may be implemented as or defined as software code or instructions. The display unit may be implemented as software code or modules of instructions stored on a memory, which when executed by a processor may implement the respective steps and methods.
Such content may be source code or differential code ("delta" or "patch" code) that can be executed directly ("object" or "executable" form). A software implementation of the embodiments described herein may be provided through an article of manufacture with code or instructions stored thereon, or through a method of operating a communication interface to send data through that interface. A machine- or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces with a hardwired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an internet connection, a disk controller, and the like. The communication interface may be configured by providing configuration parameters and/or sending signals to prepare it to provide a data signal describing the software content, and may be accessed by sending one or more commands or signals to it.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, the subject matter of the present disclosure may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.
Claims (10)
1. An information processing method of a medical image, comprising:
extracting image features of a selected region of a selected layer image in the medical image;
identifying a target area of an image of another layer in the medical image arranged close to the image of the selected layer based on image features of the target area of the selected layer;
and acquiring the information of the medical image according to the identified target area.
2. The information processing method according to claim 1, wherein the manner of selecting the selected area includes:
determining a delineation line in response to an operation of an operator on the selected layer image;
and taking the image part surrounded by the delineating lines as a selected area.
3. The information processing method according to claim 2, wherein the determining a delineation line in response to an operation of an operator on the selected layer image comprises:
capturing click actions and/or movement actions of an operation body;
determining or modifying a delineation line in response to a click action and/or a movement action of the operator.
4. The information processing method according to claim 2, wherein the determining a delineation line in response to an operation of an operator on the selected layer image comprises:
capturing a moving track of an operation body;
and determining a delineation line based on the moving track of the operation body and different contrasts in the selected layer image.
5. The information processing method according to claim 1, wherein the manner of selecting the selected area includes:
acquiring image parameters of different areas in the selected layer image;
and determining a delineation line based on the absolute value of each image parameter or the relative value of the image parameters of the adjacent area.
6. The information processing method according to claim 1,
the extracting image features of a selected region of a selected slice image in a medical image comprises: extracting texture features of the selected region;
the identifying a target area of an image of another layer in the medical image arranged close to the selected layer image based on image features of the target area of the selected layer includes: and identifying the target area according to the texture features of the selected area, wherein the texture features of the target area are matched with the texture features of the selected area.
7. The information processing method according to claim 1,
the identifying a target region of an image of another layer in the medical image arranged proximate to the selected layer image comprises: identifying target areas of at least two layers of the other layer images;
the obtaining of the information of the medical image according to the identified target area comprises:
acquiring image parameters of target areas of at least two layers of images of other layers;
and obtaining information of the interested region in the medical image based on the image parameters.
8. The information processing method according to any one of claims 1 to 7, wherein the medical image includes a CT chest image, and the information of the medical image includes information of a lesion contained in the CT chest image.
9. A display method of a medical image, comprising:
updating the display mode of the target area in response to the identified target areas of the multi-layer images in the medical image, wherein the display mode comprises rendering.
10. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the information processing method according to any one of claims 1 to 8; or
The display method according to claim 9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010806496.7A CN111899850A (en) | 2020-08-12 | 2020-08-12 | Medical image information processing method, display method and readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010806496.7A CN111899850A (en) | 2020-08-12 | 2020-08-12 | Medical image information processing method, display method and readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111899850A (en) | 2020-11-06 |
Family ID: 73229990
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010806496.7A Pending CN111899850A (en) | 2020-08-12 | 2020-08-12 | Medical image information processing method, display method and readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111899850A (en) |
- 2020-08-12: Application CN202010806496.7A filed in China; published as CN111899850A (status: pending)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103247046A (en) * | 2013-04-19 | 2013-08-14 | 深圳先进技术研究院 | Automatic target volume sketching method and device in radiotherapy treatment planning |
| CN107403201A (en) * | 2017-08-11 | 2017-11-28 | 强深智能医疗科技(昆山)有限公司 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
| CN111354006A (en) * | 2018-12-21 | 2020-06-30 | 深圳迈瑞生物医疗电子股份有限公司 | Method and device for tracing target tissue in ultrasound images |
| CN110895812A (en) * | 2019-11-28 | 2020-03-20 | 北京推想科技有限公司 | CT image detection method and device, storage medium and electronic equipment |
| CN111145877A (en) * | 2019-12-27 | 2020-05-12 | 杭州依图医疗技术有限公司 | Interaction method, information processing method, display method, and storage medium |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112420169A (en) * | 2020-12-04 | 2021-02-26 | 王军帅 | Digital information processing method for image transmission of hospital radiology department |
| CN114764776A (en) * | 2021-01-12 | 2022-07-19 | 深圳华大智造云影医疗科技有限公司 | Image labeling method and device and electronic equipment |
| CN113823385A (en) * | 2021-09-03 | 2021-12-21 | 青岛海信医疗设备股份有限公司 | Method, device, equipment and medium for modifying DICOM image |
| CN113823385B (en) * | 2021-09-03 | 2024-03-19 | 青岛海信医疗设备股份有限公司 | Method, device, equipment and medium for modifying DICOM image |
| CN114067935A (en) * | 2021-11-03 | 2022-02-18 | 广西壮族自治区通信产业服务有限公司技术服务分公司 | Epidemic disease investigation method, system, electronic equipment and storage medium |
| CN114067935B (en) * | 2021-11-03 | 2022-05-20 | 广西壮族自治区通信产业服务有限公司技术服务分公司 | Epidemic disease investigation method, system, electronic equipment and storage medium |
| CN114974524A (en) * | 2022-06-07 | 2022-08-30 | 河南科技大学第一附属医院 | Medical image data annotation method and device |
| WO2025128021A1 (en) * | 2023-12-12 | 2025-06-19 | Turkcell Teknoloji Arastirma Ve Gelistirme Anonim Sirketi | A system for diagnosing brain tumor via artificial intelligence |
| CN118570204A (en) * | 2024-08-01 | 2024-08-30 | 深圳市龙岗中心医院 | Medical image analysis system based on artificial intelligence |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112529834B (en) | Spatial distribution of pathological image patterns in 3D image data | |
| CN111899850A (en) | Medical image information processing method, display method and readable storage medium | |
| Liu et al. | Multi-view multi-scale CNNs for lung nodule type classification from CT images | |
| CN109615636B (en) | Blood vessel tree construction method and device in lung lobe segment segmentation of CT (computed tomography) image | |
| JP6877868B2 (en) | Image processing equipment, image processing method and image processing program | |
| Gong et al. | Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis | |
| KR102204437B1 (en) | Apparatus and method for computer aided diagnosis | |
| CN118262875A (en) | Medical image diagnosis and contrast film reading method | |
| CN111553892B (en) | Lung nodule segmentation calculation method, device and system based on deep learning | |
| US20140235998A1 (en) | Method and apparatus for performing registration of medical images | |
| US9406146B2 (en) | Quantitative perfusion analysis | |
| CN107622492A (en) | Fissure segmentation method and system | |
| JP2015528372A (en) | System and method for automatically detecting pulmonary nodules in medical images | |
| US7359538B2 (en) | Detection and analysis of lesions in contact with a structural boundary | |
| Wang et al. | A method of ultrasonic image recognition for thyroid papillary carcinoma based on deep convolution neural network. | |
| CN111563876B (en) | Medical image acquisition method and medical image display method | |
| WO2013151749A1 (en) | System, method, and computer accessible medium for volumetric texture analysis for computer aided detection and diagnosis of polyps | |
| CN117876833A (en) | A lung CT image feature extraction method for machine learning | |
| Mastouri et al. | A morphological operation-based approach for Sub-pleural lung nodule detection from CT images | |
| Jamil et al. | Adaptive thresholding technique for segmentation and juxtapleural nodules inclusion in lung segments | |
| Neha | Kidney localization and stone segmentation from a ct scan image | |
| Heeneman et al. | Lung nodule detection by using Deep Learning | |
| CN114757894A (en) | A system for analyzing bone tumor lesions | |
| KR102332472B1 (en) | Tumor automatic segmentation using deep learning based on dual window setting in a medical image | |
| WO2007033170A1 (en) | System and method for polyp detection in tagged or non-tagged stool images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201106 |