
CN115375603A - Image identification method and device, electronic equipment and storage medium - Google Patents

Image identification method and device, electronic equipment and storage medium

Info

Publication number
CN115375603A
CN115375603A CN202110535027.0A
Authority
CN
China
Prior art keywords
image
blood vessel
post
focus detection
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110535027.0A
Other languages
Chinese (zh)
Inventor
肖月庭
阳光
郑超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shukun Beijing Network Technology Co Ltd
Original Assignee
Shukun Beijing Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shukun Beijing Network Technology Co Ltd filed Critical Shukun Beijing Network Technology Co Ltd
Priority to CN202110535027.0A priority Critical patent/CN115375603A/en
Publication of CN115375603A publication Critical patent/CN115375603A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image identification method and device, an electronic device, and a storage medium. The method first acquires a post-processing image of a blood vessel in a medical image to be identified; determines a lesion detection area according to the post-processing image of the blood vessel; then performs lesion detection on the lesion detection area to obtain a lesion detection result; and finally corrects the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified. By using artificial intelligence to assist in detecting vascular diseases, the method improves both the efficiency and the accuracy of vascular disease diagnosis.

Description

Image identification method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of medical image processing, and in particular, to an image recognition method and apparatus, an electronic device, and a storage medium.
Background
In the modern medical field, image recognition technology can help doctors quickly and accurately assess the state of a patient's blood vessels and supports the timely diagnosis and treatment of various vascular diseases; it is therefore of great significance to the development of the medical field.
With increasing age, the probability of vascular problems increases; for example, calcified plaques, non-calcified plaques, and aneurysms often appear in head and neck vascular lesions. In the traditional method of head and neck vessel diagnosis, a doctor observes a head and neck vessel image and combines it with personal medical experience to reach a diagnosis. This traditional method is time-consuming and labor-intensive, prone to missed findings, and cannot guarantee the accuracy of the diagnostic result. How to improve the efficiency and accuracy of vascular disease diagnosis has therefore become an urgent problem.
Disclosure of Invention
The application provides an image identification method and device, an electronic device, and a storage medium, so as to improve the accuracy of blood vessel diagnosis.
In order to solve the above technical problem, the embodiments of the present application provide the following technical solutions:
an image recognition method, comprising:
acquiring a post-processing image of a blood vessel in a medical image to be identified;
determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified;
performing lesion detection on the lesion detection area to obtain a lesion detection result;
and correcting the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified.
An image recognition apparatus, comprising:
an acquisition module, configured to acquire a post-processing image of a blood vessel in a medical image to be identified;
a determining module, configured to determine a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified;
a detection module, configured to perform lesion detection on the lesion detection area to obtain a lesion detection result;
and a correction module, configured to correct the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified.
Meanwhile, an embodiment of the present application further provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the above image recognition method.
An embodiment of the present application further provides a computer-readable storage medium storing processor-executable instructions, which are loaded by one or more processors to execute the steps of the above image recognition method.
Beneficial effects: the application provides an image identification method and device, an electronic device, and a storage medium. The application acquires a post-processing image of a blood vessel in a medical image to be identified; determines a lesion detection area according to the post-processing image; performs lesion detection on the lesion detection area to obtain a lesion detection result; and finally corrects the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified. In this process, the image is processed to obtain the vascular lesion area, and the post-processing image is corrected according to that area. By using image recognition technology for computer-assisted detection of vascular diseases, both the efficiency and the accuracy of vascular disease diagnosis are improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a scene schematic diagram of an image recognition system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a first image recognition method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a second image recognition method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Fig. 7a is a schematic diagram of a medical image to be recognized corresponding to a target region according to an embodiment of the present application.
Fig. 7b is a schematic diagram of a post-processing image of a blood vessel in a medical image to be identified according to an embodiment of the present application.
Fig. 8a is a schematic view of a post-processing image of a blood vessel before aneurysm detection according to an embodiment of the present application.
Fig. 8b is a schematic diagram of a post-processing image after blood vessel correction when a lesion is an aneurysm according to an embodiment of the present application.
Fig. 9a is a schematic diagram of a post-processing image of a blood vessel before calcification detection according to an embodiment of the present application.
Fig. 9b is a schematic diagram of a post-processing image after blood vessel correction when a lesion is calcified according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image identification method and device, electronic equipment and a storage medium.
The terms "comprising" and "having," and any variations thereof, as appearing in the specification, claims, and drawings of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or may alternatively include other steps or elements inherent to such process, method, article, or apparatus. In addition, the terms "first", "second", and "third", etc. are used to distinguish different objects, not to describe a particular order.
In the embodiments of the present application, CT (Computed Tomography) uses accurately collimated X-ray beams, gamma rays, ultrasonic waves, and the like, together with a highly sensitive detector, to perform cross-sectional scans one by one around a part of the human body. It has the advantages of fast scanning and clear images, and can be used to examine a variety of diseases.
In the embodiments of the present application, the post-processing image includes: curved planar reformation (CPR) images, straightened images, probe images (blood vessel cross-section images), maximum intensity projection (MIP) images, volume rendering (VR) images, and the like.
In the embodiments of the present application, a mask refers to a tissue/lesion segmentation model obtained by repeatedly training a deep learning neural network with an empirical model or real tissue/lesion data as input.
In the embodiments of the present application, the medical image to be identified includes, but is not limited to, a CT image; in the implementation of the present invention, the medical image to be identified corresponding to the target region is mainly a CT image of the human head and neck.
In the embodiments of the present application, the display data mainly refers to the post-processing image of the blood vessel after correction.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario provided by an embodiment of the present application in the field of medical image processing. When a doctor diagnoses the head and neck blood vessels of a patient, an electronic device such as a computer is used to assess the vascular state and provide an accurate blood vessel diagnosis report, helping the doctor reach a reasonable diagnostic result.
The scenario of the image recognition method provided by the embodiment of the present application includes the following components: the electronic device 11, the image acquisition device 12, the database 13, and the network 14, wherein:
the electronic device 11 is responsible for identifying, correcting and other operations on the acquired head and neck medical image, the electronic device includes a processor, the image identification device can be specifically integrated in the electronic device 11, and the electronic device can be a server or a terminal and other devices; the terminal may include a tablet Computer, a notebook Computer, a Personal Computer (PC), a microprocessor, or other devices.
The image capturing device 12 is responsible for capturing a head and neck image of a human body, the image capturing device 12 may include an electronic device such as a Magnetic Resonance Imaging (MRI), a Computed Tomography (CT), a colposcope, or an endoscope, and in this embodiment, the image capturing device is a Computed Tomography device;
the database 13 is responsible for storing head and neck initial image data, and the database 13 comprises a local database and/or a cloud database and the like;
network 14 is responsible for transmitting the initial head and neck image from image capture device 12 or database 13 to electronic device 11.
In one embodiment, after the image acquisition device 12 acquires the head and neck medical CT image of the patient, the electronic device 11 may acquire the medical image data acquired by the image acquisition device 12 through the network 14, and the electronic device 11 performs the image identification method provided by the present application to perform operations such as identification and correction on the head and neck medical CT image, and outputs the identified and corrected result.
In another embodiment, the medical image data stored in the database 13 may be acquired through the network 14, and then the head and neck medical CT image is transmitted to the electronic device 11 through the network 14, and the electronic device 11 performs operations such as identification and correction on the acquired head and neck medical image data.
In another embodiment, the image capturing device 12 may be connected to the database 13, the medical image data captured by the image capturing device 12 is stored in the database 13, the medical image data stored in the database 13 is then acquired through the network 14, and the electronic device 11 performs operations such as identification and correction on the acquired head and neck medical image data.
It should be noted that the scene diagram of the image recognition method shown in fig. 1 is only an example. The electronic device and scenario described in the embodiments of the present application are intended to illustrate the technical solution more clearly and do not limit it; as those of ordinary skill in the art will appreciate, with the evolution of systems and the emergence of new service scenarios, the technical solution provided herein is equally applicable to similar technical problems. Detailed descriptions follow. The order of the following embodiments is not intended to indicate a preferred order.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image recognition method according to an embodiment of the present disclosure, where the image recognition method includes the following steps:
201: and acquiring a post-processing image of the blood vessel in the medical image to be identified.
In one embodiment, the step of obtaining a post-processing image of a blood vessel in the medical image to be identified includes: acquiring a medical image to be identified corresponding to a target part; performing blood vessel segmentation on the medical image to be identified based on a blood vessel mask to obtain the center line of the blood vessel in the medical image; and obtaining a post-processing image of the blood vessel according to the center line.
For example, in diagnosing a head and neck vascular disease, a head and neck CT image of the patient is first obtained. Blood vessel segmentation is performed on the CT image based on a head and neck vessel mask to obtain the vessel boundary, the vessel center line is then determined from the boundary, and finally the vessel post-processing image is obtained from the center line. Here, the mask refers to a tissue/lesion segmentation model obtained by repeatedly training a deep learning neural network with an empirical model or real tissue/lesion data as input.
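As an illustration of the center-line step, the following sketch takes the per-row centroid of a roughly vertical binary vessel mask. This is a simplified stand-in, not the patented method; a real system would use proper skeletonization or minimal-path tracking, and all names here are hypothetical.

```python
import numpy as np

def vessel_centerline(mask: np.ndarray) -> list:
    """Hypothetical sketch: per-row centroid of a roughly vertical vessel mask.
    Returns a list of (row, mean_column) points along the vessel."""
    points = []
    for r in range(mask.shape[0]):
        cols = np.flatnonzero(mask[r])       # columns covered by the vessel in this row
        if cols.size:
            points.append((r, float(cols.mean())))
    return points

# Synthetic straight "vessel": columns 4..6 of a 10x12 grid stand in for a segmented mask.
mask = np.zeros((10, 12), dtype=bool)
mask[:, 4:7] = True
centerline = vessel_centerline(mask)
```

For this straight synthetic vessel, the centroid of columns 4, 5, 6 in every row is column 5.0, so the sketch recovers the expected center line.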
202: determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified.
in one embodiment, the step of determining a lesion detection area based on a post-processed image of a blood vessel in the medical image to be identified comprises: acquiring a blood vessel boundary in the post-processing image; and expanding a plurality of pixel areas to the periphery based on the blood vessel boundary to obtain a focus detection area.
In the process of obtaining the post-processing image of the blood vessel in the medical image to be identified, the situation that the real lesion area is mistakenly segmented often occurs, so that the outward convex lesions such as aneurysm and the like cannot be identified or are missed to be detected when the post-processing image is subsequently utilized for segmentation, therefore, the post-processing image needs to be further detected, the unrecognizable or missed lesions are detected again, and the post-processing image is corrected according to the newly detected lesions. A lesion detection area needs to be defined first before lesion detection, and a method for determining a lesion detection area is to obtain a blood vessel boundary in a post-processing image, and expand an area of a plurality of pixels to the periphery with the blood vessel boundary as a reference, to obtain a lesion detection area, for example: if the blood vessel region in the post-processed image is a and a region corresponding to a region expanded by 3 pixels to the periphery with respect to the blood vessel boundary is B, the region a and the region B are regarded as lesion detection regions.
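The boundary-expansion rule (region A plus a 3-pixel margin B) can be sketched with a small binary dilation. This is an illustrative interpretation with a 4-neighbourhood structuring element; the embodiment does not specify the dilation scheme, so treat the details as assumptions.

```python
import numpy as np

def dilate(mask: np.ndarray, pixels: int) -> np.ndarray:
    """4-neighbourhood binary dilation, repeated `pixels` times."""
    out = mask.copy()
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow right
        grown[:, :-1] |= out[:, 1:]   # grow left
        out = grown
    return out

# Region A: the segmented vessel (a single pixel here for illustration);
# region B: the 3-pixel expansion ring around it.
vessel = np.zeros((9, 9), dtype=bool)
vessel[4, 4] = True
detection_area = dilate(vessel, 3)    # A ∪ B, as in the text's example
ring = detection_area & ~vessel       # B alone
```

Three 4-neighbourhood dilations of a single pixel produce a Manhattan-distance diamond of radius 3, i.e. 25 pixels in total.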
In one embodiment, the step of determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified includes: acquiring the blood vessel center line in the post-processing image; and expanding an area of several pixels outward from the center line to obtain the lesion detection area.
Another method for determining a lesion detection area is to acquire the blood vessel center line in the post-processing image and expand an area of several pixels outward from the center line. For example, if the blood vessel region in the post-processing image is A, and the region obtained by expanding 3 pixels outward from the blood vessel center line is B, then the union of region A and region B is taken as the lesion detection area.
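The center-line variant can similarly be sketched by marking every pixel within a fixed distance of the center line. The Manhattan metric and the 3-pixel margin below are assumptions made for illustration; the embodiment does not fix either.

```python
import numpy as np

def detection_area_from_centerline(shape, centerline, margin):
    """Mark every pixel within Manhattan distance `margin` of the center line."""
    area = np.zeros(shape, dtype=bool)
    for (r, c) in centerline:
        for dr in range(-margin, margin + 1):
            for dc in range(-margin + abs(dr), margin - abs(dr) + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                    area[rr, cc] = True
    return area

centerline = [(r, 5) for r in range(3, 7)]   # a short vertical center line
area = detection_area_from_centerline((10, 11), centerline, margin=3)
```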
In one embodiment, the step of determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified includes: acquiring a predicted lesion area mask corresponding to the post-processing image; and determining the lesion detection area according to the predicted lesion area mask.
A third method of determining the lesion detection area is to obtain the predicted lesion area mask corresponding to the post-processing image and dynamically adjust the lesion detection area according to that mask.
203: and (4) performing focus detection on the focus detection area to obtain a focus detection result.
The step of performing lesion detection on the lesion detection area to obtain a lesion detection result includes:
calling a focus detection model; and matching a mask of the predicted focus area with the focus detection area based on the focus detection model to obtain a focus detection result.
In the process of detecting the focus area, firstly, a focus detection model is called, a mask of the predicted focus area is matched with the focus detection area by using a bounding box neural network, and the focus in the focus detection area is determined according to a matching result.
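Since the embodiment does not detail how the predicted lesion mask is matched against the detection area, the following sketch uses plain mask overlap (intersection-over-union) as a stand-in for the bounding-box neural network described above; the threshold and names are assumptions.

```python
import numpy as np

def mask_iou(pred: np.ndarray, region: np.ndarray) -> float:
    """Intersection-over-union between a predicted lesion mask and a detection area."""
    inter = np.logical_and(pred, region).sum()
    union = np.logical_or(pred, region).sum()
    return float(inter) / float(union) if union else 0.0

def match_lesions(pred_masks, detection_area, threshold=0.1):
    """Keep the indices of predicted lesions that sufficiently overlap the detection area."""
    return [i for i, m in enumerate(pred_masks) if mask_iou(m, detection_area) >= threshold]

# Toy example: one predicted lesion inside the detection area, one outside it.
area = np.zeros((8, 8), dtype=bool); area[2:6, 2:6] = True
hit = np.zeros((8, 8), dtype=bool);  hit[3:5, 3:5] = True
miss = np.zeros((8, 8), dtype=bool); miss[0, 7] = True
matches = match_lesions([hit, miss], area)
```

Only the overlapping prediction survives the matching step, which is the behaviour the text relies on to recover missed lesions inside the detection area.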
204: and correcting the post-processing image according to the focus detection result to obtain display data corresponding to the medical image to be identified.
In an embodiment, the step of modifying the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be recognized includes: determining the type of the focus according to the focus detection result; calling a corresponding correction model according to the focus type; and correcting the vessel boundary and the vessel center line of the post-processing image based on the correction model to obtain display data corresponding to the initial image.
For example, the lesion type is first determined according to the lesion detection result, a corresponding correction model is called according to the lesion type, and the blood vessel boundary and center line of the post-processing image are corrected based on that model. If the lesion is an aneurysm, the aneurysm correction model directly splices the aneurysm image onto the blood vessel image to form a corrected vessel image and re-determines the vessel center line. Fig. 8a is a schematic diagram of the vessel post-processing image before aneurysm detection provided in this embodiment, in which the solid line represents the vessel boundary, the dotted bulge represents an aneurysm, and the dotted line in the middle of the vessel represents the center line. As can be seen from fig. 8a, when an aneurysm is present in the vessel, it is missed in the post-processing image before detection; the missed lesion therefore needs to be re-detected, and the post-processing image corrected according to the detected lesion. Fig. 8b is a schematic diagram of the post-processing image after correction when the lesion is an aneurysm: the aneurysm is re-detected in the corrected image, and the vessel boundary and center line are corrected according to the detected aneurysm.
If the lesion is calcification, the calcification correction model removes the calcified region from where it meets the blood vessel image and takes the intersection as the real vessel boundary, from which the center line is then determined. Fig. 9a is a schematic diagram of the vessel post-processing image before calcification detection provided in this embodiment, in which the solid line represents the vessel boundary, the dotted line represents the center line, and the shaded portion is calcification. As can be seen from fig. 9a, when calcification is present in a vessel, it is missed in the post-processing image before correction; the missed calcified lesion therefore needs to be re-detected, and the post-processing image corrected according to the detected lesion. Fig. 9b is a schematic diagram of the post-processing image after correction when the lesion is calcification, in which the solid line represents the vessel boundary and the dotted line in the middle represents the center line. As can be seen from fig. 9b, the calcification is re-detected in the corrected post-processing image, and the vessel boundary and center line are corrected according to the detected calcification.
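One way to read the two corrections described above is as set operations on binary masks: splicing an aneurysm onto the vessel is a union, and removing calcification from the vessel is a difference. The sketch below illustrates that reading; it is an interpretation for clarity, not the embodiment's actual correction model.

```python
import numpy as np

def correct_vessel_mask(vessel: np.ndarray, lesion: np.ndarray, lesion_type: str) -> np.ndarray:
    """Hedged sketch of the two corrections:
    - "aneurysm": splice the lesion into the vessel (set union),
    - "calcification": remove the lesion's overlap with the vessel (set difference)."""
    if lesion_type == "aneurysm":
        return vessel | lesion
    if lesion_type == "calcification":
        return vessel & ~lesion
    return vessel

# Toy vessel: a vertical band; a 2-pixel bulge beyond the wall; 2 calcified pixels inside it.
vessel = np.zeros((6, 6), dtype=bool); vessel[:, 2:4] = True
bulge = np.zeros((6, 6), dtype=bool);  bulge[2:4, 4] = True
calc = np.zeros((6, 6), dtype=bool);   calc[0:2, 2] = True

after_aneurysm = correct_vessel_mask(vessel, bulge, "aneurysm")
after_calcif = correct_vessel_mask(vessel, calc, "calcification")
```

After either correction, the center line would be re-derived from the corrected mask, as the text describes for both lesion types.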
Referring to fig. 3, fig. 3 is a schematic flow chart of an image recognition method according to an embodiment of the present disclosure, in which the method is described in detail from the perspective of a user terminal (e.g., a doctor's computer). The image recognition method includes the following steps:
s301: the training server 33 performs training of the lesion detection model.
In the embodiments of the present application, the lesion detection model may use simulation results of a large amount of random head and neck vessel data as training samples for a deep neural network, together with annotated vessel lesion results. The training samples are input into the lesion detection model to obtain predicted vessel lesion results, and the model is trained by deep learning according to the predicted results and the annotated results.
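The supervised training described above (simulated samples in, annotated lesion results as targets, iterative updates against the prediction error) can be illustrated with a toy gradient-descent classifier. A logistic model stands in for the deep neural network here, and all data and names are synthetic assumptions.

```python
import numpy as np

# Toy stand-in for the described supervised training: features from simulated
# vessel data, labels playing the role of annotated lesion results.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # simulated vessel features
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)            # "annotated" lesion labels

w = np.zeros(4)
for _ in range(500):                          # gradient descent on the log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted lesion probability
    w -= 0.1 * X.T @ (p - y) / len(y)         # update against prediction error

pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = float((pred == y).mean())
```

The loop mirrors the structure of the described training: predict, compare against the annotation, and update the model parameters, repeated until the predictions fit the annotated results.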
S302: the training server 33 performs training of the correction model.
In the embodiments of the present application, the correction model may use simulation results of a large amount of vessel post-processing image correction data as training samples for a deep neural network, together with annotated post-processing images of corrected vessels. The training samples are input into the correction model to obtain predicted post-processing images of the vessels, and the model is trained by deep learning according to the predicted results and the annotated corrected results.
S303: the user terminal 32 sends a data request to the image pickup device 31.
In the embodiments of the present application, a doctor sends a data request through the user terminal to an image acquisition device, such as a CT device, to request a head and neck CT image of a patient.
S304: the image capturing device 31 sends a data response to the user terminal 32.
In this embodiment, an image acquisition device such as a CT device performs a CT scan on the patient according to the data request, generates a head and neck CT image, and sends it to the doctor's user terminal in a data response. Fig. 7a is a schematic diagram of the medical image to be identified corresponding to the target region provided in this embodiment of the present application.
S305: the user terminal 32 corrects the head and neck CT image to obtain display data corresponding to the target display organization.
In this embodiment of the present application, after receiving the data response, the user terminal 32 parses it and obtains the head and neck CT image as the medical image to be identified corresponding to the target region, as shown in fig. 7a. The user terminal 32 then performs vessel segmentation on the image based on a vessel mask to obtain the vessel center line, and obtains the post-processing image of the vessel according to the center line; fig. 7b is a schematic diagram of this post-processing image provided in this embodiment. The display data is the post-processing image after correction.
The image identification method provided by this embodiment first acquires a post-processed image of the blood vessel in the medical image to be identified; determines a lesion detection area according to the post-processed image of the blood vessel in the medical image to be identified; then performs lesion detection on the lesion detection area to obtain a lesion detection result; and finally corrects the post-processed image according to the lesion detection result to obtain display data corresponding to the medical image to be identified. By using image recognition technology to assist computer-based detection of vascular lesions, the method improves both the efficiency and the accuracy of vascular disease diagnosis.
In order to better implement the image recognition method provided by the embodiment of the present application, the embodiment of the present application further provides an apparatus based on the image recognition method. The meanings of the terms are the same as those in the image recognition method above, and for implementation details, reference may be made to the description in the method embodiments.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present disclosure, where the image recognition apparatus may include:
an obtaining module 401, configured to obtain a post-processing image of a blood vessel in a medical image to be identified;
a determining module 402, configured to determine a lesion detection area according to a post-processing image of a blood vessel in the medical image to be identified;
a detection module 403, configured to perform lesion detection on the lesion detection area to obtain a lesion detection result;
and a correcting module 404, configured to correct the post-processing image according to the lesion detection result, so as to obtain display data corresponding to the medical image to be identified.
In an embodiment, the obtaining module 401 is specifically configured to obtain a medical image to be identified corresponding to a target portion; performing blood vessel segmentation on the medical image to be identified based on a blood vessel mask to obtain a blood vessel central line of a blood vessel in the medical image to be identified; and obtaining a post-processing image of the blood vessel in the medical image to be identified according to the blood vessel central line.
In an embodiment, the determining module 402 is specifically configured to obtain the blood vessel boundary in the post-processing image, and expand the region outward by a plurality of pixels from the blood vessel boundary to obtain the lesion detection area.
In another embodiment, the determining module 402 is specifically configured to obtain the blood vessel center line in the post-processing image, and expand the region outward by a plurality of pixels from the blood vessel center line to obtain the lesion detection area.
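Both embodiments above amount to a binary dilation of a seed region (the vessel boundary or center line) by a few pixels. A minimal sketch with 4-connected dilation follows; the connectivity and the pixel count are assumptions, since the patent does not fix them:

```python
import numpy as np

def dilate(region, pixels):
    """Expand a binary region outward by `pixels` using 4-connectivity."""
    out = region.copy()
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow rightward
        grown[:, :-1] |= out[:, 1:]   # grow leftward
        out = grown
    return out

# Hypothetical seed: a horizontal vessel center line across a 7x7 image.
centerline = np.zeros((7, 7), dtype=bool)
centerline[3, :] = True
detection_region = dilate(centerline, 2)   # rows 1..5 become the region
```

In production code, morphology routines such as those in scikit-image would typically replace the hand-rolled loop.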
In an embodiment, the determining module 402 is specifically configured to obtain a predicted lesion region mask corresponding to the post-processing image, and determine the lesion detection area according to the predicted lesion region mask.
In an embodiment, the detection module 403 is specifically configured to invoke a lesion detection model, and match the predicted lesion region mask with the lesion detection area based on the lesion detection model to obtain the lesion detection result.
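The patent does not specify the matching criterion used by the lesion detection model; one plausible sketch is an intersection-over-union overlap test between the predicted lesion mask and the detection area. The threshold value and the masks below are assumptions:

```python
import numpy as np

def match(pred_mask, detection_region, iou_threshold=0.25):
    """Report a lesion when the predicted lesion mask overlaps the
    detection region strongly enough (IoU-based; threshold assumed)."""
    inter = np.logical_and(pred_mask, detection_region).sum()
    union = np.logical_or(pred_mask, detection_region).sum()
    iou = float(inter / union) if union else 0.0
    return iou >= iou_threshold, iou

# Hypothetical masks: a 4x4 detection region and a 3x3 predicted lesion.
region = np.zeros((8, 8), dtype=bool)
region[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:6, 3:6] = True
hit, iou = match(pred, region)   # pred lies entirely inside the region
```

A trained model would of course produce the mask itself; only the overlap test is sketched here.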
In an embodiment, the correcting module 404 is specifically configured to determine the lesion type according to the lesion detection result; invoke a corresponding correction model according to the lesion type; and correct the blood vessel boundary and the blood vessel center line of the post-processing image based on the correction model to obtain display data corresponding to the initial image.
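The type-specific correction step can be sketched as a dispatch table from lesion type to correction model. The lesion type names and correction behaviours below are illustrative placeholders, not taken from the patent (the real models would be trained networks):

```python
# Placeholder "correction models": each adjusts the post-processing result
# for one lesion type.
def correct_stenosis(post):
    return {**post, "boundary": "re-estimated"}

def correct_plaque(post):
    return {**post, "centerline": "smoothed"}

CORRECTION_MODELS = {"stenosis": correct_stenosis, "plaque": correct_plaque}

def correct(post, lesion_type):
    """Invoke the correction model matching the detected lesion type."""
    return CORRECTION_MODELS[lesion_type](post)

post_processed = {"boundary": "raw", "centerline": "raw"}
display_data = correct(post_processed, "plaque")
```

Registering models in a mapping keeps the correcting module open to new lesion types without changing the dispatch logic.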
Each of the above modules may be implemented as described in the foregoing embodiments, and details are not repeated here.
In an embodiment, the electronic device provided in the embodiments of the present application includes a terminal, a server, and the like, which are separately described.
The present embodiment also provides a terminal, as shown in fig. 5, which may include components such as a Radio Frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the terminal structure shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 501 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 508 for processing; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 501 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 501 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), long Term Evolution (LTE), email, short Message Service (SMS), and the like.
The memory 502 may be used to store software programs and modules, and the processor 508 executes various functional applications and data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
The input unit 503 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, in one embodiment, the input unit 503 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. In an embodiment, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the touch direction of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 508, and can also receive and execute commands sent from the processor 508. In addition, the touch-sensitive surface may be implemented using resistive, capacitive, infrared, and surface acoustic wave technologies. The input unit 503 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 504 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel, and in one embodiment, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near the touch-sensitive surface, it is transmitted to the processor 508 to determine the type of touch event, and the processor 508 then provides a corresponding visual output on the display panel according to the type of touch event. Although in fig. 5 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The terminal may also include at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of identifying the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration identification related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
The audio circuit 506, which includes a speaker and a microphone, may provide an audio interface between the user and the terminal. The audio circuit 506 may transmit the electrical signal converted from the received audio data to the speaker, which converts the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data; the audio data is then processed by the processor 508 and sent to another terminal via the RF circuit 501, or output to the memory 502 for further processing. The audio circuit 506 may also include an earbud jack to provide communication of peripheral headphones with the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 507, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 5 shows the WiFi module 507, it can be understood that it is not an essential part of the terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 508 is a control center of the terminal, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby integrally monitoring the mobile phone. In an embodiment, processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 508.
The terminal also includes a power supply 509 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 508 via a power management system that may be used to manage charging, discharging, and power consumption. The power supply 509 may also include any component such as one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 508 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 508 runs the application programs stored in the memory 502, thereby implementing various functions:
acquiring a post-processing image of a blood vessel in a medical image to be identified;
determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified;
performing lesion detection on the lesion detection area to obtain a lesion detection result;
and correcting the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified.
The embodiment of the present application further provides a server. Fig. 6 shows a schematic structural diagram of the server according to the embodiment of the present application. Specifically:
the server may include components such as a processor 601 of one or more processing cores, memory 602 of one or more computer-readable storage media, a power supply 603, and an input unit 604. Those skilled in the art will appreciate that the server architecture shown in FIG. 6 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 601 is a control center of the server, connects various parts of the entire server using various interfaces and lines, performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the server. Optionally, processor 601 may include one or more processing cores; preferably, the processor 601 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601.
The memory 602 may be used to store software programs and modules, and the processor 601 executes various functional applications and data processing by running the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the server, and the like. Further, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 601 with access to the memory 602.
The server further includes a power supply 603 for supplying power to each component, and preferably, the power supply 603 may be logically connected to the processor 601 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 603 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The server may further include an input unit 604, and the input unit 604 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 601 in the server loads the executable file corresponding to the process of one or more application programs into the memory 602 according to the following instructions, and the processor 601 runs the application programs stored in the memory 602, thereby implementing various functions as follows:
acquiring a post-processing image of a blood vessel in a medical image to be identified;
determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified;
performing lesion detection on the lesion detection area to obtain a lesion detection result;
and correcting the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the detailed description of the image recognition method above, which is not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be performed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium having stored therein a plurality of instructions, which can be loaded by a processor to perform the steps of any of the methods provided by the embodiments of the present application.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks, optical disks, and the like.
Since the instructions stored in the storage medium may perform the steps of any one of the methods provided in the embodiments of the present application, beneficial effects that can be achieved by any one of the methods provided in the embodiments of the present application may be achieved, for details, see the foregoing embodiments, and are not described herein again.
The image recognition method, apparatus, electronic device, and storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image recognition method, characterized in that the image recognition method comprises:
acquiring a post-processing image of a blood vessel in a medical image to be identified;
determining a lesion detection area according to the post-processing image of the blood vessel in the medical image to be identified;
performing lesion detection on the lesion detection area to obtain a lesion detection result;
and correcting the post-processing image according to the lesion detection result to obtain display data corresponding to the medical image to be identified.
2. The image recognition method of claim 1, wherein the step of obtaining a post-processed image of the blood vessel in the medical image to be recognized comprises:
acquiring a medical image to be identified corresponding to a target part;
performing blood vessel segmentation on the medical image to be identified based on a blood vessel mask to obtain a blood vessel central line of a blood vessel in the medical image to be identified;
and obtaining a post-processing image of the blood vessel in the medical image to be identified according to the blood vessel central line.
3. The image recognition method as set forth in claim 1, wherein the step of determining a lesion detection area based on the post-processed image of the blood vessel in the medical image to be recognized comprises:
acquiring a blood vessel boundary in the post-processing image;
and expanding the region outward by a plurality of pixels from the blood vessel boundary to obtain the lesion detection area.
4. The image recognition method according to claim 1, wherein the step of determining a lesion detection area based on the post-processed image of the blood vessel in the medical image to be recognized comprises:
obtaining a blood vessel central line in the post-processing image;
and expanding the region outward by a plurality of pixels from the blood vessel center line to obtain the lesion detection area.
5. The image recognition method according to claim 1, wherein the step of determining a lesion detection area based on the post-processed image of the blood vessel in the medical image to be recognized comprises:
acquiring a predicted lesion region mask corresponding to the post-processing image;
and determining the lesion detection area according to the predicted lesion region mask.
6. The image recognition method according to claim 1, wherein the step of performing lesion detection on the lesion detection area to obtain a lesion detection result comprises:
calling a lesion detection model;
and matching the predicted lesion region mask with the lesion detection area based on the lesion detection model to obtain the lesion detection result.
7. The image recognition method according to claim 1, wherein the step of modifying the post-processed image according to the lesion detection result to obtain display data corresponding to the medical image to be recognized includes:
determining the lesion type according to the lesion detection result;
calling a corresponding correction model according to the lesion type;
and correcting the vessel boundary and the vessel center line of the post-processing image based on the correction model to obtain display data corresponding to the initial image.
8. An image recognition apparatus, comprising:
The acquisition module is used for acquiring a post-processing image of a blood vessel in the medical image to be identified;
the determining module is used for determining a focus detection area according to the post-processing image of the blood vessel in the medical image to be identified;
the detection module is used for carrying out focus detection on the focus detection area to obtain a focus detection result;
and the correction module is used for correcting the post-processing image according to the focus detection result to obtain display data corresponding to the medical image to be identified.
9. An electronic device, comprising a memory storing an application program and a processor for running the application program in the memory to perform the steps of the image recognition method according to any one of claims 1 to 7.
10. A computer readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the image recognition method according to any one of claims 1 to 7.
CN202110535027.0A 2021-05-17 2021-05-17 Image identification method and device, electronic equipment and storage medium Pending CN115375603A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110535027.0A CN115375603A (en) 2021-05-17 2021-05-17 Image identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115375603A (en) 2022-11-22

Family

ID=84058541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110535027.0A Pending CN115375603A (en) 2021-05-17 2021-05-17 Image identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115375603A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188469A (en) * 2023-04-28 2023-05-30 之江实验室 A lesion detection method, device, readable storage medium and electronic equipment
CN116309428A (en) * 2023-03-10 2023-06-23 上海联影智能医疗科技有限公司 Method, device, storage medium and electronic equipment for determining a region of interest

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108682015A (en) * 2018-05-28 2018-10-19 科大讯飞股份有限公司 Lesion segmentation method, apparatus, equipment and storage medium in a kind of biometric image
JP2018175343A (en) * 2017-04-12 2018-11-15 富士フイルム株式会社 MEDICAL IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM
CN111127466A (en) * 2020-03-31 2020-05-08 上海联影智能医疗科技有限公司 Medical image detection method, device, equipment and storage medium
CN111445449A (en) * 2020-03-19 2020-07-24 上海联影智能医疗科技有限公司 Classification method, apparatus, computer equipment and storage medium of region of interest
CN111932495A (en) * 2020-06-30 2020-11-13 数坤(北京)网络科技有限公司 Medical image detection method, device and storage medium
CN111951215A (en) * 2020-06-30 2020-11-17 数坤(北京)网络科技有限公司 Image detection method and device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN107895369B (en) Image classification method, device, storage medium and equipment
CN110866897B (en) Image detection method and computer readable storage medium
CN113177928B (en) Image identification method and device, electronic equipment and storage medium
US11921278B2 (en) Image status determining method an apparatus, device, system, and computer storage medium
CN110070129B (en) An image detection method, device and storage medium
US11107212B2 (en) Methods and systems for displaying a region of interest of a medical image
EP3086261A2 (en) Method and apparatus for sensing fingerprints
CN110353711A (en) X-ray imaging analysis method, device and readable storage medium storing program for executing based on AI
CN110610181A (en) Medical image identification method and device, electronic equipment and storage medium
CN107862660B (en) Data optimization method, device and ultrasound platform
CN115375603A (en) Image identification method and device, electronic equipment and storage medium
CN114066875A (en) Slice image processing method and device, storage medium and terminal device
CN115984228A (en) Gastroscope image processing method and device, electronic equipment and storage medium
CN113902934A (en) Medical image processing method, medical image processing device, storage medium and electronic equipment
CN118969167A (en) Inspection report generation method, device, equipment and storage medium
CN115393323B (en) Target area obtaining method, device, equipment and storage medium
CN113902682A (en) Medical image-based diagnosis method, medical image-based diagnosis device, storage medium, and electronic apparatus
CN113283552A (en) Image classification method and device, storage medium and electronic equipment
CN113887579A (en) Medical image classification method and device, storage medium and electronic equipment
CN114429493B (en) Image sequence processing method and device, electronic equipment and storage medium
CN113902681A (en) Medical image recognition method and device, storage medium and electronic equipment
CN113793334B (en) Equipment monitoring method and equipment monitoring device
CN114140864B (en) Trajectory tracking method and device, storage medium and electronic equipment
CN117475344B (en) Ultrasonic image interception method and device, terminal equipment and storage medium
CN118312082A (en) Information display method, information display device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant after: Shukun Technology Co.,Ltd.

Address before: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant before: Shukun (Beijing) Network Technology Co.,Ltd.

Country or region before: China