WO2020087732A1 - Neural network-based method and system for vein and artery identification - Google Patents
- Publication number
- WO2020087732A1 (PCT application PCT/CN2018/123978)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- network model
- layer
- ultrasound image
- vein
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A61B8/0891—Clinical applications for diagnosis of blood vessels
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the invention relates to the technical field of medical ultrasound, in particular to a vein and artery recognition method and system based on a neural network.
- if the puncture needle enters the jugular vein, the subsequent dilation and catheterization continue; if the puncture needle is located in the carotid artery, the needle is withdrawn and the puncture point is compressed before the puncture is performed again.
- Different patients present different degrees of difficulty in judging the position of the venous vessels. According to surveys, the probability of inadvertently puncturing the carotid artery is 2% to 8%, which usually leads to related complications, such as local hematoma. If the patient has a blood clotting disorder, the hematoma may rapidly expand, and airway obstruction, pseudoaneurysm, and arteriovenous fistula may all have fatal consequences.
- B-mode ultrasound image-guided jugular vein puncture is very beneficial for locating blood vessels before puncture, reducing the probability of the above risks.
- the discrimination of arteries and veins in ultrasound images is still almost entirely performed manually, especially the identification of carotid arteries and jugular veins. This requires the operator to have expertise in ultrasound imaging, introduces uncertainty into the discrimination results, and cannot minimize the surgical risk; at the same time, it also limits the promotion and popularization of ultrasound-guided jugular venipuncture.
- the purpose of the present invention is to overcome the shortcomings in the prior art and provide a neural network-based vein and artery identification method for automatic identification of arteries and veins in the ultrasound image to be identified.
- This method is safe and efficient, can help doctors improve the accuracy of diagnosis, better assists doctors in identifying arteries and veins, and further assists doctors in performing venipuncture.
- the technical scheme adopted by the present invention is:
- a vein and artery recognition method based on neural network including:
- the neural network-based vein and artery recognition method further includes:
- the neural network model capable of automatically identifying arteries and veins in the ultrasound image is obtained.
- the marking of the arteries and veins in the pre-acquired ultrasound images is specifically:
- the user marks the veins and arteries in the divided ultrasound image
- the training set is used to train a neural network model
- the verification set is used to verify the recognition accuracy of the neural network and optimize the weight parameters of the neural network model
- the test set is used to finally evaluate the recognition accuracy of the neural network model.
- training by inputting a set number of labeled pre-acquired ultrasound images into the neural network model specifically includes:
- the neural network model includes an input layer, multiple hidden layers, and an output layer. The hidden layers, the input layer and the hidden layers, and the hidden layers and the output layer in the neural network model are connected by weight parameters; the size of the input layer is set to be consistent with the size of the ultrasound image input to the neural network model;
- the weight parameters in the neural network model are updated to obtain a neural network model that automatically recognizes arteries and veins in ultrasound images.
- the hidden layer is used to automatically extract features of arteries and veins in the ultrasound image
- the hidden layer includes at least a convolution layer and a maximum pooling layer
- the hidden layer includes a convolution layer, a maximum pooling layer, and a combination layer.
- the output layer is configured to output several predicted bounding boxes
- the information of the bounding box includes probability information that the image in the bounding box is an artery or a vein, and position information and size information of the bounding box.
- the loss function includes:
- the error of the predicted category of the grid cells containing the target object.
- obtaining the position information of veins and arteries from the ultrasound image to be recognized through the neural network model specifically includes:
- filtering the output bounding box according to the set probability threshold specifically includes:
- the non-maximum suppression method is used to retain the bounding box with the highest predicted probability as the screening result, from which the position information of the vein and the artery is then obtained.
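As a minimal sketch of the screening step just described (probability-threshold filtering followed by non-maximum suppression), the following assumes each predicted box is an `(x, y, w, h, probability)` tuple in normalized center/size coordinates; the function names and threshold values are illustrative, not taken from the patent.

```python
def iou(a, b):
    # Intersection-over-union of two (x_center, y_center, w, h, ...) boxes.
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def filter_boxes(boxes, prob_threshold=0.5, iou_threshold=0.5):
    # boxes: list of (x, y, w, h, probability). Discard boxes below the
    # probability threshold, then keep the highest-probability box and
    # suppress overlapping lower-probability boxes (non-maximum suppression).
    candidates = sorted((b for b in boxes if b[4] >= prob_threshold),
                        key=lambda b: b[4], reverse=True)
    kept = []
    for box in candidates:
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept
```

In a two-class setting (artery / vein), this filtering would typically be run per class so that an artery box does not suppress a nearby vein box.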
- the hidden layer further includes an anti-overfitting setting, which randomly inactivates some weight parameters between the input layer and the hidden layer or between the hidden layer and the output layer.
- the structure of the neural network model specifically includes: the hidden layer includes a convolutional layer and a maximum pooling layer; first, several convolutional layers and several maximum pooling layers are alternately connected, then several convolutional layers are connected, and finally the output layer is connected.
- the structure of the neural network model specifically includes: the hidden layer includes a convolutional layer, a maximum pooling layer, and a combination layer; first, several convolutional layers and several maximum pooling layers are alternately connected, then several convolutional layers are connected, and then a combination layer is connected to combine the advanced feature layer preceding the combination layer with the hidden layer or layers before that advanced feature layer; the length and width of the output images of the advanced feature layer and of the combined hidden layers must be consistent; the advanced feature layer, combined with the previous hidden layer or layers, is input to the last convolutional layer.
- the structure of the neural network model specifically includes: in the multiple hidden layers, the ultrasound image first passes through a basic feature extraction network that extracts several feature images, which then undergo a series of convolution operations to obtain feature images with different resolutions; then, through convolution operations, bounding boxes of different sizes are simultaneously generated at different positions of these different-resolution feature images, and softmax classification and position regression are performed on these bounding boxes at the output layer to predict the category and specific position of each bounding box.
- a vein and artery recognition system based on neural network including:
- Ultrasound image input unit, used to input ultrasound images and to feed the ultrasound images to be recognized into the neural network model for processing
- the neural network model is used to obtain the position information of veins and arteries from the ultrasound image to be recognized through the neural network model;
- the ultrasound image generating unit distinguishes and marks the veins and arteries according to the acquired position information, and generates an ultrasound image containing the vein marker and the artery marker.
- the neural network-based vein and artery recognition system further includes:
- Ultrasound image marking unit used to mark arteries and veins in pre-acquired ultrasound images
- the neural network training unit inputs the labeled pre-acquired ultrasound images into the neural network model for training, and obtains the neural network model that can automatically identify the arteries and veins in the ultrasound images.
- the ultrasound image marking unit is specifically used for:
- the training set is used to train a neural network model
- the verification set is used to verify the recognition accuracy of the neural network and optimize the weight parameters of the neural network model
- the test set is used to finally evaluate the recognition accuracy of the neural network model.
- the training of the neural network model by the neural network training unit specifically includes:
- the neural network model includes an input layer, multiple hidden layers, and an output layer. The hidden layers, the input layer and the hidden layers, and the hidden layers and the output layer in the neural network model are connected by weight parameters; the size of the input layer is set to be consistent with the size of the ultrasound image input to the neural network model;
- the weight parameters in the neural network model are updated to obtain a neural network model that automatically recognizes arteries and veins in ultrasound images.
- the structure of the neural network model specifically includes: the hidden layer includes a convolutional layer and a maximum pooling layer; first, several convolutional layers and several maximum pooling layers are alternately connected, then several convolutional layers are connected, and finally the output layer is connected.
- the structure of the neural network model specifically includes: the hidden layer includes a convolutional layer, a maximum pooling layer, and a combination layer; first, several convolutional layers and several maximum pooling layers are alternately connected, then several convolutional layers are connected, and then a combination layer is connected to combine the advanced feature layer preceding the combination layer with the hidden layer or layers before that advanced feature layer; the length and width of the output images of the advanced feature layer and of the combined hidden layers must be consistent; the advanced feature layer, combined with the previous hidden layer or layers, is input to the last convolutional layer.
- the structure of the neural network model specifically includes: in the multiple hidden layers, the ultrasound image first passes through a basic feature extraction network that extracts several feature images, which then undergo a series of convolution operations to obtain feature images with different resolutions; then, through convolution operations, bounding boxes of different sizes are simultaneously generated at different positions of these different-resolution feature images, and softmax classification and position regression are performed on these bounding boxes at the output layer to predict the category and specific position of each bounding box.
- the output layer is configured to output several predicted bounding boxes
- the information of the bounding box includes probability information that the image in the bounding box is an artery or a vein, and position information and size information of the bounding box.
- the loss function includes:
- the error of the predicted category of the grid cells containing the target object.
- acquiring the position information of veins and arteries from the ultrasound image to be recognized through the neural network model specifically includes:
- filtering the output bounding box according to the set probability threshold specifically includes:
- the non-maximum suppression method is used to retain the bounding box with the highest predicted probability as the screening result, from which the position information of veins and arteries is then obtained.
- the hidden layer further includes an anti-overfitting setting, which randomly inactivates some weight parameters between the input layer and the hidden layer or between the hidden layer and the output layer.
- the present invention can accurately identify the positions of the arteries and veins in the ultrasound image, thereby better assisting the doctor in performing venipuncture.
- Fig. 1-a is a schematic diagram of the neural network-based real-time ultrasound guidance system for jugular vein puncture of the present invention in training mode.
- Fig. 1-b is a schematic diagram of the neural network-based real-time ultrasound guidance system for jugular vein puncture of the present invention in a normal working mode.
- FIG. 2 is a schematic diagram of the system control flow of the present invention.
- FIG. 3 is a schematic diagram of the processing flow of the ultrasonic image marking unit of the present invention.
- FIG. 5 is a schematic structural diagram of a first neural network established in an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a third neural network established in an embodiment of the present invention.
- FIG. 7 is a flowchart of a process for acquiring position information of an artery and a vein from an ultrasound image to be recognized according to the present invention.
- FIG. 8 is an original image of an ultrasound image with an expert-marked rectangular frame in an embodiment of the present invention.
- FIG. 9 is an image of a jugular vein puncture guidance effect of the first neural network structure corresponding system in an embodiment of the present invention.
- when a system is described as including (or containing or having) some units, modules, or models, it should be understood that it may include only those units or, unless specifically restricted, may also include other units.
- the terms "module” and “unit” as used herein mean, but are not limited to, software or hardware components that perform specific tasks, such as field programmable gate arrays (FPGAs) or application specific integrated circuits (ASICs).
- the module may be configured in an addressable storage medium and configured to execute on one or more processors.
- Modules can include components (such as software components, object-oriented software components, class components, and task components), processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
- the functionality provided in units and modules may be combined into fewer components and modules or further divided into additional components and modules.
- the term "image" may mean multi-dimensional data, two-dimensional image data, or three-dimensional image data composed of discrete image elements (for example, pixels in a two-dimensional (2D) image and voxels in a three-dimensional (3D) image).
- subject as used herein may include veins and arteries of humans and animals.
- object may include man-made models.
- the term "user" as used herein is non-limiting, and may be a doctor, nurse, medical technician, medical imaging expert, etc., or may be an engineer who repairs medical equipment.
- Figure 1-a shows a real-time ultrasound guidance system for jugular vein puncture based on neural network.
- the system includes: a transducer for transmitting and receiving ultrasonic signals; an ultrasound image synthesis module, connected to the transducer, which synthesizes the ultrasonic signals transmitted by the transducer into an ultrasound image; and an ultrasound image processing module, which includes an ultrasound image input unit, an ultrasound image marking unit, and a neural network training unit, the ultrasound image input unit being used to input an ultrasound image,
- the user marks the ultrasound image through the ultrasound image marking unit.
- the marking separately identifies the veins and arteries in the ultrasound image.
- the marking symbol can be a graphic, such as a square, rectangle, triangle, or other regular graphic, so that the ultrasound image marking unit obtains the marked ultrasound image.
- the ultrasound image marked by the user is trained by the neural network training unit to obtain the neural network model that can automatically identify the artery and vein in the ultrasound image.
- the neural network model can obtain the position information of the arteries and veins, and the matched arteries or veins are distinguished by the marking symbols set by the user; the ultrasound image processing module is connected to the ultrasound image synthesis module and the ultrasound image display module, respectively, and transmits the processed ultrasound image containing the vein mark and the artery mark to the ultrasound image display module for image display.
- the connection method described in this system may be a wired connection, such as a cable connection, or a wireless connection, such as a connection via Bluetooth, Wi-Fi, etc.
- the ultrasonic image display module may be a module including a display device, and the display device may be one or more display devices such as a touch screen display, a mobile terminal display (mobile phone, iPad), a liquid crystal display, an LED display, and the like.
- the above embodiment in FIG. 1-a is the system of the present invention as configured in the engineer mode or selected by the manufacturer.
- the ultrasound image processing module is configured to include an ultrasound image input unit, a neural network model, and an ultrasound image generation unit.
- the neural network model automatically recognizes the arteries and veins in the ultrasound image; at this time, as shown in FIG. 1-b, the system of the present invention is as follows.
- the system includes: a transducer used to transmit and receive ultrasonic signals; an ultrasound image synthesis module, connected to the transducer, used to synthesize the ultrasonic signals transmitted by the transducer into an ultrasound image; and an ultrasound image processing module, in which the ultrasound image input unit is used to input ultrasound images and to feed the ultrasound images to be recognized into a neural network model for processing; the neural network model is used to obtain the position information of veins and arteries from the ultrasound images to be recognized; the ultrasound image generation unit distinguishes and marks the veins and arteries according to the obtained position information and generates an ultrasound image containing the vein markings and artery markings; the ultrasound image output in this way is configured as a marked ultrasound image.
- the mark can be a graphic or symbol, such as a square frame, rectangular frame, triangular frame, or other regular graphics.
- the position information of the arteries and veins in the ultrasound image can be obtained through the neural network model; the ultrasound image processing module is connected with an ultrasound image synthesis module and an ultrasound image display module, respectively, and transmits the processed ultrasound images containing vein marks and arterial marks to the ultrasound image display module for image display.
- the arteries and veins may be carotid arteries, jugular veins, or veins and arteries in other parts.
- the system of the present invention is used to assist venipuncture.
- the system includes: a transducer, which is used to transmit and receive ultrasonic signals;
- the ultrasound image synthesis module is connected to the transducer and used to synthesize the ultrasound signals transmitted by the transducer into an ultrasound image;
- the ultrasound image input unit is used to input the ultrasound image and to feed the ultrasound image to be recognized into the neural network model for processing; the position information of veins and arteries is obtained from the ultrasound image to be recognized through the neural network model; the ultrasound image generation unit distinguishes and marks the veins and arteries according to the obtained position information and generates an ultrasound image containing vein marks and arterial marks; the ultrasound image output in this way is configured as a marked ultrasound image.
- the mark can be a graphic or symbol, such as a square frame, rectangular frame, triangular frame or other regular graphics, and the neural network model can obtain the position information of arteries and veins;
- the ultrasound image processing module is connected to the ultrasound image display module and transmits the processed ultrasound image containing vein markers and artery markers to the ultrasound image display module for image display;
- the venipuncture guide unit which is connected to the ultrasound image processing module,
- the venipuncture guidance unit assists the user in performing venipuncture by displaying puncture parameters, such as the puncture point, depth, and angle information, on the ultrasound image display module.
- the transducer is placed on the patient to be tested, in this case on the patient's neck; the transducer transmits and receives ultrasonic signals and transmits the ultrasonic signals to the ultrasonic image synthesis module to synthesize ultrasound images.
- the neural network model of the ultrasound image processing module processes the input ultrasound image, automatically recognizes the arteries and veins in the ultrasound image, and transmits the ultrasound image containing the vein mark and the artery mark to the image display module; the image display module displays the ultrasound image with vein markings and arterial markings.
- the ultrasound images show the cross-sections of the carotid artery and jugular vein.
- the vein and artery recognition method based on neural network proposed by the present invention mainly includes the following steps:
- Step S1 collecting ultrasound images of the detection site; the user marks the veins and arteries in the ultrasound image through the ultrasound image marking unit. In this embodiment, ultrasound images of the neck are preferably collected, and the carotid arteries and jugular veins are marked in the ultrasound images;
- Step S2 a neural network training unit is used to train a neural network model based on the marked ultrasound images
- Step S3 input the ultrasound image to be recognized into the trained neural network model for processing; obtain the position information of veins and arteries from the ultrasound image to be recognized through the neural network model; distinguish and mark the veins and arteries according to the obtained position information, and generate ultrasound images containing vein markers and artery markers.
- the processing flow of the ultrasound image marking unit includes:
- Step S11 screening the collected ultrasound images
- Step S12 dividing the filtered ultrasound image into a training set, a verification set and a test set
- Step S13 the user marks the artery and vein in the ultrasound image
- screening the collected ultrasound images includes: filtering out unclear, incomplete, and repeated ultrasound images, and removing information in the ultrasound images that is not related to the automatic guided jugular vein puncture process; for example, removing from all the collected ultrasound images those that are unclear or incomplete and therefore cannot be labeled.
- in step S12, 3/5 of the collected ultrasound images are randomly selected as the training set, 1/5 of the images are randomly selected as the verification set, and the remaining 1/5 of the ultrasound images are used as the test set;
- the training set is used to train the neural network model, the verification set is used to verify the recognition accuracy of the neural network and optimize the weight parameters of the neural network model, and the test set is used to finally evaluate the recognition accuracy of the neural network model; of course, the randomly selected ratios can be 3/5, 1/5, 1/5, or other ratios;
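The 3/5 : 1/5 : 1/5 random split described above can be sketched as follows; the function name and the fixed seed are illustrative assumptions, not taken from the patent.

```python
import random

def split_dataset(images, ratios=(3/5, 1/5, 1/5), seed=0):
    # Randomly partition the screened ultrasound images into training,
    # verification, and test sets in the stated 3/5 : 1/5 : 1/5 proportions.
    shuffled = images[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

Fixing the seed only makes the split reproducible; any other ratios can be passed in, as the text allows.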
- a rectangular frame is used to mark all the arteries and veins in the ultrasound image, and the information of the rectangular frame is recorded, including coordinate information and category information; for example, the rectangular frame coordinate information includes the coordinates of the upper left and lower right corner points of the rectangular frame, and the category information indicates whether the marked rectangular frame represents an artery or a vein; the mark can be a figure or symbol, such as a square frame, rectangular frame, triangle, or other regular figure.
- the processing flow of the neural network training unit includes:
- Step S21 ultrasonic image preprocessing: fix the ultrasonic image to a certain size, and normalize the ultrasonic images of the same size; for example, the preprocessed ultrasonic image is 416×416×1, where 416×416 represents the length and width of the preprocessed ultrasonic image, namely 416 pixels long and 416 pixels wide.
- the ultrasound image is normalized
- the specific processing of the normalization operation is to subtract the average value of the image pixels from each pixel value in the ultrasound image and divide by the variance of the image pixels; after normalization, each pixel value of the ultrasound image is converted to between 0 and 1;
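A sketch of the normalization step as literally described (subtract the pixel mean, divide by the pixel variance); note that many implementations divide by the standard deviation instead, and that a separate min-max scaling is one way to map pixel values into [0, 1] as the text also mentions. All function names here are illustrative assumptions.

```python
import numpy as np

def normalize(image):
    # As described: subtract the mean pixel value and divide by the pixel
    # variance (many implementations divide by the standard deviation instead).
    img = image.astype(np.float64)
    return (img - img.mean()) / img.var()

def scale_to_unit(image):
    # One reading of "convert each pixel value to between 0 and 1":
    # min-max scaling of the pixel values.
    img = image.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min())
```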
- the processing method of this embodiment converts the marking information of the ultrasound image from absolute numbers to ratios of the original ultrasound image dimensions; the specific calculation method is:
- width and height represent the original length and width of the ultrasound image before entering the neural network
- (xmin, ymin), (xmax, ymax) are the coordinates of the upper left and lower right corner points of the original rectangular frame recorded during marking
- x_new, y_new are the center coordinates of the rectangular frame after the ultrasound image preprocessing, that is, after the size change
- w_new, h_new respectively represent the length and width of the rectangular frame after the ultrasound image preprocessing, that is, after the size change
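Under the definitions above, a plausible reading of the elided calculation (the standard conversion of corner coordinates to normalized center/size form) is sketched below; the exact formula in the patent may differ.

```python
def convert_box(xmin, ymin, xmax, ymax, width, height):
    # Convert an expert-marked rectangle given by its top-left (xmin, ymin)
    # and bottom-right (xmax, ymax) corners into the ratio form used after
    # preprocessing: center coordinates and box size as fractions of the
    # original image width and height.
    x_new = (xmin + xmax) / 2.0 / width   # box center, fraction of width
    y_new = (ymin + ymax) / 2.0 / height  # box center, fraction of height
    w_new = (xmax - xmin) / width         # box width as a fraction
    h_new = (ymax - ymin) / height        # box height as a fraction
    return x_new, y_new, w_new, h_new
```

Expressing boxes as fractions makes the labels independent of the resize to 416×416.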
- Step S22 the structure of the neural network model is established
- the neural network model includes an input layer, multiple hidden layers, and an output layer; the multiple hidden layers of the neural network model are used to automatically extract the features of arteries and veins in the ultrasound image; the hidden layers contain several convolutional layers, several pooling layers, and so on; the hidden layers, the input layer and the hidden layers, and the hidden layers and the output layer are connected by weight parameters; the hidden layers also include settings to prevent overfitting, such as randomly deactivating some weight parameters between the input layer and the hidden layer or between the hidden layer and the output layer, so that the backpropagation algorithm does not adjust these inactive weights;
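The random weight deactivation described above (a dropout-style scheme) might be sketched as follows; the function signature, drop rate, and seed are illustrative assumptions.

```python
import random

def inactivate_weights(weights, drop_rate=0.5, seed=0):
    # Randomly deactivate a fraction of the weight parameters; the
    # backpropagation pass would then skip the deactivated entries,
    # as the description suggests.
    rng = random.Random(seed)
    mask = [rng.random() >= drop_rate for _ in weights]
    active = [w if m else 0.0 for w, m in zip(weights, mask)]
    return active, mask
```

The returned mask would let a training loop know which weights to leave unadjusted during backpropagation.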
- the structure of the first neural network model established in the embodiment of the present invention includes an input layer, a plurality of hidden layers connected to the input layer, and an output layer connected to the hidden layer of the highest layer;
- Figure 5 shows the hidden layers and output layer of the neural network model; the hidden layers of the neural network model in Figure 5 comprise 8 convolutional layers and 5 maximum pooling layers, and the output layer is a Softmax classification layer; first, 5 convolutional layers and 5 maximum pooling layers are alternately connected, each maximum pooling layer reducing the dimensionality of the features; then 3 convolutional layers are connected, which extract high-level feature information; finally, the output layer is connected to output the results of the neural network; the arrows connecting the layers in Figure 5 reflect the weight parameters between the layers of the neural network model.
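Each maximum pooling layer halves the spatial resolution, so the size of the final feature grid follows directly from the input size; a quick sketch of this arithmetic (the 416-pixel input is an assumption consistent with the 13 × 13 output grid mentioned later; 640 × 512 is the image size mentioned for the third model):

```python
def output_grid(input_size, num_pools=5):
    """Each 2x2 stride-2 max-pooling layer halves the feature-map size,
    so after num_pools pooling layers the reduction factor is 2**num_pools."""
    return input_size // (2 ** num_pools)
```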
- Table 1 shows the structure of the second neural network model established in the embodiment of the present invention
- Table 1 includes the hidden layers of the neural network model; Table 1 has four columns, representing the hidden layers and the role of each contained layer, which reflects the weight parameters of the neural network model
- in all hidden layers of this neural network model, 5 convolutional layers and 5 maximum pooling layers are first connected alternately, followed by several convolutional layers; Table 1 optionally connects two such convolutional layers
- then a combination layer (Route layer) is connected, which combines the high-level feature layer before it (layer 11 in Table 1) with one or several hidden layers preceding that high-level feature layer, so that high-level features are combined with low-level fine-grained features; the length and width of the output images of the high-level feature layer and the combined hidden layers must be correspondingly consistent
- the neural network model described in the present invention preferably uses a convolutional neural network model.
- FIG. 6 shows the structure of the third neural network model established in the embodiment of the present invention.
- the 640 × 512 ultrasound image in FIG. 6 first passes through a basic feature extraction network, such as VGG, Inception, or AlexNet, to extract a number of feature images, that is, the 52 × 52 feature images in FIG. 6; then, after a series of convolution operations, feature images with different resolutions are obtained, that is, the 26 × 26, 13 × 13, 7 × 7, and 4 × 4 feature images in FIG. 6; these feature images are represented as rectangular parallelepipeds in FIG. 6
- the thickness of each rectangular parallelepiped represents the number of feature images, and its length and width correspond to the length and width of the feature image; "Conv" in the lower left corner of each cuboid in the figure represents a convolution operation, which reflects the weight parameters of the neural network
- the horizontal straight lines in the figure represent the convolution operations that generate bounding boxes of different sizes at different positions of these feature images with different resolutions; this is where the structure of the third neural network model differs from the first two; finally, the output layer in FIG. 6 performs softmax classification and position regression on each bounding box to predict its category and specific position, respectively.
- the optional output size of the neural network model is 13 × 13 × 35, where 35 = 5 × 7 records the information of the arteries or veins contained in the five bounding boxes output from each grid unit in the ultrasound image, each bounding box carrying seven values.
- a softmax classification layer is set to limit the two probability values to the range 0 to 1, and when the bounding box contains an artery or vein, the sum of the two probability values c1 and c2 is 1.
- the center coordinates of the bounding box are denoted as x, y, its size is denoted as h, w, and the probability of containing the target object in the bounding box is denoted as p_c; the output of each bounding box can then be expressed as (x, y, h, w, p_c, c1, c2).
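Putting the two bullets above together, each bounding box carries seven values, and five boxes per grid cell account for the 35 output channels; a sketch, where the softmax over two class logits (the logits themselves are assumed, not given in the text) enforces c1 + c2 = 1:

```python
import math

BOXES_PER_CELL = 5

def box_output(x, y, h, w, pc, logit_vein, logit_artery):
    """Assemble one bounding-box prediction: center (x, y), size (h, w),
    objectness p_c, and two class probabilities obtained via a softmax
    so that each lies in [0, 1] and c1 + c2 == 1."""
    e1, e2 = math.exp(logit_vein), math.exp(logit_artery)
    c1, c2 = e1 / (e1 + e2), e2 / (e1 + e2)
    return [x, y, h, w, pc, c1, c2]

# 5 boxes x 7 values = the 35 channels of the 13 x 13 x 35 output
cell_output_len = BOXES_PER_CELL * len(box_output(0, 0, 0, 0, 0, 0.0, 0.0))
```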
- Step S23 initialize the neural network model: set the weight parameters of the neural network model to random numbers;
- Step S24 define the loss function of the neural network model
- the loss function of the neural network model includes four terms, namely:
- the criterion for judging that a bounding box contains the target object is that the overlap ratio between the predicted bounding box and the real rectangular frame in the grid unit of the ultrasound image (that is, the marking performed by the user in step S13) is greater than the set threshold; this overlap ratio is the IOU (intersection over union);
- a bounding box with an IOU greater than 0.6 is used as the bounding box containing the target object
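The IOU criterion above can be sketched directly; corner-style (xmin, ymin, xmax, ymax) boxes are assumed, matching the annotation format described earlier:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def contains_target(pred_box, true_box, threshold=0.6):
    """A predicted box is treated as containing the target when its IOU
    with the annotated rectangle exceeds the threshold (0.6 in the text)."""
    return iou(pred_box, true_box) > threshold
```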
- the criterion for judging that a grid cell contains the target object is that the center of the real rectangular frame falls in the grid cell; the specific calculation formula of the loss function of an ultrasound image is:
- loss = λ1 Σ_{i=1..S²} Σ_{j=1..B} 1_ij^obj ‖C_i − Ĉ_ij‖² + λ2 Σ_{i=1..S²} Σ_{j=1..B} 1_ij^obj [(x_i − x̂_i)² + (y_i − ŷ_i)² + (√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²] + λ3 Σ_{i=1..S²} Σ_{j=1..B} 1_ij^noobj ‖C_i − Ĉ_ij‖² + λ4 Σ_{i=1..S²} 1_i^obj Σ_c (p_i(c) − p̂_i(c))²
- λ1 to λ4 represent the proportion of each error term in the total loss function, and each error takes the form of a squared error;
- the first term of the loss function represents the error of the probability prediction of the bounding box containing the target object;
- S² represents the division of the ultrasound image into S × S grid cells, and B represents the number of bounding boxes set for each grid cell; 1_ij^obj indicates whether the j-th bounding box of the i-th grid cell contains the target object,
- C_i represents the probability vector of the i-th grid cell, and Ĉ_ij represents the probability vector of the j-th bounding box of that grid cell; the length of these two probability vectors in the present invention is 2, representing the probabilities that the bounding box is a vein or an artery;
- the second term of the loss function represents the prediction error of the position and size of the bounding box containing the target object; x_i, y_i, h_i, w_i represent the center-position abscissa and ordinate and the width and length information of the rectangular frame of the i-th grid cell, while x̂_i, ŷ_i, ĥ_i, ŵ_i respectively represent the corresponding center position and width and length information of the predicted bounding box; the error in width and length takes the form of square roots,
- the purpose being to balance the prediction errors of target objects of different sizes;
- the third term of the loss function is the error of the probability prediction for bounding boxes without the target object; 1_ij^noobj indicates whether the j-th bounding box of the i-th grid cell does not contain the target object; because bounding boxes without the target object are the majority, λ3 is usually set smaller than λ1, otherwise a neural network with a good recognition effect cannot be obtained by training.
- the fourth term of the loss function represents the error of the prediction category of each grid unit containing the target object, where 1_i^obj is 1 when the center of an artery or vein falls in the i-th grid unit and 0 otherwise; p_i(c) indicates whether the i-th grid cell contains the target object of the c-th category, namely vein or artery, and its value is 0 or 1; p̂_i(c) represents the predicted probability that the i-th grid cell contains the target object of the c-th category, with value range [0, 1].
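The four-term loss described above can be sketched in pure Python under a YOLO-style formulation. The dictionary layout, the λ values, and treating the confidence C as a scalar (the text describes a length-2 vector) are all simplifying assumptions:

```python
def detection_loss(cells, lambdas=(1.0, 5.0, 0.5, 1.0)):
    """Four-term squared-error loss over the grid cells of one image.
    Each cell dict (hypothetical layout) holds: 'obj' (does a target
    center fall here), 'p'/'p_hat' class vectors of length 2, and a
    'boxes' list whose entries carry 'has_obj', confidence C/C_hat,
    and true/predicted geometry x, y, w, h."""
    l1, l2, l3, l4 = lambdas
    loss = 0.0
    for cell in cells:
        for b in cell["boxes"]:
            if b["has_obj"]:
                # term 1: confidence error for boxes containing a target
                loss += l1 * (b["C"] - b["C_hat"]) ** 2
                # term 2: position error plus sqrt-damped size error
                loss += l2 * ((b["x"] - b["x_hat"]) ** 2
                              + (b["y"] - b["y_hat"]) ** 2
                              + (b["w"] ** 0.5 - b["w_hat"] ** 0.5) ** 2
                              + (b["h"] ** 0.5 - b["h_hat"] ** 0.5) ** 2)
            else:
                # term 3: confidence error for boxes without a target
                loss += l3 * (b["C"] - b["C_hat"]) ** 2
        if cell["obj"]:
            # term 4: class prediction error for cells containing a target
            loss += l4 * sum((p - ph) ** 2
                             for p, ph in zip(cell["p"], cell["p_hat"]))
    return loss
```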
- Step S25 Train the neural network model to obtain a neural network model that can automatically identify the artery and vein in the ultrasound image;
- the neural network model is trained with the ultrasound images of the normalized training set
- ultrasound images in the training set are randomly selected and elastically deformed before being input to the neural network model for training; in this way, a more robust neural network model can be obtained.
- the back-propagation algorithm can be used to train the neural network model; the initial values of the weight parameters of the neural network model are set randomly and are updated during the iteration process; the learning rate is set to 0.0001, the momentum is set to 0.9, the weight parameters are saved to the network parameter file every 100 iterations, and the maximum number of iterations of the neural network model is set to 50k; during the iterations of the neural network model, the recall rate of the neural network model on the verification set is calculated, that is, the proportion of real rectangular frames of the ultrasound images in the verification set that are identified; after the loss function of the neural network model converges, the weight parameters corresponding to the best recognition effect on the verification set at convergence are used as the weight parameters of the neural network model.
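A minimal sketch of one weight update under the stated hyperparameters (learning rate 0.0001, momentum 0.9); the exact momentum-update variant is an assumption, since the text names the values but not the rule:

```python
def sgd_momentum_step(weights, grads, velocity, lr=0.0001, momentum=0.9):
    """One SGD-with-momentum update over flat parameter lists:
    v <- momentum * v - lr * grad;  w <- w + v."""
    new_v = [momentum * v - lr * g for v, g in zip(velocity, grads)]
    new_w = [w + v for w, v in zip(weights, new_v)]
    return new_w, new_v
```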
- the processing flow for acquiring the position information of veins and arteries from the ultrasound image to be recognized through the neural network model includes:
- Step S31 Acquire the ultrasound image to be recognized, fix the ultrasound image to the same size as the input layer of the neural network model, and normalize the ultrasound image;
- the ultrasound image to be recognized comes from the test set
- Step S32 input the ultrasonic image to be recognized into the trained neural network model to obtain all the bounding boxes output by the neural network model;
- All bounding boxes represent the prediction of the artery or vein in the ultrasound image
- Step S33 screening the bounding box to obtain the final recognition result.
- screening the bounding boxes refers to selecting the bounding boxes whose prediction probability is greater than the set threshold as the prediction result
- the non-maximum suppression method is then used for further screening.
- the specific method is to calculate the overlap between the bounding boxes and, among bounding boxes whose overlap index is greater than the set threshold, select the bounding box with the highest prediction probability as the recognition result.
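The two-stage screening described above (probability thresholding, then non-maximum suppression) can be sketched as follows; the threshold values are assumed, since the text does not state them:

```python
def _iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def screen_boxes(boxes, prob_threshold=0.5, iou_threshold=0.5):
    """boxes: list of (xmin, ymin, xmax, ymax, prob). Keep boxes above
    the probability threshold, then greedily keep the most probable box
    and drop any remaining box that overlaps it beyond the IOU threshold
    (non-maximum suppression)."""
    candidates = sorted((b for b in boxes if b[4] > prob_threshold),
                        key=lambda b: b[4], reverse=True)
    kept = []
    for box in candidates:
        if all(_iou(box[:4], k[:4]) <= iou_threshold for k in kept):
            kept.append(box)
    return kept
```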
- the ultrasound image generating unit generates an ultrasound image containing vein marks and artery marks based on the veins and arteries identified in the ultrasound image.
- FIG. 9 is a jugular vein puncture guidance effect image of the system corresponding to the second neural network model structure in an embodiment of the present invention; it corresponds to the original image in FIG. 8.
- artery indicates the location of the carotid artery
- vein indicates the location of the jugular vein.
Description
The invention relates to the technical field of medical ultrasound, and in particular to a neural network-based vein and artery identification method and system.
In clinical medicine, many endovascular procedures involve catheter placement and puncture on veins, often performed in the right jugular vein, such as central venous catheter placement, hemodynamic measurement, myocardial biopsy, and cardiac ablation. During jugular venipuncture, doctors rely on vision, touch, and experience to manually judge the position of the blood vessel, for example identifying the nature of the vessel by measuring intravascular pressure or observing blood color. These methods of identifying the jugular vein by vascular pressure and blood color are unreliable: if the puncture needle enters the jugular vein, the subsequent dilation and catheterization proceed; if the puncture needle is located in the carotid artery, it must be withdrawn, the puncture point compressed, and the puncture performed again. The difficulty of judging the position of venous vessels differs between patients. Surveys show that the probability of inadvertently puncturing the carotid artery is 2% to 8%, which often leads to complications such as local hematoma. If the patient has a coagulation disorder, the hematoma may expand rapidly, and airway obstruction, pseudoaneurysm, or arteriovenous fistula may follow, all of which can have fatal consequences.
Digital image processing technology is developing rapidly, and computer-aided diagnosis is ubiquitous in the medical field. Given the advantages of ultrasound guidance, B-mode ultrasound image guidance of jugular venipuncture is very helpful for locating blood vessels before puncture and reduces the probability of the above risks. However, the discrimination of arteries and veins in ultrasound images, especially the identification of the carotid artery and jugular vein, is still performed almost entirely by hand. This requires the operator to have expertise in ultrasound imaging, which causes uncertainty in the discrimination results, prevents the risk of the operation from being minimized, and limits the promotion and popularization of ultrasound-guided jugular venipuncture.
Therefore, how to solve the above technical problem, that is, how to automatically identify arteries and veins based on artificial intelligence or neural networks during real-time ultrasound-guided jugular venipuncture, and in particular how to automatically and quickly identify the carotid artery and jugular vein, has become a problem facing researchers in the art.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings of the prior art and provide a neural network-based vein and artery identification method for automatically identifying the arteries and veins in an ultrasound image to be identified. This method is safe and efficient, can help doctors improve diagnostic accuracy, better assists doctors in identifying arteries and veins, and thereby further assists doctors in performing venipuncture. The technical scheme adopted by the present invention is:
A neural network-based vein and artery identification method, including:
inputting the ultrasound image to be identified into a neural network model for processing;
acquiring the position information of veins and arteries from the ultrasound image to be identified through the neural network model;
marking veins and arteries distinctly according to the acquired position information, and generating an ultrasound image containing vein marks and artery marks.
Further, the neural network-based vein and artery identification method also includes:
marking the arteries and veins in pre-acquired ultrasound images;
inputting a set number of marked pre-acquired ultrasound images into the neural network model for training, to obtain the neural network model capable of automatically identifying arteries and veins in ultrasound images.
Furthermore, marking the arteries and veins in the pre-acquired ultrasound images specifically includes:
pre-acquiring a certain number of ultrasound images and screening them;
dividing the screened ultrasound images into a training set, a verification set, and a test set;
having the user mark the veins and arteries in the divided ultrasound images;
wherein the training set is used to train the neural network model, the verification set is used to verify the recognition accuracy of the neural network and optimize the weight parameters of the neural network model, and the test set is used to finally evaluate the recognition accuracy of the neural network model.
Furthermore, inputting a set number of marked pre-acquired ultrasound images into the neural network model for training specifically includes:
fixing the ultrasound images to a set size, and normalizing the same-sized ultrasound images;
establishing a neural network model, the neural network model including an input layer, multiple hidden layers, and an output layer, wherein the hidden layers, the input layer and the hidden layers, and the hidden layers and the output layer are connected by weight parameters; the input layer size is set to be consistent with the size of the ultrasound images input to the neural network model;
initializing the neural network model and setting the weight parameters to random numbers;
training the neural network model with the normalized ultrasound images;
calculating the prediction error produced by the trained neural network model according to a loss function, and when the loss function converges, calculating the weight parameters obtained after training;
updating the weight parameters in the neural network model to obtain a neural network model that automatically identifies arteries and veins in ultrasound images.
Furthermore, the hidden layers are used to automatically extract features of arteries and veins in the ultrasound image;
the hidden layers include at least a convolutional layer and a maximum pooling layer;
preferably, the hidden layers include a convolutional layer, a maximum pooling layer, and a combination layer.
Furthermore, the output layer is configured to output several predicted bounding boxes;
wherein the information of a bounding box includes the probability that the image in the bounding box is an artery or a vein, as well as the position and size information of the bounding box.
Furthermore, the loss function includes:
the error of the probability prediction of bounding boxes containing the target object;
the prediction error of the position and size of bounding boxes containing the target object;
the error of the probability prediction of bounding boxes not containing the target object;
the error of the prediction category of grid cells containing the target object.
Further, acquiring the position information of veins and arteries from the ultrasound image to be identified through the neural network model specifically includes:
acquiring the ultrasound image to be identified, fixing the acquired ultrasound image to a size matching the neural network model, and normalizing the ultrasound image;
inputting the ultrasound image to be identified into the trained neural network model to obtain all bounding boxes output by the neural network model;
screening the output bounding boxes according to a set probability threshold, and thereby acquiring the position information of veins and arteries.
Furthermore, screening the output bounding boxes according to the set probability threshold specifically includes:
selecting bounding boxes whose prediction probability is greater than the set probability threshold as prediction results;
among the bounding boxes whose prediction probability is greater than the set probability threshold, using the non-maximum suppression method to select the bounding box with the highest prediction probability as the screening result, and thereby acquiring the position information of veins and arteries.
Furthermore, the hidden layers also include an overfitting-prevention setting, which randomly deactivates some weight parameters between the input layer and the hidden layers or between the hidden layers and the output layer.
Further, the structure of the neural network model specifically includes: the hidden layers include convolutional layers and maximum pooling layers; first, several convolutional layers and several maximum pooling layers are alternately connected, then several further convolutional layers are connected, and finally the output layer is connected.
Or:
the structure of the neural network model specifically includes: the hidden layers include convolutional layers, maximum pooling layers, and a combination layer; first, several convolutional layers and several maximum pooling layers are alternately connected, then several further convolutional layers are connected, and then a combination layer is connected, which combines the high-level feature layer connected before the combination layer with one or several hidden layers preceding that high-level feature layer; the length and width of the output images of the high-level feature layer and the combined hidden layers must be correspondingly consistent; the high-level feature layer, combined with the preceding hidden layer or layers, is then input to the last convolutional layer.
Or:
the structure of the neural network model specifically includes: in the multiple hidden layers, the ultrasound image first passes through a basic feature extraction network to extract several feature images, which then undergo a series of convolution operations to obtain feature images with different resolutions; bounding boxes of different sizes are then simultaneously generated through convolution operations at different positions of these feature images of different resolutions, and in the output layer these bounding boxes undergo softmax classification and position regression to predict the category and specific position of each bounding box.
A neural network-based vein and artery identification system, including:
an ultrasound image input unit for inputting ultrasound images, which inputs the ultrasound image to be identified into the neural network model for processing;
a neural network model for acquiring the position information of veins and arteries from the ultrasound image to be identified;
an ultrasound image generating unit, which marks veins and arteries distinctly according to the acquired position information and generates an ultrasound image containing vein marks and artery marks.
Further, the neural network-based vein and artery identification system also includes:
an ultrasound image marking unit for marking the arteries and veins in pre-acquired ultrasound images;
a neural network training unit, which inputs a set number of marked pre-acquired ultrasound images into the neural network model for training, to obtain the neural network model capable of automatically identifying arteries and veins in ultrasound images.
Furthermore, the ultrasound image marking unit is specifically used to:
pre-acquire a certain number of ultrasound images and screen them;
divide the screened ultrasound images into a training set, a verification set, and a test set;
allow the user to mark the veins and arteries in the divided ultrasound images;
wherein the training set is used to train the neural network model, the verification set is used to verify the recognition accuracy of the neural network and optimize the weight parameters of the neural network model, and the test set is used to finally evaluate the recognition accuracy of the neural network model.
Furthermore, the training of the neural network model by the neural network training unit specifically includes:
fixing the ultrasound images to a set size, and normalizing the same-sized ultrasound images;
establishing a neural network model, the neural network model including an input layer, multiple hidden layers, and an output layer, wherein the hidden layers, the input layer and the hidden layers, and the hidden layers and the output layer are connected by weight parameters; the input layer size is set to be consistent with the size of the ultrasound images input to the neural network model;
initializing the neural network model and setting the weight parameters to random numbers;
training the neural network model with the normalized ultrasound images;
calculating the prediction error produced by the trained neural network model according to a loss function, and when the loss function converges, calculating the weight parameters obtained after training;
updating the weight parameters in the neural network model to obtain a neural network model that automatically identifies arteries and veins in ultrasound images.
Furthermore, the structure of the neural network model specifically includes: the hidden layers include convolutional layers and maximum pooling layers; first, several convolutional layers and several maximum pooling layers are alternately connected, then several further convolutional layers are connected, and finally the output layer is connected.
Or:
the structure of the neural network model specifically includes: the hidden layers include convolutional layers, maximum pooling layers, and a combination layer; first, several convolutional layers and several maximum pooling layers are alternately connected, then several further convolutional layers are connected, and then a combination layer is connected, which combines the high-level feature layer connected before the combination layer with one or several hidden layers preceding that high-level feature layer; the length and width of the output images of the high-level feature layer and the combined hidden layers must be correspondingly consistent; the high-level feature layer, combined with the preceding hidden layer or layers, is then input to the last convolutional layer.
Or:
the structure of the neural network model specifically includes: in the multiple hidden layers, the ultrasound image first passes through a basic feature extraction network to extract several feature images, which then undergo a series of convolution operations to obtain feature images with different resolutions; bounding boxes of different sizes are then simultaneously generated through convolution operations at different positions of these feature images of different resolutions, and in the output layer these bounding boxes undergo softmax classification and position regression to predict the category and specific position of each bounding box.
Furthermore, the output layer is configured to output several predicted bounding boxes;
wherein the information of a bounding box includes the probability that the image in the bounding box is an artery or a vein, as well as the position and size information of the bounding box.
Furthermore, the loss function includes:
the error of the probability prediction of bounding boxes containing the target object;
the prediction error of the position and size of bounding boxes containing the target object;
the error of the probability prediction of bounding boxes not containing the target object;
the error of the prediction category of grid cells containing the target object.
Further, in the neural network model, acquiring the position information of veins and arteries from the ultrasound image to be identified through the neural network model specifically includes:
acquiring the ultrasound image to be identified, fixing the acquired ultrasound image to a size matching the neural network model, and normalizing the ultrasound image;
inputting the ultrasound image to be identified into the trained neural network model to obtain all bounding boxes output by the neural network model;
screening the output bounding boxes according to a set probability threshold, and thereby acquiring the position information of veins and arteries.
Furthermore, screening the output bounding boxes according to the set probability threshold specifically includes:
selecting bounding boxes whose prediction probability is greater than the set probability threshold as prediction results;
among the bounding boxes whose prediction probability is greater than the set probability threshold, using the non-maximum suppression method to select the bounding box with the highest prediction probability as the screening result, and thereby acquiring the position information of veins and arteries.
Furthermore, the hidden layers also include an overfitting-prevention setting, which randomly deactivates some weight parameters between the input layer and the hidden layers or between the hidden layers and the output layer.
本发明的优点:本发明能够对超声图像中的动脉和静脉位置进行准确的识别,从而能更好地辅助医生进行静脉穿刺术。Advantages of the present invention: The present invention can accurately identify the positions of the arteries and veins in the ultrasound image, thereby better assisting the doctor in performing venipuncture.
图1-a为本发明的基于神经网络的颈静脉穿刺实时超声引导系统训练模式下示意图。Fig. 1-a is a schematic diagram of the neural network-based jugular vein puncture real-time ultrasound guidance system training mode of the present invention.
图1-b为本发明的基于神经网络的颈静脉穿刺实时超声引导系统正常工作模式下示意图。Fig. 1-b is a schematic diagram of the neural network-based real-time ultrasound guidance system for jugular vein puncture of the present invention in a normal working mode.
图2为本发明的系统控制流程示意图。Fig. 2 is a schematic diagram of the system control flow of the present invention.
图3为本发明的超声图像标记单元的处理流程示意图。Fig. 3 is a schematic diagram of the processing flow of the ultrasound image marking unit of the present invention.
图4为本发明的神经网络训练单元的处理流程。Fig. 4 shows the processing flow of the neural network training unit of the present invention.
图5为本发明实施例中建立的第一种神经网络结构示意图。FIG. 5 is a schematic structural diagram of a first neural network established in an embodiment of the present invention.
图6为本发明实施例中建立的第三种神经网络结构示意图。FIG. 6 is a schematic structural diagram of a third neural network established in an embodiment of the present invention.
图7为本发明的从待识别超声图像中获取动脉和静脉的位置信息处理流程流程图。Fig. 7 is a flowchart of the process for acquiring the position information of arteries and veins from an ultrasound image to be recognized according to the present invention.
图8是本发明实施例中带有专家标记矩形框的超声图像的原始图像。FIG. 8 is an original image of an ultrasound image with an expert-marked rectangular frame in an embodiment of the present invention.
图9是本发明实施例中第一种神经网络结构对应系统的颈静脉穿刺引导效果图像。Fig. 9 is an image of the jugular vein puncture guidance effect of the system corresponding to the first neural network structure in an embodiment of the present invention.
下面结合具体附图和实施例对本发明作进一步说明。The present invention will be further described below with reference to specific drawings and embodiments.
在此发明中，当描述了一个系统包括（或者包含或者有）一些单元、模块、模型时，应该理解，它可以包括（或者包含或者有）仅那些单元，或者在没有具体限制的情况下它可以包括（或者包含或者具有）其它单元。如本文中使用的术语“模块”“单元”意指但不限于执行特定任务的软件或硬件组件，诸如现场可编程门阵列（FPGA）或专用集成电路（ASIC）。模块可以被配置为在可寻址存储介质中并且配置为在一个或多个处理器上执行。模块可以包括组件（诸如软件组件、面向对象软件组件、类组件和任务组件）、进程、函数、属性、过程、子例行程序、程序代码段、驱动程序、固件、微码、电路、数据、数据库、数据结构、表、数组和变量。在单元和模块中提供的功能性可以被组合成更少的组件和模块或者进一步分成附加的组件和模块。In this invention, when a system is described as including (or containing or having) certain units, modules, or models, it should be understood that it may include (or contain or have) only those units, or, where not specifically restricted, it may include (or contain or have) other units. The terms "module" and "unit" as used herein mean, but are not limited to, software or hardware components that perform specific tasks, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). A module may reside in an addressable storage medium and be configured to execute on one or more processors. A module may include components (such as software components, object-oriented software components, class components, and task components), processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. The functionality provided in units and modules may be combined into fewer components and modules or further divided into additional components and modules.
本文中使用的术语“图像”可以表示由离散图像因子(例如,二维(2D)图像中的像素和3D图像中的像素)组成的多维数据或二维图像数据或三维图像数据。The term "image" as used herein may mean multi-dimensional data or two-dimensional image data or three-dimensional image data composed of discrete image factors (for example, pixels in a two-dimensional (2D) image and pixels in a 3D image).
而且，本文中使用的术语“对象”可以包括人、动物的静脉、动脉。术语“对象”可以包括人造模型。Moreover, the term "object" as used herein may include the veins and arteries of humans and animals. The term "object" may also include man-made models.
本文中使用的术语“用户”是非限制性的,可以是是医生、护士、医学技师、医学图像专家等,或者可以是维修医学设备的工程师。The term "user" as used herein is non-limiting, and may be a doctor, nurse, medical technician, medical imaging expert, etc., or may be an engineer who repairs medical equipment.
图1-a所示，是一种基于神经网络的颈静脉穿刺实时超声引导系统，该系统包括：换能器，换能器用于发射与接收超声波信号；超声图像合成模块，超声图像合成模块与换能器连接，用于将换能器传输的超声波信号合成超声图像；超声图像处理模块，包括超声图像输入单元、超声图像标记单元、神经网络训练单元，超声图像输入单元用于输入超声图像，用户通过超声图像标记单元对超声图像进行标记，例如对超声图像分别标记超声图像中的静脉与动脉，标记符号可以是一个图形，例如正方形，长方形，三角形或者其他规则图形，这样超声图像标记单元得到被用户标记的超声图像，通过神经网络训练单元对被标记的超声图像进行训练，得到能够自动识别超声图像中动脉和静脉的神经网络模型，通过神经网络模型能够得到动脉和静脉的位置信息，自动匹配动脉或静脉，通过用户设置的标记符号进行区分；所述超声图像处理模块分别与超声图像合成模块、超声图像显示模块连接，超声图像处理模块将处理后的含有静脉标记和动脉标记的超声图像传输至超声图像显示模块进行图像显示。Figure 1-a shows a neural network-based real-time ultrasound guidance system for jugular vein puncture. The system includes: a transducer for transmitting and receiving ultrasonic signals; an ultrasound image synthesis module, connected to the transducer, which synthesizes the ultrasonic signals transmitted by the transducer into an ultrasound image; and an ultrasound image processing module comprising an ultrasound image input unit, an ultrasound image marking unit, and a neural network training unit. The ultrasound image input unit is used to input ultrasound images. The user marks the ultrasound images through the ultrasound image marking unit, for example marking the veins and arteries in each ultrasound image separately; the marking symbol can be a figure such as a square, a rectangle, a triangle, or another regular shape. The ultrasound image marking unit thus obtains ultrasound images marked by the user, and the neural network training unit trains on the marked ultrasound images to obtain a neural network model that can automatically identify the arteries and veins in an ultrasound image. Through the neural network model, the position information of the arteries and veins can be obtained, and arteries and veins are automatically matched and distinguished by the marking symbols set by the user. The ultrasound image processing module is connected to the ultrasound image synthesis module and the ultrasound image display module respectively, and transmits the processed ultrasound image containing the vein marks and artery marks to the ultrasound image display module for image display.
本系统所述的连接方式可以是有线连接，比如点连接等，也可以为无线连接，比如通过蓝牙、wifi等方式进行连接。The connections described in this system may be wired, such as a point-to-point connection, or wireless, such as connections via Bluetooth, Wi-Fi, and the like.
超声图像显示模块可以是包括含有显示装置的模块，显示装置可以是触摸屏显示器、移动终端显示器(手机、ipad)、液晶显示器、LED显示器等显示器装置的一种或多种。The ultrasound image display module may be a module that contains a display device, and the display device may be one or more of display devices such as a touch-screen display, a mobile terminal display (mobile phone, iPad), a liquid crystal display, or an LED display.
图1-a中的上述实施例为本发明在工程师模式或厂家进行设置选择的系统，当系统得到能够自动识别超声图像中动脉和静脉的神经网络模型后，超声图像处理模块被配置为包含超声图像输入单元、神经网络模型、超声图像生成单元，神经网络模型自动识别超声图像中动脉和静脉；此时，如图1-b所示，本发明的系统如下，该系统包括：换能器，换能器用于发射与接收超声波信号；超声图像合成模块，超声图像合成模块与换能器连接，用于将换能器传输的超声波信号合成超声图像；超声图像处理模块中，超声图像输入单元，用于输入超声图像，将待识别的超声图像输入神经网络模型进行处理；通过所述神经网络模型从待识别超声图像中获取静脉和动脉的位置信息；超声图像生成单元，根据获取的位置信息区别标记静脉与动脉，并生成含有静脉标记和动脉标记的超声图像；这样输入的超声图像被配置为被标记的超声图像，标记可以是一个图形或符号，例如正方形框，矩形框，三角形框或者其他规则图形，通过神经网络模型能够得到动脉和静脉的位置信息；所述超声图像处理模块分别与超声图像合成模块、超声图像显示模块连接，超声图像处理模块将处理后的含有静脉标记和动脉标记的超声图像传输至超声图像显示模块进行图像显示。The above embodiment in Fig. 1-a is the system as configured in engineer mode or by the manufacturer. After the system has obtained a neural network model that can automatically identify the arteries and veins in ultrasound images, the ultrasound image processing module is configured to contain an ultrasound image input unit, the neural network model, and an ultrasound image generation unit, and the neural network model automatically identifies the arteries and veins in the ultrasound image. At this point, as shown in Fig. 1-b, the system of the present invention is as follows. The system includes: a transducer for transmitting and receiving ultrasonic signals; an ultrasound image synthesis module, connected to the transducer, which synthesizes the ultrasonic signals transmitted by the transducer into an ultrasound image; and an ultrasound image processing module in which the ultrasound image input unit inputs the ultrasound image to be recognized into the neural network model for processing, the neural network model obtains the position information of the veins and arteries from the ultrasound image to be recognized, and the ultrasound image generation unit marks the veins and arteries distinctly according to the obtained position information and generates an ultrasound image containing the vein marks and artery marks. The input ultrasound image is thus configured as a marked ultrasound image; a mark can be a figure or symbol, such as a square frame, a rectangular frame, a triangular frame, or another regular shape, and the position information of the arteries and veins can be obtained through the neural network model. The ultrasound image processing module is connected to the ultrasound image synthesis module and the ultrasound image display module respectively, and transmits the processed ultrasound image containing the vein marks and artery marks to the ultrasound image display module for image display.
图1-a、图1-b所示的系统中,所述的动脉、静脉可以是颈动脉、颈静脉,也可以是其他部位的静脉与动脉。In the system shown in FIGS. 1-a and 1-b, the arteries and veins may be carotid arteries, jugular veins, or veins and arteries in other parts.
在本发明一实施例中，如图1-b所示，本发明的系统用于辅助静脉穿刺，此时系统包括：换能器，换能器用于发射与接收超声波信号；超声图像合成模块，超声图像合成模块与换能器连接，用于将换能器传输的超声波信号合成超声图像；超声图像处理模块中，超声图像输入单元，用于输入超声图像，将待识别的超声图像输入神经网络模型进行处理；通过所述神经网络模型从待识别超声图像中获取静脉和动脉的位置信息；超声图像生成单元，根据获取的位置信息区别标记静脉与动脉，并生成含有静脉标记和动脉标记的超声图像；这样输入的超声图像被配置为被标记的超声图像，标记可以是一个图形或符号，例如正方形框，矩形框，三角形框或者其他规则图形，神经网络模型能够得到动脉和静脉的位置信息；所述超声图像处理模块分别与超声图像合成模块、超声图像显示模块连接，超声图像处理模块将处理后的含有静脉标记和动脉标记的超声图像传输至超声图像显示模块进行图像显示；静脉穿刺引导单元，静脉穿刺引导单元与超声图像处理模块连接，静脉穿刺引导单元通过在超声图像显示模块上显示穿刺的参数，例如穿刺的网格、深度、角度信息等，辅助用户进行静脉穿刺。In an embodiment of the present invention, as shown in Fig. 1-b, the system of the present invention is used to assist venipuncture. In this case the system includes: a transducer for transmitting and receiving ultrasonic signals; an ultrasound image synthesis module, connected to the transducer, which synthesizes the ultrasonic signals transmitted by the transducer into an ultrasound image; an ultrasound image processing module in which the ultrasound image input unit inputs the ultrasound image to be recognized into the neural network model for processing, the neural network model obtains the position information of the veins and arteries from the ultrasound image to be recognized, and the ultrasound image generation unit marks the veins and arteries distinctly according to the obtained position information and generates an ultrasound image containing the vein marks and artery marks. The input ultrasound image is thus configured as a marked ultrasound image; a mark can be a figure or symbol, such as a square frame, a rectangular frame, a triangular frame, or another regular shape, and the neural network model can obtain the position information of the arteries and veins. The ultrasound image processing module is connected to the ultrasound image synthesis module and the ultrasound image display module respectively, and transmits the processed ultrasound image containing the vein marks and artery marks to the ultrasound image display module for image display. A venipuncture guidance unit is connected to the ultrasound image processing module and assists the user in performing venipuncture by displaying puncture parameters, such as the puncture grid, depth, and angle information, on the ultrasound image display module.
如图2所示，换能器放置在患者待检测部位，本例中是放置于患者颈部；换能器发射与接收超声波信号，并将超声波信号传输至超声图像合成模块，以合成超声图像；通过超声图像处理模块的神经网络模型对输入的超声图像进行处理，将超声图像中的动脉和静脉自动识别，并将含有静脉标记和动脉标记的超声图像传输至图像显示模块；图像显示模块显示带静脉标记和动脉标记的超声图像，本实施例中超声图像中显示的是颈动脉和颈静脉的横截面。As shown in Fig. 2, the transducer is placed on the site of the patient to be examined, in this case the patient's neck. The transducer transmits and receives ultrasonic signals and transmits them to the ultrasound image synthesis module to synthesize an ultrasound image. The neural network model of the ultrasound image processing module processes the input ultrasound image, automatically identifies the arteries and veins in it, and transmits the ultrasound image containing the vein marks and artery marks to the image display module. The image display module displays the ultrasound image with the vein marks and artery marks; in this embodiment, the ultrasound image shows the cross-sections of the carotid artery and the jugular vein.
本发明提出的基于神经网络的静脉与动脉识别方法,主要包括以下步骤:The vein and artery recognition method based on neural network proposed by the present invention mainly includes the following steps:
步骤S1,收集检测部位的超声图像,用户通过超声图像标记单元标记超声图像中的静脉与动脉;本实施例中优选收集的是颈部的超声图像,并标记超声图像中的颈动脉和颈静脉;Step S1, collecting ultrasound images of the detection site, the user marks the veins and arteries in the ultrasound image through the ultrasound image marking unit; in this embodiment, the ultrasound images of the neck are preferably collected, and the carotid arteries and jugular veins are marked in the ultrasound images ;
步骤S2,通过神经网络训练单元,基于所标记的超声图像,训练得到神经网络模型;Step S2, a neural network training unit is used to train a neural network model based on the marked ultrasound images;
步骤S3,将待识别超声图像输入经过训练得到的神经网络模型进行处理;通过所述神经网络模型从待识别超声图像中获取静脉和动脉的位置信息;根据获取的位置信息区别标记静脉与动脉,并生成含有静脉标记和动脉标记的超声图像。Step S3, input the ultrasound image to be recognized into the trained neural network model for processing; obtain the position information of veins and arteries from the ultrasound image to be recognized through the neural network model; distinguish the vein and artery according to the obtained position information, And generate ultrasound images containing vein markers and artery markers.
如图3所示,超声图像标记单元的处理流程包括:As shown in FIG. 3, the processing flow of the ultrasound image marking unit includes:
步骤S11,对收集的超声图像进行筛选;Step S11, screening the collected ultrasound images;
步骤S12,对筛选后的超声图像划分训练集、验证集和测试集;Step S12, dividing the filtered ultrasound image into a training set, a verification set and a test set;
步骤S13,用户对超声图像中的动脉和静脉进行标记;Step S13, the user marks the artery and vein in the ultrasound image;
具体地，步骤S11，对收集的超声图像进行筛选包括：过滤不清晰、不完全、重复的超声图像；去除超声图像中与自动引导颈静脉穿刺过程不相关信息；例如，去除收集的所有超声图像中不清晰或不完全导致无法进行标记工作的那些超声图像，去除会导致标记工作重复且对神经网络训练没有附加价值的重复的超声图像；去除超声图像中与自动判别动脉和静脉过程不相关信息，包括超声图像的深度、宽度、探头方向等参数信息；Specifically, in step S11, screening the collected ultrasound images includes: filtering out unclear, incomplete, and duplicate ultrasound images, and removing information in the ultrasound images irrelevant to the automatically guided jugular vein puncture process. For example, remove those collected ultrasound images that are too unclear or incomplete to be labeled; remove duplicate ultrasound images, which would cause repeated labeling work and add no value to neural network training; and remove information in the ultrasound images irrelevant to the automatic identification of arteries and veins, including parameter information such as the depth, width, and probe direction of the ultrasound image;
具体地，步骤S12中，在收集的所有超声图像中随机选取3/5的图像作为训练集；随机选取1/5的图像作为验证集；剩下的1/5的超声图像作为测试集使用；所述训练集用于训练神经网络模型，所述验证集用于验证神经网络的识别准确度并优化神经网络模型的权重参数，所述测试集用于最终评价神经网络模型的识别准确度；当然随机选取的比例可以是3/5、1/5、1/5，也可以是其他的比例；Specifically, in step S12, 3/5 of all collected ultrasound images are randomly selected as the training set, 1/5 are randomly selected as the validation set, and the remaining 1/5 are used as the test set. The training set is used to train the neural network model, the validation set is used to verify the recognition accuracy of the neural network and optimize the weight parameters of the neural network model, and the test set is used for the final evaluation of the recognition accuracy of the neural network model. Of course, the randomly selected proportions may be 3/5, 1/5, 1/5, or other proportions;
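As a minimal sketch, the 3/5 : 1/5 : 1/5 random split could look like the following; the function name and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(images, seed=0):
    """Randomly split a list of images into 3/5 training,
    1/5 validation, and 1/5 test subsets."""
    idx = list(range(len(images)))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for the sketch
    n = len(images)
    n_train, n_val = 3 * n // 5, n // 5
    train = [images[i] for i in idx[:n_train]]
    val = [images[i] for i in idx[n_train:n_train + n_val]]
    test = [images[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

Other proportions amount to changing `n_train` and `n_val`.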
具体地，步骤S13中，采用矩形框标记出所有超声图像中的动脉和静脉，记录矩形框的信息：包括坐标信息和类别信息；如矩形框坐标信息包括矩形框的左上角和右下角两个点的坐标信息，类别信息包括标记的矩形框代表动脉或静脉；标记可以是一个图形或符号，例如正方形框，矩形框，三角形或者其他规则图形。Specifically, in step S13, rectangular frames are used to mark all the arteries and veins in the ultrasound images, and the information of each rectangular frame is recorded, including coordinate information and category information. For example, the coordinate information of a rectangular frame includes the coordinates of its upper-left and lower-right corner points, and the category information records whether the marked rectangular frame represents an artery or a vein. A mark can be a figure or symbol, such as a square frame, a rectangular frame, a triangle, or another regular shape.
如图4所示,神经网络训练单元的处理流程包括:As shown in Figure 4, the processing flow of the neural network training unit includes:
步骤S21，超声图像预处理：将超声图像固定到一定尺寸，并归一化同样尺寸的超声图像；如预处理后的超声图像为416×416×1；416×416表示预处理后超声图像的长和宽，即416像素长，416像素宽，可选地，将超声图像固定到一定尺寸时，保持原始图像的长宽比例，或者改变原始图像的长宽比例；对超声图像进行归一化操作的具体处理方法为将超声图像中每个像素值减去图像像素的均值后除以图像像素的方差；归一化后将超声图像的每个像素值转化到0～1之间；Step S21, ultrasound image preprocessing: fix the ultrasound image to a certain size and normalize the resized ultrasound image. For example, the preprocessed ultrasound image is 416×416×1, where 416×416 represents the length and width of the preprocessed image, i.e., 416 pixels long and 416 pixels wide. Optionally, when fixing the ultrasound image to a certain size, the aspect ratio of the original image is maintained, or it is changed. The specific normalization operation subtracts the mean of the image pixels from each pixel value of the ultrasound image and divides by the variance of the image pixels; after normalization, each pixel value of the ultrasound image is converted to between 0 and 1;
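A minimal sketch of this preprocessing step, assuming a grayscale image as a NumPy array; the nearest-neighbour resize and the function name are illustrative choices not fixed by the text:

```python
import numpy as np

def preprocess(image, target=416):
    """Resize a grayscale image to target x target using a simple
    nearest-neighbour index mapping, then normalize it by subtracting
    the per-image pixel mean and dividing by the pixel variance,
    as the text describes."""
    h, w = image.shape[:2]
    rows = np.arange(target) * h // target   # source row for each output row
    cols = np.arange(target) * w // target   # source column for each output column
    resized = image[rows][:, cols].astype(np.float64)
    return (resized - resized.mean()) / resized.var()
```

In practice an interpolating resize (and aspect-ratio-preserving padding) would likely be preferred; this only illustrates the fixed-size-plus-normalization pipeline.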
由于超声图像预处理时超声图像的尺寸发生了变化，所有超声图像的标记信息也需要进行相应比例的改变；本实施例的处理方法是将超声图像的标记信息由绝对数转化为占原始超声图像的比例数；具体计算方法为：Because the size of the ultrasound image changes during preprocessing, the marking information of all ultrasound images also needs to be changed proportionally. The processing method of this embodiment converts the marking information of the ultrasound image from absolute numbers into fractions of the original ultrasound image; the specific calculation method is:
其中，width、height分别表示超声图像输入神经网络前原始的长度和宽度；(xmin,ymin),(xmax,ymax)为标记工作记录的原始矩形框左上角和右下角两个点的坐标；x_new,y_new为超声图像预处理即改变尺寸后的矩形框的中心坐标信息，w_new、h_new分别表示超声图像预处理即改变尺寸后的矩形框的长度和宽度；Here, width and height represent the original length and width of the ultrasound image before it enters the neural network; (xmin, ymin) and (xmax, ymax) are the coordinates of the upper-left and lower-right corner points of the original rectangular frame recorded during labeling; x_new and y_new are the center coordinates of the rectangular frame after the ultrasound image preprocessing (i.e., after resizing); and w_new and h_new represent the length and width of the rectangular frame after the ultrasound image preprocessing (i.e., after resizing);
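The conversion formula itself is elided here; the following sketch is consistent with the variable definitions above, under the assumption (since the exact formula is not reproduced) that the fractions are taken relative to the original image dimensions:

```python
def to_relative(xmin, ymin, xmax, ymax, width, height):
    """Convert an absolute corner-coordinate rectangle into a
    center/size representation expressed as fractions of the
    original image (so it survives resizing unchanged)."""
    x_new = (xmin + xmax) / 2.0 / width    # center abscissa as a fraction
    y_new = (ymin + ymax) / 2.0 / height   # center ordinate as a fraction
    w_new = (xmax - xmin) / width          # width as a fraction
    h_new = (ymax - ymin) / height         # height as a fraction
    return x_new, y_new, w_new, h_new
```

Because the outputs are fractions, the same values describe the box in the resized 416×416 image.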
步骤S22,建立神经网络模型的结构;Step S22, the structure of the neural network model is established;
所述神经网络模型包括一个输入层、多个隐含层和一个输出层；所述神经网络模型的多个隐含层用来自动提取超声图像中动脉和静脉的特征；所述隐含层包含若干个卷积层、若干个池化层等；神经网络模型中的各隐含层之间、输入层和隐含层之间、隐含层和输出层之间通过权重参数相连接；所述隐含层还包括防止过拟合的一些设置，如随机失活一些输入层与隐含层之间或隐含层与输出层之间的权重参数，即反向传播算法不对这些失活权重进行调整；The neural network model includes an input layer, multiple hidden layers, and an output layer. The multiple hidden layers of the neural network model automatically extract the features of arteries and veins in the ultrasound image; the hidden layers contain several convolutional layers, several pooling layers, and so on. The hidden layers of the neural network model are connected to one another, to the input layer, and to the output layer through weight parameters. The hidden layers also include some settings to prevent overfitting, such as randomly deactivating some weight parameters between the input layer and a hidden layer or between a hidden layer and the output layer, meaning that the back-propagation algorithm does not adjust these deactivated weights;
首先设置输入层尺寸,以和输入神经网络模型的超声图像的尺寸相适配;First set the size of the input layer to match the size of the ultrasound image input to the neural network model;
如图5所示，本发明实施例中建立的第一种神经网络模型的结构包括一输入层，与输入层连接的多个隐含层，与最高层的隐含层连接的一输出层；图5中显示了神经网络模型的各隐含层和输出层；图5中神经网络模型的所有隐含层包括8个卷积层，5个最大池化层，输出层为1个Softmax分类层；首先是5个卷积层和5个最大池化层交替连接，每个最大池化层都起到对特征进行降维的作用；然后再连接3个卷积层，这些卷积层提取了高层的特征信息；最后连接输出层，输出神经网络的结果；图5中每层之间连接的箭头体现了神经网络模型各层之间的权重参数。As shown in Fig. 5, the structure of the first neural network model established in the embodiment of the present invention includes an input layer, multiple hidden layers connected to the input layer, and an output layer connected to the topmost hidden layer. Fig. 5 shows the hidden layers and the output layer of the neural network model. All the hidden layers of the neural network model in Fig. 5 comprise 8 convolutional layers and 5 max-pooling layers, and the output layer is a Softmax classification layer. First, 5 convolutional layers and 5 max-pooling layers are connected alternately, each max-pooling layer serving to reduce the dimensionality of the features; then 3 more convolutional layers are connected, which extract high-level feature information; finally, the output layer is connected to output the results of the neural network. The arrows connecting the layers in Fig. 5 represent the weight parameters between the layers of the neural network model.
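A quick way to check the spatial dimensions through the alternating convolution/pooling stages, assuming (hypothetically, since the text does not state kernel sizes) 3×3 convolutions with stride 1 and padding 1 and 2×2 max-pooling with stride 2:

```python
def feature_map_sizes(input_size=416, n_stages=5):
    """Track the feature-map side length through 5 alternating
    conv + max-pool stages: a 3x3 conv with padding 1 preserves the
    spatial size, and a 2x2 max-pool with stride 2 halves it."""
    sizes = [input_size]
    for _ in range(n_stages):
        input_size = input_size // 2  # conv keeps H and W; pooling halves them
        sizes.append(input_size)
    return sizes
```

With a 416×416 input this yields a 13×13 final map, matching the 13×13 grid of output cells described later in the text.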
如表一所示，显示了本发明实施例中建立的第二种神经网络模型的结构；表一中包括了该神经网络模型的各隐含层；表一中共四列，分别表示隐含层各层的名称，每层的过滤器数量、每层的输入图像尺寸和输出图像尺寸(图像尺寸的列中前两个数字表示长度和宽度)；其中过滤器起到连接神经网络模型的不同隐含层的作用，其体现了神经网络模型的权重参数；该神经网络模型的所有隐含层先是5个卷积层和5个最大池化层交替连接；接着是连接若干个卷积层，表一选择连接了两个卷积层；随后再连接一个结合层(Route层)，用于将结合层之前相连接的高级特征层(表一中的第11层)与该高级特征层之前的一层或数层隐含层相结合，以使得高级特征层与低级细粒度特征结合；该高级特征层与相结合的隐含层的输出图像的长和宽必需相应一致；表一中将第11层和第9层(一个最大池化层)结合，也可以将第11层和第9层、第10层结合；该高级特征层与之前的一层或数层隐含层相结合后一起输入到最后一个卷积层；这样增加了神经网络对偏小的目标对象的检测效果。As shown in Table 1, the structure of the second neural network model established in the embodiment of the present invention is given; Table 1 lists the hidden layers of the neural network model. Table 1 has four columns, giving the name of each hidden layer, the number of filters in each layer, and the input and output image sizes of each layer (the first two numbers in the image-size columns indicate the length and width). The filters connect the different hidden layers of the neural network model and embody its weight parameters. In all the hidden layers of this neural network model, 5 convolutional layers and 5 max-pooling layers are first connected alternately, followed by several convolutional layers; Table 1 chooses to connect two convolutional layers. A combination layer (Route layer) is then connected, which combines the high-level feature layer connected before it (layer 11 in Table 1) with one or several hidden layers preceding that high-level feature layer, so that the high-level features are combined with low-level fine-grained features. The length and width of the output images of the high-level feature layer and of the hidden layers combined with it must match correspondingly. In Table 1, layer 11 is combined with layer 9 (a max-pooling layer); layer 11 may also be combined with layers 9 and 10. After the high-level feature layer is combined with the preceding one or several hidden layers, the result is fed into the last convolutional layer; this improves the neural network's detection of smaller target objects.
本发明所述的神经网络模型优选地使用卷积神经网络模型。The neural network model described in the present invention preferably uses a convolutional neural network model.
表一Table I
如图6所示,显示了本发明实施例中建立的第三种神经网络模型的结构;As shown in FIG. 6, it shows the structure of the third neural network model established in the embodiment of the present invention;
图6中640×512的超声图像经过基础的特征提取网络，比如VGG，Inception，Alexnet等提取到若干特征图像，即图6中的52×52的特征图像；然后再经过一系列卷积运算，得到具有不同分辨率的特征图像，即图6中的26×26、13×13、7×7、4×4的特征图像；这些特征图像在图6中以长方体的形式表示，长方体的厚度代表特征图像的数量，长方体的长和宽对应特征图像的长和宽；图中长方体左下角的Conv即表示卷积运算，即体现了神经网络的权重参数；图中水平的直线表示分别通过卷积运算在这些不同分辨率的特征图像的不同位置上同时生成不同尺寸的边界框，这是第三种神经网络模型结构不同于前两者的地方；最后，即图6中的输出层对这些边界框进行softmax分类和位置回归，分别来预测边界框的类别和具体位置。The 640×512 ultrasound image in Fig. 6 first passes through a basic feature-extraction network, such as VGG, Inception, or AlexNet, to extract a number of feature images, i.e., the 52×52 feature images in Fig. 6; a series of convolution operations then produces feature images with different resolutions, i.e., the 26×26, 13×13, 7×7, and 4×4 feature images in Fig. 6. These feature images are represented as cuboids in Fig. 6; the thickness of a cuboid represents the number of feature images, and its length and width correspond to those of the feature images. The "Conv" at the lower-left corner of a cuboid denotes a convolution operation, which embodies the weight parameters of the neural network. The horizontal lines in the figure indicate that convolution operations simultaneously generate bounding boxes of different sizes at different positions on these feature images of different resolutions; this is where the structure of the third neural network model differs from the first two. Finally, the output layer in Fig. 6 performs softmax classification and position regression on these bounding boxes to predict their categories and exact positions, respectively.
最后，对应上述三种神经网络模型的结构，设置神经网络模型的输出层输出S×S个网格单元，例如13×13个网格单元，在每个网格单元输出B个预测的边界框，例如5个预测的边界框；神经网络开始训练前，用K均值的方法对训练集中的超声图像的动脉、静脉的长度和宽度数值进行聚类，得出B个聚类中心，作为神经网络输出边界框的先验知识；每个边界框的信息需要用2+4+1=7个数字表示，其中2个数字分别表示该边界框中的图像是动脉、静脉的概率信息，两个概率信息分别记作c1、c2；4个数字表示该边界框的中心位置的坐标信息(横坐标、纵坐标)和长度、宽度信息，中心位置的坐标信息用与网格单元的相对值记录，长宽信息是相对于整幅超声图像的预测值；1个数字记录了该边界框中含有动脉或静脉的可能性的大小，若该边界框中既不含有动脉也不含有静脉，则该数值接近0，表示不含有目标对象；否则，接近1，表示含有目标对象；目标对象是指动脉或静脉；Finally, corresponding to the structures of the above three neural network models, the output layer of the neural network model is set to output S×S grid cells, for example 13×13 grid cells, with B predicted bounding boxes output in each grid cell, for example 5 predicted bounding boxes. Before the neural network starts training, the K-means method is used to cluster the length and width values of the arteries and veins in the ultrasound images of the training set to obtain B cluster centers, which serve as prior knowledge for the bounding boxes output by the neural network. The information of each bounding box is represented by 2+4+1=7 numbers: 2 numbers represent the probabilities that the image in the bounding box is an artery or a vein, recorded as c1 and c2 respectively; 4 numbers represent the coordinate information (abscissa, ordinate) of the center position of the bounding box and its length and width, where the center coordinates are recorded relative to the grid cell and the length and width are predicted relative to the whole ultrasound image; and 1 number records the likelihood that the bounding box contains an artery or a vein. If the bounding box contains neither an artery nor a vein, this value is close to 0, indicating no target object; otherwise it is close to 1, indicating a target object. A target object refers to an artery or a vein;
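The K-means prior-box step could be sketched as follows. This is a plain squared-Euclidean K-means on (width, height) pairs; the distance metric, iteration count, and names are assumptions, since the text only says "the K-means method" is used:

```python
import random

def kmeans_anchors(sizes, k=5, iters=20, seed=0):
    """Cluster (width, height) pairs of the marked boxes; the k
    centroids serve as prior bounding-box shapes for the network."""
    rng = random.Random(seed)
    centers = rng.sample(sizes, k)  # start from k distinct boxes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in sizes:
            # assign each box to its nearest centroid
            j = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2
                                  + (h - centers[c][1]) ** 2)
            clusters[j].append((w, h))
        for c, pts in enumerate(clusters):
            if pts:  # recompute each centroid as its cluster mean
                centers[c] = (sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts))
    return centers
```

With B = 5 (k=5), the five centroids become the five prior box shapes per grid cell.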
基于以上可选的参数设置，神经网络模型的可选的输出大小为13×13×35，其中35记录了超声图像中每个网格单元中输出的5个边界框含有动脉或静脉的信息。在神经网络模型的最后设置softmax分类层，将2个概率信息限制到0到1之间，且当边界框中含有动脉或静脉时，2个概率信息c1、c2之和为1。边界框的中心位置横坐标、纵坐标、宽度、长度记作x,y,h,w，边界框中含有目标对象的可能性记作p_c，则每个边界框的输出可以表示为：Based on the above optional parameter settings, an optional output size of the neural network model is 13×13×35, where the 35 numbers record, for each grid cell of the ultrasound image, the artery-or-vein information of the 5 output bounding boxes. A softmax classification layer is set at the end of the neural network model to limit the 2 probability values to between 0 and 1; when the bounding box contains an artery or a vein, the two probabilities c1 and c2 sum to 1. The abscissa and ordinate of the center position, the width, and the length of the bounding box are denoted x, y, h, w, and the likelihood that the bounding box contains a target object is denoted p_c; the output of each bounding box can then be expressed as:
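The expression itself is elided in the text; a sketch of the implied per-box layout and the resulting 13×13×35 output size follows (the ordering of the seven numbers within a box is an assumption):

```python
S, B = 13, 5
NUMS_PER_BOX = 2 + 4 + 1  # (c1, c2) class probabilities, (x, y, h, w), p_c

def output_shape(s=S, b=B):
    """Total output size: s x s grid cells, each emitting b boxes of 7 numbers."""
    return (s, s, b * NUMS_PER_BOX)

def decode_cell(cell_outputs, b=B):
    """Split one grid cell's raw outputs into b tuples of 7 numbers,
    assumed here to be ordered (c1, c2, x, y, h, w, p_c)."""
    assert len(cell_outputs) == b * NUMS_PER_BOX
    return [tuple(cell_outputs[i * NUMS_PER_BOX:(i + 1) * NUMS_PER_BOX])
            for i in range(b)]
```

This makes the 35 explicit: 5 boxes × 7 numbers per cell.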
步骤S23,初始化神经网络模型:将神经网络模型的权重参数设置为随机数;Step S23, initialize the neural network model: set the weight parameters of the neural network model to random numbers;
步骤S24,定义神经网络模型的损失函数;Step S24, define the loss function of the neural network model;
神经网络模型的损失函数包括四项,分别为:The loss function of the neural network model includes four terms, namely:
含有目标对象的边界框的概率预测的误差;The error of the probability prediction of the bounding box containing the target object;
含有目标对象的边界框的位置和尺寸的预测误差;Prediction error of the position and size of the bounding box containing the target object;
不含有目标对象的边界框的概率预测的误差;The error of the probability prediction of the bounding box that does not contain the target object;
每个含有目标对象的网格单元预测类别的误差;The error of the prediction category of each grid unit containing the target object;
其中，边界框含有目标对象的判断标准是预测的边界框与超声图像中该网格单元中的真实矩形框(即用户在步骤S13进行的标记)的重叠比例大于设定阈值，具体衡量指标记为IOU；The criterion for judging that a bounding box contains the target object is that the overlap ratio between the predicted bounding box and the real rectangular frame in that grid cell of the ultrasound image (i.e., the mark made by the user in step S13) is greater than a set threshold; the specific measurement indicator is denoted IOU;
可选地,IOU大于0.6的边界框作为含有目标对象的边界框;Optionally, a bounding box with an IOU greater than 0.6 is used as the bounding box containing the target object;
网格单元含有目标对象的判断标准是真实矩形框的中心落在该网格单元中;一张超声图像的损失函数的具体计算公式为:The criterion for judging that the grid cell contains the target object is that the center of the real rectangular frame falls in the grid cell; the specific calculation formula of the loss function of an ultrasound image is:
其中，λ1-λ4表示各项误差在总的损失函数中占的比重，各项误差都选用平方误差的形式；Here, λ1-λ4 represent the weights of the respective error terms in the total loss function, and each error term takes the form of a squared error;
损失函数的第一项表示含有目标对象的边界框的概率预测的误差；其中，S^2表示将超声图像划分成S×S个网格单元，B表示每个网格单元设置的边界框数量，指示符号表示第i个网格单元的第j个边界框是否含有目标对象，C_i表示第i个网格单元的概率向量，预测值表示该网格单元当前的第j个边界框的概率向量，这两个概率向量在本发明中的长度为2，即表示边界框是静脉、动脉的概率；The first term of the loss function represents the error of the probability prediction of the bounding boxes containing the target object. Here S^2 means the ultrasound image is divided into S×S grid cells, B is the number of bounding boxes set for each grid cell, an indicator denotes whether the j-th bounding box of the i-th grid cell contains the target object, C_i is the probability vector of the i-th grid cell, and the corresponding predicted quantity is the probability vector of the current j-th bounding box of that grid cell. These two probability vectors have length 2 in the present invention, i.e., they give the probabilities that the bounding box is a vein or an artery;
损失函数的第二项表示含有目标对象的边界框的位置和尺寸的预测误差；其中x_i,y_i,h_i,w_i分别表示第i个网格单元的矩形框的中心位置横坐标、纵坐标和宽度、长度信息，预测值分别表示预测的边界框相应的中心位置横坐标、纵坐标和宽度、长度信息；宽度、长度的误差部分采用根号形式目的是权衡不同大小的目标对象的预测误差；The second term of the loss function represents the prediction error of the position and size of the bounding boxes containing the target object. Here x_i, y_i, h_i, w_i denote the abscissa and ordinate of the center position and the width and length of the rectangular frame of the i-th grid cell, and the corresponding predicted quantities denote the abscissa and ordinate of the center position and the width and length of the predicted bounding box. The width and length error terms take a square-root form in order to balance the prediction errors of target objects of different sizes;
损失函数的第三项是不含有目标对象的边界框的概率预测的误差, 表示第i个网格单元的第j个边界框是否不含有目标对象;因为不含有目标对象的边界框占多数,所以λ 3通常会设置得比λ 1小,否则无法训练得到识别效果较好的神经网络。可选的,λ 1=5,λ 2=λ 3=λ 4=1; The third term of the loss function is the error of the probability prediction without the bounding box of the target object, Indicates whether the jth bounding box of the ith grid cell does not contain the target object; because the bounding box without the target object is the majority, λ 3 is usually set to be smaller than λ 1 , otherwise it cannot be trained to obtain a better recognition effect Neural network. Optionally, λ 1 = 5, λ 2 = λ 3 = λ 4 = 1;
损失函数的第四项表示每个含有目标对象的网格单元预测类别的误差,其中,当动脉或静脉的中心落在某网格单元时, 否则 p i(c)表示第i个网格单元是否含有第c个类别的目标对象,即静脉或动脉,取值为0或1; 表示预测第i个网格单元含有第c个类别的目标对象的概率,数值范围为[0,1]。 The fourth term of the loss function represents the error of the prediction category of each grid unit containing the target object, where, when the center of an artery or vein falls in a grid unit, otherwise p i (c) indicates whether the i-th grid cell contains the target object of the c-th category, namely vein or artery, and the value is 0 or 1; Represents the probability of predicting that the i-th grid cell contains the target object of the c-th category, and the value range is [0,1].
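The center-in-cell assignment criterion used throughout the loss above can be sketched as follows (the function name and normalized-coordinate convention are illustrative assumptions, not taken from the patent):

```python
def assign_grid_cell(x_center, y_center, S):
    """Return the (row, col) index of the S x S grid cell responsible for
    a ground-truth box whose center is given in normalized [0, 1] coordinates."""
    # Clamp so that a center lying exactly on the right/bottom edge
    # still maps to the last cell rather than an out-of-range index.
    col = min(int(x_center * S), S - 1)
    row = min(int(y_center * S), S - 1)
    return row, col

# A vessel whose box center sits at (0.34, 0.71) in a 7 x 7 grid:
cell = assign_grid_cell(0.34, 0.71, S=7)
```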
Step S25: train the neural network model to obtain a model capable of automatically identifying arteries and veins in ultrasound images.

In this step, the neural network model is trained with the normalized ultrasound images of the training set.

Preferably, ultrasound images are randomly selected from the training set, subjected to elastic deformation, and then fed into the neural network model for training; this yields a more robust model.

Specifically, the back-propagation algorithm can be used to train the neural network model. The weight parameters are initialized randomly and updated over the course of the iterations. The learning rate is set to 0.0001 and the momentum to 0.9; the weight parameters are saved to a network parameter file every 100 iterations, and the maximum number of iterations is set to 50k. During the iterations, the recall of the model on the validation set is computed, i.e. the proportion of ground-truth rectangular boxes in the validation-set ultrasound images that are correctly identified. After the loss function converges, the weight parameters that achieve the best recognition performance on the validation set at convergence are adopted as the weight parameters of the neural network model.
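The update rule implied by the stated hyper-parameters (learning rate 0.0001, momentum 0.9, checkpoint every 100 iterations, at most 50k iterations) is classical SGD with momentum. A minimal pure-Python sketch on a toy quadratic objective — the network itself, the parameter file, and the recall evaluation are omitted or stubbed and all names are hypothetical:

```python
def sgd_momentum_step(w, grad, v, lr=1e-4, momentum=0.9):
    """One SGD-with-momentum update: v <- m*v - lr*grad, then w <- w + v."""
    v = [momentum * vi - lr * gi for vi, gi in zip(v, grad)]
    w = [wi + vi for wi, vi in zip(w, v)]
    return w, v

# Toy objective f(w) = ||w||^2 / 2, whose gradient is w itself; with the
# patent's settings the iterate decays steadily toward the minimum at zero.
w, v = [1.0, -2.0], [0.0, 0.0]
for step in range(1, 50_001):            # maximum of 50k iterations, as stated
    w, v = sgd_momentum_step(w, grad=w, v=v)
    if step % 100 == 0:
        pass  # here: save weights to the parameter file, evaluate validation recall
```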
As shown in FIG. 7, the processing flow for obtaining the position information of veins and arteries from an ultrasound image to be identified via the neural network model includes:

Step S31: acquire the ultrasound image to be identified, rescale it to the fixed size matching the input layer of the neural network model, and normalize it.

In this example, the ultrasound image to be identified comes from the test set.
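Step S31's resizing and normalization might look like the following sketch. Nearest-neighbour resampling and the 416×416 target size are assumptions for illustration; the patent specifies only that the image is fixed to the input-layer size and normalized:

```python
def preprocess(image, target_h=416, target_w=416):
    """Resize a grayscale image (list of rows of 0-255 ints) to the fixed
    network input size by nearest-neighbour sampling, scaling pixels to [0, 1]."""
    h, w = len(image), len(image[0])
    return [
        [image[r * h // target_h][c * w // target_w] / 255.0
         for c in range(target_w)]
        for r in range(target_h)
    ]

# A synthetic 480 x 640 "ultrasound frame" standing in for real data:
img = [[(r + c) % 256 for c in range(640)] for r in range(480)]
x = preprocess(img)
```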
Step S32: feed the ultrasound image to be identified into the trained neural network model and obtain all bounding boxes output by the model.

All bounding boxes represent predictions of arteries or veins in the ultrasound image.

Step S33: screen the bounding boxes to obtain the final recognition result.
Further, screening the bounding boxes means selecting as prediction results those whose predicted probability exceeds a set threshold.

Among the bounding boxes whose predicted probability exceeds the set threshold, non-maximum suppression is applied for further screening. Specifically, the overlap between bounding boxes is computed, and among the boxes whose overlap exceeds a set threshold, the box with the highest predicted probability is retained as the recognition result.
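The two-stage screening described above (confidence threshold, then non-maximum suppression via box overlap) can be sketched as follows; the corner-coordinate box format and both threshold values are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def screen_boxes(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """Keep boxes above the score threshold, then suppress any box that
    overlaps an already-kept, higher-scoring box by more than iou_thresh."""
    candidates = sorted(
        (i for i, s in enumerate(scores) if s > score_thresh),
        key=lambda i: scores[i], reverse=True)
    kept = []
    for i in candidates:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in kept):
            kept.append(i)
    return kept
```

With two heavily overlapping predictions for the same vessel and one distant box, only the higher-scoring overlap survives alongside the distant box.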
Finally, the ultrasound image generating unit generates an ultrasound image containing vein markers and artery markers according to the veins and arteries identified in the ultrasound image.

Referring to FIG. 9, it is a jugular-vein puncture guidance image produced by the system corresponding to the second neural network model structure in an embodiment of the present invention, corresponding to the original image in FIG. 8. In the figure, "artery" marks the position of the carotid artery and "vein" marks the position of the jugular vein.
Finally, it should be noted that the above specific embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to examples, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such changes fall within the scope of the claims of the present invention.
Claims (20)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811301602.5A CN111144163B (en) | 2018-11-02 | 2018-11-02 | Vein and artery identification system based on neural network |
| CN201811301602.5 | 2018-11-02 | ||
| CN201811302485.4 | 2018-11-02 | ||
| CN201811302485.4A CN111145137B (en) | 2018-11-02 | 2018-11-02 | Vein and artery identification method based on neural network |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020087732A1 true WO2020087732A1 (en) | 2020-05-07 |
Family
ID=70464699
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/123978 Ceased WO2020087732A1 (en) | 2018-11-02 | 2018-12-26 | Neural network-based method and system for vein and artery identification |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2020087732A1 (en) |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114202504A (en) * | 2021-09-24 | 2022-03-18 | 无锡祥生医疗科技股份有限公司 | Carotid artery ultrasonic automatic Doppler method, ultrasonic equipment and storage medium |
| CN114820503A (en) * | 2022-04-21 | 2022-07-29 | 深圳粒子群智能科技有限公司 | Elbow vein depth recognition method and system |
| CN116491983A (en) * | 2023-04-12 | 2023-07-28 | 岱特智能科技(上海)有限公司 | Vascular imaging method and related device of artificial arteriovenous arm |
| EP4248878A4 (en) * | 2020-11-19 | 2023-11-22 | FUJIFILM Corporation | INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM |
| EP4248880A4 (en) * | 2020-11-19 | 2023-11-22 | FUJIFILM Corporation | IMAGE PROCESSING DEVICE AND METHOD FOR CONTROLLING THE IMAGE PROCESSING DEVICE |
| CN117152417A (en) * | 2023-09-13 | 2023-12-01 | 上海微电机研究所(中国电子科技集团公司第二十一研究所) | Method for detecting defect of chip package, storage medium and detection device |
| CN119339951A (en) * | 2024-10-22 | 2025-01-21 | 首都医科大学宣武医院 | A vascular disease risk identification system based on machine learning model |
| CN119908753A (en) * | 2024-12-19 | 2025-05-02 | 广州索诺康医疗科技有限公司 | Ultrasonic image recognition method and device |
| CN120694748A (en) * | 2025-08-29 | 2025-09-26 | 武汉大学人民医院(湖北省人民医院) | Method, system and medium for positioning catheter tip during central venous catheterization |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103699904A (en) * | 2013-12-25 | 2014-04-02 | 大连理工大学 | Image computer-aided diagnosis method for multi-sequence nuclear magnetic resonance images |
| CN107563983A (en) * | 2017-09-28 | 2018-01-09 | 上海联影医疗科技有限公司 | Image processing method and medical imaging devices |
| CN107832718A (en) * | 2017-11-13 | 2018-03-23 | 重庆工商大学 | Finger vena anti false authentication method and system based on self-encoding encoder |
| CN108596046A (en) * | 2018-04-02 | 2018-09-28 | 上海交通大学 | A kind of cell detection method of counting and system based on deep learning |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111134727B (en) | Puncture guiding system for vein and artery identification based on neural network | |
| WO2020087732A1 (en) | Neural network-based method and system for vein and artery identification | |
| CN112215843B (en) | Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium | |
| CN110870792B (en) | System and method for ultrasound navigation | |
| CN111214255B (en) | Medical ultrasonic image computer-aided method | |
| JP7330207B2 (en) | adaptive ultrasound scanning | |
| US20190239850A1 (en) | Augmented/mixed reality system and method for the guidance of a medical exam | |
| CN103222879B (en) | System and method for identifying an optimal image frame for ultrasound imaging | |
| JP2019521745A (en) | Automatic image acquisition to assist the user in operating the ultrasound system | |
| US11564663B2 (en) | Ultrasound imaging apparatus and control method thereof | |
| JP2017174039A (en) | Image classification device, method, and program | |
| JP2021029675A (en) | Information processor, inspection system, and information processing method | |
| CN118285919A (en) | Ultrasonic navigation method and system for puncture robot | |
| JP2020137974A (en) | Ultrasonic probe navigation system and navigation display device therefor | |
| CN111145137B (en) | Vein and artery identification method based on neural network | |
| AU2024251739A1 (en) | Three-dimensional characterization of atherosclerotic plaques using two-dimensional ultrasound images | |
| JP2025513686A (en) | Artificial intelligence based system and method for automated measurements for pre-treatment planning - Patents.com | |
| CN111242921A (en) | Method and system for automatically updating medical ultrasonic image auxiliary diagnosis system | |
| Serrano et al. | The promise of artificial intelligence-assisted point-of-care ultrasonography in perioperative care | |
| CN111144163B (en) | Vein and artery identification system based on neural network | |
| US20250111513A1 (en) | System and Method for Providing AI-Assisted Checklist for Interventional Medical Procedures | |
| US12251261B2 (en) | Device for acquiring a sequence of ultrasonograms and associated method | |
| KR20250145102A (en) | System and method for user-assisted acquisition of ultrasound images | |
| CN113112882B (en) | Ultrasonic image examination system | |
| US20250176935A1 (en) | Guidance assistance device for acquiring an ultrasound image and associated method |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18938886; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18938886; Country of ref document: EP; Kind code of ref document: A1 |