WO2021051593A1 - Image processing method and apparatus, computer device, and storage medium - Google Patents
- Publication number
- WO2021051593A1 (PCT/CN2019/118248)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- noise reduction
- images
- training
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/38—Registration of image sequences
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- This application relates to the field of computer technology, and in particular to an image processing method, device, computer equipment, and storage medium.
- The embodiments of the present application provide an image processing method, device, computer equipment, and storage medium, aiming to solve the problem of long processing time when prior-art image processing methods perform image enhancement on OCT images.
- In a first aspect, an embodiment of the present application provides an image processing method, which includes: randomly acquiring a preset number of training images from a preset training image set; performing spatial transformation on the training images according to a preset spatial transformation network to obtain corresponding registered images; superimposing and denoising the registered images according to a preset convolutional noise reduction model to obtain a corresponding noise-reduced image; iteratively training the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the noise-reduced image, and the training image set to obtain the trained convolutional noise reduction model; and, if an image to be processed input by the user is received, processing the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
- In a second aspect, an embodiment of the present application provides an image processing device, which includes: a training image acquisition unit, configured to randomly acquire a preset number of training images from a preset training image set; a registration image acquisition unit, configured to perform spatial transformation on the plurality of training images according to a preset spatial transformation network to obtain corresponding registered images; a superposition noise reduction processing unit, configured to superimpose and denoise the registered images according to a preset convolutional noise reduction model to obtain a corresponding noise-reduced image; a model training unit, configured to iteratively train the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the noise-reduced image, and the training image set to obtain the trained convolutional noise reduction model; and an optimized image acquisition unit, configured to, if an image to be processed input by the user is received, process the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
- In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the image processing method described in the first aspect.
- In a fourth aspect, the embodiments of the present application also provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the image processing method described in the first aspect.
- The embodiments of the present application provide an image processing method, device, computer equipment, and storage medium: multiple training images are randomly acquired; the training images are spatially transformed according to the spatial transformation network to obtain registered images; the registered images are superimposed and denoised according to the convolutional noise reduction model to obtain a noise-reduced image; the convolutional noise reduction model is iteratively trained based on the noise-reduced image, the registered images, and the training image set to obtain the trained convolutional noise reduction model; and the image to be processed is processed into an optimized image according to the spatial transformation network, the trained convolutional noise reduction model, and the superposition rule.
- FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the application
- FIG. 2 is a schematic diagram of a sub-flow of an image processing method provided by an embodiment of the application
- FIG. 3 is a schematic diagram of another sub-flow of the image processing method provided by an embodiment of the application.
- FIG. 4 is a schematic diagram of another sub-flow of the image processing method provided by an embodiment of the application.
- FIG. 5 is a schematic diagram of another sub-flow of the image processing method provided by an embodiment of the application.
- FIG. 6 is a schematic block diagram of an image processing device provided by an embodiment of the application.
- FIG. 7 is a schematic block diagram of a computer device provided by an embodiment of the application.
- FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
- The image processing method is applied to a user terminal and is executed by application software installed in the user terminal. The user terminal is a terminal device used to execute the image processing method to complete image optimization, such as a desktop computer, notebook computer, tablet, or mobile phone.
- the method includes steps S110 to S150.
- S110: Randomly obtain a preset number of training images from a preset training image set.
- The training image set is an image set obtained by repeatedly scanning a suspected lesion area with an OCT scanning device, and may contain 35 to 60 training images.
- The preset number is the number of training images randomly obtained from the training image set; that is, only part of the training images in the training image set are used to train the convolutional noise reduction model. The preset number can be preset by the user, and may specifically be set to 5 to 10.
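The random selection in step S110 can be sketched in a few lines of Python; the function and parameter names here are illustrative only, not taken from the patent's implementation:

```python
import random

def sample_training_images(training_image_set, preset_number=5, seed=None):
    """Randomly draw `preset_number` training images from the training
    image set (step S110); only part of the set is used for training."""
    if preset_number > len(training_image_set):
        raise ValueError("preset number exceeds training image set size")
    rng = random.Random(seed)
    # sample without replacement, so no training image is drawn twice
    return rng.sample(list(training_image_set), preset_number)

# e.g. a training image set of 40 frames, drawing 5 of them
subset = sample_training_images(range(40), preset_number=5, seed=0)
```

The `seed` argument is only for reproducibility in this sketch; the method itself calls for an unconditioned random draw.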
- S120: Perform spatial transformation on the multiple training images according to a preset spatial transformation network to obtain corresponding registered images.
- The multiple training images are spatially transformed according to a preset spatial transformation network to obtain corresponding registered images, wherein the spatial transformation network includes a convolutional neural network and a two-dimensional affine transformation function.
- The spatial transformer network is an image processing neural network that spatially transforms the multiple training images. Because the patient is not completely still while the suspected lesion area is being repeatedly scanned, displacements such as rotation and translation exist between the collected OCT images, and directly superimposing the training images would produce a distorted image. Therefore, to avoid distortion in the superimposed result, the training images are spatially transformed before being superimposed.
- step S120 includes sub-steps S121, S122, S123, and S124.
- Any one of the multiple training images is determined as a reference image, and the other training images are determined as images to be converted. With the reference image as a reference, the parameter matrix of each image to be converted relative to the reference image is acquired according to the convolutional neural network. Specifically, after the images to be converted and the reference image are convolved by the convolutional neural network, the reference image is used as a reference, and the parameter matrix corresponding to each image to be converted is obtained through regression by the fully connected layer of the convolutional neural network.
- For example, the resolution of each image to be converted (and of the reference image) is 600×600. A convolution operation with a 16×16 window and a stride of 1 yields a 585×585 vector matrix, which represents the shallow features of the image to be converted. According to the pooling calculation formula, downsampling with a 13×13 window and a stride of 13 yields a 45×45 vector matrix, which represents the deep features of the image to be converted. According to the calculation formulas of the 5 second-layer convolution kernels, a convolution operation with a 5×5 window and a stride of 5 yields 5 vector matrices of size 9×9.
- The fully connected calculation formula includes 2×5×9×9 input nodes and 6 output nodes, and reflects the association between the input nodes and the output nodes; the output of the 6 output nodes is the parameter matrix.
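The feature-map sizes quoted above follow the usual valid-convolution output-size formula. A quick arithmetic check (a sketch; `conv_out` is an illustrative helper name):

```python
def conv_out(size, window, stride):
    # output size of a valid convolution/pooling pass:
    # floor((size - window) / stride) + 1
    return (size - window) // stride + 1

shallow = conv_out(600, 16, 1)    # 16x16 window, stride 1: shallow features
deep = conv_out(shallow, 13, 13)  # 13x13 pooling window, stride 13: deep features
final = conv_out(deep, 5, 5)      # 5x5 second-layer kernels, stride 5
# two input images (image to be converted + reference), 5 kernels each,
# feed the fully connected layer that regresses the 6 affine parameters
fc_input_nodes = 2 * 5 * final * final
```

Running this reproduces the 585×585, 45×45, and 9×9 sizes and the 2×5×9×9 input-node count stated above.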
- The parameter matrix A_θ can be expressed by the following formula: A_θ = [θ₁₁, θ₁₂, θ₁₃; θ₂₁, θ₂₂, θ₂₃]. The parameter matrix A_θ contains six parameters, four of which (θ₁₁, θ₁₂, θ₂₁, θ₂₂) are rotation parameters, and the other two (θ₁₃, θ₂₃) are translation parameters.
- The image to be converted is mapped according to the two-dimensional affine transformation function and the parameter matrix to obtain a corresponding mapped image. Specifically, according to the two-dimensional affine transformation function and the parameter matrix, affine transformation is performed on the coordinate values of the pixels contained in the image to be converted to obtain affine-transformed coordinate values, and the image to be converted is mapped and filled based on the affine-transformed coordinate values to obtain the corresponding mapped image.
- T_θ is the two-dimensional affine transformation function: (x_s^i, y_s^i)^T = A_θ·(x_t^i, y_t^i, 1)^T, where (x_t^i, y_t^i) is the coordinate value of the i-th pixel in the image to be converted, and (x_s^i, y_s^i) is the coordinate value obtained by performing affine transformation on that pixel.
- The mapping and filling can be expressed as: V_i = Σ_{n=1..H} Σ_{m=1..W} U_nm · max(0, 1 − |x_s^i − m|) · max(0, 1 − |y_s^i − n|), where U_nm is the pixel value at row n, column m of the image to be converted, the resolution of the image to be converted is H×W, (x_s^i, y_s^i) is the coordinate value in the image to be converted corresponding to the i-th pixel of the mapped image, and V_i is the pixel value obtained by filling the i-th pixel of the mapped image.
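The affine mapping and bilinear fill rule can be sketched in Python/NumPy as follows. `affine_sample` is an illustrative name, and the nested loops mirror the per-pixel fill expression rather than an optimized implementation:

```python
import numpy as np

def affine_sample(U, A_theta):
    """Map an image-to-convert U onto the reference grid (sub-step of S120).

    For every target pixel (m, n), the 2x3 parameter matrix A_theta gives a
    source coordinate (xs, ys); the output value is the bilinear blend of
    the four surrounding pixels of U, i.e. sum over (nn, mm) of
    U[nn, mm] * max(0, 1-|xs-mm|) * max(0, 1-|ys-nn|).
    """
    H, W = U.shape
    V = np.zeros_like(U, dtype=float)
    for n in range(H):
        for m in range(W):
            xs, ys = A_theta @ np.array([m, n, 1.0])
            m0, n0 = int(np.floor(xs)), int(np.floor(ys))
            for nn in (n0, n0 + 1):
                for mm in (m0, m0 + 1):
                    if 0 <= nn < H and 0 <= mm < W:
                        w = max(0.0, 1 - abs(xs - mm)) * max(0.0, 1 - abs(ys - nn))
                        V[n, m] += w * U[nn, mm]
    return V

# the identity parameter matrix [1 0 0; 0 1 0] reproduces the input image
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

With the identity matrix the sketch returns the input unchanged, and a pure translation matrix shifts it, which is a convenient sanity check on the coordinate conventions.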
- All the images to be converted are spatially transformed with respect to the reference image, yielding multiple mapped images in which the suspected lesion area has the same angle and orientation as in the reference image. The resolution of each mapped image is consistent with that of the reference image, and the resulting mapped images together with the reference image are used as the registered images.
- For example, 5 training images are randomly obtained; 1 of them is taken as the reference image and the other 4 as images to be converted. The 4 images to be converted are each spatially transformed to obtain 4 mapped images, and the 4 mapped images together with the reference image are used as the corresponding registered images.
- S130: Perform superposition noise reduction on the registered images according to a preset convolutional noise reduction model to obtain a corresponding noise-reduced image.
- The convolutional noise reduction model is a model used to denoise images. It includes an activation function and multiple convolution kernels; each convolution kernel contains multiple parameters, and each parameter corresponds to a parameter value. Performing a convolution operation on an image means performing a convolution operation, through the parameter values contained in the convolution kernels, on the two-dimensional array corresponding to the image.
- step S130 includes sub-steps S131 and S132.
- S131: All the registered images are superimposed pixel by pixel to obtain a first superimposed image.
- Multiple registered images are obtained, and the angle and orientation of the suspected lesion area are the same in all of them, so the registered images can be superimposed pixel by pixel to obtain the first superimposed image, whose resolution is the same as that of the registered images. Specifically, the pixel values of all registered images at the same pixel are added and averaged to complete the superposition of that pixel; applying this to every pixel of the multiple registered images yields the corresponding first superimposed image.
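The per-pixel add-and-average described above amounts to a mean over the image stack. A minimal NumPy sketch (illustrative names):

```python
import numpy as np

def superimpose(registered_images):
    """First superimposed image (sub-step S131): the pixel-by-pixel mean
    of all registered images; the result has the same resolution."""
    stack = np.stack([np.asarray(img, dtype=float) for img in registered_images])
    return stack.mean(axis=0)

# two toy 2x2 "registered images" standing in for real OCT frames
a = np.array([[10.0, 20.0], [30.0, 40.0]])
b = np.array([[30.0, 40.0], [50.0, 60.0]])
first_superimposed = superimpose([a, b])  # per-pixel average of a and b
```

Averaging aligned frames suppresses uncorrelated speckle noise while preserving structure, which is why registration (S120) must precede this step.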
- S132: Perform convolution noise reduction on the first superimposed image according to the convolutional noise reduction model to obtain a noise-reduced image.
- The array value corresponding to each pixel in the first superimposed image is first calculated by the activation function to obtain the two-dimensional array corresponding to the first superimposed image. A convolution operation is then performed on the two-dimensional array, based on its array values and the parameter values in each convolution kernel, to obtain the corresponding two-dimensional convolution array, and the corresponding noise-reduced image is obtained by inversely activating the two-dimensional convolution array through the activation function.
- Any suitable activation function can be selected. For example, if the pixel value of a pixel is 238 (pixel values are integers in [0, 255]), the corresponding array value of the pixel is calculated to be 0.9907; obtaining the array value corresponding to each pixel in the first superimposed image in this way yields the two-dimensional array.
- S140: Iteratively train the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the noise-reduced image, and the training image set to obtain the trained convolutional noise reduction model.
- The gradient descent training model is a model for training the convolutional noise reduction model. It contains a loss function and a gradient calculation formula; the loss function is used to calculate the loss value between two images. The update value corresponding to each parameter can be calculated based on the calculated loss value and the gradient calculation formula, and the parameter value corresponding to each parameter can then be updated through the update value, thereby training the convolutional noise reduction model.
- step S140 includes sub-steps S141, S142, S143, and S144.
- S141: The noise-reduced image and the pixel mean of all the registered images are superimposed pixel by pixel according to the superposition rule to obtain a high-order superimposed image. Specifically, the pixel values of the registered images at the same pixel are obtained and averaged to obtain the pixel mean of all registered images; the noise-reduced image and this pixel mean are then superimposed pixel by pixel to obtain the high-order superimposed image.
- S142: Calculate the loss value between the high-order superimposed image and the target image in the training image set according to the loss function in the gradient descent training model. Based on the loss function, the loss value between the high-order superimposed image and the target image can be calculated; in order to make the image obtained after image enhancement approach the target image, the difference between the high-order superimposed image and the target image is quantified by the loss value.
- J₄ is the pixel gradient difference between the high-order superimposed image and the target image.
- The structural similarity between image x and image y can be expressed as: SSIM(x, y) = ((2·μ_x·μ_y + c₁)(2·σ_xy + c₂)) / ((μ_x² + μ_y² + c₁)(σ_x² + σ_y² + c₂)), where μ_x is the average pixel value of image x, μ_y is the average pixel value of image y, σ_xy is the covariance between image x and image y, σ_x is the standard deviation of image x, σ_y is the standard deviation of image y, and c₁ and c₂ are parameter values preset in the formula.
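A global (single-window) version of this structural similarity can be sketched as follows. The default c₁ and c₂ values are the common (0.01·255)² and (0.03·255)² choices, an assumption on our part since the patent only says they are preset:

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Structural similarity computed over the whole image (a sketch;
    practical SSIM implementations are usually windowed)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

For identical images the numerator and denominator coincide and the value is exactly 1; any mismatch in luminance, contrast, or structure pulls it below 1.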
- After the loss value is calculated, it can also be judged whether the loss value is less than a preset threshold. If it is, the current convolutional noise reduction model already meets the usage requirements, and subsequent training of the convolutional noise reduction model can be terminated; if it is not, the current convolutional noise reduction model does not yet meet the usage requirements, and it needs to be trained through the subsequent processing steps.
- S143: The update value of each parameter in the convolutional noise reduction model is calculated according to the gradient calculation formula in the gradient descent training model, the loss value, and the calculated value corresponding to each parameter in the convolutional noise reduction model. Specifically, the calculated value obtained when a parameter of the convolutional noise reduction model operates on the array values of the two-dimensional array corresponding to the first superimposed image is input into the gradient calculation formula together with the above loss value, and the update value corresponding to that parameter is obtained; this calculation process is the gradient descent calculation.
- The gradient calculation formula can be expressed as: ω_x′ = ω_x − η·(∂J/∂ω_x), where ω_x′ is the calculated update value of parameter x, ω_x is the original value of parameter x, η is the learning rate preset in the gradient calculation formula, and ∂J/∂ω_x is the partial derivative of the loss value with respect to parameter x (the calculated value corresponding to the parameter is needed in this calculation process).
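The update rule ω′ = ω − η·∂J/∂ω is ordinary gradient descent. A toy sketch on the scalar loss J(ω) = ω² (illustrative only, not the patent's loss function):

```python
def gradient_step(omega, grad, eta=0.1):
    # one gradient-descent update: omega' = omega - eta * dJ/domega
    return omega - eta * grad

# minimizing J(omega) = omega**2, whose gradient is 2*omega:
omega = 1.0
for _ in range(100):
    omega = gradient_step(omega, 2 * omega, eta=0.1)
# repeated updates drive omega toward the minimizer at 0
```

In the method above the same update is applied to every parameter of every convolution kernel, with the gradient taken through the loss value of S142.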
- S144: The parameter value of the corresponding parameter in the convolutional noise reduction model is updated according to the update value of each parameter, so as to train the convolutional noise reduction model.
- Correspondingly updating the parameter value of each parameter in the convolutional noise reduction model completes one training pass of the model. By repeating the above steps, the convolutional noise reduction model can be iteratively trained; when the calculated loss value is less than the preset threshold, the training process terminates and the trained convolutional noise reduction model is obtained.
- S150: If the image to be processed input by the user is received, the image to be processed is processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
- The convolutional noise reduction model obtained after training, combined with the spatial transformation network and the superposition rule, can process the images to be processed input by the user to obtain a clear optimized image. The number of images to be processed input by the user is much smaller than the number of training images in the training image set, so an image of the same quality as the target image can be obtained from fewer OCT images, which greatly reduces the amount of calculation in the image processing process and shortens the image processing time.
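Putting the inference stage together, step S150 can be sketched with stand-in callables for the trained networks. `register` and `denoise` are hypothetical placeholders for the spatial transformation network and the trained convolutional noise reduction model, and the final equal-weight blend is one reading of the superposition rule, which the text does not spell out:

```python
import numpy as np

def optimize_image(images_to_process, register, denoise):
    """S150 sketch: register the input frames, superimpose them pixel by
    pixel, denoise the result, then superimpose the denoised image with
    the pixel mean of the registered frames per the superposition rule."""
    target_registered = [register(np.asarray(img, dtype=float))
                         for img in images_to_process]
    pixel_mean = np.stack(target_registered).mean(axis=0)  # second superimposed image
    target_denoised = denoise(pixel_mean)                  # target noise-reduced image
    # assumed equal-weight pixel-by-pixel superposition of the two images
    return (target_denoised + pixel_mean) / 2.0

# with identity stand-ins, the output is just the registration mean
frames = [np.full((2, 2), 4.0), np.full((2, 2), 8.0)]
optimized = optimize_image(frames, lambda x: x, lambda x: x)
```

In a real deployment the two callables would be the trained models from S120 and S140; the identity stand-ins only exercise the data flow.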
- step S150 includes sub-steps S151, S152, S153, and S154.
- S151: Perform spatial transformation on the images to be processed according to the spatial transformation network to obtain corresponding target registered images.
- S152: Perform pixel-by-pixel superposition on all the target registered images to obtain a second superimposed image. The specific process of pixel-by-pixel superposition of the obtained target registered images is the same as in the above steps and will not be repeated here.
- S153: Perform convolution noise reduction on the second superimposed image according to the convolutional noise reduction model to obtain a target noise-reduced image.
- S154: The target noise-reduced image and the pixel mean of all the target registered images are superimposed pixel by pixel according to the superposition rule to obtain the optimized image. The specific superposition process is the same as in the above steps and will not be repeated here.
- In the image processing method provided by the embodiments of the present application, multiple training images are randomly obtained, the training images are spatially transformed according to the spatial transformation network to obtain registered images, the registered images are superimposed and denoised according to the convolutional noise reduction model to obtain a noise-reduced image, the convolutional noise reduction model is iteratively trained based on the noise-reduced image, the registered images, and the training image set to obtain the trained convolutional noise reduction model, and the image to be processed is processed into an optimized image according to the spatial transformation network, the trained convolutional noise reduction model, and the superposition rule.
- An embodiment of the present application also provides an image processing device, which is used to execute any embodiment of the foregoing image processing method.
- FIG. 6 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present application.
- the image processing device can be configured in a user terminal.
- the image processing apparatus 100 includes a training image acquisition unit 110, a registration image acquisition unit 120, a superposition noise reduction processing unit 130, a model training unit 140 and an optimized image acquisition unit 150.
- the training image acquisition unit 110 is configured to randomly acquire a preset number of training images from a preset training image set.
- the registration image acquisition unit 120 is configured to perform spatial transformation on a plurality of the training images according to a preset spatial transformation network to obtain corresponding registration images.
- the registration image acquisition unit 120 includes sub-units: a training image allocation unit, a parameter matrix acquisition unit, a mapped image acquisition unit, and a registration image determination unit.
- The training image allocation unit is used to determine any one of the multiple training images as a reference image and the other training images as images to be converted; the parameter matrix acquisition unit is used to acquire, with the reference image as a reference, the parameter matrix of each image to be converted relative to the reference image according to the convolutional neural network; the mapped image acquisition unit is configured to map the image to be converted according to the two-dimensional affine transformation function and the parameter matrix to obtain the corresponding mapped image; the registration image determination unit is used to take the mapped images corresponding to all the images to be converted together with the reference image as the registered images.
- the superposition noise reduction processing unit 130 is configured to perform superposition noise reduction on the registered image according to a preset convolution noise reduction model to obtain a corresponding noise reduction image.
- the superimposition noise reduction processing unit 130 includes sub-units: a first superimposition image acquisition unit and a convolution noise reduction processing unit.
- The first superimposed image acquisition unit is used to superimpose all the registered images pixel by pixel to obtain the first superimposed image; the convolution noise reduction processing unit is used to perform convolution noise reduction on the first superimposed image according to the convolutional noise reduction model to obtain a noise-reduced image.
- The model training unit 140 is configured to iteratively train the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the noise-reduced image, and the training image set to obtain the trained convolutional noise reduction model.
- the model training unit 140 includes sub-units: a high-order superimposed image acquisition unit, a loss value calculation unit, an update value calculation unit, and a parameter update unit.
- The high-order superimposed image acquisition unit is used to superimpose the noise-reduced image and the pixel mean of all the registered images pixel by pixel according to the superposition rule to obtain a high-order superimposed image; the loss value calculation unit is used to calculate the loss value between the high-order superimposed image and the target image in the training image set according to the loss function in the gradient descent training model; the update value calculation unit is used to calculate the update value of each parameter in the convolutional noise reduction model according to the gradient calculation formula in the gradient descent training model, the loss value, and the calculated values in the convolutional noise reduction model; the parameter update unit is used to update the parameter values of the corresponding parameters in the convolutional noise reduction model according to the update values, so as to train the convolutional noise reduction model.
- The optimized image acquisition unit 150 is configured to, if the image to be processed input by the user is received, process the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and the preset superposition rule to obtain the corresponding optimized image.
- the optimized image acquisition unit 150 includes sub-units: a target registration image acquisition unit, a second superimposed image acquisition unit, a target noise reduction image acquisition unit, and an image superimposition unit.
- The target registration image acquisition unit is configured to perform spatial transformation on the image to be processed according to the spatial transformation network to obtain the corresponding target registered images; the second superimposed image acquisition unit is configured to superimpose all the target registered images pixel by pixel to obtain a second superimposed image; the target noise reduction image acquisition unit is configured to perform convolution noise reduction on the second superimposed image according to the convolutional noise reduction model to obtain a target noise-reduced image; the image superposition unit is used to superimpose the target noise-reduced image and the pixel mean of all the target registered images pixel by pixel according to the superposition rule to obtain the optimized image.
- The image processing device provided in the embodiments of the application is used to perform the above image processing method: multiple training images are randomly obtained, the training images are spatially transformed according to the spatial transformation network to obtain registered images, the registered images are superimposed and denoised according to the convolutional noise reduction model to obtain a noise-reduced image, the convolutional noise reduction model is iteratively trained based on the noise-reduced image, the registered images, and the training image set to obtain the trained convolutional noise reduction model, and the image to be processed is processed into an optimized image according to the spatial transformation network, the trained convolutional noise reduction model, and the superposition rule.
- the above-mentioned image processing apparatus may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in FIG. 7.
- FIG. 7 is a schematic block diagram of a computer device according to an embodiment of the present application.
- the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
- the non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032.
- the processor 502 can execute the image processing method.
- the processor 502 is used to provide computing and control capabilities, and support the operation of the entire computer device 500.
- the internal memory 504 provides an environment for the operation of the computer program 5032 in the non-volatile storage medium 503. When the computer program 5032 is executed by the processor 502, the processor 502 can execute the image processing method.
- the network interface 505 is used for network communication, such as providing data information transmission.
- the structure shown in FIG. 7 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device 500 to which the solution of the present application is applied.
- the specific computer device 500 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
- the processor 502 is configured to run a computer program 5032 stored in a memory to implement the image processing method of this embodiment.
- the embodiment of the computer device shown in FIG. 7 does not constitute a limitation on the specific configuration of the computer device.
- the computer device may include more or fewer components than those shown in the figure, or combine certain components, or have a different component arrangement.
- the computer device may only include a memory and a processor. In such an embodiment, the structures and functions of the memory and the processor are consistent with the embodiment shown in FIG. 7 and will not be repeated here.
- the processor 502 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
- a computer-readable storage medium may be a non-volatile computer-readable storage medium.
- the computer-readable storage medium stores a computer program, where the computer program is executed by a processor to implement the image processing method of the embodiment of the present application.
- the storage medium may be an internal storage unit of the aforementioned device, such as a hard disk or memory of the device.
- the storage medium may also be an external storage device of the device, such as a plug-in hard disk equipped on the device, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card), and so on.
- the storage medium may also include both an internal storage unit of the device and an external storage device.
Abstract
Description
This application claims priority to the Chinese patent application No. 201910888024.8, entitled "Image processing method, apparatus, computer device and storage medium" and filed with the China National Intellectual Property Administration on September 19, 2019, the entire contents of which are incorporated herein by reference.
This application relates to the field of computer technology, and in particular to an image processing method and apparatus, a computer device, and a storage medium.
When lesions are assessed on the basis of OCT images, the coherent imaging mode inevitably introduces speckle noise into each acquired image, which seriously hampers subsequent OCT image processing and lesion assessment. To address this, common image enhancement methods acquire about 50 OCT images with an OCT scanner and superimpose and denoise the acquired images to obtain one clear fused image. However, this requires repeatedly scanning the same region about 50 times, which greatly lengthens the time needed to acquire the OCT images, and the large number of images substantially increases the processing time, so obtaining a clear image takes a long time. Existing methods for image enhancement of OCT images therefore suffer from long processing times.
Summary of the Invention
The embodiments of the present application provide an image processing method and apparatus, a computer device, and a storage medium, which aim to solve the problem in prior-art image processing methods that image enhancement of an OCT image takes a long time.
In a first aspect, an embodiment of the present application provides an image processing method, which includes: randomly acquiring a preset number of training images from a preset training image set; spatially transforming the training images according to a preset spatial transformation network to obtain corresponding registered images; superimposing and denoising the registered images according to a preset convolutional noise reduction model to obtain a corresponding denoised image; iteratively training the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the denoised image, and the training image set to obtain a trained convolutional noise reduction model; and, if an image to be processed input by a user is received, processing the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which includes: a training image acquisition unit configured to randomly acquire a preset number of training images from a preset training image set; a registered image acquisition unit configured to spatially transform the training images according to a preset spatial transformation network to obtain corresponding registered images; a superposition and noise reduction processing unit configured to superimpose and denoise the registered images according to a preset convolutional noise reduction model to obtain a corresponding denoised image; a model training unit configured to iteratively train the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the denoised image, and the training image set to obtain a trained convolutional noise reduction model; and an optimized image acquisition unit configured to, if an image to be processed input by a user is received, process the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the image processing method of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the image processing method of the first aspect.
The embodiments of the present application provide an image processing method and apparatus, a computer device, and a storage medium. Several training images are acquired at random; the training images are spatially transformed by a spatial transformation network to obtain registered images; the registered images are superimposed and denoised by a convolutional noise reduction model to obtain a denoised image; the convolutional noise reduction model is iteratively trained on the basis of the denoised image, the registered images and the training image set to obtain the trained convolutional noise reduction model; and the image to be processed is processed into an optimized image according to the spatial transformation network, the trained convolutional noise reduction model and the superposition rule. This method greatly reduces the number of images to be processed, shortens the image processing time, and improves the efficiency of image enhancement of OCT images.
To explain the technical solutions of the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of a sub-flow of the image processing method provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of another sub-flow of the image processing method provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of another sub-flow of the image processing method provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of another sub-flow of the image processing method provided by an embodiment of the present application;

FIG. 6 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present application;

FIG. 7 is a schematic block diagram of a computer device provided by an embodiment of the present application.
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present application without creative effort fall within the protection scope of the present application.

It should be understood that, when used in this specification and the appended claims, the terms "comprising" and "including" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.

It should also be understood that the terms used in this specification are only for the purpose of describing specific embodiments and are not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly indicates otherwise.

It should further be understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. The image processing method is applied to a user terminal and is executed by application software installed in the user terminal. The user terminal is a terminal device, such as a desktop computer, a notebook computer, a tablet computer or a mobile phone, that executes the image processing method to optimize an image.
As shown in FIG. 1, the method includes steps S110 to S150.

S110: Randomly acquire a preset number of training images from a preset training image set.

A preset number of training images are randomly acquired from a preset training image set. An OCT scanner is a common acquisition device for ophthalmic disease images, and the training image set is the set of images obtained by repeatedly scanning a suspected lesion area with the OCT scanner. The training image set may contain 35-60 training images, all of the same size, as well as one target image obtained from all the training images in the set by an image enhancement method; this target image is the target towards which the convolutional noise reduction model is trained. The preset number specifies how many training images are randomly drawn from the training image set.

Conventional image enhancement methods superimpose and denoise about 50 OCT images to obtain one clear fused image, which is computationally very expensive. With this solution, far fewer OCT images can be superimposed and denoised to obtain an image of quality equal to that fused image, greatly reducing the amount of computation during image processing and shortening the processing time. To obtain an image of quality equal to the target image from fewer superimposed OCT images, the number of training images must be reduced when training the convolutional noise reduction model, so a preset number of training images is randomly drawn from the training image set; that is, only part of the training images in the set are used to train the convolutional noise reduction model. The preset number may be set by the user in advance; specifically, it may be set to 5-10.
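As a minimal sketch of this sampling step (the function and variable names are illustrative and not from the application), drawing a preset number of distinct images from the training image set can be done as follows:

```python
import random

def sample_training_images(training_set, preset_number=5, seed=None):
    """Randomly draw `preset_number` distinct images from the training
    image set (which may hold 35-60 images; typically 5-10 are drawn)."""
    rng = random.Random(seed)
    return rng.sample(training_set, preset_number)

# Placeholder identifiers standing in for the 35-60 OCT training images:
training_set = [f"oct_{i:02d}" for i in range(50)]
picked = sample_training_images(training_set, preset_number=5, seed=0)
print(len(picked))  # 5
```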
S120: Spatially transform the training images according to a preset spatial transformation network to obtain corresponding registered images.

The training images are spatially transformed according to a preset spatial transformation network to obtain corresponding registered images, where the spatial transformation network includes a convolutional neural network and a two-dimensional affine transformation function. A Spatial Transformer Network (STN) is an image processing neural network that spatially transforms the training images. Because the patient is not completely still while the suspected lesion area is scanned repeatedly, the acquired OCT images are displaced relative to one another by rotation, translation and the like, and an image obtained by directly superimposing the training images would be distorted. To avoid such distortion, the training images are spatially transformed before they are superimposed.
In an embodiment, as shown in FIG. 2, step S120 includes sub-steps S121, S122, S123 and S124.

S121: Determine any one of the training images as a reference image, and determine the other training images as images to be converted.

Any one of the training images is determined as the reference image, and the other training images are determined as images to be converted. Before the training images are spatially transformed, one of them must be chosen as the reference image; the other training images are spatially transformed relative to this reference image and are therefore determined as the images to be converted.
S122: Using the reference image as a reference, obtain, according to the convolutional neural network, the parameter matrix of each image to be converted relative to the reference image.

Using the reference image as a reference, the parameter matrix of each image to be converted relative to the reference image is obtained according to the convolutional neural network. Specifically, the image to be converted and the reference image are each convolved by the convolutional neural network, and the parameter matrix corresponding to each image to be converted is then obtained, relative to the reference image, by regression through the fully connected layer of the convolutional neural network.

For example, for an image to be converted (or reference image) with a resolution of 600×600, a convolution is performed according to the calculation formula of the first convolution kernel with a 16×16 window and a stride of 1, yielding a 585×585 vector matrix, i.e. the shallow features of the image to be converted. According to the pooling formula, downsampling with a 13×13 window and a stride of 13 yields a 45×45 vector matrix, i.e. the deep features of the image to be converted. According to the calculation formulas of the five second convolution kernels, convolutions with a 5×5 window and a stride of 5 yield five 9×9 vector matrices. The five 9×9 vector matrices of the image to be converted and the five 9×9 vector matrices of the reference image are fed into the fully connected calculation formula of the fully connected layer. Since the input to the convolutional neural network comprises one image to be converted and one reference image, the fully connected calculation formula has 2×5×9×9 input nodes and 6 output nodes; the formula expresses the relation between the input nodes and the output nodes, and the outputs of the 6 output nodes form the parameter matrix.
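The dimension arithmetic in the example above can be checked with a short sketch (the helper name is illustrative; the layer sizes follow the text):

```python
def conv_out(size, kernel, stride):
    # Output width of a valid (no-padding) convolution or pooling window.
    return (size - kernel) // stride + 1

s1 = conv_out(600, 16, 1)    # first convolution: 600x600 -> 585x585
s2 = conv_out(s1, 13, 13)    # 13x13 pooling, stride 13: -> 45x45
s3 = conv_out(s2, 5, 5)      # each 5x5 kernel, stride 5: -> 9x9
fc_inputs = 2 * 5 * s3 * s3  # two images x five 9x9 matrices each
print(s1, s2, s3, fc_inputs)  # 585 45 9 810
```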
The parameter matrix A_θ can be expressed as:

A_θ = [[θ11, θ12, θ13], [θ21, θ22, θ23]]

where the parameter matrix A_θ is a 2×3 matrix containing six parameters, four of which (θ11, θ12, θ21, θ22) are rotation parameters and the other two of which (θ13, θ23) are translation parameters.
S123: Map the image to be converted according to the two-dimensional affine transformation function and the parameter matrix to obtain a corresponding mapped image.

The image to be converted is mapped according to the two-dimensional affine transformation function and the parameter matrix to obtain the corresponding mapped image. Specifically, affine-transformed coordinate values are computed from the coordinate values of the pixels of the image to be converted according to the two-dimensional affine transformation function and the parameter matrix, and the image to be converted is then mapped and filled on the basis of the affine-transformed coordinate values to obtain the corresponding mapped image.
Specifically, the affine transformation can be expressed as:

(x_s^i, y_s^i)^T = T_θ(x_t^i, y_t^i) = A_θ · (x_t^i, y_t^i, 1)^T

where T_θ is the two-dimensional affine transformation function, (x_s^i, y_s^i) is the coordinate value obtained by applying the affine transformation to the corresponding pixel of the image to be converted, and (x_t^i, y_t^i) is the coordinate value of that pixel in the image to be converted.

The mapping and filling process can be expressed as:

V_i = Σ_{n=1..H} Σ_{m=1..W} U_nm · max(0, 1−|x_s^i − m|) · max(0, 1−|y_s^i − n|)

where U_nm is the pixel value at row n, column m of the image to be converted, (H×W) is the resolution of the image to be converted, (x_s^i, y_s^i) is the coordinate value, in the image to be converted, of the pixel corresponding to the i-th pixel of the mapped image, and V_i is the pixel value filled into the i-th pixel of the mapped image.
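A minimal NumPy sketch of this affine mapping with bilinear filling (the array and function names are illustrative; a real spatial transformation network would differentiate through this step rather than loop over pixels):

```python
import numpy as np

def affine_warp(U, A):
    """Warp image U (H x W) with the 2x3 affine matrix A, filling each
    output pixel via the kernel max(0, 1-|dx|) * max(0, 1-|dy|)."""
    H, W = U.shape
    V = np.zeros_like(U, dtype=float)
    for yt in range(H):
        for xt in range(W):
            # Affine-transform the target coordinate into a source coordinate.
            xs, ys = A @ np.array([xt, yt, 1.0])
            m0, n0 = int(np.floor(xs)), int(np.floor(ys))
            for n in (n0, n0 + 1):      # only the 4 neighbouring pixels
                for m in (m0, m0 + 1):  # have non-zero kernel weight
                    if 0 <= n < H and 0 <= m < W:
                        w = max(0, 1 - abs(xs - m)) * max(0, 1 - abs(ys - n))
                        V[yt, xt] += U[n, m] * w
    return V

identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
print(np.allclose(affine_warp(img, identity), img))  # True
```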
S124: Obtain the mapped images corresponding to all the images to be converted, together with the reference image, to obtain the registered images.

The mapped images corresponding to all the images to be converted are obtained together with the reference image as the registered images. Through the above process, all images to be converted are spatially transformed, yielding multiple mapped images in which the angle and orientation of the suspected lesion area agree with the reference image and whose resolution is the same as that of the reference image. The resulting mapped images together with the reference image serve as the registered images.

For example, 5 training images are randomly acquired; 1 of them is taken as the reference image and the other 4 as images to be converted. The 4 images to be converted are spatially transformed, yielding 4 mapped images, and the 4 mapped images together with the 1 reference image serve as the corresponding registered images.
S130: Superimpose and denoise the registered images according to a preset convolutional noise reduction model to obtain a corresponding denoised image.

The registered images are superimposed and denoised according to a preset convolutional noise reduction model to obtain the corresponding denoised image. The convolutional noise reduction model is a model for denoising an image; it contains an activation function and a plurality of convolution kernels, each convolution kernel containing a plurality of parameters, each parameter with a parameter value. Convolving an image means performing a convolution operation on the two-dimensional array corresponding to the image using the parameter values contained in the convolution kernels.
In an embodiment, as shown in FIG. 3, step S130 includes sub-steps S131 and S132.
S131: Superimpose all the registered images pixel by pixel to obtain a first superimposed image.

All the registered images are superimposed pixel by pixel to obtain a first superimposed image. There are multiple registered images, and the angle and orientation of the suspected lesion area are the same in all of them, so the registered images can be superimposed pixel by pixel into a first superimposed image whose resolution is the same as that of the registered images. Specifically, the pixel values of all registered images at the same pixel position are added and averaged, which completes the superposition for that pixel; obtaining in this way the average pixel value over the registered images at every pixel yields the corresponding first superimposed image.
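The pixel-by-pixel superposition described above amounts to averaging the registered images position by position; a minimal sketch (array names are illustrative):

```python
import numpy as np

def superimpose(registered):
    """Pixel-by-pixel superposition: add the pixel values of all the
    registered images at each position and average them."""
    return np.mean(np.stack(registered, axis=0), axis=0)

a = np.full((2, 2), 100.0)
b = np.full((2, 2), 200.0)
first_superimposed = superimpose([a, b])
print(first_superimposed[0, 0])  # 150.0
```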
S132: Perform convolutional noise reduction on the first superimposed image according to the convolutional noise reduction model to obtain a denoised image.

Convolutional noise reduction is performed on the first superimposed image according to the convolutional noise reduction model to obtain the denoised image. Specifically, the activation function first computes the array value corresponding to each pixel of the first superimposed image, giving the two-dimensional array corresponding to the first superimposed image; a convolution operation on this two-dimensional array with the parameter values of each convolution kernel gives the corresponding two-dimensional convolution array; and inversely activating the two-dimensional convolution array through the activation function yields the corresponding denoised image.

Any activation function may be chosen. For example, if the Sigmoid function is chosen as the activation function, its expression is f(x) = (1 + e^(−x/51))^(−1). If the pixel value of some pixel of the first superimposed image is 238 (pixel values are integers in [0, 255]), the array value corresponding to that pixel is computed as 0.9907; obtaining the array value corresponding to every pixel of the first superimposed image gives a two-dimensional array.
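The worked number in this example can be reproduced directly (the function name is illustrative):

```python
import numpy as np

def activation(x):
    # Sigmoid variant from the example above: f(x) = (1 + e^(-x/51))^(-1),
    # mapping pixel values in [0, 255] into array values in [0.5, 1).
    return 1.0 / (1.0 + np.exp(-x / 51.0))

print(round(float(activation(238)), 4))  # 0.9907
```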
S140: Iteratively train the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the denoised image and the training image set to obtain the trained convolutional noise reduction model.

The convolutional noise reduction model is iteratively trained according to a preset gradient descent training model, the registered images, the denoised image and the training image set to obtain the trained convolutional noise reduction model. For the convolutional noise reduction model to perform well when denoising images by convolution, it must be iteratively trained, i.e. the parameter values of its convolution kernels must be adjusted; the convolutional noise reduction model obtained after training can then denoise an image accurately to give a clearer image. The gradient descent training model is the model used to train the convolutional noise reduction model; it contains a loss function and a gradient calculation formula. The loss function is used to compute the loss value between two images; the smaller the loss value, the closer the contents of the two images. From the computed loss value and the gradient calculation formula, an update value can be computed for each parameter, and the parameter value of each parameter is updated with its update value, i.e. the convolutional noise reduction model is trained.

In an embodiment, as shown in FIG. 4, step S140 includes sub-steps S141, S142, S143 and S144.
S141: Superimpose, pixel by pixel according to the superposition rule, the denoised image and the pixel mean of all the registered images to obtain a high-order superimposed image.

The denoised image and the pixel mean of all the registered images are superimposed pixel by pixel according to the superposition rule to obtain a high-order superimposed image. Specifically, the pixel values of the registered images at each pixel position are added and averaged to obtain the pixel mean of all registered images, and the denoised image and this pixel mean are superimposed pixel by pixel to obtain the high-order superimposed image.
S142. Calculate the loss value between the high-order superimposed image and the target image in the training image set according to the loss function in the gradient descent training model.
The loss value between the high-order superimposed image and the target image in the training image set is calculated according to the loss function in the gradient descent training model. The loss function quantifies the difference between the high-order superimposed image and the target image, so that the image obtained after image enhancement can be made to approach the target image. Specifically, the loss function can be expressed as: loss value S = w1·J1 + w2·J2 + w3·J3 + w4·J4, where w1, w2, w3, and w4 are weight values preset in the formula; J1 is the overall structural similarity between the high-order superimposed image and the target image; J2 is their structural similarity in the retinal region; J3 is their structural similarity in key regions (such as the lesion region, the pigment epithelium region, and the nerve fiber layer); and J4 is the pixel-wise gradient difference between the high-order superimposed image and the target image.
Here, the structural similarity between an image x and an image y can be expressed as SSIM(x, y) = ((2·μx·μy + c1)·(2·σxy + c2)) / ((μx² + μy² + c1)·(σx² + σy² + c2)), where μx is the mean pixel value of image x, μy is the mean pixel value of image y, σxy is the covariance between image x and image y, σx is the standard deviation of image x, σy is the standard deviation of image y, and c1 and c2 are parameter values preset in the formula.
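For illustration, the structural similarity terms J1, J2, and J3 can be computed with the standard SSIM form suggested by the formula's variables (means, standard deviations, covariance, and the constants c1 and c2). This is a hedged sketch: the constant values are assumptions, not taken from the patent, and any region masking for J2 and J3 is omitted.

```python
import numpy as np

def structural_similarity(x, y, c1=1e-4, c2=9e-4):
    """Standard SSIM computed globally over two images; c1 and c2 are
    small stabilising constants (assumed values, not from the patent)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = np.array([[0.1, 0.5], [0.7, 0.9]])
same = structural_similarity(img, img)  # identical images score exactly 1
```

Restricting x and y to the retinal or lesion region before calling the function would give the region-specific terms J2 and J3.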
In addition, after the loss value is calculated, it can be compared with a preset threshold. If the loss value is less than the preset threshold, the current convolutional noise reduction model already meets the usage requirements, and subsequent training of the model can be terminated; if the loss value is not less than the preset threshold, the current model does not yet meet the usage requirements and must be trained through the subsequent processing steps.
S143. Calculate the update value of each parameter in the convolutional noise reduction model according to the gradient calculation formula in the gradient descent training model, the loss value, and the calculated values in the convolutional noise reduction model.
The update value of each parameter in the convolutional noise reduction model is calculated according to the gradient calculation formula in the gradient descent training model, the loss value, and the calculated value corresponding to each parameter in the convolutional noise reduction model. Specifically, the calculated value obtained by applying one parameter of the convolutional noise reduction model to the array values in the two-dimensional array corresponding to the first superimposed image is input into the gradient calculation formula together with the above loss value, yielding the update value corresponding to that parameter; this computation is the gradient descent calculation.
Specifically, the gradient calculation formula can be expressed as ω̂x = ωx − η·(∂S/∂ωx), where ω̂x is the computed update value of parameter x, ωx is the original value of parameter x, η is the learning rate preset in the gradient calculation formula, and ∂S/∂ωx is the partial derivative of the loss value with respect to the parameter, computed from the loss value and the calculated value corresponding to parameter x (that calculated value is required in this computation).
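A one-parameter sketch of the gradient descent update just described: the updated value equals the original value minus the learning rate times the partial derivative of the loss. The names and the sample numbers are illustrative only.

```python
def gradient_update(weight, grad, lr=0.01):
    """One gradient descent step: updated value = original value
    minus learning rate times the partial derivative of the loss."""
    return weight - lr * grad

new_w = gradient_update(0.5, 2.0, lr=0.1)  # 0.5 - 0.1 * 2.0 = 0.3
```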
S144. Update the parameter value of each corresponding parameter in the convolutional noise reduction model according to the update value of that parameter, so as to train the convolutional noise reduction model.
The parameter value of each corresponding parameter in the convolutional noise reduction model is updated according to the update value of that parameter, so as to train the convolutional noise reduction model. Updating the parameter value of every parameter with its computed update value completes one training pass of the model. The convolution operation is then performed again on the first superimposed image with the once-trained model, and the above training process is repeated, thereby training the convolutional noise reduction model iteratively; when the computed loss value falls below the preset threshold, the training process terminates and the trained convolutional noise reduction model is obtained.
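The iterative training loop of sub-steps S141 to S144, including the threshold-based stopping test, can be outlined as follows. This is a generic gradient descent loop under stated assumptions (a toy one-parameter loss stands in for the image loss function); it is not the patented model.

```python
def train_until_converged(params, compute_loss, compute_grads,
                          lr=0.01, threshold=1e-3, max_iters=1000):
    """Iterate: compute the loss; stop when it drops below the preset
    threshold, otherwise update every parameter by gradient descent."""
    loss = compute_loss(params)
    for _ in range(max_iters):
        if loss < threshold:
            break
        grads = compute_grads(params)
        params = [w - lr * g for w, g in zip(params, grads)]
        loss = compute_loss(params)
    return params, loss

# Toy stand-in: minimise (w - 3)^2 for a single parameter.
loss_fn = lambda p: (p[0] - 3.0) ** 2
grad_fn = lambda p: [2.0 * (p[0] - 3.0)]
params, final_loss = train_until_converged([0.0], loss_fn, grad_fn, lr=0.1)
```

With this toy loss the parameter converges toward 3 and the loop stops once the loss is under the threshold.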
S150. If an image to be processed input by a user is received, process the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
If an image to be processed input by a user is received, the image to be processed is processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image. The trained convolutional noise reduction model, combined with the spatial transformation network and the superposition rule, can process the images to be processed input by the user into a clear optimized image. The number of images to be processed input by the user is far smaller than the number of training images in the training image set, so an image of quality equivalent to the target image can be obtained by processing fewer OCT images, which greatly reduces the amount of computation in image processing and shortens the image processing time.
In an embodiment, as shown in FIG. 5, step S150 includes sub-steps S151, S152, S153, and S154.
S151. Spatially transform the image to be processed according to the spatial transformation network to obtain a corresponding target registered image.
The image to be processed is spatially transformed according to the spatial transformation network to obtain a corresponding target registered image. The specific process of spatially transforming the image to be processed is the same as the steps described above and will not be repeated here.
S152. Superimpose all the target registered images pixel by pixel to obtain a second superimposed image.
All the target registered images are superimposed pixel by pixel to obtain a second superimposed image. The specific process of pixel-by-pixel superposition of the obtained target registered images is the same as the steps described above and will not be repeated here.
S153. Perform convolution noise reduction on the second superimposed image according to the convolutional noise reduction model to obtain a target noise-reduced image.
Convolution noise reduction is performed on the second superimposed image according to the convolutional noise reduction model to obtain a target noise-reduced image. The specific process of performing convolution noise reduction on the obtained second superimposed image with the convolutional noise reduction model is the same as the steps described above and will not be repeated here.
S154. Superimpose the target noise-reduced image and the pixel mean of all the target registered images pixel by pixel according to the superposition rule to obtain an optimized image.
The target noise-reduced image and the pixel mean of all the target registered images are superimposed pixel by pixel according to the superposition rule to obtain an optimized image. The specific process of this pixel-by-pixel superposition according to the superposition rule is the same as the steps described above and will not be repeated here.
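Putting sub-steps S151 to S154 together, the inference pipeline can be sketched as below. The spatial transformation network and the trained convolutional noise reduction model are replaced by simple stand-in functions, the final superposition is assumed to be an average, and all names are hypothetical; this is an outline of the data flow, not the patented implementation.

```python
import numpy as np

def optimize_images(images, register, denoise):
    """Sketch of S151-S154: register each input image, superimpose them,
    denoise the result, then combine the denoised image with the pixel
    mean of the registered images (assumed: plain average)."""
    registered = [register(img) for img in images]        # S151
    stacked = np.stack(registered, axis=0)
    second_superimposed = stacked.sum(axis=0)             # S152
    target_denoised = denoise(second_superimposed)        # S153
    pixel_mean = stacked.mean(axis=0)
    return (target_denoised + pixel_mean) / 2.0           # S154

# Stand-ins for the real network and trained model.
imgs = [np.ones((2, 2)), 3.0 * np.ones((2, 2))]
out = optimize_images(imgs, register=lambda x: x,
                      denoise=lambda x: x / len(imgs))
```

With these identity-style stand-ins, both the denoised sum and the pixel mean equal the average of the inputs, so the optimized output equals that average as well.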
In the image processing method provided by the embodiments of the present application, a number of training images are randomly acquired; the training images are spatially transformed according to the spatial transformation network to obtain registered images; the registered images are superimposed and denoised according to the convolutional noise reduction model to obtain a noise-reduced image; the convolutional noise reduction model is iteratively trained based on the noise-reduced image, the registered images, and the training image set to obtain the trained convolutional noise reduction model; and the image to be processed is processed into an optimized image according to the spatial transformation network, the trained convolutional noise reduction model, and the superposition rule. Through the above method, the number of images that need to be processed can be greatly reduced, the image processing time can be shortened, and the processing efficiency of image enhancement for OCT images is improved.
An embodiment of the present application further provides an image processing apparatus configured to execute any embodiment of the foregoing image processing method. Specifically, refer to FIG. 6, which is a schematic block diagram of an image processing apparatus provided by an embodiment of the present application. The image processing apparatus may be configured in a user terminal.
As shown in FIG. 6, the image processing apparatus 100 includes a training image acquisition unit 110, a registration image acquisition unit 120, a superposition noise reduction processing unit 130, a model training unit 140, and an optimized image acquisition unit 150.
The training image acquisition unit 110 is configured to randomly acquire a preset number of training images from a preset training image set.
The registration image acquisition unit 120 is configured to spatially transform the plurality of training images according to a preset spatial transformation network to obtain corresponding registered images.
In other embodiments of the application, the registration image acquisition unit 120 includes the following sub-units: a training image assignment unit, a parameter matrix acquisition unit, a mapped image acquisition unit, and a registered image determination unit.
The training image assignment unit is configured to determine any one of the plurality of training images as a reference image and determine the other training images as images to be converted. The parameter matrix acquisition unit is configured to obtain, with the reference image as the reference, the parameter matrices of all the images to be converted relative to the reference image according to the convolutional neural network. The mapped image acquisition unit is configured to map the images to be converted according to the two-dimensional affine transformation function and the parameter matrices to obtain corresponding mapped images. The registered image determination unit is configured to obtain the mapped images corresponding to all the images to be converted together with the reference image so as to obtain the registered images.
The superposition noise reduction processing unit 130 is configured to perform superposition noise reduction on the registered images according to a preset convolutional noise reduction model to obtain a corresponding noise-reduced image.
In other embodiments of the application, the superposition noise reduction processing unit 130 includes the following sub-units: a first superimposed image acquisition unit and a convolution noise reduction processing unit.
The first superimposed image acquisition unit is configured to superimpose all the registered images pixel by pixel to obtain a first superimposed image. The convolution noise reduction processing unit is configured to perform convolution noise reduction on the first superimposed image according to the convolutional noise reduction model to obtain the noise-reduced image.
The model training unit 140 is configured to perform iterative training on the convolutional noise reduction model according to a preset gradient descent training model, the registered images, the noise-reduced image, and the training image set to obtain the trained convolutional noise reduction model.
In other embodiments of the application, the model training unit 140 includes the following sub-units: a high-order superimposed image acquisition unit, a loss value calculation unit, an update value calculation unit, and a parameter update unit.
The high-order superimposed image acquisition unit is configured to superimpose the noise-reduced image and the pixel mean of all the registered images pixel by pixel according to the superposition rule to obtain a high-order superimposed image. The loss value calculation unit is configured to calculate the loss value between the high-order superimposed image and the target image in the training image set according to the loss function in the gradient descent training model. The update value calculation unit is configured to calculate the update value of each parameter in the convolutional noise reduction model according to the gradient calculation formula in the gradient descent training model, the loss value, and the calculated values in the convolutional noise reduction model. The parameter update unit is configured to update the parameter value of each corresponding parameter in the convolutional noise reduction model according to the update value of that parameter, so as to train the convolutional noise reduction model.
The optimized image acquisition unit 150 is configured to, if an image to be processed input by a user is received, process the image to be processed according to the spatial transformation network, the trained convolutional noise reduction model, and a preset superposition rule to obtain a corresponding optimized image.
In other embodiments of the application, the optimized image acquisition unit 150 includes the following sub-units: a target registration image acquisition unit, a second superimposed image acquisition unit, a target noise-reduced image acquisition unit, and an image superposition unit.
The target registration image acquisition unit is configured to spatially transform the image to be processed according to the spatial transformation network to obtain a corresponding target registered image. The second superimposed image acquisition unit is configured to superimpose all the target registered images pixel by pixel to obtain a second superimposed image. The target noise-reduced image acquisition unit is configured to perform convolution noise reduction on the second superimposed image according to the convolutional noise reduction model to obtain a target noise-reduced image. The image superposition unit is configured to superimpose the target noise-reduced image and the pixel mean of all the target registered images pixel by pixel according to the superposition rule to obtain an optimized image.
The image processing apparatus provided in the embodiments of the present application is used to execute the above image processing method: a number of training images are randomly acquired; the training images are spatially transformed according to the spatial transformation network to obtain registered images; the registered images are superimposed and denoised according to the convolutional noise reduction model to obtain a noise-reduced image; the convolutional noise reduction model is iteratively trained based on the noise-reduced image, the registered images, and the training image set to obtain the trained convolutional noise reduction model; and the image to be processed is processed into an optimized image according to the spatial transformation network, the trained convolutional noise reduction model, and the superposition rule. Through the above method, the number of images that need to be processed can be greatly reduced, the image processing time can be shortened, and the processing efficiency of image enhancement for OCT images is improved.
The above image processing apparatus may be implemented in the form of a computer program, and the computer program may run on a computer device as shown in FIG. 7.
Refer to FIG. 7, which is a schematic block diagram of a computer device provided by an embodiment of the present application.
Referring to FIG. 7, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected via a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504. The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. When executed, the computer program 5032 can cause the processor 502 to perform the image processing method. The processor 502 provides computing and control capabilities and supports the operation of the entire computer device 500. The internal memory 504 provides an environment for running the computer program 5032 stored in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 can perform the image processing method. The network interface 505 is used for network communication, such as transmitting data information. Those skilled in the art can understand that the structure shown in FIG. 7 is merely a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device 500 to which the solution is applied; a specific computer device 500 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement the image processing method of this embodiment.
Those skilled in the art can understand that the embodiment of the computer device shown in FIG. 7 does not constitute a limitation on the specific configuration of the computer device. In other embodiments, the computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components. For example, in some embodiments, the computer device may include only a memory and a processor; in such embodiments, the structures and functions of the memory and the processor are consistent with the embodiment shown in FIG. 7 and will not be repeated here.
It should be understood that, in the embodiments of the present application, the processor 502 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In another embodiment of the present application, a computer-readable storage medium is provided. The computer-readable storage medium may be a non-volatile computer-readable storage medium. The computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the image processing method of the embodiments of the present application.
The storage medium may be an internal storage unit of the aforementioned device, such as a hard disk or memory of the device. The storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the device. Further, the storage medium may include both an internal storage unit of the device and an external storage device.
Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the devices, apparatuses, and units described above, which will not be repeated here.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and these modifications or replacements shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (20)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910888024.8A CN110782421B (en) | 2019-09-19 | 2019-09-19 | Image processing method, device, computer equipment and storage medium |
| CN201910888024.8 | 2019-09-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021051593A1 true WO2021051593A1 (en) | 2021-03-25 |
Family
ID=69384199
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/118248 Ceased WO2021051593A1 (en) | 2019-09-19 | 2019-11-14 | Image processing method and apparatus, computer device, and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN110782421B (en) |
| WO (1) | WO2021051593A1 (en) |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113379786A (en) * | 2021-06-30 | 2021-09-10 | 深圳市斯博科技有限公司 | Image matting method and device, computer equipment and storage medium |
| CN113378973A (en) * | 2021-06-29 | 2021-09-10 | 沈阳雅译网络技术有限公司 | Image classification method based on self-attention mechanism |
| CN113538416A (en) * | 2021-08-19 | 2021-10-22 | 合肥工业大学智能制造技术研究院 | Medical image processing method based on deep learning |
| CN113822289A (en) * | 2021-06-15 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Image noise reduction model training method, device, equipment and storage medium |
| CN114565511A (en) * | 2022-02-28 | 2022-05-31 | 西安交通大学 | Lightweight image registration method, system and device based on global homography estimation |
| CN115393406A (en) * | 2022-08-17 | 2022-11-25 | 武汉华中天经通视科技有限公司 | Image registration method based on twin convolution network |
| US11718327B1 (en) | 2022-08-08 | 2023-08-08 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for operating a vehicle based on a user's health and emotional state |
| WO2024045442A1 (en) * | 2022-08-30 | 2024-03-07 | 青岛云天励飞科技有限公司 | Image correction model training method, image correction method, device and storage medium |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113379753B (en) * | 2020-03-10 | 2023-06-23 | Tcl科技集团股份有限公司 | Image processing method, storage medium and terminal equipment |
| CN111582410B (en) * | 2020-07-16 | 2023-06-02 | 平安国际智慧城市科技股份有限公司 | Image recognition model training method, device, computer equipment and storage medium |
| CN111984548B (en) * | 2020-07-22 | 2024-04-02 | 深圳云天励飞技术股份有限公司 | Neural network computing device |
| CN111931754B (en) * | 2020-10-14 | 2021-01-15 | 深圳市瑞图生物技术有限公司 | Method and system for identifying target object in sample and readable storage medium |
| CN112184787A (en) * | 2020-10-27 | 2021-01-05 | 北京市商汤科技开发有限公司 | Image registration method and device, electronic equipment and storage medium |
| CN112598597B (en) * | 2020-12-25 | 2025-01-17 | 华为技术有限公司 | Training method and related device of noise reduction model |
| CN115086686A (en) * | 2021-03-11 | 2022-09-20 | 北京有竹居网络技术有限公司 | Video processing method and related device |
| CN113192067B (en) * | 2021-05-31 | 2024-03-26 | 平安科技(深圳)有限公司 | Intelligent prediction method, device, equipment and medium based on image detection |
| CN114187194B (en) * | 2021-11-29 | 2025-09-30 | 中山大学 | Sensor-induced image noise reduction processing method, system, device and storage medium |
| CN119941562B (en) * | 2025-04-08 | 2025-07-04 | 杭州电子科技大学 | A method for reducing noise in integrated circuit scanning electron microscope images |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108737750A (en) * | 2018-06-07 | 2018-11-02 | 北京旷视科技有限公司 | Image processing method, device and electronic equipment |
| CN108830816A (en) * | 2018-06-27 | 2018-11-16 | 厦门美图之家科技有限公司 | Image enchancing method and device |
| CN109741379A (en) * | 2018-12-19 | 2019-05-10 | 上海商汤智能科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
| CN109754414A (en) * | 2018-12-27 | 2019-05-14 | 上海商汤智能科技有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105427263A (en) * | 2015-12-21 | 2016-03-23 | 努比亚技术有限公司 | Method and terminal for realizing image registering |
| CN108198191B (en) * | 2018-01-02 | 2019-10-25 | 武汉斗鱼网络科技有限公司 | Image processing method and device |
| CN109064428B (en) * | 2018-08-01 | 2021-04-13 | Oppo广东移动通信有限公司 | Image denoising processing method, terminal device and computer readable storage medium |
| CN109584179A (en) * | 2018-11-29 | 2019-04-05 | 厦门美图之家科技有限公司 | A kind of convolutional neural networks model generating method and image quality optimization method |
| CN110010249B (en) * | 2019-03-29 | 2021-04-27 | 北京航空航天大学 | Augmented reality surgical navigation method, system and electronic device based on video overlay |
2019
- 2019-09-19 CN CN201910888024.8A patent/CN110782421B/en active Active
- 2019-11-14 WO PCT/CN2019/118248 patent/WO2021051593A1/en not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108737750A (en) * | 2018-06-07 | 2018-11-02 | 北京旷视科技有限公司 | Image processing method, device and electronic equipment |
| CN108830816A (en) * | 2018-06-27 | 2018-11-16 | 厦门美图之家科技有限公司 | Image enchancing method and device |
| CN109741379A (en) * | 2018-12-19 | 2019-05-10 | 上海商汤智能科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
| CN109754414A (en) * | 2018-12-27 | 2019-05-14 | 上海商汤智能科技有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113822289A (en) * | 2021-06-15 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Image noise reduction model training method, device, equipment and storage medium |
| CN113378973A (en) * | 2021-06-29 | 2021-09-10 | 沈阳雅译网络技术有限公司 | Image classification method based on self-attention mechanism |
| CN113378973B (en) * | 2021-06-29 | 2023-08-08 | 沈阳雅译网络技术有限公司 | An image classification method based on self-attention mechanism |
| CN113379786A (en) * | 2021-06-30 | 2021-09-10 | 深圳市斯博科技有限公司 | Image matting method and device, computer equipment and storage medium |
| CN113379786B (en) * | 2021-06-30 | 2024-02-02 | 深圳万兴软件有限公司 | Image matting method, device, computer equipment and storage medium |
| CN113538416A (en) * | 2021-08-19 | 2021-10-22 | 合肥工业大学智能制造技术研究院 | Medical image processing method based on deep learning |
| CN114565511A (en) * | 2022-02-28 | 2022-05-31 | 西安交通大学 | Lightweight image registration method, system and device based on global homography estimation |
| CN114565511B (en) * | 2022-02-28 | 2024-05-21 | 西安交通大学 | Lightweight image registration method, system and device based on global homography estimation |
| US11718327B1 (en) | 2022-08-08 | 2023-08-08 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for operating a vehicle based on a user's health and emotional state |
| CN115393406A (en) * | 2022-08-17 | 2022-11-25 | 武汉华中天经通视科技有限公司 | Image registration method based on twin convolution network |
| CN115393406B (en) * | 2022-08-17 | 2024-05-10 | 中船智控科技(武汉)有限公司 | Image registration method based on twin convolution network |
| WO2024045442A1 (en) * | 2022-08-30 | 2024-03-07 | 青岛云天励飞科技有限公司 | Image correction model training method, image correction method, device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110782421A (en) | 2020-02-11 |
| CN110782421B (en) | 2023-09-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2021051593A1 (en) | Image processing method and apparatus, computer device, and storage medium | |
| CN109767461B (en) | Medical image registration method, apparatus, computer equipment and storage medium | |
| JP7155271B2 (en) | Image processing system and image processing method | |
| WO2020134769A1 (en) | Image processing method and apparatus, electronic device, and computer readable storage medium | |
| WO2022134971A1 (en) | Noise reduction model training method and related apparatus | |
| US11080833B2 (en) | Image manipulation using deep learning techniques in a patch matching operation | |
| CN114155365B (en) | Model training method, image processing method and related device | |
| CN112200725B (en) | Super-resolution reconstruction method and device, storage medium and electronic equipment | |
| CN110310314B (en) | Image registration method and device, computer equipment and storage medium | |
| CN114387353B (en) | Camera calibration method, calibration device and computer-readable storage medium | |
| KR20230165191A (en) | Self-regularizing inverse filter for image deblurring | |
| CN111105452A (en) | High-low resolution fusion stereo matching method based on binocular vision | |
| JP6645442B2 (en) | Information processing apparatus, information processing method, and program | |
| JP7398938B2 (en) | Information processing device and its learning method | |
| CN116862782A (en) | Image optimization method, device, electronic equipment and storage medium | |
| CN116228753A (en) | Tumor prognosis assessment method, device, computer equipment and storage medium | |
| CN115511744A (en) | Image processing method, image processing device, computer equipment and storage medium | |
| WO2022247004A1 (en) | Image prediction model training method and apparatus, image prediction model application method and apparatus, device, and storage medium | |
| CN111739008B (en) | Image processing method, device, equipment and readable storage medium | |
| CN108961161B (en) | Image data processing method, device and computer storage medium | |
| CN111369425B (en) | Image processing method, apparatus, electronic device, and computer readable medium | |
| WO2023138273A1 (en) | Image enhancement method and system | |
| CN115829878A (en) | A method and device for image enhancement | |
| CN113065566A (en) | Method, system and application for removing mismatching | |
| CN114693821A (en) | Medical image reconstruction method, device, computer equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19945612 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19945612 Country of ref document: EP Kind code of ref document: A1 |