CN111415325A - A method for defect detection of copper foil substrate based on convolutional neural network - Google Patents
A method for defect detection of copper foil substrate based on convolutional neural network
- Publication number
- CN111415325A (application numbers CN201911095396.1A / CN201911095396A; granted publication CN111415325B)
- Authority
- CN
- China
- Prior art keywords
- layer
- size
- convolutional
- neural network
- stride
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a convolutional neural network based method for detecting defects in copper foil substrates, comprising the following steps: collecting and labeling a data set; performing data expansion on the images of the data set; constructing a fast and accurate convolutional neural network model; inputting the sample images of the data set into the convolutional neural network model for iterative training to obtain the best detection model; and inputting images of the copper foil substrate to be inspected into the detection model to identify the image category and realize online automatic detection. By iteratively training the constructed convolutional neural network model on the data set samples, the method obtains a deep learning detection model that performs online detection of defective copper foil substrate products, overcomes the shortcomings of manually designed defect features, and improves production efficiency; classification and detection are therefore fast and accurate, with strong adaptability and robustness, ensuring the quality of copper foil substrate products.
Description
Technical Field
The invention relates to a copper foil substrate defect detection method, and in particular to a copper foil substrate defect detection method based on a convolutional neural network.
Background Art
Available data show that fast and accurate detection of copper foil substrate defects is an important research topic in industrial production. Appearance defects are difficult to avoid during the manufacture of copper foil substrates and have a strong negative impact on their performance and quality. To mitigate the impact of such defects, detection methods based on hand-crafted features, including geometric, color, and texture features, are currently in common use. These methods have inherent limitations, are time-consuming and labor-intensive to apply, and struggle to meet the required accuracy and speed.
Summary of the Invention
The invention mainly addresses the technical problem that detection based on hand-crafted features is time-consuming and labor-intensive, and provides a copper foil substrate defect detection method based on a convolutional neural network. By iteratively training a constructed convolutional neural network model on data set samples, a deep learning detection model is obtained that performs online detection of defective copper foil substrate products, overcomes the shortcomings of manually designed defect features, and improves production efficiency; classification and detection are therefore fast and accurate, with strong adaptability and robustness, ensuring the quality of copper foil substrate products.
The above technical problem of the invention is mainly solved by the following technical solution. The invention comprises the following steps:
(1) Data set collection and labeling. Collect sample images of several classes of copper foil substrate defects, classify and label them, and also collect and label one class of normal samples; the collected sample images form the data set.
(2) Perform data expansion on the sample images of the data set.
(3) Construct a convolutional neural network model; the input image size is 96×96×1.
(4) Input the sample images into the convolutional neural network model for iterative training to obtain the best model.
(5) Input images of the copper foil substrate to be inspected into the detection model to identify the image category and realize online automatic detection of defective copper foil substrate products, thereby improving product quality.
Preferably, step (2) expands the number of samples by flipping and denoising all sample images in the data set, and divides them into a training set and a validation set at a ratio of 9:1.
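As an illustrative aid only (not part of the claimed method), the following Python sketch shows one way this data-expansion step could be carried out, assuming grayscale images read with OpenCV, horizontal/vertical flips and non-local-means denoising as the expansion operations, and scikit-learn's train_test_split for the 9:1 division; the folder layout, file format, and denoising parameters are assumptions.

```python
import glob
import cv2
from sklearn.model_selection import train_test_split

def expand(img):
    """Return the original plus flipped and denoised variants (assumed expansion operations)."""
    return [
        img,
        cv2.flip(img, 1),                                   # horizontal flip
        cv2.flip(img, 0),                                   # vertical flip
        cv2.fastNlMeansDenoising(img, None, 10, 7, 21),     # non-local-means denoised copy
    ]

images, labels = [], []
for path in sorted(glob.glob("dataset/*/*.png")):           # hypothetical "dataset/<class>/<file>.png" layout
    label = path.split("/")[-2]                             # class name taken from the folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    for variant in expand(gray):
        images.append(variant)
        labels.append(label)

# 9:1 split of the expanded sample set into training and validation sets.
train_x, val_x, train_y, val_y = train_test_split(
    images, labels, test_size=0.1, stratify=labels, random_state=0)
```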
Preferably, the convolutional neural network model of step (3) comprises fourteen layers: the first layer is a convolutional layer; the second layer is an overlapping max pooling layer; the third, fourth, fifth, and seventh layers are DepthwiseFire depthwise-separable modules, convolution modules with a parallel structure; the sixth and eighth layers are max pooling layers; the ninth, tenth, eleventh, and twelfth layers are DepthwiseResidual depthwise-separable residual modules; the thirteenth layer is an average pooling layer; and the fourteenth layer is a softmax classification layer, which computes the probability that the output belongs to each class.
Preferably, the first layer of the convolutional neural network model of step (3) is a convolutional layer comprising 32 convolution kernels with a receptive field size of 3×3 and a stride of 2; it outputs a 32-channel feature map of size 48×48.
Preferably, the second layer of the convolutional neural network model of step (3) is an overlapping max pooling layer with a 3×3 pooling window and a stride of 2; it outputs a 32-channel feature map of size 24×24.
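The 96→48→24 size reduction stated for these first two layers can be reproduced with a short shape check. The PyTorch snippet below is only a sketch of those two layers; the padding of 1 in both the convolution and the pooling is an assumption needed to obtain the stated output sizes, since the patent does not specify padding.

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 1, 96, 96)                                  # one 96x96x1 input image
conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1)   # layer 1: 32 kernels, 3x3, stride 2
pool1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)       # layer 2: overlapping 3x3 max pool, stride 2

y = conv1(x)
print(y.shape)          # torch.Size([1, 32, 48, 48]) -> 32-channel 48x48 feature map
print(pool1(y).shape)   # torch.Size([1, 32, 24, 24]) -> 32-channel 24x24 feature map
```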
Preferably, the third, fourth, fifth, and seventh layers of the convolutional neural network model of step (3) are DepthwiseFire depthwise-separable modules, convolution modules with a parallel structure. The third layer begins with 8 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting an 8-channel feature map of size 24×24, followed by parallel left and right branches: the convolutional layer of the left branch comprises 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch contains two cascaded convolutional layers, the upper comprising 32 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 32 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the two branches are concatenated, and the third layer finally outputs a 64-channel feature map of size 24×24. The fourth layer begins with 12 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 12-channel feature map of size 24×24, followed by parallel left and right branches: the left branch comprises 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch contains two cascaded convolutional layers, the upper comprising 48 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 48 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the two branches are concatenated, and the fourth layer finally outputs a 96-channel feature map of size 24×24. The fifth layer begins with 16 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 16-channel feature map of size 24×24, followed by parallel left and right branches: the left branch comprises 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch contains two cascaded convolutional layers, the upper comprising 64 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 64 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the two branches are concatenated, and the fifth layer finally outputs a 128-channel feature map of size 24×24. The seventh layer begins with 24 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 24-channel feature map of size 12×12, followed by parallel left and right branches: the left branch comprises 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch contains two cascaded convolutional layers, the upper comprising 96 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 96 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the two branches are concatenated, and the seventh layer finally outputs a 192-channel feature map of size 12×12. In each module, the initial 1×1 convolution kernels act as compression, while the single convolutional layer of the left branch and the two cascaded convolutional layers of the right branch act as expansion.
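The patent does not give source code for the DepthwiseFire module; the following PyTorch sketch shows one plausible reading of the description above (a 1×1 squeeze convolution, then a parallel 1×1 expand branch and a depthwise-separable 3×3 plus 1×1 expand branch whose outputs are concatenated). Interpreting a "depthwise-separable 3×3 kernel" as a depthwise 3×3 convolution followed by a 1×1 pointwise projection, the padding of 1, and the ReLU activations are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Assumed reading of a 'depthwise-separable 3x3 kernel':
    a 3x3 depthwise convolution followed by a 1x1 pointwise projection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class DepthwiseFire(nn.Module):
    """Parallel-structure module: 1x1 squeeze, then concat(1x1 expand, dw-sep 3x3 -> 1x1 expand)."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, 1)            # compression stage
        self.left = nn.Conv2d(squeeze_ch, expand_ch, 1)           # left expand branch
        self.right = nn.Sequential(                               # right expand branch (two cascaded layers)
            DepthwiseSeparableConv(squeeze_ch, expand_ch),
            nn.Conv2d(expand_ch, expand_ch, 1),
        )
        self.act = nn.ReLU(inplace=True)                          # activation is an assumption

    def forward(self, x):
        s = self.act(self.squeeze(x))
        return self.act(torch.cat([self.left(s), self.right(s)], dim=1))

# Third layer of the claimed model: squeeze to 8 channels, expand to 32 + 32 = 64 channels.
layer3 = DepthwiseFire(in_ch=32, squeeze_ch=8, expand_ch=32)
print(layer3(torch.zeros(1, 32, 24, 24)).shape)   # torch.Size([1, 64, 24, 24])
```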
Preferably, the sixth and eighth layers of the convolutional neural network model of step (3) are max pooling layers. The sixth layer has a 3×3 pooling window and a stride of 2 and outputs a 128-channel feature map of size 12×12; the eighth layer has a 3×3 pooling window and a stride of 2 and outputs a 192-channel feature map of size 6×6.
Preferably, the ninth, tenth, eleventh, and twelfth layers of the convolutional neural network model of step (3) are DepthwiseResidual depthwise-separable residual modules. The ninth and tenth layers are each composed of two upper convolutional layers plus a lower convolutional layer: the upper convolutional layers comprise 256 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolutional layer comprises 256 convolution kernels with a receptive field size of 1×1 and a stride of 1; the output is a 256-channel feature map of size 6×6. The eleventh and twelfth layers are each composed of two upper convolutional layers plus a lower convolutional layer: the upper convolutional layers comprise 512 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolutional layer comprises 512 convolution kernels with a receptive field size of 1×1 and a stride of 1; the output is a 512-channel feature map of size 6×6.
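Likewise, the following PyTorch sketch shows one plausible reading of the DepthwiseResidual module: a depthwise-separable 3×3 layer followed by a 1×1 layer, wrapped in a residual shortcut. The shortcut itself, the 1×1 projection used when the channel counts differ, the single (rather than doubled) upper layer, and the ReLU activations are assumptions implied by the word "residual" rather than details stated in the patent.

```python
import torch
import torch.nn as nn

def dw_separable(in_ch, out_ch, stride=1):
    # Same assumed reading as in the previous sketch: depthwise 3x3 then pointwise 1x1.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, 1),
    )

class DepthwiseResidual(nn.Module):
    """Upper dw-separable 3x3 layer + lower 1x1 layer with a residual shortcut (assumed)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.upper = dw_separable(in_ch, out_ch)                  # e.g. 256 dw-separable 3x3 kernels
        self.lower = nn.Conv2d(out_ch, out_ch, 1)                 # e.g. 256 1x1 kernels
        # 1x1 projection so the shortcut matches channels when in_ch != out_ch (assumption).
        self.shortcut = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.lower(self.act(self.upper(x))) + self.shortcut(x))

# Ninth layer of the claimed model: 192-channel 6x6 input -> 256-channel 6x6 output.
layer9 = DepthwiseResidual(in_ch=192, out_ch=256)
print(layer9(torch.zeros(1, 192, 6, 6)).shape)   # torch.Size([1, 256, 6, 6])
```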
Preferably, the thirteenth layer of the convolutional neural network model of step (3) is an average pooling layer with a 6×6 pooling window and a stride of 1; it outputs a 512-channel feature map of size 1×1.
Preferably, in step (4) the number of training epochs is set to 400, and the validation set accuracy is output once per epoch. Based on the validation accuracy, the parameters are fine-tuned or the number of epochs is modified to obtain the optimal model.
The beneficial effects of the invention are as follows: by iteratively training the constructed convolutional neural network model on the data set samples, a deep learning detection model is obtained that performs online detection of defective copper foil substrate products, overcomes the shortcomings of manually designed defect features, and improves production efficiency; classification and detection are therefore fast and accurate, with strong adaptability and robustness, ensuring the quality of copper foil substrate products.
Detailed Description of the Embodiments
The technical solution of the invention is further described in detail below through an embodiment.
Embodiment: a convolutional neural network based copper foil substrate defect detection method of this embodiment comprises the following steps:
(1) Data set collection and labeling. Collect sample images of several classes of copper foil substrate defects, classify and label them, and also collect and label one class of normal samples; the collected sample images form the data set.
(2) Perform data expansion on the sample images of the data set. Expand the number of samples by flipping and denoising all sample images in the data set, and divide all samples into a training set and a validation set at a ratio of 9:1.
(3) Construct a convolutional neural network model with an input image size of 96×96×1.
The first layer is a convolutional layer comprising 32 convolution kernels with a receptive field size of 3×3 and a stride of 2; it outputs a 32-channel feature map of size 48×48.
The second layer is an overlapping max pooling layer with a 3×3 pooling window and a stride of 2; it outputs a 32-channel feature map of size 24×24.
The third layer uses the DepthwiseFire depthwise-separable module, a convolution module with a parallel structure. It begins with 8 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting an 8-channel feature map of size 24×24 and acting as compression. This is followed by parallel left and right branches: the convolutional layer of the left branch comprises 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, acting as expansion; the right branch contains two cascaded convolutional layers, the upper comprising 32 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, also acting as expansion. Finally, the outputs of the two branches are concatenated, and the layer outputs a 64-channel feature map of size 24×24.
The fourth layer also uses the parallel-structure DepthwiseFire depthwise-separable module. It begins with 12 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 12-channel feature map of size 24×24 and acting as compression. This is followed by parallel left and right branches: the left branch comprises 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, acting as expansion; the right branch contains two cascaded convolutional layers, the upper comprising 48 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, also acting as expansion. The outputs of the two branches are concatenated, and the layer outputs a 96-channel feature map of size 24×24.
The fifth layer also uses the parallel-structure DepthwiseFire depthwise-separable module. It begins with 16 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 16-channel feature map of size 24×24 and acting as compression. This is followed by parallel left and right branches: the left branch comprises 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, acting as expansion; the right branch contains two cascaded convolutional layers, the upper comprising 64 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, also acting as expansion. The outputs of the two branches are concatenated, and the layer outputs a 128-channel feature map of size 24×24.
The sixth layer is a max pooling layer with a 3×3 pooling window and a stride of 2; it outputs a 128-channel feature map of size 12×12.
The seventh layer also uses the parallel-structure DepthwiseFire depthwise-separable module. It begins with 24 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 24-channel feature map of size 12×12 and acting as compression. This is followed by parallel left and right branches: the left branch comprises 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, acting as expansion; the right branch contains two cascaded convolutional layers, the upper comprising 96 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower comprising 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, also acting as expansion. The outputs of the two branches are concatenated, and the layer outputs a 192-channel feature map of size 12×12.
The eighth layer is a max pooling layer with a 3×3 pooling window and a stride of 2; it outputs a 192-channel feature map of size 6×6.
The ninth layer uses the DepthwiseResidual depthwise-separable residual module, composed of two upper convolutional layers plus a lower convolutional layer. The upper convolutional layers comprise 256 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1; the lower convolutional layer comprises 256 convolution kernels with a receptive field size of 1×1 and a stride of 1. The output is a 256-channel feature map of size 6×6.
The tenth layer also uses the DepthwiseResidual depthwise-separable residual module, composed of two upper convolutional layers plus a lower convolutional layer. The upper convolutional layers comprise 256 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1; the lower convolutional layer comprises 256 convolution kernels with a receptive field size of 1×1 and a stride of 1. The output is a 256-channel feature map of size 6×6.
The eleventh layer uses the DepthwiseResidual depthwise-separable residual module, composed of two upper convolutional layers plus a lower convolutional layer. The upper convolutional layers comprise 512 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1; the lower convolutional layer comprises 512 convolution kernels with a receptive field size of 1×1 and a stride of 1. The output is a 512-channel feature map of size 6×6.
The twelfth layer uses the DepthwiseResidual depthwise-separable residual module, composed of two upper convolutional layers plus a lower convolutional layer. The upper convolutional layers comprise 512 depthwise-separable convolution kernels with a receptive field size of 3×3 and a stride of 1; the lower convolutional layer comprises 512 convolution kernels with a receptive field size of 1×1 and a stride of 1. The output is a 512-channel feature map of size 6×6.
The thirteenth layer is an average pooling layer with a 6×6 pooling window and a stride of 1; it outputs a 512-channel feature map of size 1×1.
The fourteenth layer is a softmax classification layer, which computes the probability that the input belongs to each class.
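For orientation only, the fourteen layers described above can be assembled into a single model as in the hedged PyTorch sketch below. The Fire and Residual classes are compact versions of the module sketches given earlier; the paddings, the activations, the single upper layer in the residual block, and the number of output classes (five) are assumptions, and the softmax of layer fourteen is applied outside the forward pass so that a standard cross-entropy loss can be used during training.

```python
import torch
import torch.nn as nn

def dw_sep(cin, cout):  # assumed depthwise-separable 3x3: depthwise conv then pointwise 1x1
    return nn.Sequential(nn.Conv2d(cin, cin, 3, padding=1, groups=cin), nn.Conv2d(cin, cout, 1))

class Fire(nn.Module):  # compact DepthwiseFire sketch (see the earlier module sketch)
    def __init__(self, cin, squeeze, expand):
        super().__init__()
        self.squeeze = nn.Conv2d(cin, squeeze, 1)
        self.left = nn.Conv2d(squeeze, expand, 1)
        self.right = nn.Sequential(dw_sep(squeeze, expand), nn.Conv2d(expand, expand, 1))
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.act(self.squeeze(x))
        return self.act(torch.cat([self.left(x), self.right(x)], 1))

class Residual(nn.Module):  # compact DepthwiseResidual sketch (see the earlier module sketch)
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(dw_sep(cin, cout), nn.ReLU(inplace=True), nn.Conv2d(cout, cout, 1))
        self.skip = nn.Identity() if cin == cout else nn.Conv2d(cin, cout, 1)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class CopperFoilCNN(nn.Module):
    def __init__(self, num_classes=5):                      # number of defect + normal classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # layer 1: 32ch, 48x48
            nn.MaxPool2d(3, stride=2, padding=1),                             # layer 2: 32ch, 24x24
            Fire(32, 8, 32),                                                  # layer 3: 64ch, 24x24
            Fire(64, 12, 48),                                                 # layer 4: 96ch, 24x24
            Fire(96, 16, 64),                                                 # layer 5: 128ch, 24x24
            nn.MaxPool2d(3, stride=2, padding=1),                             # layer 6: 128ch, 12x12
            Fire(128, 24, 96),                                                # layer 7: 192ch, 12x12
            nn.MaxPool2d(3, stride=2, padding=1),                             # layer 8: 192ch, 6x6
            Residual(192, 256),                                               # layer 9: 256ch, 6x6
            Residual(256, 256),                                               # layer 10: 256ch, 6x6
            Residual(256, 512),                                               # layer 11: 512ch, 6x6
            Residual(512, 512),                                               # layer 12: 512ch, 6x6
            nn.AvgPool2d(6, stride=1),                                        # layer 13: 512ch, 1x1
        )
        self.classifier = nn.Linear(512, num_classes)                         # layer 14: class scores

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)       # raw scores; layer 14's softmax is applied by the loss or at inference

logits = CopperFoilCNN()(torch.zeros(2, 1, 96, 96))
print(torch.softmax(logits, dim=1).shape)   # torch.Size([2, 5]) -> class probabilities (layer 14)
```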
(4) Input the sample images into the convolutional neural network model for iterative training. Set the number of epochs to 400 and output the validation set accuracy once per epoch. The parameters can be fine-tuned or the number of epochs modified to obtain the optimal model.
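A minimal training-loop sketch for this step is shown below. The random stand-in data, the tiny stand-in model, the Adam optimizer, the learning rate, the batch size, and the checkpointing of the best validation accuracy are all assumptions; in practice the loop would use the expanded 9:1 data sets and the fourteen-layer model described above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins so the sketch runs on its own; in practice these would be the expanded
# training/validation sets and the fourteen-layer model described above.
model = nn.Sequential(nn.Flatten(), nn.Linear(96 * 96, 5))
train_set = TensorDataset(torch.randn(90, 1, 96, 96), torch.randint(0, 5, (90,)))
val_set = TensorDataset(torch.randn(10, 1, 96, 96), torch.randint(0, 5, (10,)))
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)

criterion = nn.CrossEntropyLoss()                            # softmax + cross-entropy on the class scores
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # optimizer and learning rate are assumptions

best_acc = 0.0
for epoch in range(400):                                     # 400 iteration cycles as in step (4)
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    model.eval()                                             # report validation accuracy once per epoch
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            correct += (model(images).argmax(1) == labels).sum().item()
            total += labels.numel()
    acc = correct / total
    print(f"epoch {epoch + 1}: validation accuracy {acc:.3f}")
    if acc > best_acc:                                       # keep the best model seen so far
        best_acc = acc
        torch.save(model.state_dict(), "best_model.pt")
```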
(5) Input images of the copper foil substrate to be inspected into the detection model to identify the image category, realizing online automatic detection of defective copper foil substrate products, thereby improving work efficiency and ensuring product quality.
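An inference sketch for this step is given below, assuming a trained PyTorch model, OpenCV preprocessing to the 96×96 grayscale input, and a softmax over the class scores; the class names and the file name in the commented usage example are hypothetical.

```python
import cv2
import torch

def detect(model, image_path, class_names):
    """Classify one copper foil substrate image with the trained detection model (sketch)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (96, 96))                               # match the 96x96x1 model input
    x = torch.from_numpy(img).float().div(255.0).reshape(1, 1, 96, 96)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]                 # softmax class probabilities
    idx = int(probs.argmax())
    return class_names[idx], float(probs[idx])

# Hypothetical usage on the production line: anything other than "normal" is flagged.
# label, confidence = detect(model, "frame_0001.png", ["normal", "scratch", "dent", "stain", "pinhole"])
# if label != "normal":
#     print(f"defect detected: {label} ({confidence:.2f})")
```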
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911095396.1A CN111415325B (en) | 2019-11-11 | 2019-11-11 | A Copper Foil Substrate Defect Detection Method Based on Convolutional Neural Network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911095396.1A CN111415325B (en) | 2019-11-11 | 2019-11-11 | A Copper Foil Substrate Defect Detection Method Based on Convolutional Neural Network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111415325A true CN111415325A (en) | 2020-07-14 |
| CN111415325B CN111415325B (en) | 2023-04-25 |
Family
ID=71490708
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911095396.1A Active CN111415325B (en) | 2019-11-11 | 2019-11-11 | A Copper Foil Substrate Defect Detection Method Based on Convolutional Neural Network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111415325B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112669292A (en) * | 2020-12-31 | 2021-04-16 | 上海工程技术大学 | Method for detecting and classifying defects on painted surface of aircraft skin |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108985348A (en) * | 2018-06-25 | 2018-12-11 | 西安理工大学 | Calligraphic style recognition methods based on convolutional neural networks |
| US20180353348A1 (en) * | 2017-06-13 | 2018-12-13 | The Procter & Gamble Company | Systems and Methods for Inspecting Absorbent Articles on A Converting Line |
| US20190005357A1 (en) * | 2017-06-28 | 2019-01-03 | Applied Materials, Inc. | Classification, search and retrieval of semiconductor processing metrology images using deep learning/convolutional neural networks |
| CN109615609A (en) * | 2018-11-15 | 2019-04-12 | 北京航天自动控制研究所 | A Deep Learning-Based Solder Joint Defect Detection Method |
| CN109859207A (en) * | 2019-03-06 | 2019-06-07 | 华南理工大学 | A kind of defect detection method of high-density flexible substrate |
| CN110060238A (en) * | 2019-04-01 | 2019-07-26 | 桂林电子科技大学 | Pcb board based on deep learning marks print quality inspection method |
| CN110378338A (en) * | 2019-07-11 | 2019-10-25 | 腾讯科技(深圳)有限公司 | A kind of text recognition method, device, electronic equipment and storage medium |
- 2019-11-11 CN CN201911095396.1A patent/CN111415325B/en active Active
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180353348A1 (en) * | 2017-06-13 | 2018-12-13 | The Procter & Gamble Company | Systems and Methods for Inspecting Absorbent Articles on A Converting Line |
| US20190005357A1 (en) * | 2017-06-28 | 2019-01-03 | Applied Materials, Inc. | Classification, search and retrieval of semiconductor processing metrology images using deep learning/convolutional neural networks |
| CN108985348A (en) * | 2018-06-25 | 2018-12-11 | 西安理工大学 | Calligraphic style recognition methods based on convolutional neural networks |
| CN109615609A (en) * | 2018-11-15 | 2019-04-12 | 北京航天自动控制研究所 | A Deep Learning-Based Solder Joint Defect Detection Method |
| CN109859207A (en) * | 2019-03-06 | 2019-06-07 | 华南理工大学 | A kind of defect detection method of high-density flexible substrate |
| CN110060238A (en) * | 2019-04-01 | 2019-07-26 | 桂林电子科技大学 | Pcb board based on deep learning marks print quality inspection method |
| CN110378338A (en) * | 2019-07-11 | 2019-10-25 | 腾讯科技(深圳)有限公司 | A kind of text recognition method, device, electronic equipment and storage medium |
Non-Patent Citations (3)
| Title |
|---|
| SESEN_S: "Anchor-free object detection series 2: an interpretation of CornerNet-Lite" (in Chinese) * |
| VENKAT ANIL ADIBHATLA 等: "Detecting Defects in PCB using Deep Learning via Convolution Neural Networks" * |
| WANG Yongli et al.: "PCB defect detection and recognition algorithm based on convolutional neural network" (in Chinese) * |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112669292A (en) * | 2020-12-31 | 2021-04-16 | 上海工程技术大学 | Method for detecting and classifying defects on painted surface of aircraft skin |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111415325B (en) | 2023-04-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111507990B (en) | Tunnel surface defect segmentation method based on deep learning | |
| CN105957086B (en) | A Change Detection Method of Remote Sensing Image Based on Optimal Neural Network Model | |
| CN110992317A (en) | A method for detecting PCB board defects based on semantic segmentation | |
| CN111476307B (en) | Lithium battery surface defect detection method based on depth field adaptation | |
| CN109376792A (en) | Appearance defect classification method of photovoltaic cells based on multi-channel residual neural network | |
| CN109272500B (en) | Fabric classification method based on adaptive convolutional neural network | |
| CN109239102A (en) | A kind of flexible circuit board open defect detection method based on CNN | |
| CN110097053A (en) | A kind of power equipment appearance defect inspection method based on improvement Faster-RCNN | |
| CN104850858A (en) | Injection-molded product defect detection and recognition method | |
| CN116071338A (en) | Method, device and equipment for detecting surface defects of steel plate based on YOLOX | |
| CN111553433B (en) | Lithium battery defect classification method based on multi-scale convolution feature fusion network | |
| CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
| CN111351860A (en) | Wood internal defect detection method based on Faster R-CNN | |
| CN107123111A (en) | A kind of depth residual error net structure method for mobile phone screen defects detection | |
| CN107016396B (en) | Method for deep learning and identifying image characteristics of assembly connecting piece | |
| CN111709477A (en) | A method and tool for garbage classification based on improved MobileNet network | |
| CN111932639B (en) | A detection method of unbalanced defect samples based on convolutional neural network | |
| CN111598854A (en) | Complex texture small defect segmentation method based on rich robust convolution characteristic model | |
| CN114743102A (en) | A kind of defect detection method, system and device for furniture board | |
| CN115631186B (en) | A method for surface defect detection of industrial components based on dual-branch neural network | |
| CN103090946A (en) | Method and system for measuring single fruit tree yield | |
| CN116958053B (en) | A bamboo stick counting method based on yolov4-tiny | |
| CN111415325B (en) | A Copper Foil Substrate Defect Detection Method Based on Convolutional Neural Network | |
| CN117636057A (en) | Train bearing damage classification and identification method based on multi-branch cross-space attention model | |
| CN112950613B (en) | Surface defect detection method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |