CN114821182A - Rice growth stage image recognition method - Google Patents
- Publication number: CN114821182A
- Application number: CN202210494136.7A
- Authority
- CN
- China
- Prior art keywords
- swin
- transformer
- odrl
- rice
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/24 — Pattern recognition; Analysing; Classification techniques
- G06F18/214 — Pattern recognition; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
- G06N3/08 — Neural networks; Learning methods
Description
Technical Field

The invention relates to the field of crop image recognition methods, and in particular to an image recognition method for rice growth stages.

Background

To cultivate rice well, it is necessary to understand its various growth stages and take the appropriate planting measures at each stage. In current smart agriculture, images are generally collected at each growth stage of rice and then identified to determine which stage the rice is currently in, and the corresponding planting measures are taken based on the identification results. How to determine the current growth state of rice by identifying growth stage images is therefore an important factor affecting intelligent rice management and planting.

With the continuous development of computer technology, more and more techniques can be combined with other fields. In agriculture, accurate, rapid, and convenient identification technology reduces costs for agricultural workers and has a positive impact on crop yields. Most existing methods use convolutional networks as the model for detecting and identifying agricultural crops. Convolutional networks are easy to use, but detection with convolution-based models suffers from limited accuracy. Transformer-based models outperform traditional convolutional networks, but they require large data sets for training. In agriculture, data sets are often small and mostly have to be collected and annotated by hand; producing a large data set requires substantial manpower and material resources and is therefore costly. There is currently a lack of a recognition model that achieves high accuracy on small-scale data sets.
Summary of the Invention

The purpose of the present invention is to provide an image recognition method for rice growth stages, so as to solve the problems in the prior art of low rice identification accuracy and the need for large data sets for training.

To achieve the above object, the technical scheme adopted by the present invention is as follows:

An image recognition method for rice growth stages, comprising the following steps:
Step 1: Acquire multiple images of rice at different growth stages as a data set;

Step 2: Divide the data set obtained in Step 1 into a training set and a test set, preprocess the data in each set, and then apply data augmentation to each;

Step 3: Take the Swin Transformer model as the basis. The Swin Transformer module consists of patch partition, linear embedding, and several Swin Transformer Blocks. Add Optimized Dense Relative Localization (ODRL) to the Swin Transformer Block in the Swin Transformer model, and add a dense relative localization loss to the loss function, thereby constructing the ODRL-Swin Transformer model;

In the original Swin Transformer module, the feature map is partitioned into non-overlapping windows according to the window size, and the elements in a window are called blocks. Optimized dense relative localization learns the relative positions between blocks, allowing the Swin Transformer to fuse more spatial information without additional annotation. In the ODRL-Swin Transformer model, optimized dense relative localization densely samples the blocks in the original Swin Transformer windows: two blocks in a window are selected at random during sampling (here the two selected blocks are called an embedding pair), and spatial information is then collected in two ways, by computing the geometric relative position distance of the embedding pair and by using an MLP to predict that relative distance. A spatial relative position loss is added to the Swin Transformer's loss function to further guide the computation of the embedding pair's relative position;

Step 4: Use the training set from Step 2 to train the ODRL-Swin Transformer model constructed in Step 3, and adjust the model's parameters based on the training results and the test set from Step 2 until the parameters reach the optimal configuration;

Step 5: Input the rice growth stage image data to be identified into the ODRL-Swin Transformer model with the optimal configuration parameters obtained in Step 4; the model outputs the predicted recognition result for the rice growth stage.

Further, the preprocessing in Step 2 includes removing duplicate images and deleting corrupted images from the acquired images of rice at different growth stages, deleting mismatched information from the annotation files, and dividing the data set into a training set, a test set, and a validation set at a ratio of 7:2:1.
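The 7:2:1 split described above can be sketched as follows (a minimal illustration; the shuffling, the fixed seed, and the in-memory list of samples are assumptions, not specified by the patent):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle samples and split them into train/test/validation at the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]       # remainder goes to the validation set
    return train, test, val

train, test, val = split_dataset(range(100))
```

With 100 samples this yields 70/20/10 disjoint subsets covering the whole data set.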
Further, the data augmentation in Step 2 includes Mosaic augmentation, random flipping, resizing, and random cropping.

Further, in Step 4, the detection accuracy of the Swin Transformer is greatly improved compared with traditional convolutional networks, but the Swin Transformer's accuracy is low on small-scale data sets; optimized dense relative localization and the dense relative localization loss enable the Swin Transformer to perform equally well on small-scale data sets.

Further, during training in Step 4, the training set data is fed into the ODRL-Swin Transformer model to obtain output results, the error between the outputs and the test set is computed, and the model's configuration parameters are adjusted based on the error calculation until it meets expectations; at that point the configuration parameters of the ODRL-Swin Transformer model are the optimal configuration parameters.

The present invention detects rice images to be identified based on the ODRL-Swin Transformer model, thereby detecting the different stages of rice. In the ODRL-Swin Transformer model used, the optimization is added at the Swin Transformer Block of the Swin Transformer, and the model outputs the final prediction of the rice growth stage.

The present invention ensures accurate recognition of rice images and achieves precise identification of rice with small-scale data sets. It can efficiently and accurately identify which growth stage the rice is in, so that farmers can take the most suitable planting measures and increase rice yield.
Brief Description of the Drawings

Figure 1 is a flow chart of the method of the present invention.

Figure 2 is a structural diagram of the ODRL-Swin Transformer model of the present invention.

Figure 3 is a structural diagram of the Optimized Dense Relative Localization part of the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and embodiments.

As shown in Figure 1, the image recognition method for rice growth stages of the present invention includes the following steps:
(1) Prepare the data set:

Collect image data of rice at different growth stages in the field and online to form a data set;

(2) Process the data set:

First, obtain the specific labels and counts of the various rice growth stage images from the data set of step (1) and remove duplicate and abnormal data. Then divide the data set into a training set, a test set, and a validation set at a ratio of 7:2:1, and preprocess the data in each set. The preprocessing includes removing duplicate images and deleting corrupted images from the acquired rice images, and deleting mismatched information from the annotation files.
(3) Data augmentation:

Apply data augmentation to the training set and the test set separately. The augmentation includes Mosaic augmentation, RandomFlip, Resize, and RandomCrop. After augmentation, the data is padded (Pad) to avoid feature loss and preserve the features of the rice data set.
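A minimal sketch of the flip / crop / pad part of this pipeline, using NumPy arrays as stand-ins for images (Mosaic augmentation is omitted, and the 224/200 sizes are illustrative assumptions, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    """Horizontally flip the image with probability 0.5 (RandomFlip)."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_crop(img, size):
    """Crop a random size x size patch from the image (RandomCrop)."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def pad_to(img, h, w):
    """Zero-pad the image on the bottom/right up to (h, w) (Pad),
    restoring a fixed input size after cropping."""
    ph, pw = h - img.shape[0], w - img.shape[1]
    return np.pad(img, ((0, ph), (0, pw), (0, 0)))

img = rng.random((224, 224, 3))
aug = pad_to(random_crop(random_flip(img), 200), 224, 224)
```

Each call produces a differently flipped/cropped view of the same sample, which is the usual way to enlarge an effectively small data set.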
(4) Construct the ODRL-Swin Transformer model based on the Swin Transformer:

Add Optimized Dense Relative Localization to the Swin Transformer Block in the Swin Transformer model, and add a dense relative localization loss to the loss function, thereby constructing the ODRL-Swin Transformer model.

In the original Swin Transformer Block, the feature map obtained after patch partition and linear embedding is divided into non-overlapping windows according to the window size; the elements in a window are called blocks. Optimized dense relative localization learns the relative positions between blocks so that the Swin Transformer can fuse more spatial information without additional annotation. In our ODRL-Swin Transformer model, optimized dense relative localization densely samples the blocks in the original Swin Transformer windows: two blocks in a window are selected at random (the two selected blocks are called an embedding pair), and spatial information is collected in two ways, by computing the geometric relative position distance of the embedding pair and by using an MLP to predict that distance. This is implemented as follows:

Given an image x, the Swin Transformer Block in the Swin Transformer model partitions the feature map obtained after patch partition and linear embedding with a window size of 7×7, dividing the input feature map into H×W windows of equal size, where H is the number of window rows and W the number of window columns after partitioning. One of the windows can be expressed as G_x = {e_{i,j}}, 1 ≤ i ≤ H, 1 ≤ j ≤ W, where e_{i,j} ∈ R^D denotes an embedding block and D is the dimension of the partitioned window space; i is the row index and j the column index of the embedding block.
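The window partitioning described above can be sketched as a reshape/transpose over the feature map (a hedged illustration; the 56×56×4 feature-map size used in the example is an assumption):

```python
import numpy as np

def window_partition(feat, win=7):
    """Split a (H*win, W*win, D) feature map into non-overlapping win x win windows.

    Returns an array of shape (H*W, win, win, D): one entry per window,
    each holding its win x win grid of D-dimensional embedding blocks.
    """
    h, w, d = feat.shape
    assert h % win == 0 and w % win == 0
    H, W = h // win, w // win
    x = feat.reshape(H, win, W, win, d)
    x = x.transpose(0, 2, 1, 3, 4)       # (H, W, win, win, d)
    return x.reshape(H * W, win, win, d)

feat = np.arange(56 * 56 * 4, dtype=float).reshape(56, 56, 4)
windows = window_partition(feat)         # 8 x 8 = 64 windows of 7 x 7 blocks
```

Window k then contains the blocks of one 7×7 patch of the grid, which is exactly the unit from which the embedding pairs are later sampled.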
For each G_x, the optimized dense relative localization module randomly samples multiple pairs of embeddings, and for each sampled pair (e_{i,j}, e_{p,h}) computes a 2D normalized translation offset (t_u, t_v)^T ∈ [0, 1]^2, where sign(·) denotes the sign function, which returns 1 for a positive argument, 0 for zero, and −1 for a negative argument, and α is the breakpoint of the piecewise function, with a default value of 4. Here p and h denote the row and column indices of the other embedding block in the pair.
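Pair sampling and target-offset computation might look like the sketch below. Note that the patent's exact piecewise formula (involving sign() and the breakpoint α = 4) is given only as an image and is not reproduced here; plain |Δ| / (grid size − 1) normalization is used as a stand-in assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pair_targets(H, W, n_pairs):
    """Randomly sample embedding-pair positions in an H x W grid and compute
    normalized target offsets (t_u, t_v) in [0, 1].

    ASSUMPTION: the patent's exact piecewise normalization (with sign() and
    breakpoint alpha = 4) is not reproduced; simple |delta| / (size - 1)
    normalization stands in for it here.
    """
    pairs, targets = [], []
    for _ in range(n_pairs):
        i, p = rng.integers(0, H, size=2)
        j, h = rng.integers(0, W, size=2)
        t_u = abs(int(i) - int(p)) / max(H - 1, 1)   # normalized row offset
        t_v = abs(int(j) - int(h)) / max(W - 1, 1)   # normalized column offset
        pairs.append(((i, j), (p, h)))
        targets.append((t_u, t_v))
    return pairs, np.array(targets)

pairs, targets = sample_pair_targets(7, 7, 32)
```

The resulting (t_u, t_v) targets are what the MLP described next is trained to predict.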
The selected embedding vectors e_{i,j} and e_{p,h} are concatenated, and the concatenated vector is fed into the multilayer perceptron (MLP) of the optimized dense relative localization module. This MLP has two hidden layers and two output neurons and is used to predict the relative distance between grid positions (i, j) and (p, h): d_u denotes the predicted distance along the horizontal axis, and d_v the predicted distance along the vertical axis. Its structure is shown in Figure 3, and it is computed as:

(d_u, d_v)^T = f(e_{i,j}, e_{p,h})^T
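The relative-distance MLP can be sketched in NumPy (the hidden width of 64, the embedding dimension of 96, and the ReLU activation are assumptions; the patent only specifies two hidden layers and two output neurons):

```python
import numpy as np

rng = np.random.default_rng(2)

def make_mlp(in_dim, hidden=64):
    """Random weights for an MLP with two hidden layers and two outputs
    (layer sizes and activation are assumptions, not from the patent)."""
    sizes = [in_dim, hidden, hidden, 2]
    return [(rng.standard_normal((a, b)) * 0.05, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp_predict(params, e_ij, e_ph):
    """Concatenate the embedding pair and predict (d_u, d_v)."""
    x = np.concatenate([e_ij, e_ph])
    for k, (W, b) in enumerate(params):
        x = x @ W + b
        if k < len(params) - 1:
            x = np.maximum(x, 0.0)       # ReLU on the two hidden layers
    return x                              # two output neurons: (d_u, d_v)

D = 96                                    # embedding dimension (assumed)
params = make_mlp(2 * D)
d = mlp_predict(params, rng.standard_normal(D), rng.standard_normal(D))
```

The two outputs play the role of (d_u, d_v) in the formula above; training drives them toward the target offsets (t_u, t_v).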
Let B denote the number of images processed simultaneously when the model runs in parallel (the batch size). The dense relative localization loss proposed by the present invention, L_drloc, averages the L1 error between the predicted offsets (d_u, d_v) and the target offsets (t_u, t_v) over the sampled pairs of the batch:

L_drloc = (1/B) Σ ( |d_u − t_u| + |d_v − t_v| )

This loss is added to the Swin Transformer's standard cross-entropy loss L_ce, so the final total loss is:

L = L_ce + λ · L_drloc

with λ = 0.5 used initially in the present invention. Introducing this regularization loss lets the Swin Transformer learn spatial information without additional manual annotation, effectively learning spatial relationships within images without relying on large data sets.
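The loss combination can be sketched as follows (the L1 reduction over pairs is an assumption consistent with the cited "Efficient Training of Visual Transformers with Small Datasets" paper; λ = 0.5 is from the text, and the numeric values are illustrative):

```python
import numpy as np

def drloc_loss(pred, target):
    """L1 dense relative localization loss, averaged over the sampled pairs.

    pred, target: arrays of shape (n_pairs, 2) holding (d_u, d_v) and (t_u, t_v).
    """
    return np.abs(pred - target).sum(axis=-1).mean()

def total_loss(ce_loss, pred, target, lam=0.5):
    """Total loss: standard cross-entropy plus lambda * L_drloc (lambda = 0.5)."""
    return ce_loss + lam * drloc_loss(pred, target)

pred = np.array([[0.2, 0.4], [0.6, 0.1]])
target = np.array([[0.3, 0.4], [0.5, 0.3]])
loss = total_loss(1.0, pred, target)      # 1.0 + 0.5 * 0.2 = 1.1
```

Because L_drloc acts as a regularizer on the classification objective, no extra annotation beyond the class labels is needed.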
In the present invention, the Optimized Dense Relative Localization module is added to the Swin Transformer Block of the Swin Transformer model, turning the Swin Transformer Block into an ODRL-Swin Transformer Block; the structure of the modified ODRL-Swin Transformer network is shown in Figure 2. This lets the Swin Transformer learn spatial information without additional manual annotation: spatial information is learned by densely sampling multiple embedding pairs per image and requiring the network to predict their relative positions.

(5) Train the ODRL-Swin Transformer model and set its optimal configuration parameters:

The output of the trained ODRL-Swin Transformer model is the predicted recognition result for the rice growth stage. The classification error and regression error between the model outputs and the test set are computed, and according to the validation and test results the configuration parameters of the trained ODRL-Swin Transformer model are adjusted to the optimal configuration. The data set of rice growth stage images to be identified is then input into the ODRL-Swin Transformer model with its parameters set to the optimal configuration, and the model outputs the final rice prediction and recognition result.
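The "adjust the configuration parameters until the error meets expectations" step amounts to model selection over candidate configurations. A generic sketch (the candidate configurations and error values below are purely illustrative assumptions; the patent does not specify a search procedure):

```python
def select_best_config(configs, train_and_eval):
    """Try each candidate configuration and keep the one with the lowest
    test error (a generic model-selection loop, not the patent's exact procedure)."""
    best_cfg, best_err = None, float("inf")
    for cfg in configs:
        err = train_and_eval(cfg)   # train the model with cfg, return test error
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

# toy stand-in for "train the model and measure error on the test set"
errors = {"lr=1e-3": 0.18, "lr=1e-4": 0.12, "lr=1e-5": 0.25}
best, err = select_best_config(errors, lambda cfg: errors[cfg])
```

The configuration returned is what the text calls the "optimal configuration parameters", which the deployed model then uses for inference.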
By recognizing images of rice at different stages, these rice image data are taken as the data set to be identified and input into the final ODRL-Swin Transformer model, which outputs the detection result, i.e., which growth stage of rice the input image belongs to, achieving precise discrimination and detection.

The embodiments described herein merely describe preferred implementations of the present invention and do not limit its concept and scope. Without departing from the design idea of the present invention, any variations and improvements made to the technical scheme by engineers skilled in the art shall fall within the protection scope of the present invention; the technical content claimed by the present invention is fully recorded in the claims.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210494136.7A CN114821182A (en) | 2022-05-05 | 2022-05-05 | Rice growth stage image recognition method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114821182A true CN114821182A (en) | 2022-07-29 |
Family
ID=82512188
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210494136.7A Pending CN114821182A (en) | 2022-05-05 | 2022-05-05 | Rice growth stage image recognition method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114821182A (en) |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116863403A * | 2023-07-11 | 2023-10-10 | 仲恺农业工程学院 | Crop big data environment monitoring method and device and electronic equipment |
| CN116863403B * | 2023-07-11 | 2024-01-02 | 仲恺农业工程学院 | Crop big data environment monitoring method and device and electronic equipment |
| CN117935138A * | 2023-12-14 | 2024-04-26 | 广东东为信息技术有限公司 | Rice planting monitoring method, device, equipment and storage medium |
| CN119360372A * | 2024-09-30 | 2025-01-24 | 东北石油大学 | A deep learning-enhanced intelligent microscopic image recognition and screening method for pollen fossils |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170351790A1 (en) * | 2016-06-06 | 2017-12-07 | The Climate Corporation | Data assimilation for calculating computer-based models of crop growth |
| CN109740483A (en) * | 2018-12-26 | 2019-05-10 | 南宁五加五科技有限公司 | A kind of rice growing season detection method based on deep-neural-network |
| US20210183045A1 (en) * | 2018-08-30 | 2021-06-17 | Ntt Data Ccs Corporation | Server of crop growth stage determination system, growth stage determination method, and storage medium storing program |
| CN113505810A (en) * | 2021-06-10 | 2021-10-15 | 长春工业大学 | Pooling vision-based method for detecting weed growth cycle by using Transformer |
| CN113610108A (en) * | 2021-07-06 | 2021-11-05 | 中南民族大学 | Rice pest identification method based on improved residual error network |
| CN114066820A (en) * | 2021-10-26 | 2022-02-18 | 武汉纺织大学 | A fabric defect detection method based on Swin-Transformer and NAS-FPN |
Worldwide applications: 2022-05-05 — CN202210494136.7A filed in China (CN); status: active, Pending.
Non-Patent Citations (1)
| Title |
|---|
| YAHUI LIU et al.: "Efficient Training of Visual Transformers with Small Datasets", Retrieved from the Internet <URL:https://arxiv.org/abs/2106.03746v2> * |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication |

Application publication date: 20220729