CN111814563A - Planting structure classification method and device - Google Patents
Planting structure classification method and device
- Publication number
- CN111814563A (application CN202010526298.5A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- index
- network model
- planting structure
- classifying
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/188—Vegetation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present invention relates to a planting structure classification method and device, belonging to the field of agricultural information technology.
Background Art
The crop planting structure is an important component of the spatial pattern of crops, and information on its spatiotemporal change is an important expression of how crops are planted in a region or production unit. Regional crop yield estimation and the adjustment and optimization of agricultural planting structures depend on accurate and timely information on planting structure change; extracting crop planting structures with remote sensing combined with emerging technologies therefore has promising applications and great development potential.
At the same time, with the development of machine learning, algorithms such as neural networks (NNs) and support vector machines (SVMs) have been applied to the classification of high-resolution remote sensing images. However, these are shallow learning algorithms that cannot represent complex functions well, so they cannot handle semantic segmentation problems with large sample sizes and high complexity.
Subsequently, exploiting the automatic feature extraction and automatic classification capabilities of convolutional neural networks (CNNs), researchers applied them to the field of remote sensing to classify planting structures. However, it should be noted that the data sources of current methods are too limited: the quality of land-use classification within villages is low, and accurate land-use information for villages cannot be obtained.
Summary of the Invention
The object of the present invention is to provide a planting structure classification method and device, so as to solve the problem in the prior art that accurate land-use information for villages cannot be obtained.
To achieve the above object, the technical solution of the planting structure classification method of the present invention comprises the following steps:
1) Obtain original remote sensing image data.
2) Preprocess the original remote sensing image data:
extract at least one planting structure index among NDVI, AWEI and SAVI from the original remote sensing image data, where NDVI is the Normalized Difference Vegetation Index, AWEI is the Automated Water Extraction Index, and SAVI is the Soil Adjusted Vegetation Index;
add the extracted planting structure indices to the original remote sensing image data to obtain an image;
fuse the images of the multiple bands;
label the fused images to obtain a training set.
3) Build a network model.
4) Input the training set into the constructed network model for training to obtain a trained network model.
5) Preprocess the remote sensing image to be classified and input it into the trained network model to classify the planting structure.
The beneficial effects of the present invention are as follows:
By preprocessing the remote sensing data, that is, by extracting planting structure indices from it, adding the determined indices to the original remote sensing image data, and fusing the images of the multiple bands, the present invention increases the spatial resolution and spectral richness of the training images. This allows the parameters to be determined in the subsequent network model training, so that the remote sensing images to be classified are classified accurately.
Further, the fusion adopts the Gram-Schmidt (G-S) image fusion method.
Further, the formula of the Normalized Difference Vegetation Index is:
NDVI = (NIR - RED) / (NIR + RED)
where NIR is the near-infrared band reflectance and RED is the red band reflectance.
Further, the formula of the Automated Water Extraction Index is:
AWEI = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2
where BLUE is the blue band, GREEN is the green band, and SWIR1 and SWIR2 are the shortwave infrared bands.
Further, the formula of the Soil Adjusted Vegetation Index is:
SAVI = (NIR - RED)(1 + L) / (NIR + RED + L)
where L is the soil adjustment factor corresponding to the vegetation coverage; L is taken as 0.5.
Further, the constructed network model is a U-Net network model, and the U-Net network model includes depthwise separable convolution modules and an activation function.
The present invention also provides a technical solution of a planting structure classification device. The device includes a processor and a memory, and the processor executes the technical solution of the above planting structure classification method stored in the memory.
Brief Description of the Drawings
Fig. 1 is a flowchart of an embodiment of the planting structure classification method of the present invention;
Fig. 2 is a schematic structural diagram of the U-Net network model of the present invention;
Fig. 3-a is the training accuracy curve of the conventional U-Net;
Fig. 3-b is the training accuracy curve of the Separable-UNet of the present invention;
Fig. 3-c is the training accuracy curve of the Mish-Separable-UNet of the present invention;
Fig. 4-a is the classification result using the seven bands of the original remote sensing image data;
Fig. 4-b is the classification result of the present invention with NDVI added to the seven bands of the original remote sensing image data;
Fig. 4-c is the classification result of the present invention with AWEI added to the seven bands of the original remote sensing image data;
Fig. 4-d is the classification result of the present invention with SAVI added to the seven bands of the original remote sensing image data;
Fig. 4-e is the classification result of the present invention with NDVI and AWEI added to the seven bands of the original remote sensing image data;
Fig. 4-f is the classification result of the present invention with NDVI and SAVI added to the seven bands of the original remote sensing image data;
Fig. 4-g is the classification result of the present invention with AWEI and SAVI added to the seven bands of the original remote sensing image data;
Fig. 4-h is the classification result of the present invention with all three planting structure indices added to the seven bands of the original remote sensing image data;
Fig. 5 is a schematic structural diagram of an embodiment of the planting structure classification device of the present invention.
Detailed Description
The solution of the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment of the planting structure classification method:
Taking an irrigation district of a city in a certain province as an example, the planting structure classification method of the present invention is described in detail using the Landsat 8 remote sensing image data of the district on a certain day in May, already stored in the database.
As shown in Fig. 1, the planting structure classification method of the present invention specifically comprises the following steps:
1) Obtain original remote sensing image data.
In this embodiment, the original remote sensing image data is an acquired Landsat 8 multispectral satellite data set.
2) Preprocess the original remote sensing image data:
extract at least one planting structure index among NDVI, AWEI and SAVI from the original remote sensing image data, where NDVI is the Normalized Difference Vegetation Index, AWEI is the Automated Water Extraction Index, and SAVI is the Soil Adjusted Vegetation Index;
add the extracted planting structure indices to the original remote sensing image data to obtain an image;
fuse the images of the multiple bands;
label the fused images to obtain a training set.
The Normalized Difference Vegetation Index (NDVI) exploits the spectral behavior of chlorophyll in vegetation: low reflectance in the red portion of the visible spectrum, relatively high reflectance in the green band, and reflectance that gradually decreases from the near-infrared to the mid-infrared range. Based on these characteristics, the most representative normalized difference vegetation index is constructed with the following formula:
NDVI = (NIR - RED) / (NIR + RED)
where NIR is the near-infrared band reflectance and RED is the red band reflectance.
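As an illustrative sketch (not part of the patent itself), the NDVI formula above can be computed per pixel with NumPy; the band arrays are assumed to hold surface reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    denom = nir + red
    # Where both reflectances are zero, define NDVI as 0 to avoid division by zero.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Dense vegetation reflects strongly in NIR, so its NDVI is close to 1.
print(ndvi([0.50, 0.30, 0.10], [0.08, 0.10, 0.10]))
```

Values always fall in [-1, 1]; bare soil and water typically give values near or below zero.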
The Automated Water Extraction Index (AWEI) addresses problems in water body extraction such as low classification accuracy and relatively unstable threshold selection. Its formula is as follows:
AWEI_sh = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2
where BLUE is the blue band, GREEN is the green band, and SWIR1 and SWIR2 are the shortwave infrared bands.
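As an illustrative check of the AWEI_sh formula (a sketch, not from the patent; the reflectance values below are made up):

```python
def awei_sh(blue, green, nir, swir1, swir2):
    """AWEI_sh = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2."""
    return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2

# Water reflects some visible light but absorbs NIR/SWIR strongly,
# so water pixels come out positive and dry land negative.
water = awei_sh(0.06, 0.08, 0.02, 0.01, 0.01)  # 0.2125
soil = awei_sh(0.10, 0.12, 0.30, 0.35, 0.30)   # -0.65
print(water, soil)
```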
The Soil Adjusted Vegetation Index (SAVI) introduces a soil adjustment factor to improve the discrimination ability of the normalized index under different soil backgrounds. Its formula is as follows:
SAVI = (NIR - RED)(1 + L) / (NIR + RED + L)
where L is the soil adjustment factor corresponding to the vegetation coverage: L = 0 corresponds to no vegetation and L = 1 to complete vegetation coverage. Based on the field conditions of the study area, L = 0.5 is selected as the optimal factor value.
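A corresponding sketch of the SAVI computation (illustrative only; note that with L = 0 it reduces to NDVI, which is a convenient sanity check):

```python
def savi(nir, red, L=0.5):
    """SAVI = (NIR - RED) * (1 + L) / (NIR + RED + L); the embodiment uses L = 0.5."""
    return (nir - red) * (1.0 + L) / (nir + red + L)

print(savi(0.50, 0.08))         # (0.42 * 1.5) / 1.08 = 0.5833...
print(savi(0.50, 0.08, L=0.0))  # equals NDVI: 0.42 / 0.58 = 0.7241...
```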
It should be noted that in this embodiment, the three planting indices NDVI, AWEI and SAVI are selected based on their different characteristics and added to the band 1-7 remote sensing image to increase its spectral complexity. NDVI and SAVI effectively distinguish spectral differences between vegetation types and between vegetation and soil, while AWEI efficiently identifies water bodies. Using these three planting structure indices together clarifies the land use of the region, which is of great significance for subsequent rational land planning.
As other embodiments, one or two of the above indices may also be selected to obtain the index images. It should be noted, however, that when only one or two are selected, for example AWEI alone, water bodies are identified and distinguished efficiently, but the distinction between vegetation types is not obvious.
The G-S image fusion method is used to fuse the images of the multiple bands.
It should be pointed out that the synthesis of the indices with the remote sensing image in this embodiment proceeds as follows: the dimensionality of the image matrix data is increased to three, and the multiple band images and the index image data matrices are stacked along the second dimension of the three-dimensional matrix to generate three-dimensional multi-channel data, realizing the synthesis of the indices with the remote sensing image.
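The stacking step above can be sketched with NumPy as follows. This is an assumption-laden illustration: it uses random stand-in data and a channels-last layout rather than the exact second-dimension layout the embodiment describes.

```python
import numpy as np

h, w = 128, 128
bands = [np.random.rand(h, w) for _ in range(7)]  # Landsat bands 1-7 (stand-in data)
ndvi_img = np.random.rand(h, w)                   # index images computed beforehand
awei_img = np.random.rand(h, w)

# Raise each 2-D image into a shared 3-D array by stacking along a channel axis,
# producing the three-dimensional multi-channel data fed to the fusion step.
multichannel = np.stack(bands + [ndvi_img, awei_img], axis=-1)
print(multichannel.shape)  # (128, 128, 9): 7 bands + 2 indices
```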
Then, on the basis of the above synthesis, the synthesized data is fused with the panchromatic band image using the G-S image fusion method, achieving image fusion that improves the spatial resolution and spectral complexity of the data. The fused data obtained by the above process is sliced, input into the network, and trained together with the label data.
In this embodiment, the images are labeled according to known data or field surveys.
In this embodiment, 80% of the data is randomly selected as training data to form the training set for training the network, and the remaining 20% is used as the test set for result evaluation. The training images and validation set images are sliced into 128*128-pixel patches with one-to-one correspondence.
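The 80/20 split and 128*128 slicing can be sketched as below (illustrative only; the array shapes and random seed are assumptions, not from the patent):

```python
import numpy as np

def slice_patches(image, size=128):
    """Cut an (H, W, ...) array into non-overlapping size-by-size patches."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

rng = np.random.default_rng(0)
image = rng.random((512, 384, 9))          # fused multi-channel image
label = rng.integers(0, 5, (512, 384))     # per-pixel class labels

patches, masks = slice_patches(image), slice_patches(label)
assert len(patches) == len(masks)          # patches and masks correspond one-to-one

# Randomly assign 80% of the patch indices to training, 20% to testing.
idx = rng.permutation(len(patches))
split = int(0.8 * len(idx))
train_idx, test_idx = idx[:split], idx[split:]
print(len(train_idx), len(test_idx))  # 9 3
```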
3) Build a network model.
The network model in this embodiment adopts the U-Net network model, which includes depthwise separable convolution modules and an activation function, as shown in Fig. 2.
The depthwise separable convolution combines a depthwise (DW) part and a pointwise (PW) part to extract features. Compared with a conventional convolution operation, its parameter count and computational cost are relatively low.
The depthwise separable convolution adopted in this embodiment effectively reduces the number of parameters; its parameter count is the sum of the two parts, about one third of that of a conventional convolution. The depthwise separable convolution treats spatial regions and channels in separate steps, the depthwise part handling spatial regions channel by channel and the pointwise part then mixing the channels, whereas a conventional convolution mixes regions and channels in a single operation; this realizes the separation of channels and regions.
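The parameter saving can be checked with simple arithmetic. The sketch below is an illustration, not the patent's exact layer sizes, and biases are ignored; per layer the saving works out to 1/cout + 1/k^2 of the standard cost, while the roughly one-third figure quoted above refers to the network as a whole.

```python
def conv_params(cin, cout, k):
    """Parameters of a standard k x k convolution layer (bias ignored)."""
    return k * k * cin * cout

def separable_params(cin, cout, k):
    """Depthwise part (k*k weights per input channel) plus pointwise 1x1 part."""
    return k * k * cin + cin * cout

cin, cout, k = 64, 128, 3
regular = conv_params(cin, cout, k)         # 73728
separable = separable_params(cin, cout, k)  # 576 + 8192 = 8768
# For this layer the ratio equals 1/cout + 1/k**2 of the standard cost.
print(separable, regular, separable / regular)
```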
The Mish activation function is given by Mish(x) = x*tanh(ln(1+e^x)), where the tanh function is tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)). The range of tanh is (-1, 1), and it is a strictly monotonically increasing curve that passes through the origin and through quadrants I and III.
The Mish activation function in this embodiment avoids the saturation caused by capping; its smoothness allows information to flow deeper into the neural network, yielding better accuracy and generalization and further improving computational efficiency and accuracy.
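A minimal sketch of the Mish activation (illustrative; softplus is computed via log1p, which in pure Python can overflow for very large x, so inputs are kept moderate):

```python
import math

def mish(x):
    """Mish(x) = x * tanh(ln(1 + e^x)), i.e. x * tanh(softplus(x))."""
    return x * math.tanh(math.log1p(math.exp(x)))

print(mish(0.0))   # 0.0: the curve passes through the origin
print(mish(5.0))   # close to 5.0: nearly the identity for large positive x
print(mish(-5.0))  # small negative value: bounded below, never hard-capped
```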
4) Input the training set into the constructed network model for training to obtain a trained network model.
5) Preprocess the remote sensing image to be classified and input it into the trained network model to classify the planting structure.
The training data set is input into the network model for training, and the trained weight file is then applied to predict on the test set to obtain the classification results.
Further, in order to evaluate the accuracy of the method of the present application, a step of evaluating the test results is also included. Specifically, in this embodiment, precision, recall, their harmonic mean, and the kappa coefficient are used as evaluation indicators. These indicators are calculated from the confusion matrix, where the formula for precision is:
Precision = C_ii / Σ_j C_ji   (1)
where C_ii denotes the number of correctly classified samples of class i, and C_ij denotes the number of class i samples misclassified as class j.
Recall represents the average proportion of pixels correctly assigned to a class; its formula is:
Recall = C_ii / Σ_j C_ij   (2)
In addition, the model can be further evaluated by calculating the harmonic mean F1 of precision and recall; the formula for the F1 value is:
F1 = 2*(Precision*Recall) / (Precision + Recall)   (3)
The kappa coefficient measures the agreement between the predicted classes and the manual labels; its formula is:
kappa = (p_o - p_e) / (1 - p_e)   (4)
where p_o is the observed proportion of agreement (the overall accuracy) and p_e is the proportion of agreement expected by chance.
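Under the assumption that the confusion matrix stores true classes in rows and predicted classes in columns (matching the C_ij description above), the four indicators can be sketched as:

```python
import numpy as np

def metrics(cm):
    """Per-class precision/recall/F1 and overall kappa from a confusion matrix
    with C[i, j] = number of class-i samples predicted as class j."""
    cm = cm.astype(np.float64)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sums: all predictions of each class
    recall = tp / cm.sum(axis=1)      # row sums: all true samples of each class
    f1 = 2 * precision * recall / (precision + recall)
    n = cm.sum()
    p_o = tp.sum() / n                                     # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # agreement by chance
    kappa = (p_o - p_e) / (1 - p_e)
    return precision, recall, f1, kappa

cm = np.array([[45, 5],
               [10, 40]])
p, r, f1, k = metrics(cm)
print(np.round(p, 3), np.round(r, 3), np.round(f1, 3), k)  # kappa = 0.7
```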
To further verify the effectiveness of the method of the present invention, only U-Net and its improved models are compared and analyzed. The conventional U-Net model, the U-Net model with depthwise separable convolution modules added, and the network model with the changed activation function were each trained for 200 epochs on the same data set, producing the curves shown in Fig. 3-a to Fig. 3-c.
It can be seen that, compared with the traditional U-Net model, adding the depthwise separable convolution modules lowers the epoch at which the model begins to overfit from 100 to 30, greatly reducing the amount of computation and accelerating fitting. After the Mish activation function is adopted, the accuracy and computation speed improve slightly owing to its smoothness. The improved network model achieves lightweight parameters: the weight file size is reduced from 303 MB to 53 MB, a reduction of 82.5%, which greatly saves production costs.
At the same time, the present invention synthesizes bands 1-7 of the original remote sensing image with one, two, and three indices respectively, then fuses the panchromatic band using the G-S method and verifies the prediction results, which are shown in Fig. 4-a to Fig. 4-h.
Table 1 below gives accuracy statistics comparing the prediction results of each method with the true label data. Investigation showed that most of the forest land consists of cash crops such as jujube trees, so wheat, forest land and cotton are grouped into the crop category.
Table 1
As shown in Table 1, the overall accuracy of the 7-band fused image without any index reaches 90.3%, and every configuration with index images added to the fusion is more accurate than the no-index case. Adding indices and training with the improved U-Net network has a positive effect on crop planting structure classification.
When a single index is added, all three indices improve the overall accuracy. AWEI has the greatest impact on planting structure classification: crop identification accuracy improves by 2.4%, and owing to its water identification characteristics, its village-pixel and water-body identification rates of 81.4% and 93.9% are both better than those of the other two indices. Its overall accuracy reaches 92.7%, the best of the three indices.
With two-index 9-band fused images, the AWEI index still plays an important role: every training image containing AWEI classifies better than the NDVI-SAVI 9-band fused image. The NDVI index further enhances AWEI's crop identification and improves discrimination between crops, raising the accuracies for wheat, cotton and forest land to 85.2%, 74.4% and 87.5%, respectively. High water-body and village identification accuracies of 81.8% and 93.5% are also maintained; only the water-body accuracy is lower than that of the AWEI-SAVI method, by 0.3%. The overall accuracy reaches 93.7%.
Compared with the NDVI-AWEI 9-band fused image method, the 10-band fused image improves the overall accuracy by only 0.1% while crop classification discrimination decreases; SAVI contributes little to the fused image and has low priority when production cost is considered. In summary, the NDVI-AWEI 9-band image fusion method combined with the improved U-Net network model is superior for classifying planting structures.
The present invention evaluates the reliability and accuracy of the planting structure classification results of each index method combined with the improved U-Net model by calculating the kappa coefficient, precision, recall, and their harmonic mean F1, as shown in Table 2 below. Compared with the no-index image, the classification accuracy of the fused images containing indices improves significantly: the kappa coefficient increases from 0.868 to above 0.874, further improving the agreement between the predicted classes and the manual labels. Precision, recall and their harmonic mean all improve by more than 0.02. Among the new methods, the NDVI-AWEI fusion method matches the precision of the three-index method at 0.873, and its kappa coefficient and mean F1 of 0.886 and 0.872 are better than those of the three-index method, providing accurate and stable crop planting structure classification results.
Table 2
Embodiment of the planting structure classification device:
As shown in Fig. 5, the planting structure classification device proposed in this embodiment includes a processor and a memory. The memory stores a computer program that can be run on the processor, and when the processor executes the computer program, it implements the method of the above embodiment of the planting structure classification method.
That is to say, the method of the above embodiment should be understood as a flow of the planting structure classification method implementable by computer program instructions. These computer program instructions can be provided to the processor, so that execution of the instructions by the processor implements the functions specified by the above method flow.
The processor referred to in this embodiment is a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA).
The memory referred to in this embodiment includes a physical device for storing information; usually the information is digitized and then stored in an electrical, magnetic or optical medium. For example: memories that store information electrically, such as RAM and ROM; memories that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories and USB flash drives; and memories that store information optically, such as CDs and DVDs. Of course, there are other kinds of memory, such as quantum memories and graphene memories.
The device constituted by the above memory, processor and computer program is realized in a computer by the processor executing the corresponding program instructions, and the processor can run various operating systems, such as Windows, Linux, Android and iOS.
As other embodiments, the device may further include a display for showing the test results for the reference of the staff.
The above are only preferred embodiments of the present invention. The present invention has been described in detail with a general description and specific embodiments, but this is not intended to limit the present invention. Those skilled in the art may make various modifications or improvements; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the scope of the claims of the present invention.
Claims (7)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010526298.5A CN111814563B (en) | 2020-06-09 | 2020-06-09 | Method and device for classifying planting structures |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010526298.5A CN111814563B (en) | 2020-06-09 | 2020-06-09 | Method and device for classifying planting structures |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111814563A true CN111814563A (en) | 2020-10-23 |
| CN111814563B CN111814563B (en) | 2022-05-17 |
Family
ID=72846503
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010526298.5A Active CN111814563B (en) | 2020-06-09 | 2020-06-09 | Method and device for classifying planting structures |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111814563B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180189564A1 (en) * | 2016-12-30 | 2018-07-05 | International Business Machines Corporation | Method and system for crop type identification using satellite observation and weather data |
| CN110287869A (en) * | 2019-06-25 | 2019-09-27 | 吉林大学 | Crop classification method for high-resolution remote sensing images based on deep learning |
| CN110647932A (en) * | 2019-09-20 | 2020-01-03 | 河南工业大学 | Planting crop structure remote sensing image classification method and device |
Non-Patent Citations (3)
| Title |
|---|
| 王雅慧 et al.: "Deep U-net optimization method for forest type classification in high-resolution multispectral remote sensing images", Forest Research (《林业科学研究》) * |
| 许玥 et al.: "Remote sensing image segmentation methods based on deep learning models", Journal of Computer Applications (《计算机应用》) * |
| 赵文驰 et al.: "Comparison of fusion methods for domestic high-resolution remote sensing satellites", 《测绘与空间地理信息》 * |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112991351A (en) * | 2021-02-23 | 2021-06-18 | 新华三大数据技术有限公司 | Remote sensing image semantic segmentation method and device and storage medium |
| CN112991351B (en) * | 2021-02-23 | 2022-05-27 | 新华三大数据技术有限公司 | Remote sensing image semantic segmentation method and device and storage medium |
| CN112883892A (en) * | 2021-03-03 | 2021-06-01 | 青岛农业大学 | Soil type remote sensing classification identification method, device, equipment and storage medium |
| CN113298086A (en) * | 2021-04-26 | 2021-08-24 | 自然资源部第一海洋研究所 | Red tide multispectral detection method based on U-Net network |
| CN115830441A (en) * | 2022-10-24 | 2023-03-21 | 中国农业银行股份有限公司 | A crop identification method, device, system and medium |
| CN116824377A (en) * | 2023-06-30 | 2023-09-29 | 南湖实验室 | Intelligent identification and positioning method of pine discolored standing trees using remote sensing images |
| CN117437475A (en) * | 2023-11-02 | 2024-01-23 | 清华大学 | Planting structure classification method, planting structure classification device, computer equipment and storage medium |
| CN117437475B (en) * | 2023-11-02 | 2024-11-26 | 清华大学 | Planting structure classification method, device, computer equipment and storage medium |
| CN117496281A (en) * | 2024-01-03 | 2024-02-02 | 环天智慧科技股份有限公司 | Crop remote sensing image classification method |
| CN117496281B (en) * | 2024-01-03 | 2024-03-19 | 环天智慧科技股份有限公司 | Crop remote sensing image classification method |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111814563B (en) | 2022-05-17 |
Similar Documents
| Publication | Title |
|---|---|
| CN111814563B (en) | Method and device for classifying planting structures |
| Sadeghi-Tehran et al. | DeepCount: in-field automatic quantification of wheat spikes using simple linear iterative clustering and deep convolutional neural networks |
| CN112288706B (en) | An automated karyotype analysis and abnormality detection method |
| Jin et al. | Separating the structural components of maize for field phenotyping using terrestrial LiDAR data and deep convolutional neural networks |
| CN110378909B (en) | Single-tree segmentation method for laser point clouds based on Faster R-CNN |
| CN107247971B (en) | An intelligent analysis method and system for ultrasound thyroid nodule risk index |
| CN108416353B (en) | Method for quickly segmenting rice ears in the field based on a deep fully convolutional neural network |
| CN117197450B (en) | A land parcel segmentation method based on the SAM model |
| CN106096627A (en) | Semi-supervised classification method for polarimetric SAR images considering feature optimization |
| CN116883853B (en) | Remote sensing classification method for crop spatiotemporal information based on transfer learning |
| Yan et al. | Identification and picking point positioning of tender tea shoots based on the MR3P-TS model |
| CN107808375B (en) | A rice disease image detection method integrating multiple contextual deep learning models |
| CN112581450B (en) | Pollen detection method based on a dilated convolution pyramid and a multi-scale pyramid |
| CN119027983B (en) | Method, device and electronic equipment for identifying livestock and poultry disease images |
| CN119941731B (en) | Lung nodule analysis method, system, equipment and medium based on a large model |
| CN116912253B (en) | Lung cancer pathological image classification method based on a multi-scale hybrid neural network |
| CN114299379A (en) | A method for extracting vegetation coverage in shadow areas based on high-dynamic-range images |
| CN116385867A | Ecological land block monitoring, identification and analysis method, system, medium, equipment and terminal |
| CN117197656A | Multimodal pasture image feature extraction and recognition system |
| Zhang et al. | A mapping approach for eucalyptus plantation canopy and single trees using high-resolution satellite images in Liuzhou, China |
| CN106991449A | A method for blueberry variety identification aided by life scene reconstruction |
| CN112215217B | Digital image recognition method and device simulating a doctor reading films |
| Tang et al. | Detecting the tasseling rate of breeding maize using UAV-based RGB images and the STB-YOLO model |
| CN114565617B | Breast mass image segmentation method and system based on pruned U-Net++ |
| CN118429964A | A method for identifying Panax notoginseng leaf phenotypes |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |