
CN111914611A - High-resolution remote sensing monitoring method and system for urban green space - Google Patents

High-resolution remote sensing monitoring method and system for urban green space

Info

Publication number
CN111914611A
CN111914611A (application CN202010386282.9A)
Authority
CN
China
Prior art keywords
remote sensing
net
model
urban green
image
Prior art date
Legal status
Granted
Application number
CN202010386282.9A
Other languages
Chinese (zh)
Other versions
CN111914611B (en)
Inventor
周艺
王丽涛
王世新
朱金峰
刘文亮
徐知宇
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS
Priority to CN202010386282.9A
Publication of CN111914611A
Application granted
Publication of CN111914611B
Legal status: Active
Anticipated expiration


Classifications

    • G06V20/188 Terrestrial scenes; vegetation
    • G06F18/214 Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. likelihood ratio or false acceptance rate versus false rejection rate
    • G06N3/045 Neural networks; combinations of networks
    • G06N3/047 Neural networks; probabilistic or stochastic networks
    • G06N3/084 Learning methods; backpropagation, e.g. using gradient descent
    • Y02A30/60 Adaptation to climate change; planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract



The invention relates to a high-resolution remote sensing monitoring method and system for urban green space. The method comprises a training sample set construction step, a multi-dimensional feature space construction step, a U-Net+ model construction step, and an image post-processing step. By constructing a multi-dimensional feature space to enrich the features, building a U-Net+ deep learning model, and combining these with an image post-processing method, the method improves the generalization and robustness of monitoring and alleviates the over-fitting that tends to arise from limited training samples, thereby improving the accuracy and timeliness of high-resolution remote sensing monitoring of urban green space.


Description

High-resolution remote sensing monitoring method and system for urban green space

Technical Field

The invention relates to the technical field of remote sensing monitoring, and in particular to a high-resolution remote sensing monitoring method and system for urban green space.

Background Art

Urban green space plays a very important role in the urban ecosystem and is closely related to human health, residents' quality of life, biodiversity, and social security. The distribution of urban green space is heterogeneous and highly dispersed; how to precisely quantify its spatio-temporal pattern is a hot topic in current research and is crucial to urban green space planning and management.

There have already been many studies on the classification and extraction of urban green space. With the development of remote sensing technology, more and more high-resolution remote sensing images are applied to urban green space mapping and change analysis, such as SPOT, IKONOS, QuickBird, WorldView, and the Gaofen (GF) series. High-resolution remote sensing images contain rich ground-object information, with distinct spectral features, clearer geometric features, and richer texture details; they can identify urban street trees and residential gardens at the sub-meter scale, providing a finer characterization of urban green space.

There are many remote sensing classification methods; traditional approaches are mainly pixel-based classification and object-oriented classification. Pixel-based classification relies chiefly on the spectral information of individual pixels, but high-resolution remote sensing images, precisely because of their high spatial resolution, are relatively poor in spectral information while rich in shape, texture, and other features, and this information should play an important role in their classification. The object-oriented classification method integrates the spectral, geometric, and texture features of ground objects; its experimental workflow is "segmentation, feature selection, classification". The object-oriented method has no fixed segmentation parameters: if the segmentation scale is too large, a large number of mixed pixels appear; if it is too small, shape information is lost. The parameters must be determined by repeated experiments at the segmentation stage, which adds considerable workload to the classification process. Traditional machine learning methods such as support vector machines and decision trees cannot fully learn from complex features, nor can they cope with the large data volumes of high-resolution image samples, so the resulting classifications fail to meet requirements. Although traditional remote sensing classification and extraction of urban green space information is relatively mature, the inherent limitations of these methods require manual selection of optimal segmentation parameters and object features, and the accuracy is relatively low.

Deep learning currently has strong application potential for high-resolution remote sensing image classification; in particular, the U-Net model has begun to stand out in remote sensing applications. Deep learning has been widely applied to many tasks such as image recognition, object detection, and image classification, with good results. The more common deep learning models for remote sensing image classification include the deep belief network (DBN), the stacked autoencoder (SAE), and the convolutional neural network (CNN). A large body of experiments and research shows that CNNs excel in fields such as image classification and segmentation and are among the most widely used deep learning models. However, current research has the following problems:

(1) Existing deep learning models still have many defects, for example in model structure, generalization and robustness, and the way the loss function is computed; in particular, small image datasets cause over-fitting and degrade model robustness and generalization. (2) The richness of the learned features is insufficient. Gaofen-2 remote sensing images have a limited number of bands, relatively little spectral information, and limited feature richness, which to some extent restricts the richness of features that deep learning can exploit. (3) Misclassification by the model. Classified images usually contain small misclassified regions, and object boundaries are slightly over-smoothed, so image post-processing methods are needed to optimize the classification results and obtain results closer to the true ground objects.

Summary of the Invention

In view of the problems in the prior art, the present invention proposes a high-resolution remote sensing monitoring method for urban green space. By constructing a multi-dimensional feature space to enrich the features, building a U-Net+ deep learning model, and combining these with an image post-processing method, the method improves the generalization and robustness of monitoring and alleviates the over-fitting that tends to arise from limited training samples, thereby improving the accuracy and timeliness of high-resolution remote sensing monitoring of urban green space. The invention also relates to a high-resolution remote sensing monitoring system for urban green space.

The technical solution of the present invention is as follows:

A high-resolution remote sensing monitoring method for urban green space, characterized by comprising the following steps:

a training sample set construction step: selecting sample areas according to the characteristics of high-resolution remote sensing images and constructing a training sample dataset;

a multi-dimensional feature space construction step: applying data augmentation, random cropping, and feature calculation to the training sample dataset to construct a multi-dimensional feature space comprising vegetation, spatial, contrast, texture, and phenological features; a normalized difference vegetation index (NDVI) is computed as the vegetation feature of urban green space, a normalized digital surface model (nDSM) is constructed as its spatial feature, a local contrast feature map computed by the AC algorithm serves as its contrast feature, an image texture feature map obtained from the gray-level co-occurrence matrix serves as its texture feature, and a winter image of the same period is added as its phenological feature;
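As an illustrative sketch (not the patent's implementation), the vegetation and contrast features named above could be computed as follows; the band inputs and window size are assumptions:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, (NIR - R) / (NIR + R)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def local_contrast(gray: np.ndarray, win: int = 3) -> np.ndarray:
    """AC-style local contrast: |pixel - mean of its win x win neighborhood|."""
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    h, w = gray.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = abs(gray[i, j] - padded[i:i + win, j:j + win].mean())
    return out
```

Each feature map is stacked with the original bands as an extra input channel, so the network sees the enriched multi-dimensional feature space rather than raw spectra alone.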

a U-Net+ model construction step: based on the constructed multi-dimensional feature space and oriented to the characteristics of urban green space, improving the U-Net model successively with image-edge zero padding, batch normalization, and regularization to establish a multi-dimensional-feature-based U-Net+ deep learning model for urban green space; then feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training, and after training, predicting the spatial distribution of urban green space to obtain the U-Net+ model prediction results;
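A minimal sketch of the building blocks just described, assuming PyTorch; the layer widths and dropout probability are placeholders, since the text does not fix them here:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions with zero padding (so feature maps are not cropped
    at the edges), each followed by batch normalization before the next layer."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # zero padding keeps H x W
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class UpBlock(nn.Module):
    """Deconvolution followed by a dropout layer, per the regularization improvement."""
    def __init__(self, c_in: int, c_out: int, p_drop: float = 0.5):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_in, c_out, kernel_size=2, stride=2)
        self.drop = nn.Dropout2d(p_drop)  # the drop probability is an assumption

    def forward(self, x):
        return self.drop(self.up(x))
```

Zero padding keeps the output the same spatial size as the input, avoiding the shrinking crops of the original U-Net; batch normalization and dropout address training stability and over-fitting respectively.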

an image post-processing step: post-processing the U-Net+ model prediction results to obtain high-resolution remote sensing monitoring results for urban green space.

Preferably, in the training sample set construction step, according to the characteristics of high-resolution remote sensing images, some typical regions are taken as sample areas, samples are extracted from the high-resolution images by visual interpretation, the vector files obtained by visual interpretation are converted from features to raster to obtain label images, ground-truth images of different vegetation types in the remote sensing images are recorded, and the training sample dataset is constructed. In the multi-dimensional feature space construction step, the images are first processed with data augmentation to highlight effective spectral information, then the label images and the augmented remote sensing images are cut by random cropping, and feature calculation is performed on the cropped label images and remote sensing images to construct the multi-dimensional feature space.
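The random cropping and augmentation described above could be sketched as follows (illustrative only; the patch size and flip-based augmentation are assumptions, not the patent's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(image: np.ndarray, label: np.ndarray, size: int = 256):
    """Cut an aligned size x size patch from an (H, W, C) image and (H, W) label map."""
    h, w = label.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return (image[top:top + size, left:left + size],
            label[top:top + size, left:left + size])

def augment(image: np.ndarray, label: np.ndarray):
    """Simple augmentation: random horizontal/vertical flips applied
    identically to the image and its label so they stay aligned."""
    if rng.random() < 0.5:
        image, label = image[:, ::-1], label[:, ::-1]
    if rng.random() < 0.5:
        image, label = image[::-1, :], label[::-1, :]
    return image, label
```

The key constraint is that every geometric transform must be applied to the image and its label image together, otherwise pixels and ground-truth classes fall out of registration.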

Preferably, in the U-Net+ model construction step, the batch normalization improvement adds batch normalization after each convolutional layer of the model and feeds the normalized data into the next layer; the regularization improvement adds, after every deconvolution of the model, a dropout layer with a specific neuron-dropping probability.

Preferably, in the U-Net+ model construction step, feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training means inputting the remote sensing images, multi-dimensional feature data, and corresponding label images of the training samples into the model: the U-Net+ deep learning model extracts features of the input images in the encoding part, restores their spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain category information; the loss between the predicted classification map and the input label map is computed with the cross-entropy function, the loss is back-propagated through the model to optimize its parameters layer by layer, and training stops once the loss reaches a certain threshold.
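An illustrative sketch of the training procedure just described, assuming PyTorch; the optimizer, learning rate, and loss threshold are placeholders not specified by the source:

```python
import torch
import torch.nn as nn

def train(model, loader, loss_threshold=0.05, max_epochs=100, lr=1e-3):
    """Forward pass, per-pixel cross-entropy against the label map,
    backpropagation, and a loss threshold as the stopping criterion."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()              # per-pixel cross entropy
    for epoch in range(max_epochs):
        total = 0.0
        for x, y in loader:                      # x: (N, C, H, W) features, y: (N, H, W) labels
            opt.zero_grad()
            loss = loss_fn(model(x), y)          # model(x): (N, n_classes, H, W)
            loss.backward()                      # backpropagate through the network
            opt.step()                           # optimize parameters layer by layer
            total += loss.item()
        if total / len(loader) < loss_threshold: # stop once the loss is low enough
            break
    return model
```

`model` can be any network producing an (N, n_classes, H, W) score map, such as the U-Net+ assembled from the blocks above.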

Preferably, the image post-processing step post-processes the U-Net+ model prediction results with a fully connected CRF (conditional random field) post-processing method, processing the classification results in light of the relationships among all pixels of the original high-resolution remote sensing image, thereby optimizing the prediction results to obtain high-resolution remote sensing monitoring results for urban green space.
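A toy illustration of fully connected CRF mean-field inference, coupling every pixel pair through a Gaussian kernel on position and color (brute force, so only feasible for tiny images; a practical implementation would use an efficient library such as pydensecrf, and all kernel widths below are assumptions):

```python
import numpy as np

def dense_crf_meanfield(prob, image, iters=5, w=3.0, sigma_xy=3.0, sigma_rgb=13.0):
    """Mean-field inference for a fully connected CRF.
    prob:  (H, W, K) softmax output of the classifier (unary beliefs).
    image: (H, W, C) original image; the pairwise term couples every pixel
    pair, so labels are refined using the relationships among all pixels."""
    H, W, K = prob.shape
    N = H * W
    q = prob.reshape(N, K).astype(np.float64)
    unary = -np.log(q + 1e-9)
    xy = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), -1).reshape(N, 2)
    feat = image.reshape(N, -1).astype(np.float64)
    d_xy = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    d_rgb = ((feat[:, None, :] - feat[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d_xy / (2 * sigma_xy ** 2) - d_rgb / (2 * sigma_rgb ** 2))
    np.fill_diagonal(kernel, 0.0)              # no self-interaction
    for _ in range(iters):
        msg = kernel @ q                       # messages from all other pixels
        # Potts compatibility: penalize disagreeing with neighbors' beliefs
        energy = unary + w * (msg.sum(1, keepdims=True) - msg)
        q = np.exp(-energy)
        q /= q.sum(1, keepdims=True)
    return q.reshape(H, W, K).argmax(-1)       # refined label map
```

Because the pairwise kernel is large only for nearby, similarly colored pixels, small misclassified specks are absorbed by their surroundings while genuine boundaries (where color changes) are preserved.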

A high-resolution remote sensing monitoring system for urban green space, characterized by comprising a training sample set construction module, a multi-dimensional feature space construction module, a U-Net+ model construction module, and an image post-processing module connected in sequence, the training sample set construction module also being connected to the U-Net+ model construction module, wherein:

the training sample set construction module selects sample areas according to the characteristics of high-resolution remote sensing images and constructs a training sample dataset;

the multi-dimensional feature space construction module applies data augmentation, random cropping, and feature calculation to the training sample dataset from the training sample set construction module to construct a multi-dimensional feature space comprising vegetation, spatial, contrast, texture, and phenological features; a normalized difference vegetation index (NDVI) is computed as the vegetation feature of urban green space, a normalized digital surface model (nDSM) is constructed as its spatial feature, a local contrast feature map computed by the AC algorithm serves as its contrast feature, an image texture feature map obtained from the gray-level co-occurrence matrix serves as its texture feature, and a winter image of the same period is added as its phenological feature;
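The gray-level co-occurrence matrix texture feature mentioned above can be sketched minimally as follows (the quantization level and pixel offset are illustrative assumptions):

```python
import numpy as np

def glcm(gray: np.ndarray, levels: int = 8, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    gmax = gray.max()
    gmax = gmax if gmax > 0 else 1
    q = (gray.astype(np.float64) / gmax * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(max(0, -dy), h - max(0, dy)):
        for j in range(max(0, -dx), w - max(0, dx)):
            m[q[i, j], q[i + dy, j + dx]] += 1  # count gray-level pair occurrences
    return m / m.sum()

def glcm_homogeneity(m: np.ndarray) -> float:
    """Homogeneity statistic sum_ij p(i,j) / (1 + (i - j)^2) derived from the GLCM."""
    idx = np.arange(m.shape[0])
    return float((m / (1.0 + np.abs(idx[:, None] - idx[None, :]) ** 2)).sum())
```

In practice such statistics would be computed in a sliding window over the image to produce a per-pixel texture feature map that is stacked with the other feature channels.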

the U-Net+ model construction module, based on the multi-dimensional feature space constructed by the multi-dimensional feature space construction module and oriented to the characteristics of urban green space, improves the U-Net model successively with image-edge zero padding, batch normalization, and regularization to establish a multi-dimensional-feature-based U-Net+ deep learning model for urban green space; it then feeds the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training and, after training, predicts the spatial distribution of urban green space to obtain the U-Net+ model prediction results;

the image post-processing module post-processes the U-Net+ model prediction results to obtain high-resolution remote sensing monitoring results for urban green space.

Preferably, the training sample set construction module, according to the characteristics of high-resolution remote sensing images, takes some typical regions as sample areas, extracts samples from the high-resolution images by visual interpretation, converts the vector files obtained by visual interpretation from features to raster to obtain label images, records ground-truth images of different vegetation types in the remote sensing images, and constructs the training sample dataset. The multi-dimensional feature space construction module first processes the images with data augmentation to highlight effective spectral information, then cuts the label images and the augmented remote sensing images by random cropping, and performs feature calculation on the cropped label images and remote sensing images to construct the multi-dimensional feature space.

Preferably, in the U-Net+ model construction module, the batch normalization improvement adds batch normalization after each convolutional layer of the model and feeds the normalized data into the next layer; the regularization improvement adds, after every deconvolution of the model, a dropout layer with a specific neuron-dropping probability.

Preferably, in the U-Net+ model construction module, feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training means inputting the remote sensing images, multi-dimensional feature data, and corresponding label images of the training samples into the model: the U-Net+ deep learning model extracts features of the input images in the encoding part, restores their spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain category information; the loss between the predicted classification map and the input label map is computed with the cross-entropy function, the loss is back-propagated through the model to optimize its parameters layer by layer, and training stops once the loss reaches a certain threshold.

Preferably, the image post-processing module post-processes the U-Net+ model prediction results with a fully connected CRF post-processing method, processing the classification results in light of the relationships among all pixels of the original high-resolution remote sensing image, thereby optimizing the prediction results to obtain high-resolution remote sensing monitoring results for urban green space.

The technical effects of the present invention are as follows:

The present invention relates to a high-resolution remote sensing monitoring method for urban green space. A sufficient training sample dataset is obtained within the sample areas, and the dataset is processed with a combination of data augmentation, random cropping, and feature calculation to construct a specific multi-dimensional feature space, which generalizes the network model while avoiding over-fitting. A multi-dimensional feature space of five classes of remote sensing image features (vegetation, spatial, contrast, texture, and phenological) enriches the features; in particular, phenological features are combined with the first four: since deciduous trees shed their leaves in winter, multi-temporal high-resolution remote sensing imagery and the phenology of vegetation are used to add winter images to the deep learning model so as to distinguish evergreen from deciduous trees and to optimize the classification of single summer images, laying a foundation for the accuracy of subsequent high-resolution remote sensing monitoring of urban green space. Based on the constructed multi-dimensional feature space and oriented to the characteristics of urban green space, the U-Net model is improved successively with image-edge zero padding, batch normalization, and regularization to establish a multi-dimensional-feature-based U-Net+ deep learning model for urban green space; the invention thereby proposes the concept of a multi-dimensional-feature-based U-Net+ deep learning model and optimizes the U-Net model with a series of specific improvements, giving the U-Net+ deep learning model high stability and reliability. The multi-dimensional feature data of the training samples are then fed into the U-Net+ deep learning model for sample and model training; after training, the spatial distribution of urban green space is predicted to obtain the U-Net+ model prediction results, which are combined with image post-processing to obtain high-resolution remote sensing monitoring results for urban green space. This improves the generalization and robustness of the monitoring method and alleviates the over-fitting that arises from limited training samples, thereby improving the accuracy and timeliness of high-resolution remote sensing monitoring of urban green space.

The present invention also relates to a high-resolution remote sensing monitoring system for urban green space. The system corresponds to the method described above and can be understood as a system implementing it, with a training sample set construction module, a multi-dimensional feature space construction module, a U-Net+ model construction module, and an image post-processing module that work in concert. After a sufficient training sample dataset is constructed, a multi-dimensional feature space is built to enrich the features, a multi-dimensional-feature-based U-Net+ deep learning model for urban green space is established, and image post-processing is applied, improving the generalization and robustness of the monitoring method and thus the accuracy of high-resolution remote sensing monitoring of urban green space.

Brief Description of the Drawings

FIG. 1 is a flow chart of the urban green space high-resolution remote sensing monitoring method of the present invention.

FIG. 2 is a preferred flow chart of the urban green space high-resolution remote sensing monitoring method of the present invention.

FIG. 3 shows part of the dataset images and the corresponding label images from the training sample set construction step.

FIG. 4a and FIG. 4b compare the image-edge zero padding improvement in the U-Net+ model construction step.

FIG. 5a and FIG. 5b compare the regularization improvement in the U-Net+ model construction step.

FIG. 6 is a structural diagram of the multi-dimensional-feature-based U-Net+ deep learning model of the present invention.

FIG. 7 is a flow chart of training the multi-dimensional-feature-based U-Net+ deep learning model.

FIG. 8 compares the U-Net+ model prediction results of the present invention with the real ground surface.

FIG. 9 is a structural diagram of the urban green space high-resolution remote sensing monitoring system of the present invention.

Detailed Description of Embodiments

The present invention is described below with reference to the accompanying drawings.

本发明涉及一种城市绿地高分遥感监测方法，其流程如图1所示，包括：训练样本集构建步骤，针对高分辨遥感影像特征，选择样本区域并在样本区域内构建训练样本数据集；多维特征空间构建步骤，将训练样本数据集进行数据增强、随机裁剪和特征计算处理，也就是说，对数据增强和随机裁剪处理后的训练样本数据集进行特征计算处理，构建包括植被特征、空间特征、对比度特征、纹理特征和物候特征的五类特征的多维特征空间；通过构建归一化植被指数将其作为城市绿地的植被特征，构建nDSM数字表面模型将其作为城市绿地的空间特征，通过AC算法计算局部对比度特征图将其作为城市绿地的对比度特征，通过灰度共生矩阵获取图像纹理特征图将其作为城市绿地的纹理特征，并加入同期冬季影像将其作为城市绿地的物候特征；U-Net+模型构建步骤，基于构建的所述多维特征空间，面向城市绿地特征，依次利用图像边缘补零改进方式、批归一化处理改进方式以及正则化改进方式改进U-net模型，建立面向城市绿地并基于多维特征的U-Net+深度学习模型；再将训练样本的多维特征数据加入U-net+深度学习模型中进行训练，并在训练完成后进行城市绿地空间分布的预测，获取U-Net+模型预测结果；影像后处理步骤，对U-Net+模型预测结果进行后处理，得到城市绿地高分辨率遥感监测成果。该方法通过构建多维特征空间、增强特征丰富度，同时构建U-Net+深度学习模型并进行样本训练和模型训练，再结合影像后处理方法，对预测结果优化，提高监测方法的泛化性和鲁棒性，解决训练样本有限而易出现的过拟合问题，从而达到提高城市绿地高分辨率遥感监测的精度和时效性。The present invention relates to a high-resolution remote sensing monitoring method for urban green space, whose workflow is shown in FIG. 1. It comprises: a training sample set construction step, in which sample areas are selected according to the characteristics of high-resolution remote sensing images and a training sample data set is built within them; a multi-dimensional feature space construction step, in which the training sample data set undergoes data augmentation, random cropping, and feature computation (that is, feature computation is performed on the augmented and randomly cropped training samples) to build a multi-dimensional feature space of five feature types: vegetation, spatial, contrast, texture, and phenological features. The normalized difference vegetation index serves as the vegetation feature, an nDSM digital surface model as the spatial feature, a local contrast map computed by the AC algorithm as the contrast feature, a texture map derived from the gray-level co-occurrence matrix as the texture feature, and a winter image of the same period as the phenological feature. In the U-Net+ model construction step, based on the constructed multi-dimensional feature space and oriented toward urban green space characteristics, the U-Net model is improved in turn with image-edge zero padding, batch normalization, and regularization, yielding a U-Net+ deep learning model for urban green space based on multi-dimensional features; the multi-dimensional feature data of the training samples are then fed into the U-Net+ model for training, after which the spatial distribution of urban green space is predicted to obtain the U-Net+ prediction results. In the image post-processing step, these prediction results are post-processed to obtain high-resolution remote sensing monitoring products of urban green space. By building a multi-dimensional feature space to enrich the features, building and training the U-Net+ deep learning model, and then optimizing the predictions with image post-processing, the method improves generalization and robustness, mitigates the over-fitting that easily arises from limited training samples, and thereby improves the accuracy and timeliness of high-resolution remote sensing monitoring of urban green space.

图2为本发明城市绿地高分遥感监测方法的优选流程图。优选地,FIG. 2 is a preferred flow chart of the method for high-resolution remote sensing monitoring of urban green space of the present invention. Preferably,

一、训练样本集构建步骤:First, the training sample set construction steps:

针对高分辨遥感影像特征，将部分典型区域作为样本区域，利用目视解译方法进行高分遥感影像的样本提取，将目视解译得到的矢量文件进行要素转栅格获得标签图像，记录遥感影像中不同类型植被位置的地表真实图像，获取充足的训练样本数据集。According to the characteristics of high-resolution remote sensing images, some typical areas are taken as sample areas, and samples are extracted from the high-resolution images by visual interpretation. The vector files obtained by visual interpretation are converted from features to raster to obtain label images, i.e., ground-truth images recording the locations of different vegetation types in the imagery, yielding a sufficient training sample data set.

如图2所示，高分辨率遥感数据输入后，进行样本区选择和目标解释。首先，根据城市绿地的空间形态、组织方式和植被类型，选择部分典型区域作为样本区域，也可称为是典型样本区域，如公园区选择几处典型区域、高尔夫球场选择几处典型区域、居民区选择几处典型区域等等。再利用目视解译方法进行高分遥感影像的样本提取，将目视解译得到的矢量文件进行要素转栅格获得标签图像，即记录遥感影像中不同类型植被位置的地表真实图像。如图3所示的数据集部分影像及对应标签图像，显示了标签为落叶树、常绿树、草地和非植被区的标签图像。As shown in FIG. 2, after the high-resolution remote sensing data are input, sample area selection and interpretation are performed. First, according to the spatial form, organization, and vegetation types of urban green space, several typical areas are selected as sample areas (also called typical sample areas), e.g., several typical areas in parks, several on golf courses, several in residential districts, and so on. Samples are then extracted from the high-resolution images by visual interpretation, and the resulting vector files are converted from features to raster to obtain label images, i.e., ground-truth images recording the locations of different vegetation types. The dataset excerpts and corresponding label images in FIG. 3 show labels for deciduous trees, evergreen trees, grassland, and non-vegetated areas.

二、多维特征空间构建步骤:Second, the multi-dimensional feature space construction steps:

先利用数据增强的方法进行影像处理以突出有效光谱信息，然后利用随机裁剪的方法将标签图像以及数据增强后的遥感影像裁剪，并基于裁剪后的标签图像和遥感影像进行特征计算处理，构建多维特征空间。对训练样本数据集进行数据增强和随机裁剪，能够扩充数据集，提高训练样本集构建的效率，在泛化网络模型的同时避免过拟合问题，并且是构建多维特征空间数据的基础前提。First, data augmentation is applied to the imagery to highlight the effective spectral information; then the label images and the augmented remote sensing images are cut by random cropping, and feature computation is performed on the cropped label images and remote sensing images to build the multi-dimensional feature space. Augmenting and randomly cropping the training sample data set enlarges it, improves the efficiency of training set construction, helps the network generalize while avoiding over-fitting, and is the basic prerequisite for building the multi-dimensional feature space data.

首先，利用数据增强的方法进行影像处理，突出有效光谱信息，提升深度学习过程中的收敛效率。本发明采用min-max标准化法，也称离差标准化，将每个通道的像元值数据范围从区间[0,255]缩小至[0,1]。First, image processing is performed using data augmentation to highlight effective spectral information and improve convergence efficiency during deep learning. The present invention adopts min-max normalization, also called dispersion normalization, to rescale the pixel values of each channel from the interval [0, 255] to [0, 1].

x_norm = (x - x_min) / (x_max - x_min) (1)

其中,xmin表示样本数据的最小值,xmax表示样本数据的最大值。Among them, x min represents the minimum value of the sample data, and x max represents the maximum value of the sample data.
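As an illustration, the min-max rescaling above can be sketched in a few lines of numpy (function and variable names are our own, not part of the patent):

```python
import numpy as np

def min_max_normalize(band: np.ndarray) -> np.ndarray:
    """Scale one image channel from its [min, max] range into [0, 1]."""
    x_min, x_max = band.min(), band.max()
    return (band - x_min) / (x_max - x_min)

# Example: an 8-bit channel with values in [0, 255]
channel = np.array([[0, 64], [128, 255]], dtype=np.float64)
normalized = min_max_normalize(channel)
# normalized[0, 0] == 0.0 and normalized[1, 1] == 1.0
```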

其次,利用随机裁剪的方法,提高训练样本数量,保证深度学习的样本数量需求。设定裁剪尺寸的大小(如256×256),同时,根据设定数量,结合随机函数,设定裁剪起始位置点。根据起始位置点和裁剪尺寸,将目视解译后的样本区域和高分遥感影像裁剪出样本数据,从而大幅提升样本数据的数据量。Secondly, the random cropping method is used to increase the number of training samples to ensure the number of samples required for deep learning. Set the size of the cropping size (such as 256×256), and at the same time, according to the set quantity, combined with the random function, set the cropping starting position point. According to the starting point and cropping size, the sample area and high-resolution remote sensing image after visual interpretation are cropped out of the sample data, thereby greatly increasing the data volume of the sample data.

然后,根据数据增强和随机裁剪处理后的训练样本数据集进行特征计算处理,构建包括植被特征、空间特征、对比度特征、纹理特征和物候特征的五类特征的多维特征空间。针对高分辨率遥感影像数据波段较少,模型特征学习丰富度有限的现状,本发明从五个不同的维度分别构建了五类遥感影像特征,即植被特征、空间特征、对比度特征、纹理特征和物候特征。即从光谱、空间、对比度、纹理和物候等五个方面,构建多维特征空间,从而丰富深度学习的信息。Then, according to the training sample dataset after data enhancement and random cropping, feature calculation processing is performed to construct a multi-dimensional feature space of five types of features including vegetation features, spatial features, contrast features, texture features and phenological features. In view of the current situation that the high-resolution remote sensing image data has fewer bands and the richness of model feature learning is limited, the present invention constructs five types of remote sensing image features from five different dimensions, namely, vegetation features, spatial features, contrast features, texture features and phenological characteristics. That is, from the five aspects of spectrum, space, contrast, texture and phenology, a multi-dimensional feature space is constructed to enrich the information of deep learning.

第一,通过构建归一化植被指数将其作为城市绿地的植被特征;第二,构建了nDSM数字表面模型将其作为城市绿地的空间特征;第三,通过AC算法计算局部对比度特征图,将其作为城市绿地的对比度特征;第四,通过灰度共生矩阵获取图像纹理特征图,将其作为城市绿地的纹理特征;第五,加入同期冬季影像将其作为城市绿地的物候特征。First, by constructing the normalized vegetation index as the vegetation feature of urban green space; second, by constructing the nDSM digital surface model as the spatial feature of urban green space; third, by calculating the local contrast feature map through the AC algorithm, the It is used as the contrast feature of urban green space; fourth, the image texture feature map is obtained through the gray level co-occurrence matrix, and it is used as the texture feature of urban green space; fifth, the winter images of the same period are added to use it as the phenological feature of urban green space.

(1)植被特征(1) Vegetation characteristics

植被特征我们选取归一化植被指数,它是衡量植被覆盖程度、检测植物生长状况等的重要指标,它可以综合相关的光谱信息,突出影像中的植被,而减少其中的非植被信息。We select the normalized vegetation index for vegetation characteristics, which is an important index to measure the degree of vegetation coverage and detect the growth status of plants. It can synthesize relevant spectral information, highlight the vegetation in the image, and reduce the non-vegetation information.

NDVI = (NIR - R) / (NIR + R) (2)

式中，NIR表示近红外波段的值，R表示红波段的值。当NDVI<0，该区域可能存在水、雪或者云等地物；当NDVI>0时，表明该区域可能覆盖有植被；当NDVI=0时，该区域极有可能覆盖有岩石或裸地等地物。In the formula, NIR is the value of the near-infrared band and R is the value of the red band. When NDVI < 0, the area may contain water, snow, or clouds; when NDVI > 0, the area is likely covered with vegetation; when NDVI = 0, the area is most likely covered with rock, bare land, or similar surfaces.
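The NDVI computation of formula (2) and its sign interpretation can be sketched as follows (numpy-based illustration; the reflectance values are invented for the example):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - R) / (NIR + R), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red)

# Vegetation reflects strongly in NIR, so NDVI > 0 over plants;
# water absorbs NIR, giving NDVI < 0.
nir = np.array([[0.50, 0.05]])
red = np.array([[0.10, 0.20]])
index = ndvi(nir, red)
# index[0, 0] is positive (vegetation-like), index[0, 1] negative (water-like)
```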

(2)空间特征(2) Spatial features

本发明选择nDSM作为城市绿地的空间特征。nDSM是指消除了地形的影响，记录了所有地面物体相对于地面的高度信息的数据模型。数字表面模型(DSM, Digital Surface Model)是指包含了地表建筑物、桥梁和树木等高度的地面高程模型。数字高程模型(DEM, Digital Elevation Model)是描述地表起伏形态特征的空间数据模型。将包含地物高度及地形起伏高度的DSM与DEM相减，即可得到只含地物高度的数据模型nDSM。计算公式如下：The present invention selects the nDSM as the spatial feature of urban green space. The nDSM is a data model that removes the influence of terrain and records the height of all ground objects relative to the ground. A Digital Surface Model (DSM) is a surface elevation model that includes the heights of buildings, bridges, trees, and other objects. A Digital Elevation Model (DEM) is a spatial data model describing the relief of the bare terrain. Subtracting the DEM from the DSM, which contains both object heights and terrain relief, yields the nDSM, a data model containing only object heights. It is calculated as follows:

nDSM(i,j)=DSM(i,j)-DEM(i,j) (3)nDSM(i,j)=DSM(i,j)-DEM(i,j) (3)

其中：nDSM(i,j)表示nDSM在第i行第j列的高程值；DSM(i,j)表示DSM在第i行第j列的高程值；DEM(i,j)表示DEM在第i行第j列的高程值。Here nDSM(i,j), DSM(i,j), and DEM(i,j) denote the elevation values of the nDSM, DSM, and DEM at row i, column j, respectively.
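A minimal numeric illustration of formula (3) (the elevation values below are invented for the example):

```python
import numpy as np

# nDSM(i, j) = DSM(i, j) - DEM(i, j): surface height minus terrain height
dsm = np.array([[120.0, 135.0],   # ground + objects (buildings, trees)
                [118.0, 122.0]])
dem = np.array([[110.0, 111.0],   # bare-earth terrain
                [112.0, 113.0]])
ndsm = dsm - dem
# ndsm now holds object heights above ground, e.g. a 24 m object at (0, 1)
```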

(3)对比度特征(3) Contrast feature

AC算法是一种基于局部对比度的特征提取算法，它的基本思路是将图像从RGB颜色空间转换到LAB颜色空间中，然后对于图像中的每一个感知单元在不同尺寸大小的窗口上分别计算局部对比度值，再将得到的不同窗口尺寸大小下的多个局部对比度值相加，以此方式遍历整张图像得到最终的影像特征图。The AC algorithm is a feature extraction algorithm based on local contrast. Its basic idea is to convert the image from the RGB color space to the Lab color space, compute a local contrast value for each perceptual unit over windows of different sizes, and then sum the local contrast values obtained at the different window sizes, traversing the whole image in this way to obtain the final feature map.

举例来讲，假设内部区域R1，外部区域R2，计算内部区域R1和外部区域R2的局部对比度时，通过改变R2的大小实现多尺度显著性计算。感知单元R1可以是一个像素或一个像素块，其邻域为R2，R1（或R2）的特征值取其所包含的所有像素特征值的平均值。设像素p为R1和R2的中心，p所在位置局部对比度为：For example, given an inner region R1 and an outer region R2, multi-scale saliency is computed by varying the size of R2 when calculating the local contrast between R1 and R2. The perceptual unit R1 may be a single pixel or a pixel block whose neighborhood is R2; the feature value of R1 (or R2) is the average of the feature values of all pixels it contains. Let pixel p be the center of R1 and R2; the local contrast at p is:

c(p) = D( (1/N1) Σ_{k∈R1} v_k , (1/N2) Σ_{k∈R2} v_k ) (4)

其中N1和N2分别是R1和R2中像素的个数。v_k是位置k处的特征值或特征向量。由于AC方法采用Lab颜色空间，所以采用欧氏距离来计算特征距离。R1默认为一个像素，R2边长为[L/8,L/2]之间的正方形区域，L为长宽中较小者。多个尺度的特征显著图通过直接相加得到完整的显著图。Here N1 and N2 are the numbers of pixels in R1 and R2, respectively, and v_k is the feature value or feature vector at position k. Since the AC method works in the Lab color space, the Euclidean distance D is used as the feature distance. By default R1 is a single pixel, and R2 is a square region with side length in [L/8, L/2], where L is the smaller of the image width and height. The feature saliency maps at multiple scales are summed directly to obtain the complete saliency map.
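The multi-scale local-contrast idea can be sketched as follows. This is a simplified single-channel toy (the method itself works on Lab color vectors with Euclidean distance, and R2's side length varies over [L/8, L/2]); the window sizes and names here are illustrative only:

```python
import numpy as np

def local_contrast_map(img: np.ndarray, window_sizes=(3, 5)) -> np.ndarray:
    """Sum, over several window sizes, the distance between each pixel
    (region R1 = one pixel) and the mean of its neighborhood R2."""
    h, w = img.shape
    saliency = np.zeros((h, w), dtype=np.float64)
    for win in window_sizes:
        r = win // 2
        padded = np.pad(img.astype(np.float64), r, mode='edge')
        for i in range(h):
            for j in range(w):
                neighborhood = padded[i:i + win, j:j + win]  # R2 around (i, j)
                saliency[i, j] += abs(img[i, j] - neighborhood.mean())
    return saliency

# A bright pixel in a dark field gets the highest contrast value
img = np.zeros((5, 5))
img[2, 2] = 1.0
sal = local_contrast_map(img)
```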

(4)纹理特征(4) Texture features

图像的纹理特征能够更好地兼顾图像宏观性质与细微结构两个方面,是地物分类提取的重要特征。提取纹理特征的方法很多,如基于局部统计特性的特征、基于随机场模型的特征、基于空间频率的特征、分形特征等,应用最广泛的是基于灰值共生矩阵的特征。The texture feature of the image can better take into account both the macroscopic properties and the fine structure of the image, and is an important feature for the classification and extraction of ground objects. There are many methods for extracting texture features, such as features based on local statistical properties, features based on random field models, features based on spatial frequency, fractal features, etc. The most widely used feature is based on gray value co-occurrence matrix.

1973年提出的灰度共生矩阵方法,是目前公认的一种重要纹理分析方法。共生矩阵是距离和方向的函数,其描述在θ方向上,相隔d像元距离的一对像元,分别具有灰度值i和j 的出现概率,其元素可记为P(i,j|d,θ)。灰度共生矩阵的各元素计算公式如下:The gray-scale co-occurrence matrix method proposed in 1973 is an important texture analysis method currently recognized. The co-occurrence matrix is a function of distance and direction, which describes a pair of pixels separated by a distance of d pixels in the θ direction, with the probability of occurrence of gray values i and j respectively, and its elements can be recorded as P(i,j| d, θ). The calculation formula of each element of the gray level co-occurrence matrix is as follows:

P(i,j | d,θ) = #{ [(k,l),(m,n)] ∈ (Zx×Zy)×(Zx×Zy) | ρ[(k,l),(m,n)] = (d,θ), f(k,l) = i, f(m,n) = j } (5)

假定待分析的图像在水平和垂直方向上各有Nx和Ny个像素。设Zx={1,2,…,Nx}为水平空间域,Zy={1,2,…,Ny}为垂直空间域。当距离为1,θ分别为0度、45度、90度、135度时的公式分别为:It is assumed that the image to be analyzed has Nx and Ny pixels in the horizontal and vertical directions respectively. Let Zx={1,2,...,Nx} be the horizontal space domain, and Zy={1,2,...,Ny} be the vertical space domain. When the distance is 1 and θ is 0 degrees, 45 degrees, 90 degrees, and 135 degrees, the formulas are:

P(i,j,d,0°) = #{ [(k,l),(m,n)] : k-m = 0, |l-n| = d, f(k,l) = i, f(m,n) = j } (6)

P(i,j,d,45°) = #{ [(k,l),(m,n)] : (k-m = d, l-n = -d) or (k-m = -d, l-n = d), f(k,l) = i, f(m,n) = j } (7)

P(i,j,d,90°) = #{ [(k,l),(m,n)] : |k-m| = d, l-n = 0, f(k,l) = i, f(m,n) = j } (8)

P(i,j,d,135°) = #{ [(k,l),(m,n)] : (k-m = d, l-n = d) or (k-m = -d, l-n = -d), f(k,l) = i, f(m,n) = j } (9)

其中,k、m和l、n分别表示在所选计算窗口中的变动,#表示使大括号成立的像素对数。Among them, k, m and l, n represent the variation in the selected calculation window, respectively, and # represents the number of pixel pairs that make the curly brackets true.

进行特征值的计算，根据上述公式(6)-(9)构造四个方向的共生矩阵，然后分别计算这四个共生矩阵的能量、熵、惯性矩、相关这4个纹理参数，得到图像的纹理特征。For the feature calculation, co-occurrence matrices in the four directions are constructed according to formulas (6)-(9) above, and the four texture parameters (energy, entropy, moment of inertia, and correlation) are then computed from each of the four matrices to obtain the texture features of the image.
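A small numpy sketch of building one co-occurrence matrix (distance 1, direction 0°, counting both orderings as in the |l-n| = d definition of formula (6)) and computing the energy parameter from it; the function names are our own:

```python
import numpy as np

def glcm_0deg(img: np.ndarray, levels: int, d: int = 1) -> np.ndarray:
    """Gray-level co-occurrence counts for pixel pairs at distance d, 0 deg.
    Counts both (i, j) orderings, matching the |l - n| = d definition."""
    glcm = np.zeros((levels, levels), dtype=np.int64)
    left = img[:, :-d].ravel()
    right = img[:, d:].ravel()
    for a, b in zip(left, right):
        glcm[a, b] += 1
        glcm[b, a] += 1  # symmetric pair
    return glcm

def glcm_energy(glcm: np.ndarray) -> float:
    """Energy (angular second moment) of a normalized co-occurrence matrix."""
    p = glcm / glcm.sum()
    return float((p ** 2).sum())

img = np.array([[0, 0, 1],
                [0, 1, 1]])
g = glcm_0deg(img, levels=2)
energy = glcm_energy(g)
```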

(5)物候特征(5) Phenological characteristics

冬季落叶树凋零，本发明采用高分多时相遥感影像，结合植被的物候学理论将冬季影像加入深度学习模型中，用以区分常绿树和落叶树，对单夏季影像分类结果进行优化。Deciduous trees wither in winter. The present invention uses multi-temporal high-resolution remote sensing images and, drawing on vegetation phenology theory, adds winter images to the deep learning model to distinguish evergreen from deciduous trees and to refine the classification results of the single summer image.

三、U-Net+模型构建步骤:3. U-Net+ model construction steps:

基于构建的所述多维特征空间,面向城市绿地特征,依次利用图像边缘补零改进方式、批归一化处理改进方式以及正则化改进方式改进U-net模型,建立面向城市绿地并基于多维特征的U-Net+深度学习模型。Based on the constructed multi-dimensional feature space, facing the characteristics of urban green space, the U-net model is improved by successively using the improved method of image edge zero padding, the improved method of batch normalization and the improved method of regularization, and a multi-dimensional feature-oriented urban green space is established. U-Net+ deep learning model.

采用图像边缘填充策略，保证输入网络和输出网络的影像大小一致；为提高模型的泛化性和鲁棒性，添加Dropout正则化以及BN层，改进U-Net模型，可以保证模型在避免过拟合的同时加快训练速度；为剔除图像边缘预测精度较低对损失函数计算结果的影响，改进交叉熵函数提高预测精度。An image-edge padding strategy ensures that the input and output images of the network have the same size. To improve the generalization and robustness of the model, Dropout regularization and BN layers are added to the U-Net model, which avoids over-fitting while speeding up training. To remove the influence of the low prediction accuracy at image edges on the loss value, the cross-entropy function is improved to raise prediction accuracy.

同时，针对前四类特征，将高分辨遥感影像(真彩色/假彩色)与先前构建的植被、空间、对比度、纹理特征进行波段组合，得到4通道的多特征高分辨率影像。针对物候特征，冬季落叶植被进入休眠期，与夏季植被的光谱差异显著，结合植被物候学理论在模型中加入高分二号冬季影像，可以把冬季落叶树进行剔除，对夏季影像分类结果进行校正。将制作好的训练集输入至U-Net+深度学习模型中进行训练，得到最佳参数组合，在模型预测时采用膨胀预测获得分类结果。将获取的植被特征、空间特征、对比度特征、纹理特征和物候特征分类图进行投票法融合，进一步优选出最佳的比如北京主城区城市绿地预测结果。At the same time, for the first four feature types, the high-resolution remote sensing image (true color/false color) is band-combined with the previously constructed vegetation, spatial, contrast, and texture features to obtain a 4-channel multi-feature high-resolution image. As for the phenological feature, deciduous vegetation is dormant in winter and differs markedly in spectrum from summer vegetation; drawing on vegetation phenology theory, a Gaofen-2 winter image is added to the model so that winter-bare deciduous trees can be separated out and the summer classification results corrected. The prepared training set is input into the U-Net+ deep learning model for training to obtain the best parameter combination, and expansion prediction is used at inference to obtain the classification results. The classification maps obtained from the vegetation, spatial, contrast, texture, and phenological features are fused by voting to select the best prediction result, e.g., for urban green space in the main urban area of Beijing.

U-Net模型是一种改进的FCN结构,是目前扩展性最好的全连接神经网络。其结构清晰呈现字母U形,由左半边的压缩通道(Contracting Path)和右半边扩展通道(Expansive Path) 组成。U-Net巧妙的融合了编码-解码结构和跳跃网络的特点,压缩通道是一个编码器,用于逐层提取影像的特征。它采用2个卷积层和1个最大池化层的结构,每进行一次池化操作后特征图的维数就增加1倍。扩展通道是一个解码器,用于还原影像的位置信息,在扩展通道,先进行1次反卷积操作,使特征图的维数减半,然后拼接对应压缩通道裁剪得到的特征图,重新组成一个2倍大小的特征图,再采用2个卷积层进行特征提取,并重复这一结构。在最后的输出层,用2个卷积层将64维的特征图映射成2维的输出图。但是现有的U-Net模型还存在一些不足,本发明对原始的网络结构进行特定的改进,建立面向城市绿地并基于多维特征的U-Net+深度学习模型。The U-Net model is an improved FCN structure and is currently the most scalable fully connected neural network. Its structure clearly presents the letter U shape, which consists of the left half of the compression channel (Contracting Path) and the right half of the expansion channel (Expansive Path). U-Net cleverly combines the characteristics of the encoding-decoding structure and the skip network, and the compression channel is an encoder, which is used to extract the features of the image layer by layer. It adopts the structure of 2 convolutional layers and 1 maximum pooling layer, and the dimension of the feature map is doubled after each pooling operation. The expansion channel is a decoder, which is used to restore the position information of the image. In the expansion channel, a deconvolution operation is performed first to halve the dimension of the feature map, and then the feature map cropped by the corresponding compression channel is spliced and reconstituted. A feature map of 2 times the size, and then 2 convolutional layers are used for feature extraction, and the structure is repeated. In the final output layer, 2 convolutional layers are used to map the 64-dimensional feature map into a 2-dimensional output map. However, the existing U-Net model still has some deficiencies. The present invention makes specific improvements to the original network structure, and establishes a U-Net+ deep learning model based on multi-dimensional features for urban green space.

(1)图像边缘补零改进(1) Improvement of image edge zero padding

为保证输入图像与输出图像大小一致，采用图像边缘补零策略，如图4a和图4b所示对比图，图4a为valid方式，即没有补零的情况；图4b为same方式，即补零的情况。To keep the input and output images the same size, an image-edge zero-padding strategy is adopted, as compared in FIG. 4a and FIG. 4b: FIG. 4a shows the valid mode (no zero padding), and FIG. 4b shows the same mode (with zero padding).
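The effect of the two padding modes on spatial output size (stride 1) can be checked with simple arithmetic:

```python
def conv_output_size(n: int, k: int, padding: str) -> int:
    """Spatial output size of a stride-1 convolution.
    'valid': no padding, the output shrinks by k - 1;
    'same': zero-pad the edges so the output equals the input."""
    if padding == 'valid':
        return n - k + 1
    if padding == 'same':
        return n
    raise ValueError(padding)

# A 256x256 tile through a 3x3 convolution:
valid_size = conv_output_size(256, 3, 'valid')  # edge rows/columns are lost
same_size = conv_output_size(256, 3, 'same')    # zero padding preserves size
```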

(2)批归一化处理方法改进(2) Improvement of batch normalization processing method

深度学习在训练的过程中不断更新参数，因此除了输入层以外，训练参数更新将导致其后层输入数据分布的变化，这极易导致梯度弥散的发生，训练的速度也会降低。由于网络训练参数繁多，为了防止梯度消失和梯度爆炸，本发明采用批归一化(Batch Normalization, BN)处理方法，在卷积层后面加入批归一化处理，然后再把归一化后的数据输入上述的卷积层的下一层，从而防止梯度消失和梯度爆炸，提高网络训练速度。这样做可以更好的将上层网络的输出连接到最后一层上采样的结果中。这里的归一化层是一个可学习、有参数的网络层，如式1所示。Deep learning continuously updates parameters during training, so for every layer after the input layer, parameter updates change the distribution of that layer's input data, which easily causes gradient dispersion and slows training. Since the network has many training parameters, to prevent vanishing and exploding gradients the present invention adopts Batch Normalization (BN): a batch normalization step is inserted after each convolutional layer, and the normalized data are then fed into the next layer, preventing vanishing and exploding gradients and speeding up network training. This also better connects the output of the upper layers to the result of the final upsampling layer. The normalization layer here is a learnable, parameterized network layer, as shown in the following equation.

y^(k) = γ^(k) · x̂^(k) + β^(k)

其中，x̂^(k) = (x^(k) - E[x^(k)]) / √(Var[x^(k)]) 为标准差归一化结果；γ^(k)、β^(k)为学习参数，其中 γ^(k) = √(Var[x^(k)])，β^(k) = E[x^(k)]。Here x̂^(k) = (x^(k) - E[x^(k)]) / √(Var[x^(k)]) is the standardized result, and γ^(k) and β^(k) are learnable parameters, where γ^(k) = √(Var[x^(k)]) and β^(k) = E[x^(k)].
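A per-channel numpy sketch of the normalization step over one mini-batch (γ and β are set to the identity values here for clarity; names are illustrative):

```python
import numpy as np

def batch_norm(x: np.ndarray, gamma: float, beta: float,
               eps: float = 1e-8) -> np.ndarray:
    """y = gamma * (x - E[x]) / sqrt(Var[x] + eps) + beta over one batch."""
    x_hat = (x - x.mean()) / np.sqrt(x.var() + eps)
    return gamma * x_hat + beta

batch = np.array([1.0, 2.0, 3.0, 4.0])
y = batch_norm(batch, gamma=1.0, beta=0.0)
# y has (approximately) zero mean and unit variance
```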

(3)Dropout层改进(3) Dropout layer improvement

Dropout是深度学习中正则化的一种方法，是指在网络训练时，以一定的概率暂时丢弃神经元，防止模型对少量样本过度学习出现过拟合现象。在模型的每一次反卷积后都加入Dropout层，可以在一定程度上防止过拟合情况的发生，如图5a和图5b所示对比图，图5a为不添加Dropout层的原网络，图5b为添加Dropout层后的网络。Dropout is a regularization method in deep learning: during training, neurons are temporarily dropped with a certain probability to prevent the model from over-fitting a small number of samples. Adding a Dropout layer after each deconvolution of the model prevents over-fitting to a certain extent, as compared in FIG. 5a and FIG. 5b: FIG. 5a is the original network without Dropout layers, and FIG. 5b is the network with Dropout layers added.
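Training-time dropout can be sketched as a random mask; this toy uses the common inverted-dropout rescaling (an implementation detail assumed here, not specified in the patent):

```python
import numpy as np

def dropout(x: np.ndarray, rate: float,
            rng: np.random.Generator) -> np.ndarray:
    """Training-time (inverted) dropout: zero each activation with
    probability `rate` and rescale survivors by 1 / (1 - rate) so the
    expected activation value is unchanged."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
activations = np.ones(10_000)
out = dropout(activations, rate=0.5, rng=rng)
# Roughly half the units are zeroed; the mean activation stays close to 1
```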

改进后的面向城市绿地并基于多维特征的U-Net+深度学习模型的结构如图6所示，U-Net+模型结构的输入图像尺寸为256×256×c（c表示加入了多源数据后的影像总波段数），首先进行了一次3×3卷积和BN操作（水平黑色箭头），将输入数据转换为32维的特征图，然后再重复采用1个最大池化层和2个卷积层的结构。扩展通道是一个解码器，它能够逐渐还原影像的细节信息和位置信息。在扩展通道，先进行了1次反卷积操作（竖向空心箭头），使特征图的维数减半，然后拼接对应压缩通道的特征图（斜线填充框），重新组成一个2倍维数的特征图，再采用2个卷积层，并重复这一结构，拼接操作能够使网络学习多尺度和不同层级的特征，增加网络健壮性，有利于提高分类精度；在最后的输出层，采用一个卷积核为1×1大小的6维卷积层将上一层得到的特征图映射成6维输出特征图。图中所标的字母“D”代表在该位置的卷积层后添加丢弃神经元概率为0.5的Dropout层。线性激活函数使用了Relu，池化都选择了max-pooling，优化器使用了Adam。The structure of the improved U-Net+ deep learning model for urban green space based on multi-dimensional features is shown in FIG. 6. The input image size of the U-Net+ model is 256×256×c (c is the total number of image bands after the multi-source data are added). First, a 3×3 convolution and a BN operation (horizontal black arrows) convert the input data into a 32-dimensional feature map, and the structure of 1 max pooling layer followed by 2 convolutional layers is then applied repeatedly. The expansive channel is a decoder that gradually restores the detail and position information of the image. In the expansive channel, a deconvolution (vertical hollow arrows) first halves the dimension of the feature map; the corresponding feature map from the contracting channel (hatched boxes) is then concatenated to form a feature map of twice the dimension, followed by 2 convolutional layers, and this structure is repeated. The concatenation lets the network learn multi-scale features at different levels, increases robustness, and helps improve classification accuracy. In the final output layer, a 6-dimensional convolutional layer with a 1×1 kernel maps the feature map from the previous layer into a 6-dimensional output feature map. The letter "D" in the figure denotes a Dropout layer with a drop probability of 0.5 added after the convolutional layer at that position. ReLU is used as the activation function, max-pooling for all pooling layers, and Adam as the optimizer.

将训练样本的多维特征数据加入U-net+模型中进行训练。如图7所述的训练流程图，在模型训练中，共有350个256×256的遥感影像、多维特征数据和对应标签输入U-net+模型中，模型在编码部分提取出输入影像的特征，在解码部分恢复其空间位置与分辨率，在像素分类层对每一像素进行分类获得类别信息。将预测的分类图与输入的标签图在交叉熵函数中计算损失值（loss），将损失值传递至模型中进行反向传播，逐层优化模型中的参数。The multi-dimensional feature data of the training samples are fed into the U-Net+ model for training. As shown in the training flowchart of FIG. 7, a total of 350 remote sensing images of 256×256 pixels, their multi-dimensional feature data, and the corresponding labels are input into the U-Net+ model. The encoder extracts the features of the input image, the decoder restores its spatial position and resolution, and the pixel classification layer classifies each pixel to obtain class information. The loss between the predicted classification map and the input label map is computed with the cross-entropy function and back-propagated through the model to optimize its parameters layer by layer.

卷积层的激活函数采用RELU函数,卷积层的计算公式可表示为:The activation function of the convolutional layer adopts the RELU function, and the calculation formula of the convolutional layer can be expressed as:

x_k^l = f( W_k^l ⊗ x_k^(l-1) + b_k^l )

其中，W_k^l、b_k^l 分别为第l层第k维的权重值与偏置项，x_k^l 为第l层第k维的特征图，x_k^(l-1) 表示第l-1层第k维的输出特征图，⊗ 表示卷积运算。Here W_k^l and b_k^l are the weight and bias of the k-th dimension of layer l, x_k^l is the feature map of the k-th dimension of layer l, x_k^(l-1) is the output feature map of the k-th dimension of layer l-1, and ⊗ denotes the convolution operation.

深度学习模型一般用损失函数计算地面真实数据与预测概率之间的loss值来量化两者之间的差距,当loss值越小,说明分类越准确。本发明使用分类交叉熵函数(Categorical Crossentropy)计算loss值,公式为:The deep learning model generally uses the loss function to calculate the loss value between the ground truth data and the predicted probability to quantify the gap between the two. When the loss value is smaller, the classification is more accurate. The present invention uses the Categorical Crossentropy function (Categorical Crossentropy) to calculate the loss value, and the formula is:

Loss = - Σ_i y_i · log(ŷ_i)

其中，y_i 为地面真实标签，ŷ_i 为模型预测的类别概率。Here y_i is the ground-truth label and ŷ_i is the predicted class probability.

模型训练的过程即优化loss函数、缩小loss值的过程,即后向传播。本发明优选采用 Adam优化算法进行模型训练,逐层更新模型中的参数,Adam算法容易实现,计算效率高且内存需求低。当loss值达到一定阈值后,训练停止。The process of model training is the process of optimizing the loss function and reducing the loss value, that is, backward propagation. In the present invention, the Adam optimization algorithm is preferably used for model training, and the parameters in the model are updated layer by layer, and the Adam algorithm is easy to implement, with high computational efficiency and low memory requirements. When the loss value reaches a certain threshold, the training stops.
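A numpy sketch of the categorical cross-entropy loss over a couple of toy "pixels" (the probability values are invented), showing that a confident correct prediction yields a smaller loss than a wrong one:

```python
import numpy as np

def categorical_crossentropy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean over pixels of -sum_c y_c * log(p_c); y_true is one-hot."""
    eps = 1e-12  # guard against log(0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1)))

# Two "pixels", three classes: one confident-correct and one wrong prediction
y_true = np.array([[1, 0, 0], [0, 1, 0]], dtype=np.float64)
good = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]])
bad = np.array([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]])
good_loss = categorical_crossentropy(y_true, good)
bad_loss = categorical_crossentropy(y_true, bad)
```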

U-net+模型训练完成后,每层的参数都取得了最优值,此时再次使用测试集图像进行前向传播即为模型预测的过程。如图8所示的模型预测结果与真实地表对比。After the training of the U-net+ model is completed, the parameters of each layer have achieved the optimal value. At this time, the forward propagation of the test set image is used again, which is the process of model prediction. The model prediction results shown in Figure 8 are compared with the real surface.

四、影像后处理步骤:Four, image post-processing steps:

对U-Net+模型预测结果进行后处理,优选利用全连接CRFs影像后处理方法对U-Net+模型预测结果进行后处理,结合原始高分辨遥感影像中所有像素之间的关系对分类结果进行处理,从而对预测结果优化,得到城市绿地高分辨率遥感监测成果。Post-processing the prediction results of the U-Net+ model, preferably using the fully connected CRFs image post-processing method to post-process the prediction results of the U-Net+ model, and combine the relationship between all pixels in the original high-resolution remote sensing image to process the classification results. Therefore, the prediction results are optimized, and the high-resolution remote sensing monitoring results of urban green space are obtained.

全连接CRFs将图像中所有的像素两两连接,描述了每个像素与其他所有像素之间的关系,并用像素间的颜色和实际相对距离来衡量像素间的差距,同时鼓励差距大的像素分配不同标签,使全连接CRFs能够尽量在边界处分割,不容易造成边界的过度平滑。Fully connected CRFs connect all pixels in the image pairwise, describe the relationship between each pixel and all other pixels, and use the color and actual relative distance between pixels to measure the gap between pixels, while encouraging the allocation of pixels with large gaps Different labels enable the fully connected CRFs to be segmented at the boundary as much as possible, and it is not easy to cause excessive smoothing of the boundary.

在全连接CRFs中,预测的标签值x的能量定义为:In fully connected CRFs, the energy of the predicted label value x is defined as:

E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j)

其中,i,j∈{1,2,...N,N为图像像素总个数},ψu(xi)为一元势能,ψp(xi,xj)为二元势能。一元势能ψu(xi)表示网络模型对图像中每一个像素i独立预测得到的类别概率分布图,它包含很多噪音和不连续性。二元势能ψp(xi,xj)表示一个全连接图,连接了图像中所有像素对,在实际应用过程中将原始影像提供的信息作为二元势能,它的模型表达示式为:Among them, i,j∈{1,2,...N,N is the total number of image pixels}, ψ u (x i ) is the unary potential energy, and ψ p (x i ,x j ) is the binary potential energy. The unary potential energy ψ u ( xi ) represents the class probability distribution map independently predicted by the network model for each pixel i in the image, which contains a lot of noise and discontinuity. The binary potential energy ψ p (x i ,x j ) represents a fully connected graph that connects all pixel pairs in the image. In the actual application process, the information provided by the original image is used as the binary potential energy. Its model expression is:

ψ_p(x_i, x_j) = u(x_i, x_j) Σ_m ω^(m) k^(m)(f_i, f_j)

其中，u(x_i, x_j)为标签兼容项，其约束了像素间传导的条件，即只有在相同标签的条件下，能量才可以相互传导。ω^(m)为权值参数，k^(m)(f_i, f_j)为特征函数，公式如下：Here u(x_i, x_j) is the label compatibility term, which constrains conduction between pixels: energy is transferred only between pixels with the same label. ω^(m) is a weight parameter and k^(m)(f_i, f_j) is the feature function, given by:

k(f_i, f_j) = ω^(1) exp( -|p_i - p_j|²/(2θ_α²) - |I_i - I_j|²/(2θ_β²) ) + ω^(2) exp( -|p_i - p_j|²/(2θ_γ²) )

其中，I_i、I_j为颜色向量，p_i、p_j为位置向量。特征函数 k(f_i, f_j) 以特征的形式表示了不同像素之间的“亲密度”，第一项被称作表面核，第二项被称作平滑核。在全连接CRFs进行影像后处理的实际过程中，一元势能是概率分布图，即每一个像素的类别分配概率，由网络模型输出的特征图经过softmax函数运算后得到；二元势能中的位置信息和颜色信息由原始影像提供。Here I_i and I_j are color vectors and p_i and p_j are position vectors. The feature function k(f_i, f_j) expresses the "affinity" between different pixels in terms of features; the first term is called the surface (appearance) kernel and the second the smoothness kernel. In the actual image post-processing with fully connected CRFs, the unary potential is the probability distribution map, i.e., the class assignment probability of each pixel, obtained by applying the softmax function to the feature map output by the network model; the position and color information in the pairwise potential is provided by the original image.

The fully connected CRFs method is used to post-process the U-Net+ model prediction results. Fully connected CRFs can process the classification results using the relationships among all pixels of the original image, correcting misclassification, refining the edges of ground objects and optimizing the predictions, thereby producing high-accuracy high-resolution remote sensing monitoring results for urban green space.

The present invention also relates to a high-resolution remote sensing monitoring system for urban green space. The system corresponds to the monitoring method described above and can be understood as a system that implements that method. As shown in Figure 9, its structure comprises a training sample set construction module, a multi-dimensional feature space construction module, a U-Net+ model construction module and an image post-processing module connected in sequence, with the training sample set construction module also connected to the U-Net+ model construction module. The modules work together: after a sufficient training sample data set has been built, a multi-dimensional feature space is constructed to enrich the features, a U-Net+ deep learning model based on multi-dimensional features and oriented to urban green space is established, and image post-processing is applied. This improves the generalization and robustness of the monitoring method and thus the accuracy of high-resolution remote sensing monitoring of urban green space.

The training sample set construction module selects sample areas according to the features of high-resolution remote sensing images and builds a training sample data set within those areas. The multi-dimensional feature space construction module applies data augmentation, random cropping and feature computation to the training sample data set of the training sample set construction module, constructing a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features: the normalized difference vegetation index is constructed as the vegetation feature of urban green space; an nDSM digital surface model is constructed as its spatial feature; a local contrast feature map computed with the AC algorithm serves as its contrast feature; an image texture feature map obtained from the gray-level co-occurrence matrix serves as its texture feature; and winter imagery of the same period is added as its phenological feature. The U-Net+ model construction module, based on the multi-dimensional feature space built by the multi-dimensional feature space construction module and oriented to urban green space features, improves the U-Net model successively with an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, establishing a U-Net+ deep learning model for urban green space based on multi-dimensional features; the multi-dimensional feature data of the training samples are then fed into the U-Net+ deep learning model for training, after which the spatial distribution of urban green space is predicted to obtain the U-Net+ model prediction results. The image post-processing module post-processes the U-Net+ model prediction results to obtain the high-resolution remote sensing monitoring results of urban green space.
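As an illustration of the vegetation feature above, the normalized difference vegetation index is computed band-wise as NDVI = (NIR − Red) / (NIR + Red). A minimal sketch follows; the function name, band arguments and the epsilon guard are assumptions of this illustration, not the patent's implementation:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against division
    by zero over dark pixels (the guard is an implementation choice)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Dense vegetation yields values close to 1, bare soil and built-up surfaces values near or below 0, which is what makes the index useful as a vegetation feature channel.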

Preferably, the training sample set construction module, according to the features of high-resolution remote sensing images, takes some typical areas as sample areas and extracts samples from the high-resolution imagery by visual interpretation; the vector files obtained by visual interpretation are converted from features to raster to obtain label images, ground-truth surface images of the locations of different vegetation types in the remote sensing imagery are recorded, and the training sample data set is constructed (Figure 3 shows part of the data set imagery and the corresponding label images). The multi-dimensional feature space construction module first processes the imagery with data augmentation to highlight effective spectral information, then cuts the label images and the augmented remote sensing images by random cropping, and performs feature computation on the cropped label images and remote sensing images to construct the multi-dimensional feature space.
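The aligned random-cropping step can be sketched as follows. This is a hypothetical helper: the patch size, crop count and band layout are assumptions, not values from the patent; the key point it demonstrates is that image and label patches must be cut with the same offsets so they stay pixel-aligned:

```python
import numpy as np

def random_crop_pairs(image, label, size=256, n_crops=8, rng=None):
    """Cut aligned random patches from a remote sensing image (H, W, B)
    and its rasterized label map (H, W)."""
    rng = np.random.default_rng(rng)
    H, W = label.shape
    patches = []
    for _ in range(n_crops):
        # Same random offset for image and label keeps the pair aligned.
        y = rng.integers(0, H - size + 1)
        x = rng.integers(0, W - size + 1)
        patches.append((image[y:y + size, x:x + size, :].copy(),
                        label[y:y + size, x:x + size].copy()))
    return patches
```

The augmented imagery and label rasters described above would be passed through such a helper to produce fixed-size training pairs for the network.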

Preferably, the image edge zero-padding improvement in the U-Net+ model construction module is shown in Figure 4b. The batch normalization (BN) improvement adds batch normalization after each convolution layer of the model and feeds the normalized data into the next layer, so that the output of the upper network layers is connected to the result of the final upsampling layer; this prevents vanishing and exploding gradients and speeds up network training. The regularization improvement (dropout), shown in Figure 5b, adds a dropout layer with a specified probability of dropping neurons after each deconvolution of the model.
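The two improvements can be illustrated with minimal NumPy versions of the underlying operations. These are sketches of the math only (training-mode statistics, no learned running averages), not the patent's network code:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis: zero-mean, unit-variance
    features, followed by an affine rescaling."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def dropout(x, p=0.5, rng=None, training=True):
    """Inverted dropout: zero each unit with probability p and rescale
    survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

In a framework implementation the BN layer would sit between each convolution and its activation, and the dropout layer after each deconvolution, as the text describes.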

Preferably, the structure of the multi-dimensional-feature U-Net+ deep learning model is shown in Figure 6 and the training procedure in Figure 7. In the U-Net+ model construction module, feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training means inputting the remote sensing images, multi-dimensional feature data and corresponding label images of the training samples into the model: the U-Net+ deep learning model extracts the features of the input imagery in the encoding part, restores their spatial position and resolution in the decoding part, and classifies every pixel in the pixel classification layer to obtain class information. A loss value between the predicted classification map and the input label map is computed with the cross-entropy function and back-propagated through the U-Net+ deep learning model, optimizing the model parameters layer by layer; training stops once the loss value reaches a given threshold.
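The per-pixel cross-entropy loss described above can be sketched in a few lines of NumPy. The function names are hypothetical and a real pipeline would use a framework's built-in loss; the sketch only shows the softmax-plus-negative-log-likelihood computation that drives back-propagation:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pixel_cross_entropy(logits, labels, eps=1e-8):
    """Mean per-pixel cross-entropy between class scores (N, C) and an
    integer label vector (N,)."""
    probs = softmax(logits)
    n = labels.shape[0]
    # Negative log-probability of the true class at each pixel.
    return -np.log(probs[np.arange(n), labels] + eps).mean()
```

Pixels flattened from the predicted classification map and the rasterized label map would be fed in as `logits` and `labels`; the gradient of this loss with respect to the logits is what is propagated back through the decoder and encoder.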

Preferably, the image post-processing module post-processes the U-Net+ model prediction results with the fully connected CRFs image post-processing method, processing the classification results using the relationships among all pixels of the original high-resolution remote sensing imagery, thereby optimizing the prediction results and obtaining the high-resolution remote sensing monitoring results of urban green space.

It should be pointed out that the specific embodiments described above allow those skilled in the art to understand the invention more fully, but do not limit it in any way. Therefore, although this specification has described the invention in detail with reference to the drawings and embodiments, those skilled in the art should understand that the invention may still be modified or equivalently substituted; in short, all technical solutions and improvements that do not depart from the spirit and scope of the invention shall fall within the protection scope of the patent.

Claims (10)

1. A high-resolution remote sensing monitoring method for urban green space, characterized in that it comprises the following steps: a training sample set construction step: according to the features of high-resolution remote sensing images, selecting sample areas and constructing a training sample data set; a multi-dimensional feature space construction step: performing data augmentation, random cropping and feature computation on the training sample data set to construct a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features, wherein the normalized difference vegetation index is constructed as the vegetation feature of urban green space, an nDSM digital surface model is constructed as the spatial feature of urban green space, a local contrast feature map computed with the AC algorithm serves as the contrast feature of urban green space, an image texture feature map obtained from the gray-level co-occurrence matrix serves as the texture feature of urban green space, and winter imagery of the same period is added as the phenological feature of urban green space; a U-Net+ model construction step: based on the constructed multi-dimensional feature space and oriented to urban green space features, improving the U-Net model successively with an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, thereby establishing a U-Net+ deep learning model for urban green space based on multi-dimensional features; then feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training, and after training is complete, predicting the spatial distribution of urban green space to obtain the U-Net+ model prediction results; an image post-processing step: post-processing the U-Net+ model prediction results to obtain high-resolution remote sensing monitoring results of urban green space.
2. The method according to claim 1, characterized in that in the training sample set construction step, according to the features of high-resolution remote sensing images, some typical areas are taken as sample areas, samples are extracted from the high-resolution remote sensing imagery by visual interpretation, the vector files obtained by visual interpretation are converted from features to raster to obtain label images, ground-truth surface images of the locations of different vegetation types in the remote sensing imagery are recorded, and the training sample data set is constructed; in the multi-dimensional feature space construction step, the imagery is first processed with data augmentation to highlight effective spectral information, then the label images and the augmented remote sensing images are cut by random cropping, and feature computation is performed on the cropped label images and remote sensing images to construct the multi-dimensional feature space.
3. The method according to claim 1 or 2, characterized in that the batch normalization improvement in the U-Net+ model construction step adds batch normalization after a convolution layer of the model and feeds the normalized data into the next layer of that convolution layer; the regularization improvement adds, after each deconvolution of the model, a dropout layer with a specified probability of dropping neurons.
4. The method according to claim 3, characterized in that in the U-Net+ model construction step, feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training means inputting the remote sensing images, multi-dimensional feature data and corresponding label images of the training samples into the U-Net+ deep learning model; the model extracts the features of the input imagery in the encoding part, restores their spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain class information; a loss value between the predicted classification map and the input label map is computed with the cross-entropy function and back-propagated through the U-Net+ deep learning model to optimize the model parameters layer by layer, and training stops when the loss value reaches a given threshold.
5. The method according to claim 1 or 2, characterized in that the image post-processing step post-processes the U-Net+ model prediction results with the fully connected CRFs image post-processing method, processing the classification results using the relationships among all pixels of the original high-resolution remote sensing imagery, thereby optimizing the prediction results and obtaining high-resolution remote sensing monitoring results of urban green space.
6. A high-resolution remote sensing monitoring system for urban green space, characterized in that it comprises a training sample set construction module, a multi-dimensional feature space construction module, a U-Net+ model construction module and an image post-processing module connected in sequence, the training sample set construction module also being connected to the U-Net+ model construction module, wherein: the training sample set construction module selects sample areas according to the features of high-resolution remote sensing images and constructs a training sample data set; the multi-dimensional feature space construction module performs data augmentation, random cropping and feature computation on the training sample data set of the training sample set construction module, constructing a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features, wherein the normalized difference vegetation index is constructed as the vegetation feature of urban green space, an nDSM digital surface model is constructed as the spatial feature of urban green space, a local contrast feature map computed with the AC algorithm serves as the contrast feature of urban green space, an image texture feature map obtained from the gray-level co-occurrence matrix serves as the texture feature of urban green space, and winter imagery of the same period is added as the phenological feature of urban green space; the U-Net+ model construction module, based on the multi-dimensional feature space built by the multi-dimensional feature space construction module and oriented to urban green space features, improves the U-Net model successively with an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, establishing a U-Net+ deep learning model for urban green space based on multi-dimensional features; it then feeds the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training and, after training is complete, predicts the spatial distribution of urban green space to obtain the U-Net+ model prediction results; the image post-processing module post-processes the U-Net+ model prediction results to obtain high-resolution remote sensing monitoring results of urban green space.
7. The system according to claim 6, characterized in that the training sample set construction module, according to the features of high-resolution remote sensing images, takes some typical areas as sample areas, extracts samples from the high-resolution remote sensing imagery by visual interpretation, converts the vector files obtained by visual interpretation from features to raster to obtain label images, records ground-truth surface images of the locations of different vegetation types in the remote sensing imagery, and constructs the training sample data set; the multi-dimensional feature space construction module first processes the imagery with data augmentation to highlight effective spectral information, then cuts the label images and the augmented remote sensing images by random cropping, and performs feature computation on the cropped label images and remote sensing images to construct the multi-dimensional feature space.
8. The system according to claim 6 or 7, characterized in that the batch normalization improvement in the U-Net+ model construction module adds batch normalization after a convolution layer of the model and feeds the normalized data into the next layer of that convolution layer; the regularization improvement adds, after each deconvolution of the model, a dropout layer with a specified probability of dropping neurons.
9. The system according to claim 8, characterized in that in the U-Net+ model construction module, feeding the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training means inputting the remote sensing images, multi-dimensional feature data and corresponding label images of the training samples into the U-Net+ deep learning model; the model extracts the features of the input imagery in the encoding part, restores their spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain class information; a loss value between the predicted classification map and the input label map is computed with the cross-entropy function and back-propagated through the U-Net+ deep learning model to optimize the model parameters layer by layer, and training stops when the loss value reaches a given threshold.
10. The system according to claim 6 or 7, characterized in that the image post-processing module post-processes the U-Net+ model prediction results with the fully connected CRFs image post-processing method, processing the classification results using the relationships among all pixels of the original high-resolution remote sensing imagery, thereby optimizing the prediction results and obtaining high-resolution remote sensing monitoring results of urban green space.
CN202010386282.9A 2020-05-09 2020-05-09 High score remote sensing monitoring method and system for urban green space Active CN111914611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010386282.9A CN111914611B (en) 2020-05-09 2020-05-09 High score remote sensing monitoring method and system for urban green space


Publications (2)

Publication Number Publication Date
CN111914611A true CN111914611A (en) 2020-11-10
CN111914611B CN111914611B (en) 2022-11-15

Family

ID=73237554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010386282.9A Active CN111914611B (en) 2020-05-09 2020-05-09 High score remote sensing monitoring method and system for urban green space

Country Status (1)

Country Link
CN (1) CN111914611B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019220176A (en) * 2018-06-15 2019-12-26 大学共同利用機関法人情報・システム研究機構 Image processing device and method
CN109446992A (en) * 2018-10-30 2019-03-08 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method and system, storage medium, electronic equipment based on deep learning
CN109919206A (en) * 2019-02-25 2019-06-21 武汉大学 A land cover classification method for remote sensing images based on fully atrous convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI ZHANG等: "A Comparative Study of U-Nets with Various Convolution Components for Building Extraction", 《 2019 JOINT URBAN REMOTE SENSING EVENT (JURSE)》 *
曹留霞: "基于深度学习算法的绿地信息提取及应用研究", 《中国优秀硕士学位论文全文数据库基础科学辑(月刊)》 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580484A (en) * 2020-12-14 2021-03-30 中国农业大学 Corn straw coverage identification method and device based on deep learning remote sensing image
CN112580484B (en) * 2020-12-14 2024-03-29 中国农业大学 Method and device for identifying corn stalk coverage in remote sensing images based on deep learning
CN112651145A (en) * 2021-02-05 2021-04-13 河南省航空物探遥感中心 Urban diversity index analysis and visual modeling based on remote sensing data inversion
CN112651145B (en) * 2021-02-05 2021-09-10 河南省航空物探遥感中心 Urban diversity index analysis and visual modeling based on remote sensing data inversion
CN113011294A (en) * 2021-03-08 2021-06-22 中国科学院空天信息创新研究院 Method, computer equipment and medium for identifying circular sprinkling irrigation land based on remote sensing image
CN113011294B (en) * 2021-03-08 2023-11-07 中国科学院空天信息创新研究院 Method, computer equipment and media for identifying circular sprinkler irrigation fields based on remote sensing images
CN112990024A (en) * 2021-03-18 2021-06-18 深圳博沃智慧科技有限公司 Method for monitoring urban raise dust
CN112990024B (en) * 2021-03-18 2024-03-26 深圳博沃智慧科技有限公司 Urban dust monitoring method
CN112990365A (en) * 2021-04-22 2021-06-18 宝略科技(浙江)有限公司 Training method of deep learning model for semantic segmentation of remote sensing image
CN113705326A (en) * 2021-07-02 2021-11-26 重庆交通大学 Urban construction land identification method based on full convolution neural network
CN113705326B (en) * 2021-07-02 2023-12-15 重庆交通大学 A method for identifying urban construction land based on fully convolutional neural network
CN113822220A (en) * 2021-10-09 2021-12-21 海南长光卫星信息技术有限公司 A building detection method and system
CN114463343A (en) * 2021-12-20 2022-05-10 山东华宇航天空间技术有限公司 Method and device for automatically extracting contour of coastal zone culture factory
CN114283286A (en) * 2021-12-30 2022-04-05 北京航天泰坦科技股份有限公司 Remote sensing image segmentation method, device and electronic device
CN114359636B (en) * 2022-01-10 2025-04-18 北京理工大学重庆创新中心 A method for calibrating hyperspectral features using global-local channel attention module
CN114359636A (en) * 2022-01-10 2022-04-15 北京理工大学重庆创新中心 Method for calibrating hyperspectral features by adopting global-local channel attention module
CN114445743A (en) * 2022-01-20 2022-05-06 大连东软教育科技集团有限公司 Automatic detection method, system and storage medium for class front row seating rate
CN114529721B (en) * 2022-02-08 2024-05-10 山东浪潮科学研究院有限公司 Urban remote sensing image vegetation coverage recognition method based on deep learning
CN114529721A (en) * 2022-02-08 2022-05-24 山东浪潮科学研究院有限公司 Urban remote sensing image vegetation coverage identification method based on deep learning
CN115775355A (en) * 2022-09-09 2023-03-10 南京逐鹿景观工程有限公司 A kind of phenological monitoring method of germplasm resource bank of woody ornamental plants
CN115410045A (en) * 2022-09-18 2022-11-29 吉林大学第一医院 Voxel visualization method and device, electronic equipment and readable storage medium
CN115761518A (en) * 2023-01-10 2023-03-07 云南瀚哲科技有限公司 Crop classification method based on remote sensing image data
CN118627750A (en) * 2024-07-04 2024-09-10 苏州市中遥数字科技有限公司 A multi-dimensional image processing system based on high-resolution remote sensing data

Also Published As

Publication number Publication date
CN111914611B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN111914611B (en) High score remote sensing monitoring method and system for urban green space
CN113421269B (en) Real-time semantic segmentation method based on double-branch deep convolutional neural network
CN111259828B (en) Recognition method based on multi-features of high-resolution remote sensing images
CN110544251B (en) Dam crack detection method based on multi-migration learning model fusion
CN111523521B (en) Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN107292339B (en) A high-resolution landform classification method for UAV low-altitude remote sensing images based on feature fusion
CN111639719B (en) Footprint image retrieval method based on space-time motion and feature fusion
CN112232280A (en) Hyperspectral image classification method based on self-encoder and 3D depth residual error network
CN102902956B (en) A kind of ground visible cloud image identifying processing method
CN112101271A (en) Hyperspectral remote sensing image classification method and device
CN117765373A (en) Lightweight road crack detection method and system with self-adaptive crack size
Ma et al. A novel adaptive hybrid fusion network for multiresolution remote sensing images classification
CN110309780A (en) Rapid Supervision and Recognition of House Information in High Resolution Images Based on BFD-IGA-SVM Model
CN112200090B (en) Hyperspectral image classification method based on cross-grouping space-spectral feature enhancement network
CN106096655B (en) A Convolutional Neural Network Based Aircraft Detection Method in Optical Remote Sensing Images
CN110399840A (en) A Fast Lawn Semantic Segmentation and Boundary Detection Method
CN110309781A (en) Remote sensing recognition method for house damage based on multi-scale spectral texture adaptive fusion
CN109858557B (en) Novel semi-supervised classification method for hyperspectral image data
CN112800968B (en) HOG blocking-based feature histogram fusion method for identifying identity of pigs in drinking area
CN111291826A (en) Pixel-by-pixel classification of multi-source remote sensing images based on correlation fusion network
CN112861802B (en) Fully automated crop classification method based on spatiotemporal deep learning fusion technology
CN113420794A (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
CN110516648B (en) Identification method of ramie plant number based on UAV remote sensing and pattern recognition
CN109829507B (en) Aerial photography of high-voltage transmission line environmental detection methods
CN114529730A (en) Convolutional neural network ground material image classification method based on LBP (local binary pattern) features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant