WO2023000159A1 - Semi-supervised classification method, apparatus and device for high-resolution remote sensing images, and medium - Google Patents
Semi-supervised classification method, apparatus and device for high-resolution remote sensing images, and medium
- Publication number
- WO2023000159A1 (PCT/CN2021/107289; CN2021107289W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- remote sensing
- classification
- model
- segmentation model
- sensing images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Definitions
- The invention relates to the field of remote sensing image classification, and in particular to a method, device, equipment and medium for semi-supervised classification of high-resolution remote sensing images.
- As one of the most important remote sensing satellite products, high-resolution remote sensing images are widely used in agricultural production estimation, agricultural risk assessment, mineral exploration, land and resources survey, and other fields. In recent years, high-resolution remote sensing satellites have been launched at a rapid pace, and image data sources have increased dramatically, providing abundant data resources for downstream applications.
- The processing of remote sensing data generally includes preprocessing and advanced analysis. In downstream applications, image classification is the fundamental task for understanding land cover.
- The resolution of high-resolution remote sensing images is better than 1 meter (i.e., a ground sample distance of 1 meter or finer), so vegetation, water bodies, buildings and other ground objects can be clearly distinguished.
- Using high-resolution remote sensing images to classify ground objects makes it possible to grasp the details of ground objects and understand their types.
- There are many methods for remote sensing image classification, which can be divided into supervised, unsupervised and semi-supervised classification according to how training samples are used.
- Most conventional supervised classification methods start from the spectral characteristics of remote sensing images and exploit the spectral differences between object types to distinguish pixels.
- However, high-resolution remote sensing images usually contain only four bands (RGB and near-infrared) and therefore carry limited spectral information, so purely spectral classification methods yield low classification accuracy.
- In view of this, the present invention is proposed to provide a semi-supervised classification method, device, equipment and medium for high-resolution remote sensing images that overcome, or at least partially solve, the above problems.
- A semi-supervised classification method for high-resolution remote sensing images, comprising: preprocessing the remote sensing images; making a ground object classification sample set from the processed remote sensing images; constructing a remote sensing image semantic segmentation model based on the Unet++ network, and training the semantic segmentation model with the ground object classification sample set; constructing a threshold segmentation model based on the near-infrared band; performing model fusion on the semantic segmentation model and the threshold segmentation model to obtain a classification model; and using the classification model to classify the remote sensing images to be classified.
- The preprocessing of the remote sensing images includes: performing panchromatic and multispectral image fusion, radiometric correction, atmospheric correction, geometric correction and other processing on the remote sensing images to obtain high-resolution remote sensing images with four bands (RGB and near-infrared).
- The preparation of a ground object classification sample set from the processed remote sensing images includes:
- The processed remote sensing image is sliced, and the label raster at the corresponding position is cut out at the same time, so as to obtain a group of image patches and labels of the same size as the ground object classification sample set.
- The Unet++ network adopts multi-level upsampling and skip connections to extract features at multiple levels.
- The Unet++ network includes a downsampling layer, an upsampling layer, and an intermediate layer that further extracts features from the downsampling layer;
- The feature extraction part of the EfficientB4 model is added to the downsampling layer.
- The construction of a threshold segmentation model based on the near-infrared band includes:
- The ground object classification sample set is matched against the near-infrared band threshold histogram, and the threshold ranges of water bodies and vegetation in the histogram are selected to construct the threshold segmentation model.
- The model fusion of the semantic segmentation model and the threshold segmentation model to obtain a classification model includes:
- When the output results of the semantic segmentation model and the threshold segmentation model are both water body or both vegetation, or when the output of the semantic segmentation model is bare soil or impervious surface and the output of the threshold segmentation model is 'other', the classification result is determined to be correct; the classification results determined to be correct are used as a new ground object classification sample set, and the semantic segmentation model is further trained by transfer learning to obtain the classification model.
- An embodiment of the present invention also provides a semi-supervised classification device for high-resolution remote sensing images, including:
- An image processing module, configured to preprocess the remote sensing images;
- A sample set making module, configured to make a ground object classification sample set from the processed remote sensing images;
- A first model building module, configured to build a remote sensing image semantic segmentation model based on the Unet++ network and train the semantic segmentation model with the ground object classification sample set;
- A second model building module, configured to build a threshold segmentation model based on the near-infrared band;
- A model fusion module, configured to perform model fusion on the semantic segmentation model and the threshold segmentation model to obtain a classification model;
- An image classification module, configured to use the classification model to classify the remote sensing images to be classified.
- An embodiment of the present invention also provides a semi-supervised classification device for high-resolution remote sensing images, including a processor and a memory, wherein, when the processor executes the computer program stored in the memory, the above-mentioned semi-supervised classification method for high-resolution remote sensing images is implemented.
- An embodiment of the present invention also provides a computer-readable storage medium for storing a computer program, wherein, when the computer program is executed by a processor, the above-mentioned semi-supervised classification method for high-resolution remote sensing images provided in the embodiment of the present invention is implemented.
- A semi-supervised classification method for high-resolution remote sensing images includes: preprocessing the remote sensing images; making a ground object classification sample set from the processed remote sensing images; constructing a remote sensing image semantic segmentation model based on the Unet++ network, and training the semantic segmentation model with the ground object classification sample set; constructing a threshold segmentation model based on the near-infrared band; performing model fusion on the semantic segmentation model and the threshold segmentation model to obtain a classification model; and using the classification model to classify the remote sensing images to be classified.
- The present invention builds a remote sensing image semantic segmentation model based on the Unet++ network and a threshold segmentation model based on the near-infrared band, then uses a multi-model fusion method to fuse the texture information of the remote sensing image with the spectral information of the near-infrared band, and then classifies high-resolution remote sensing images, which can improve classification accuracy.
- The present invention also provides a corresponding device, equipment, and computer-readable storage medium for the semi-supervised classification method for high-resolution remote sensing images, which further increases the practicality of the above method. The device, equipment, and computer-readable storage medium have corresponding advantages.
- Fig. 1 shows a flow chart of the semi-supervised classification method for high-resolution remote sensing images provided by an embodiment of the present invention;
- Fig. 2 shows a schematic diagram of the semi-supervised classification method for high-resolution remote sensing images provided by an embodiment of the present invention;
- Fig. 3 shows a schematic structural diagram of the Unet++ network provided by an embodiment of the present invention;
- Fig. 4 shows a schematic structural diagram of the convolution block in the Unet++ network provided by an embodiment of the present invention;
- Fig. 5 shows a schematic structural diagram of the residual block in the Unet++ network provided by an embodiment of the present invention;
- Fig. 6 shows the near-infrared band threshold histogram provided by an embodiment of the present invention;
- Fig. 7 shows a display diagram of classification results provided by an embodiment of the present invention;
- Fig. 8 shows a schematic structural diagram of the semi-supervised classification device for high-resolution remote sensing images provided by an embodiment of the present invention.
- The present invention provides a semi-supervised classification method for high-resolution remote sensing images, as shown in Figure 1 and Figure 2, comprising the following steps: S101, preprocessing the remote sensing images; S102, making a ground object classification sample set from the processed remote sensing images; S103, constructing a remote sensing image semantic segmentation model based on the Unet++ network and training it with the ground object classification sample set; S104, constructing a threshold segmentation model based on the near-infrared band; S105, performing model fusion on the semantic segmentation model and the threshold segmentation model to obtain a classification model; S106, using the classification model to classify the remote sensing images to be classified.
- In this way, a remote sensing image semantic segmentation model based on the Unet++ network and a threshold segmentation model based on the near-infrared band are constructed, and a multi-model fusion method is then used to fuse the texture information of the remote sensing image with the spectral information of the near-infrared band; classifying high-resolution remote sensing images in this way can improve the classification accuracy.
- Step S101 preprocesses the remote sensing images, which may specifically include: performing panchromatic and multispectral image fusion, radiometric correction, atmospheric correction, geometric correction and other processing on the remote sensing images to obtain high-resolution remote sensing images with four bands (RGB and near-infrared).
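- The embodiment does not prescribe a particular fusion algorithm; purely as an illustration, a minimal Brovey-style panchromatic and multispectral fusion could look like the following sketch, where the array names and the use of NumPy are assumptions.

```python
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Illustrative Brovey-style pan-sharpening sketch.

    ms  : multispectral stack resampled to the panchromatic grid, shape (bands, H, W)
    pan : panchromatic band, shape (H, W)
    Returns a sharpened stack with the same shape as `ms`.
    """
    ms = ms.astype(np.float32)
    pan = pan.astype(np.float32)
    intensity = ms.mean(axis=0) + 1e-6          # avoid division by zero
    ratio = pan / intensity                      # per-pixel injection ratio
    return ms * ratio[None, :, :]
```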
- Step S102 makes a ground object classification sample set from the processed remote sensing images, which may specifically include:
- The processed remote sensing image is sliced; specifically, the remote sensing image is cut into small slices for training.
- The slice size can be set to 512×512, and the slices do not overlap; at the same time, the label at the corresponding position is cut out.
- A set of image patches and labels of the same size is thus obtained as the ground object classification sample set.
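- A minimal sketch of this non-overlapping 512×512 tiling is shown below; the array shapes, function name and the use of NumPy are assumptions, as the embodiment does not specify an implementation.

```python
import numpy as np

def make_tiles(image: np.ndarray, label: np.ndarray, size: int = 512):
    """Cut an image (C, H, W) and its label raster (H, W) into non-overlapping
    size x size patches; incomplete border tiles are skipped in this sketch."""
    _, h, w = image.shape
    patches, labels = [], []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            patches.append(image[:, r:r + size, c:c + size])
            labels.append(label[r:r + size, c:c + size])
    return np.stack(patches), np.stack(labels)
```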
- The Unet++ network adopts multi-level upsampling and skip connections to extract features at multiple levels. It should be noted that the Unet network is a commonly used segmentation network model in semantic segmentation.
- In the Unet network, downsampling is performed through convolutions to extract features layer by layer, and upsampling is then performed, with skip connections linking the corresponding downsampling and upsampling features of the model. Because the model structure diagram is roughly U-shaped, the network is named Unet; the downsampling process is an encoding process and the upsampling process is a decoding process. The Unet++ network is built on Unet and adopts multi-level upsampling and skip connections to extract features from more layers.
- The Unet++ network may specifically include a downsampling layer, an upsampling layer, and an intermediate layer that further extracts features from the downsampling layer;
- The feature extraction part of the EfficientB4 model is added to the downsampling layer. That is to say, on the basis of Unet, the present invention adds the feature extraction part of the EfficientB4 model into the encoding process of the Unet++ network, improving the network structure and extracting more features.
- Step S103 builds a remote sensing image semantic segmentation model based on the Unet++ network, which may specifically include the following steps:
- A convolution block includes a convolution layer, a Batch Normalization (BN) layer, and a LeakyReLU activation function;
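- A minimal sketch of such a convolution block, assuming PyTorch (the patent does not name a framework) and illustrative values for the kernel size and negative slope:

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Convolution -> BatchNorm -> LeakyReLU, as described for the Unet++ blocks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(negative_slope=0.01, inplace=True),
    )
```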
- The downsampling layer is the feature extraction part. As in the Unet network, four downsampling levels are used as the downsampling layer of the Unet++ network.
- The downsampling layers of the network are obtained from EfficientB4; that is, the 342nd, 154th, 92nd, and 30th layers of EfficientB4 are used as the four downsampling layers conv4, conv3, conv2, and conv1 in Unet++;
- The middle layer performs further feature extraction on the downsampling layer and connects features from different levels, as follows:
- conv4 is encoded as deconv4, which is then upsampled three times to obtain the three feature layers deconv4_up1, deconv4_up2 and deconv4_up3; the features of conv4 are then extracted as deconv3, from which the features deconv3_up1 and deconv3_up2 are extracted; deconv3, conv3 and deconv4_up1 are added to obtain uconv3; uconv3 is encoded as deconv2 and the feature deconv2_up1 is extracted; deconv2, conv2, deconv4_up2 and deconv3_up1 are then added to obtain uconv2; uconv2 is encoded as deconv1; conv1, deconv1, deconv2_up1, deconv3_up2 and deconv4_up3 are then added to obtain uconv1, which is encoded as uconv0; finally, a convolution is applied to uconv0 to reduce the features to one dimension.
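- The nested connections above describe the embodiment's own arrangement. As a rough off-the-shelf approximation only, not the patented structure itself, the segmentation_models_pytorch library provides a UnetPlusPlus model with an efficientnet-b4 encoder; the 4 input bands and 4 output classes below follow the embodiment, while the library choice, pretrained weights and all other settings are assumptions.

```python
import segmentation_models_pytorch as smp

# Rough approximation of a Unet++ with an EfficientNet-B4 encoder,
# 4 input bands (RGB + near-infrared) and 4 ground-object classes.
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b4",
    encoder_weights="imagenet",   # pretrained encoder weights (assumption)
    in_channels=4,                # RGB + near-infrared
    classes=4,                    # e.g. water, vegetation, bare soil, impervious surface
)
```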
- The dice loss between the network prediction and the ground-truth label is used as the loss function.
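- A minimal sketch of a soft dice loss for a single-class probability map, assuming PyTorch; a multi-class variant would average this over classes, and the smoothing constant is an assumption.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft dice loss between a predicted probability map and a binary label map."""
    pred = pred.reshape(-1)
    target = target.reshape(-1).float()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice
```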
- Step S104 constructs a threshold segmentation model based on the near-infrared band, which may specifically include: obtaining the near-infrared band threshold histogram of the processed remote sensing images by the threshold segmentation method; matching the ground object classification sample set against the near-infrared band threshold histogram; and selecting the threshold ranges of water bodies and vegetation in the near-infrared band threshold histogram to build the threshold segmentation model.
- Band 4 (the near-infrared band) is used to judge water bodies and vegetation.
- Using the threshold segmentation method, only band 4 of the high-resolution remote sensing image is extracted and its histogram is displayed.
- The abscissa of the histogram ranges from 0 to 255, as shown in Figure 6.
- The training samples are matched against the histogram, and the section of the histogram corresponding to the reflectance of water bodies in the image, usually on the left because water strongly absorbs near-infrared light, is selected to record the threshold range of water bodies.
- The threshold range of vegetation is selected in the same way.
- If a pixel to be classified falls within the water body or vegetation threshold range, it is marked as water body or vegetation. In this way, making full use of the characteristics of the near-infrared band can increase the classification accuracy of specific targets to a certain extent.
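- A minimal sketch of such a band-4 threshold segmentation; the numeric ranges below are placeholders (in the method they are read off the matched histogram rather than hard-coded), and the class codes and NumPy usage are assumptions.

```python
import numpy as np

def nir_threshold_segment(nir: np.ndarray,
                          water_range=(0, 60),
                          veg_range=(150, 255)) -> np.ndarray:
    """Label pixels by near-infrared (band 4) value: 1 = water, 2 = vegetation, 0 = other.

    The ranges are placeholders; in practice they are chosen by matching the
    training samples against the band-4 histogram (abscissa 0 to 255).
    """
    out = np.zeros(nir.shape, dtype=np.uint8)
    out[(nir >= water_range[0]) & (nir <= water_range[1])] = 1
    out[(nir >= veg_range[0]) & (nir <= veg_range[1])] = 2
    return out
```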
- Step S105 performs model fusion on the semantic segmentation model and the threshold segmentation model to obtain the classification model, which may specifically include: when the output result of the semantic segmentation model and the output result of the threshold segmentation model are both water body or both vegetation, or when the output result of the semantic segmentation model is bare soil or impervious surface and the output result of the threshold segmentation model is 'other', it is determined that the classification result is correct; the classification results determined to be correct are used as a new ground object classification sample set, and the transfer learning method is used to continue training the semantic segmentation model to obtain the classification model.
- In this way, semantic segmentation is used to classify the remote sensing images, while at the same time some ground object types such as water bodies and vegetation are extracted using near-infrared band features; the two results are fused at the decision level, the semi-supervised approach is then used to enlarge the sample set, and classifying again can achieve higher classification accuracy.
- Specifically, step S105 uses a decision-level fusion method.
- The output result of the semantic segmentation model is a two-dimensional matrix in which each position represents the category of the corresponding pixel; there are four categories in total.
- The result of threshold segmentation is also a two-dimensional matrix in which each position represents the category of the corresponding pixel; unlike semantic segmentation, it contains only three categories: water body, vegetation and other. For each pixel, when the semantic segmentation result is bare soil or impervious surface and the threshold segmentation result is 'other', or when both results are water body or both are vegetation, the classification is considered correct. Pixels whose results disagree are left unclassified, the correctly classified pixels are used as new training samples, and the transfer learning method is used to continue training the semantic segmentation model to obtain the final classification result.
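- A minimal sketch of this decision-level fusion rule applied to two label maps; the integer class codes and the NumPy implementation are assumptions.

```python
import numpy as np

# Assumed class codes for the semantic segmentation output:
# 1 = water, 2 = vegetation, 3 = bare soil, 4 = impervious surface.
# Threshold segmentation output: 1 = water, 2 = vegetation, 0 = other.
UNLABELED = 255  # marker for pixels left out of the new sample set

def fuse_decisions(sem: np.ndarray, thr: np.ndarray) -> np.ndarray:
    """Keep a pixel's semantic label only where the two models agree per the fusion rule."""
    agree_water_veg = (sem == thr) & np.isin(sem, (1, 2))
    agree_soil_imp = np.isin(sem, (3, 4)) & (thr == 0)
    keep = agree_water_veg | agree_soil_imp
    fused = np.full(sem.shape, UNLABELED, dtype=np.uint8)
    fused[keep] = sem[keep]
    return fused
```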
- Step S106 uses the classification model to classify the remote sensing images to be classified, which may specifically include: first, the remote sensing image to be classified is sliced, with the same slice size as used when making the sample set; each slice is then input into the classification model for classification and post-processed with dilation, erosion and other operations; finally, the classification results of all slices are stitched together, coordinate information is added, and a tiff file is generated. An example is shown in Figure 7.
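- A minimal sketch of the slice, predict, post-process and stitch loop; the model callable, the use of a grey closing (dilation followed by erosion) from SciPy, and the omission of coordinate handling and tiff writing are assumptions or simplifications.

```python
import numpy as np
from scipy import ndimage

def classify_large_image(image: np.ndarray, model, size: int = 512) -> np.ndarray:
    """Slice a (C, H, W) image into size x size tiles, classify each tile with
    `model` (tile -> 2-D label map), lightly clean the result with a grey
    closing (dilation then erosion), and stitch the tiles back together."""
    _, h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            tile_pred = model(image[:, r:r + size, c:c + size])
            cleaned = ndimage.grey_closing(tile_pred, size=(3, 3))
            out[r:r + size, c:c + size] = cleaned
    return out
```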
- The above semi-supervised classification method for high-resolution remote sensing images mainly considers the texture features of high-resolution remote sensing images and the physical characteristics of the near-infrared band, combines the improved Unet++ semantic segmentation network with threshold segmentation, uses the semi-supervised approach to expand the training samples, and retrains the semantic segmentation model to obtain higher classification accuracy; finally, the classification results are generated after classification.
- An embodiment of the present invention also provides a semi-supervised classification device for high-resolution remote sensing images. Since the problem-solving principle of the device is similar to that of the aforementioned semi-supervised classification method for high-resolution remote sensing images, for the implementation of the device please refer to the implementation of that method; repeated details are not described again.
- The semi-supervised classification device for high-resolution remote sensing images provided by the embodiment of the present invention, as shown in Figure 8, specifically includes:
- An image processing module 11, configured to preprocess the remote sensing images;
- A sample set making module 12, configured to make a ground object classification sample set from the processed remote sensing images;
- A first model building module 13, configured to build a remote sensing image semantic segmentation model based on the Unet++ network and train the semantic segmentation model with the ground object classification sample set;
- A second model building module 14, configured to build a threshold segmentation model based on the near-infrared band;
- A model fusion module 15, configured to perform model fusion on the semantic segmentation model and the threshold segmentation model to obtain the classification model;
- An image classification module 16, configured to use the classification model to classify the remote sensing images to be classified.
- Through the interaction of the above six modules, the texture information of the remote sensing images and the spectral information of the near-infrared band can be fused, and high-resolution remote sensing images can then be classified, improving classification accuracy.
- An embodiment of the present invention also discloses a semi-supervised classification device for high-resolution remote sensing images, including a processor and a memory; wherein, when the processor executes the computer program stored in the memory, the semi-supervised classification method for high-resolution remote sensing images disclosed in the foregoing embodiments is implemented.
- The present invention further discloses a computer-readable storage medium for storing a computer program; when the computer program is executed by a processor, the aforementioned semi-supervised classification method for high-resolution remote sensing images is implemented.
- RAM (random access memory)
- ROM (read-only memory)
- EPROM (electrically programmable ROM)
- EEPROM (electrically erasable programmable ROM)
- registers, hard disk, removable disk, CD-ROM, or any other known form of storage medium.
- A semi-supervised classification method for high-resolution remote sensing images includes: preprocessing the remote sensing images; making a ground object classification sample set from the processed remote sensing images; constructing a remote sensing image semantic segmentation model based on the Unet++ network, and training the semantic segmentation model with the ground object classification sample set; constructing a threshold segmentation model based on the near-infrared band; performing model fusion on the semantic segmentation model and the threshold segmentation model to obtain a classification model; and using the classification model to classify the remote sensing images to be classified.
- The present invention also provides a corresponding device, equipment, and computer-readable storage medium for the semi-supervised classification method for high-resolution remote sensing images, which further increases the practicality of the above method. The device, equipment, and computer-readable storage medium have corresponding advantages.
Abstract
A semi-supervised classification method, apparatus and device for a high-resolution remote sensing image, and a medium. The method comprises: preprocessing a remote sensing image; making a ground object classification sample set from the processed remote sensing image; constructing a remote sensing image semantic segmentation model based on a Unet++ network, and training the semantic segmentation model with the ground object classification sample set; constructing a threshold segmentation model based on the near-infrared band; performing model fusion on the semantic segmentation model and the threshold segmentation model to obtain a classification model; and classifying, by using the classification model, a remote sensing image to be classified. In this way, a remote sensing image semantic segmentation model based on the Unet++ network and a threshold segmentation model based on the near-infrared band are constructed; texture information of the remote sensing image and spectral information of the near-infrared band are then fused by means of a multi-model fusion method, and the high-resolution remote sensing image is then classified, thereby improving classification accuracy.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2021/107289 WO2023000159A1 (fr) | 2021-07-20 | 2021-07-20 | Procédé, appareil et dispositif de classification semi-surveillée pour une image de télédétection à haute résolution, et support |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2021/107289 WO2023000159A1 (fr) | 2021-07-20 | 2021-07-20 | Procédé, appareil et dispositif de classification semi-surveillée pour une image de télédétection à haute résolution, et support |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023000159A1 true WO2023000159A1 (fr) | 2023-01-26 |
Family
ID=84979665
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/107289 Ceased WO2023000159A1 (fr) | 2021-07-20 | 2021-07-20 | Procédé, appareil et dispositif de classification semi-surveillée pour une image de télédétection à haute résolution, et support |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023000159A1 (fr) |
- 2021-07-20 WO PCT/CN2021/107289 patent/WO2023000159A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108960345A (zh) * | 2018-08-08 | 2018-12-07 | 广东工业大学 | 一种遥感图像的融合方法、系统及相关组件 |
| WO2020240477A1 (fr) * | 2019-05-31 | 2020-12-03 | Thales Canada Inc. | Procédé et dispositif de traitement pour entraîner un réseau neuronal |
| CN112560577A (zh) * | 2020-11-13 | 2021-03-26 | 空间信息产业发展股份有限公司 | 一种基于语义分割的遥感图像地物分类方法 |
| CN112613516A (zh) * | 2020-12-11 | 2021-04-06 | 北京影谱科技股份有限公司 | 用于航拍视频数据的语义分割方法 |
Cited By (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116051999A (zh) * | 2023-02-06 | 2023-05-02 | 北京数慧时空信息技术有限公司 | 遥感影像困难样本挖掘方法 |
| CN115965954A (zh) * | 2023-03-16 | 2023-04-14 | 北京市农林科学院信息技术研究中心 | 秸秆类型识别方法、装置、电子设备及存储介质 |
| CN116343058A (zh) * | 2023-03-16 | 2023-06-27 | 同济大学 | 基于全局协同融合的多光谱和全色卫星影像地表分类方法 |
| CN116343058B (zh) * | 2023-03-16 | 2025-06-27 | 同济大学 | 基于全局协同融合的多光谱和全色卫星影像地表分类方法 |
| CN115995005A (zh) * | 2023-03-22 | 2023-04-21 | 航天宏图信息技术股份有限公司 | 基于单期高分辨率遥感影像的农作物的提取方法和装置 |
| CN115995005B (zh) * | 2023-03-22 | 2023-08-01 | 航天宏图信息技术股份有限公司 | 基于单期高分辨率遥感影像的农作物的提取方法和装置 |
| CN116129278A (zh) * | 2023-04-10 | 2023-05-16 | 牧马人(山东)勘察测绘集团有限公司 | 一种基于遥感影像的土地利用分类识别系统 |
| CN116168301A (zh) * | 2023-04-25 | 2023-05-26 | 耕宇牧星(北京)空间科技有限公司 | 一种基于嵌套编码器网络的农田施肥栅格检测方法 |
| CN116503597A (zh) * | 2023-04-26 | 2023-07-28 | 杭州芸起科技有限公司 | 一种跨域裸地语义分割网络构建方法、装置及存储介质 |
| CN117095299B (zh) * | 2023-10-18 | 2024-01-26 | 浙江省测绘科学技术研究院 | 破碎化耕作区的粮食作物提取方法、系统、设备及介质 |
| CN117095299A (zh) * | 2023-10-18 | 2023-11-21 | 浙江省测绘科学技术研究院 | 破碎化耕作区的粮食作物提取方法、系统、设备及介质 |
| CN117110217B (zh) * | 2023-10-23 | 2024-01-12 | 安徽农业大学 | 一种立体化水质监测方法及系统 |
| CN117110217A (zh) * | 2023-10-23 | 2023-11-24 | 安徽农业大学 | 一种立体化水质监测方法及系统 |
| CN117349462B (zh) * | 2023-12-06 | 2024-03-12 | 自然资源陕西省卫星应用技术中心 | 一种遥感智能解译样本数据集生成方法 |
| CN117349462A (zh) * | 2023-12-06 | 2024-01-05 | 自然资源陕西省卫星应用技术中心 | 一种遥感智能解译样本数据集生成方法 |
| CN117671519A (zh) * | 2023-12-14 | 2024-03-08 | 上海勘测设计研究院有限公司 | 大区域遥感影像地物提取方法及系统 |
| CN117935079A (zh) * | 2024-01-29 | 2024-04-26 | 珠江水利委员会珠江水利科学研究院 | 一种遥感影像融合方法、系统及可读存储介质 |
| CN118470092A (zh) * | 2024-06-04 | 2024-08-09 | 航天宏图信息技术股份有限公司 | 作物种植面积提取方法、装置、设备及介质 |
| CN118710906A (zh) * | 2024-06-27 | 2024-09-27 | 湖南大学 | 基于区域对比学习的半监督遥感图像语义分割方法与系统 |
| CN118691909A (zh) * | 2024-08-26 | 2024-09-24 | 鹏城实验室 | 多源遥感数据融合分类方法、装置、设备和存储介质 |
| CN118711019A (zh) * | 2024-08-27 | 2024-09-27 | 中国四维测绘技术有限公司 | 训练样本集的处理方法、电子设备及存储介质 |
| CN119006833A (zh) * | 2024-10-25 | 2024-11-22 | 中国石油大学(华东) | 基于视觉大模型的遥感图像分割方法、系统、设备及介质 |
| CN119672536A (zh) * | 2024-12-12 | 2025-03-21 | 华中科技大学 | 基于分类器和深度学习的光伏场站精细化识别方法及装置 |
| CN119785117A (zh) * | 2024-12-31 | 2025-04-08 | 西安电子科技大学 | 基于自训练的无源域适应遥感影像场景分类方法及系统 |
| CN119851133A (zh) * | 2024-12-31 | 2025-04-18 | 中国海洋大学 | 基于高分辨率全球模式改进城市绿地生物源排放的方法 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2023000159A1 (fr) | Procédé, appareil et dispositif de classification semi-surveillée pour une image de télédétection à haute résolution, et support | |
| CN113516084B (zh) | 高分辨率遥感影像半监督分类方法、装置、设备及介质 | |
| Zhu et al. | Deep learning meets SAR: Concepts, models, pitfalls, and perspectives | |
| Song et al. | Spatiotemporal satellite image fusion using deep convolutional neural networks | |
| CN113609889B (zh) | 基于敏感特征聚焦感知的高分辨遥感影像植被提取方法 | |
| Chang et al. | Multisensor satellite image fusion and networking for all-weather environmental monitoring | |
| CN113420759B (zh) | 一种基于深度学习的抗遮挡与多尺度死鱼识别系统与方法 | |
| CN113312993B (zh) | 一种基于PSPNet的遥感数据土地覆盖分类方法 | |
| CN115131637B (zh) | 基于生成对抗网络的多级特征时空遥感图像融合方法 | |
| CN115984714B (zh) | 一种基于双分支网络模型的云检测方法 | |
| CN113887472A (zh) | 基于级联颜色及纹理特征注意力的遥感图像云检测方法 | |
| Xia et al. | Submesoscale oceanic eddy detection in SAR images using context and edge association network | |
| CN113239736A (zh) | 一种基于多源遥感数据的土地覆盖分类标注图获取方法、存储介质及系统 | |
| CN116343058B (zh) | 基于全局协同融合的多光谱和全色卫星影像地表分类方法 | |
| CN112446256A (zh) | 一种基于深度isa数据融合的植被类型识别方法 | |
| CN118279130A (zh) | 一种多模态的红外到可见光图像转换方法 | |
| CN115346136B (zh) | 一种基于特征融合的遥感图像目标检测方法 | |
| CN118097362B (zh) | 一种基于语义感知学习的多模态图像融合方法 | |
| CN119964027A (zh) | 基于sar波模式影像的冰山信息提取方法、设备及产品 | |
| CN115100091A (zh) | 一种sar图像转光学图像的转换方法及装置 | |
| Han et al. | Dual-level contextual attention generative adversarial network for reconstructing SAR wind speeds in tropical cyclones | |
| Gao | A method for face image inpainting based on generative adversarial networks | |
| He et al. | Object‐Based Distinction between Building Shadow and Water in High‐Resolution Imagery Using Fuzzy‐Rule Classification and Artificial Bee Colony Optimization | |
| CN115965876A (zh) | 一种目标定位和提取方法、装置及计算机可读存储介质 | |
| CN115861818A (zh) | 基于注意力机制联合卷积神经网络的细小水体提取方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21950425; Country of ref document: EP; Kind code of ref document: A1 |