CN111476199A - Identification method of power transmission and transformation project cemetery based on high-definition aerial survey images - Google Patents
- Publication number
- CN111476199A CN111476199A CN202010336683.3A CN202010336683A CN111476199A CN 111476199 A CN111476199 A CN 111476199A CN 202010336683 A CN202010336683 A CN 202010336683A CN 111476199 A CN111476199 A CN 111476199A
- Authority
- CN
- China
- Prior art keywords
- cemetery
- identification
- model
- power transmission
- aerial survey
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for identifying cemeteries in power transmission and transformation projects based on high-definition aerial survey images. The method comprises: obtaining remote sensing image data; preprocessing the remote sensing image data to obtain training data; constructing a cemetery-identification neural network model; training the neural network model with the training data to obtain a cemetery identification model; applying the cemetery identification model to the remote sensing image data to be identified, obtaining an identification result; and optimizing the identification result to obtain the final cemetery identification result. By constructing the cemetery identification model in this novel way, the method achieves accurate, reliable and efficient identification of cemeteries; the method therefore offers high reliability, high efficiency and high accuracy.
Description
Technical Field
The invention belongs to the field of image processing, and in particular relates to a method for identifying cemeteries for power transmission and transformation projects based on high-definition aerial survey images.
Background Art
With the development of the economy and technology and the improvement of living standards, electric energy has become an indispensable secondary energy source in production and daily life, bringing great convenience. As society develops, the construction of power systems and the siting and routing of power transmission and transformation projects have become particularly important.
In traditional Chinese culture, the concept of the "grave" is highly sensitive. If the siting and routing of a power transmission and transformation project involves a cemetery or its surroundings (for example, relocating graves, construction that affects a cemetery, construction in or around a cemetery, or high-voltage lines crossing a cemetery), conflict between residents and power-system employees is very likely. Even when construction and line stringing succeed, later equipment maintenance and overhaul are very difficult. Therefore, when routing power lines, avoiding cemeteries as far as possible is the first choice.
At present, in the siting and routing of power transmission and transformation projects, the usual practice is to draft several candidate schemes, mark cemetery areas and related information on a map by manual calibration and annotation, and then adjust the schemes according to the marked information until a final scheme is selected.
However, manual calibration and annotation is time-consuming and labor-intensive, offers limited precision, and is prone to missed and false detections, so its reliability is also low.
Summary of the Invention
The purpose of the present invention is to provide a method for identifying cemeteries for power transmission and transformation projects based on high-definition aerial survey images that offers high reliability, high efficiency and high accuracy.
The method provided by the present invention comprises the following steps:
S1. Obtain remote sensing image data;
S2. Preprocess the remote sensing image data obtained in step S1 to obtain training data;
S3. Construct a cemetery-identification neural network model;
S4. Train the neural network model constructed in step S3 with the training data obtained in step S2 to obtain a cemetery identification model;
S5. Use the cemetery identification model obtained in step S4 to identify the remote sensing image data to be identified, obtaining an identification result;
S6. Optimize the identification result obtained in step S5 to obtain the final cemetery identification result.
The preprocessing in step S2 specifically consists of applying data augmentation to the remote sensing image data obtained in step S1 and enlarging the sample set, thereby obtaining the training data.
Constructing the neural network model in step S3 specifically consists of setting up the deep learning framework TensorFlow and building a neural network model for identifying cemetery ground objects.
The neural network model is the DeepLabv3+ model.
The optimization in step S6 specifically includes noise-point removal, smoothing, and breakpoint connection.
The method further comprises the following step:
compressing the established identification model with model-compression techniques, thereby improving the model's efficiency.
By constructing the cemetery identification model in this novel way, the method provided by the present invention achieves accurate, reliable and efficient identification of cemeteries; the method therefore offers high reliability, high efficiency and high accuracy.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of the overall architecture of the DeepLabv3+ model used in the method.
FIG. 3 is a schematic diagram of atrous convolution in the DeepLabv3+ model.
FIG. 4 is a schematic diagram of the Decoder part of the DeepLabv3+ model.
FIG. 5 is a schematic diagram of the improved Xception network in the DeepLabv3+ model.
FIG. 6 is a schematic diagram of the results obtained with the method.
Detailed Description of the Embodiments
FIG. 1 shows a schematic flow chart of the method. The method for identifying cemeteries for power transmission and transformation projects based on high-definition aerial survey images provided by the present invention comprises the following steps:
S1. Obtain remote sensing image data;
S2. Preprocess the remote sensing image data obtained in step S1 to obtain training data; specifically, apply data augmentation to the remote sensing image data and enlarge the sample set;
In a concrete implementation, operations such as color enhancement and geometric transformation can be applied to enlarge the amount of sample data;
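The color-enhancement and geometric transformations above can be sketched as follows. This is an illustrative plain-Python sketch (the patent gives no code): toy nested-list pixel grids stand in for real aerial tiles, and the three helper names are chosen here for illustration.

```python
# Sketch (assumption, not the patent's exact pipeline): simple geometric and
# color augmentations on an image stored as a nested list of pixel values,
# of the kind the text suggests for enlarging the training set.

def hflip(img):
    """Geometric transform: mirror each row."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Geometric transform: rotate 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def brightness(img, factor, max_val=255):
    """Color enhancement: scale intensities, clipped to the valid range."""
    return [[min(max_val, int(p * factor)) for p in row] for row in img]

tile = [[10, 20],
        [30, 40]]
augmented = [tile, hflip(tile), rotate90(tile), brightness(tile, 1.5)]
print(len(augmented))   # one sample becomes four
```

In practice each original image would yield several augmented copies, multiplying the size of the training set without new annotation effort.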
S3. Construct the cemetery-identification neural network model; specifically, set up the deep learning framework TensorFlow and build a neural network model for identifying cemetery ground objects, with the DeepLabv3+ model as the neural network model;
In a concrete implementation, the DeepLabv3+ model framework is adopted; Xception serves as the model's backbone; ASPP is used for feature extraction; bilinear-interpolation upsampling is used for resampling; and prediction is performed with a sliding window;
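The sliding-window prediction mentioned above amounts to tiling the large orthophoto into overlapping crops. The sketch below (an assumption, not patent code) computes only the tile coordinates; the model call itself is left abstract, and the tile/stride values are illustrative.

```python
# Sketch (assumption): tiling a large aerial image into overlapping windows
# for prediction. Windows are clamped so the image border is always covered.

def sliding_windows(height, width, tile, stride):
    """Return (y, x) top-left corners covering the image, clamped at edges."""
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    if ys[-1] + tile < height:     # add a final row flush with the border
        ys.append(height - tile)
    if xs[-1] + tile < width:      # add a final column flush with the border
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]

# a 1000x1500 orthophoto cut into 512-px tiles with 256-px stride
tiles = sliding_windows(1000, 1500, tile=512, stride=256)
print(len(tiles), tiles[0], tiles[-1])
```

Each tile would then be fed to the segmentation model and the per-tile predictions stitched (e.g., averaged in the overlaps) back into a full-size mask.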
The overall architecture of the DeepLabv3+ model is shown in FIG. 2: the body of the Encoder is a DCNN with atrous convolution. The DCNN can be a common classification network such as ResNet, followed by an Atrous Spatial Pyramid Pooling (ASPP) module whose main purpose is to introduce multi-scale information. Compared with DeepLabv3, v3+ introduces a Decoder module that further fuses low-level features with high-level features to improve the accuracy of segmentation boundaries.
In DeepLab, the ratio of the spatial size of the input image to that of the output feature map is denoted output_stride. The DCNN here can be any classification network, generally called the backbone, for example a ResNet.
Atrous convolution is one of the keys of the DeepLab model: it controls the receptive field without changing the size of the feature map, which helps extract multi-scale information. Atrous convolution is illustrated in FIG. 3, where the rate r controls the size of the receptive field; the larger r is, the larger the receptive field.
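The effect of the rate r can be made concrete with a 1-D toy example (an illustrative sketch, not from the patent): the kernel keeps the same number of weights, but its taps are spaced r samples apart, widening the window it sees.

```python
# Sketch: a 1-D atrous (dilated) convolution in plain Python, showing how
# the rate widens the receptive field while the weight count stays fixed.

def atrous_conv1d(signal, kernel, rate):
    """Valid-mode 1-D convolution with dilation `rate`.

    A k-tap kernel spans an effective window of rate*(k-1)+1 input samples.
    """
    k = len(kernel)
    span = rate * (k - 1) + 1          # effective receptive field
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * rate] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
avg = [1/3, 1/3, 1/3]
# rate=1 is ordinary convolution; rate=2 doubles the window with no new weights
print(atrous_conv1d(x, avg, 1))   # window of 3 samples, 6 outputs
print(atrous_conv1d(x, avg, 2))   # window of 5 samples, 4 outputs
```

The same bookkeeping applies per spatial axis in the 2-D case used by DeepLab.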
In DeepLab, the Atrous Spatial Pyramid Pooling (ASPP) module further extracts multi-scale information, implemented with atrous convolutions of different rates. ASPP exists mainly to capture multi-scale context, which is crucial for segmentation accuracy. The ASPP module mainly contains the following parts:
(1) One 1×1 convolution layer and three 3×3 atrous convolutions; for output_stride=16 the rates are (6, 12, 18), and for output_stride=8 the rates are doubled (each of these convolution layers outputs 256 channels and is followed by a BN layer);
(2) A global average pooling layer that produces image-level features, which are fed into a 1×1 convolution layer (outputting 256 channels) and bilinearly interpolated back to the original spatial size;
(3) The multi-scale features from parts (1) and (2) are concatenated along the channel dimension and fed into a 1×1 convolution for fusion, yielding a new 256-channel feature.
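The channel bookkeeping of parts (1)–(3) can be traced without any deep-learning framework. The sketch below is an assumption-laden shape walkthrough (function name and the 32×32 example size are illustrative, not from the patent); it only tracks (channels, H, W) through the five ASPP branches and the 1×1 fusion.

```python
# Sketch: shape bookkeeping for an ASPP head at output_stride=16.
# Atrous convolutions with 'same' padding leave H and W unchanged, so every
# branch emits a (256, h, w) feature; only the channel count grows at concat.

def aspp_shapes(in_ch, h, w, rates=(6, 12, 18), branch_ch=256):
    shapes = [(branch_ch, h, w)]                  # 1x1 conv branch
    for _r in rates:                              # three 3x3 atrous branches
        shapes.append((branch_ch, h, w))
    shapes.append((branch_ch, h, w))              # image-pooling branch,
                                                  # upsampled back to (h, w)
    concat_ch = sum(c for c, _, _ in shapes)      # channel-wise concat
    fused = (branch_ch, h, w)                     # 1x1 conv fuses to 256
    return shapes, concat_ch, fused

shapes, concat_ch, fused = aspp_shapes(in_ch=2048, h=32, w=32)
print(len(shapes), concat_ch, fused)   # 5 branches, 1280 channels, (256, 32, 32)
```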
For DeepLabv3, the feature map produced by the ASPP module has an output_stride of 8 or 16; after a 1×1 classification layer it is bilinearly interpolated directly back to the original image size. This is a very crude decoding scheme, especially at output_stride=16, and it does not yield fine segmentation results. The DeepLabv3+ model therefore borrows the encoder-decoder structure and introduces a new Decoder module, shown in FIG. 4. The encoder features are first bilinearly upsampled by 4× and then concatenated with low-level encoder features of the corresponding size, such as those from the Conv2 stage of a ResNet. Because the encoder features have only 256 channels while the low-level features may be high-dimensional, a 1×1 convolution first reduces the dimension of the low-level features (to 48 channels in the paper) so that the high-level encoder features are not diluted. After concatenation, a 3×3 convolution further fuses the features, and a final bilinear interpolation yields a segmentation prediction of the same size as the original image.
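The bilinear upsampling that both the crude DeepLabv3 decoder and the v3+ Decoder rely on reduces to weighted averaging of the four nearest grid points. A minimal plain-Python sketch (an illustration, not framework code; it uses the align_corners=True convention):

```python
# Sketch: bilinear upsampling of a 2-D feature map, the resampling step the
# decoder uses to bring encoder features back toward input resolution.

def bilinear_upsample(grid, out_h, out_w):
    in_h, in_w = len(grid), len(grid[0])
    out = []
    for i in range(out_h):
        # map output coordinates back to input coordinates
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = min(int(y), in_h - 2); dy = y - y0
        row = []
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = min(int(x), in_w - 2); dx = x - x0
            # blend the four surrounding grid values
            top = grid[y0][x0] * (1 - dx) + grid[y0][x0 + 1] * dx
            bot = grid[y0 + 1][x0] * (1 - dx) + grid[y0 + 1][x0 + 1] * dx
            row.append(top * (1 - dy) + bot * dy)
        out.append(row)
    return out

up = bilinear_upsample([[0.0, 1.0], [2.0, 3.0]], 3, 3)
print(up[1][1])   # center of the 2x2 map -> 1.5
```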
The backbone adopted by DeepLabv3 is a ResNet; the v3+ model additionally experiments with an improved Xception. The Xception network relies mainly on depthwise separable convolutions, which make its computation lighter (see FIG. 5). The improvements to Xception are mainly the following:
1. Following the MSRA modification, more layers are added;
2. All max-pooling layers are replaced with stride-2 depthwise separable convolutions, which can then be converted into atrous convolutions;
3. BN and ReLU are added after each 3×3 depthwise convolution.
With the improved Xception as backbone, the segmentation quality of the DeepLab network improves to a certain extent.
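Why depthwise separable convolution is cheaper can be verified with a multiply count (an illustrative calculation; layer sizes are example values, not from the patent). Splitting a K×K standard convolution into a per-channel K×K depthwise pass plus a 1×1 pointwise pass reduces the multiplications by roughly a factor of 1/out_ch + 1/K².

```python
# Sketch: multiply counts of a standard vs. a depthwise separable convolution
# ('same' padding, stride 1), illustrating Xception's computational saving.

def conv_mults(in_ch, out_ch, k, h, w):
    """Multiplications of a standard KxK convolution."""
    return h * w * in_ch * out_ch * k * k

def sep_conv_mults(in_ch, out_ch, k, h, w):
    depthwise = h * w * in_ch * k * k          # one KxK filter per channel
    pointwise = h * w * in_ch * out_ch         # 1x1 channel mixing
    return depthwise + pointwise

std = conv_mults(256, 256, 3, 64, 64)
sep = sep_conv_mults(256, 256, 3, 64, 64)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {sep/std:.3f}")
# ratio ~= 1/256 + 1/9 ~= 0.115, i.e. ~8.7x fewer multiplications
```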
S4. Train the cemetery-identification neural network model constructed in step S3 with the training data obtained in step S2 to obtain the cemetery identification model;
S5. Use the cemetery identification model obtained in step S4 to identify the remote sensing image data to be identified, obtaining an identification result;
S6. Optimize the identification result obtained in step S5 to obtain the final cemetery identification result; in a concrete implementation, the optimization includes noise-point removal, smoothing, breakpoint connection, and similar post-processing.
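One plausible way to implement the noise-point removal of step S6 — an assumption, since the patent does not specify the procedure — is to drop connected components of the binary prediction mask below a minimum area:

```python
# Sketch (assumption, not the patent's exact procedure): remove connected
# components smaller than `min_area` from a binary mask, a common form of
# the "noise point removal" post-processing step.

def remove_small_components(mask, min_area):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # flood-fill one 4-connected component
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_area:          # keep only large blobs
                    for y, x in comp:
                        out[y][x] = 1
    return out

pred = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone pixel at (1, 3) is noise
        [0, 0, 0, 0]]
clean = remove_small_components(pred, min_area=2)
print(clean)   # the 2x2 blob survives, the single pixel is removed
```

Smoothing and breakpoint connection could be handled similarly with morphological closing over the same mask representation.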
In addition, after the model has been trained, model-compression techniques can be applied to compress the established identification model and thereby improve its efficiency; specifically, the model can be compressed by distillation, pruning, and similar methods.
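Of the compression methods mentioned, magnitude-based weight pruning is the simplest to sketch (an illustration of the general technique, not the patent's specific scheme): zero out the fraction of weights with the smallest absolute value.

```python
# Sketch: magnitude-based pruning of a flat weight list -- set the
# smallest-|w| fraction `sparsity` of the weights to zero.

def magnitude_prune(weights, sparsity):
    """Return weights with the smallest-magnitude fraction set to 0.0."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
pruned = magnitude_prune(w, sparsity=0.5)
print(pruned)   # the three smallest-magnitude weights become 0.0
```

The zeroed weights can then be stored sparsely or skipped at inference time, which is where the efficiency gain comes from.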
FIG. 6 illustrates a concrete implementation of the method. FIG. 6(a) and FIG. 6(b) form one pair: FIG. 6(a) is the original, unprocessed image, and in FIG. 6(b) the identified cemetery area is marked with a solid region in the upper-left part. FIG. 6(c) and FIG. 6(d) form another pair: FIG. 6(c) is the original, unprocessed image, and in FIG. 6(d) the identified cemetery area is marked with a solid region. As FIG. 6 shows, the method performs well in practice and achieves a high identification accuracy.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010336683.3A CN111476199A (en) | 2020-04-26 | 2020-04-26 | Identification method of power transmission and transformation project cemetery based on high-definition aerial survey images |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010336683.3A CN111476199A (en) | 2020-04-26 | 2020-04-26 | Identification method of power transmission and transformation project cemetery based on high-definition aerial survey images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111476199A true CN111476199A (en) | 2020-07-31 |
Family
ID=71756033
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010336683.3A Pending CN111476199A (en) | 2020-04-26 | 2020-04-26 | Identification method of power transmission and transformation project cemetery based on high-definition aerial survey images |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111476199A (en) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112489019A (en) * | 2020-12-01 | 2021-03-12 | 合肥工业大学 | Method for rapidly identifying chopped fibers in GFRC image based on deep learning |
| CN112560716A (en) * | 2020-12-21 | 2021-03-26 | 浙江万里学院 | High-resolution remote sensing image water body extraction method based on low-level feature fusion |
| CN112950616A (en) * | 2021-03-23 | 2021-06-11 | 湖南三湘绿谷生态科技有限公司 | Method and system for quickly detecting and checking wind-water fire hazard points |
| CN113205280A (en) * | 2021-05-28 | 2021-08-03 | 广西大学 | Electric vehicle charging station site selection method for Liqun guided attention inference network |
| CN114492720A (en) * | 2020-10-23 | 2022-05-13 | 杭州海康威视数字技术股份有限公司 | Method and device for constructing neural network model |
| CN114550010A (en) * | 2022-01-21 | 2022-05-27 | 扬州大学 | Ditch and pit proportion aerial survey method for rice and shrimp breeding system |
| CN115457388A (en) * | 2022-09-06 | 2022-12-09 | 湖南经研电力设计有限公司 | Power transmission and transformation remote sensing image ground feature identification method and system based on deep learning optimization |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109919206A (en) * | 2019-02-25 | 2019-06-21 | 武汉大学 | A land cover classification method for remote sensing images based on fully atrous convolutional neural network |
| CN110032928A (en) * | 2019-02-27 | 2019-07-19 | 成都数之联科技有限公司 | A satellite remote sensing image water body identification method suitable for color-sensitive scenes |
| CN110310289A (en) * | 2019-06-17 | 2019-10-08 | 北京交通大学 | Lung tissue image segmentation method based on deep learning |
| CN110852393A (en) * | 2019-11-14 | 2020-02-28 | 吉林高分遥感应用研究院有限公司 | Remote sensing image segmentation method and system |
- 2020-04-26: application CN202010336683.3A filed in China; publication CN111476199A, status Pending
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109919206A (en) * | 2019-02-25 | 2019-06-21 | 武汉大学 | A land cover classification method for remote sensing images based on fully atrous convolutional neural network |
| CN110032928A (en) * | 2019-02-27 | 2019-07-19 | 成都数之联科技有限公司 | A satellite remote sensing image water body identification method suitable for color-sensitive scenes |
| CN110310289A (en) * | 2019-06-17 | 2019-10-08 | 北京交通大学 | Lung tissue image segmentation method based on deep learning |
| CN110852393A (en) * | 2019-11-14 | 2020-02-28 | 吉林高分遥感应用研究院有限公司 | Remote sensing image segmentation method and system |
Non-Patent Citations (1)
| Title |
|---|
| Liang-Chieh Chen et al., "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation", arXiv:1802.02611v3 [cs.CV], pp. 1-18 * |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114492720A (en) * | 2020-10-23 | 2022-05-13 | 杭州海康威视数字技术股份有限公司 | Method and device for constructing neural network model |
| CN112489019A (en) * | 2020-12-01 | 2021-03-12 | 合肥工业大学 | Method for rapidly identifying chopped fibers in GFRC image based on deep learning |
| CN112560716A (en) * | 2020-12-21 | 2021-03-26 | 浙江万里学院 | High-resolution remote sensing image water body extraction method based on low-level feature fusion |
| CN112560716B (en) * | 2020-12-21 | 2024-05-28 | 浙江万里学院 | High-resolution remote sensing image water body extraction method based on low-level feature fusion |
| CN112950616A (en) * | 2021-03-23 | 2021-06-11 | 湖南三湘绿谷生态科技有限公司 | Method and system for quickly detecting and checking wind-water fire hazard points |
| CN113205280A (en) * | 2021-05-28 | 2021-08-03 | 广西大学 | Electric vehicle charging station site selection method for Liqun guided attention inference network |
| CN113205280B (en) * | 2021-05-28 | 2023-06-23 | 广西大学 | A site selection method for electric vehicle charging stations based on Lie group guided attention reasoning network |
| CN114550010A (en) * | 2022-01-21 | 2022-05-27 | 扬州大学 | Ditch and pit proportion aerial survey method for rice and shrimp breeding system |
| CN115457388A (en) * | 2022-09-06 | 2022-12-09 | 湖南经研电力设计有限公司 | Power transmission and transformation remote sensing image ground feature identification method and system based on deep learning optimization |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200731 |