
CN112381161A - Neural network training method - Google Patents

Neural network training method

Info

Publication number
CN112381161A
CN112381161A
Authority
CN
China
Prior art keywords
data
training
category
neural network
batch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011296897.9A
Other languages
Chinese (zh)
Other versions
CN112381161B (en)
Inventor
林淑强
尚占锋
张永光
林修明
欧阳天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guotou Intelligent Information Technology Co.,Ltd.
Original Assignee
Xiamen Meiya Pico Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meiya Pico Information Co Ltd filed Critical Xiamen Meiya Pico Information Co Ltd
Priority to CN202011296897.9A
Publication of CN112381161A
Application granted
Publication of CN112381161B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract



The invention relates to a neural network training method comprising the following steps: S1, preliminary training: performing deep learning neural network training on training sample data with unbalanced category data to obtain a preliminary optimal training model; S2, processing the training sample data according to the preliminary optimal training model; S3, secondary training: using the data processed in S2, continuing iterative training on the basis of the preliminary optimal training model until the neural network training model converges. The method uses the DBSCAN clustering results and the existing labels to guide the data sampling of each batch in the neural network training process; through balance of data between categories and diversity of data features within each single category, it improves the convergence speed and the generalization performance of the algorithm model.


Description

Neural network training method
Technical Field
The invention relates to a deep learning algorithm, in particular to a neural network training method that mitigates class imbalance in training sample data.
Background
In deep learning neural network training, a key step is gradient descent, i.e., updating the weight parameters of the network. Three update schemes are common: 1. traverse the entire training data set, compute the loss function once, compute the gradient of the loss with respect to each parameter, and update; this is called batch gradient descent. 2. Compute the loss function and then update the parameters after every single training sample; this is called stochastic gradient descent. 3. Divide the training data set into many small data batches, compute the loss function per batch, and update the parameters; this is called mini-batch gradient descent. Scheme 1 trains on all samples for every update, so its computation cost is high and it is slow; scheme 2 updates the parameters after each sample, so it is fast but converges poorly. For these reasons, mini-batch gradient descent is the scheme generally adopted in deep learning training today.
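To make scheme 3 concrete, the following minimal NumPy sketch shows an epoch-wise mini-batch update loop. The patent gives no code, so the function shape and the user-supplied `grad_fn` interface are illustrative assumptions.

```python
import numpy as np

def minibatch_gradient_descent(X, y, grad_fn, w, lr=0.01, batch_size=256, epochs=10):
    """Minimal mini-batch gradient descent sketch (scheme 3 above).

    grad_fn(X_batch, y_batch, w) is assumed to return the gradient of the
    loss over the batch with respect to the parameters w.
    """
    n = len(X)
    for _ in range(epochs):
        order = np.random.permutation(n)            # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            # one parameter update per mini-batch
            w = w - lr * grad_fn(X[batch], y[batch], w)
    return w
```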
Data class imbalance means that the amounts of sample data in the different classes of a data set are severely unbalanced. The problem is frequently encountered when training deep learning algorithms, classification models in particular: an algorithm model trained on class-imbalanced data generalizes poorly and is severely biased at inference time, and this often cannot be detected from the usual metrics used to measure model performance during training. For example, for a binary classification model with an extreme positive-to-negative sample ratio of 99:1, even a model that predicts every input as positive achieves 99% accuracy and 100% recall on the positive class.
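The short sketch below (hypothetical data, scikit-learn metrics) reproduces that arithmetic: a degenerate all-positive predictor on a 99:1 data set scores 99% accuracy and 100% positive-class recall while never finding the minority class.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical 99:1 imbalanced binary data set: 990 positives, 10 negatives.
y_true = np.array([1] * 990 + [0] * 10)
y_pred = np.ones_like(y_true)   # degenerate model: predict "positive" for everything

print(accuracy_score(y_true, y_pred))             # 0.99 -- looks excellent
print(recall_score(y_true, y_pred, pos_label=1))  # 1.0  -- looks excellent
print(recall_score(y_true, y_pred, pos_label=0))  # 0.0  -- minority class never found
```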
To address data class imbalance, the traditional approach alleviates it by resampling, chiefly random under-sampling (RUS) and random over-sampling (ROS), to ensure balance among the data classes, with mini-batches then drawn by random sampling during model training. This conventional approach has two disadvantages: 1. random sampling easily distorts the sample data distribution, causing the model to overfit; 2. there is no way to guarantee class balance within each batch, so convergence is slow and the trained algorithm model generalizes poorly.
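For reference, a minimal sketch of the two conventional resampling schemes named above; the function names and NumPy implementation are illustrative, not taken from the patent.

```python
import numpy as np

def random_undersample(X, y):
    """RUS: randomly drop samples until every class matches the smallest one."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        np.random.choice(np.where(y == c)[0], n_min, replace=False) for c in classes
    ])
    return X[keep], y[keep]

def random_oversample(X, y):
    """ROS: randomly duplicate samples until every class matches the largest one."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    keep = np.concatenate([
        np.random.choice(np.where(y == c)[0], n_max, replace=True) for c in classes
    ])
    return X[keep], y[keep]
```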
Disclosure of Invention
The present invention is directed to a neural network training method that mitigates class imbalance in training sample data, so as to solve the above problems. To this end, the invention adopts the following specific technical scheme:
a neural network training method, comprising the steps of:
S1, preliminary training: performing deep learning neural network training on training sample data with unbalanced class data to obtain a preliminary optimal training model;
S2, processing the training sample data according to the preliminary optimal training model, wherein the specific process is as follows:
S21, extracting, according to the preliminary optimal training model, the feature vectors of all pictures in each category, denoted V_M = {v_(M,id-1), ..., v_(M,id-n)}, where M denotes a labeled category label and id-n denotes a picture id number;
S22, clustering the feature vectors V_M within each label category using the clustering algorithm DBSCAN, obtaining the data clustering result of each category, denoted C_A = {c_(A,i,id-n)}, where A denotes a labeled category label, called the first-level classification label, id-n denotes a picture id number, and i denotes the category label assigned by DBSCAN clustering, called the second-level classification label;
S23, obtaining, from the data clustering results, the internal clustering condition {C_A} of each category of pictures;
S24, setting a sampling strategy of the deep learning neural network training process batch: from
Figure BDA0002785662710000031
Extracting batch samples from all the types of pictures, wherein the pictures in each batch meet the data balance of two-level classification: the data volume of each class between different class classes of the first class needs to meet the balance; data in the same first-level classification category accords with DBSCAN clustering distribution, and data quantity balance among the second-level classification categories is met;
and S3, secondary training: using the data processed in S2, continuing iterative training on the basis of the preliminary optimal training model until the neural network training model converges.
Further, the data amounts of different classes of the training sample data differ by a factor of 4 or more.
Further, the sample data size of each batch is 0.01% to 1% of the training sample data size.
Further, the sample data size per batch is 256 or 512.
Further, the epsilon parameter of DBSCAN is 0.6 and the minPts parameter is 2.
Further, within each batch, the data amounts across the categories of the first-level classification and across the categories of the second-level classification each differ by no more than 10%.
By adopting the above technical scheme, the invention provides the following benefits: the clustering algorithm DBSCAN is used to cluster the data samples of each category separately, yielding the distribution of data features within each category; because the number of clusters need not be specified in advance, no artificial bias is introduced and the clustering effect is better. Meanwhile, by balancing data across categories and preserving data diversity within each single category in every batch, the convergence speed of the algorithm model is improved and good generalization performance is ensured.
Drawings
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures. Elements in the figures are not drawn to scale and like reference numerals are generally used to indicate like elements.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a sample diagram of a batch.
Detailed Description
The invention will now be further described with reference to the accompanying drawings and detailed description.
As shown in fig. 1, a neural network training method includes the following steps:
and S1, performing preliminary training, namely performing deep learning neural network training on training sample data with unbalanced class data to obtain a preliminary optimal training model. Here, the class data imbalance means that the data amounts between different classes of training sample data are greatly different (for example, different by 4 times or more), that is, the data amount of the most classes is different by at least 4 times from the data amount of the least classes.
And S2, processing the training sample data according to the preliminary optimal training model. Specifically, the clustering result is used to guide batch sampling during neural network training: the data in each batch must satisfy both balance of data among categories and feature diversity of data within each single category. The specific process of S2 is as follows:
S21, extracting, according to the preliminary optimal training model, the feature vectors of all pictures in each category, denoted V_M = {v_(M,id-1), ..., v_(M,id-n)}, where M denotes a labeled category label and id-n denotes a picture id number.
S22, feature vector is paired by using clustering algorithm DBSCAN
Figure BDA0002785662710000043
Carrying out category internal feature clustering according to each label category to obtain data clustering result of each category
Figure BDA0002785662710000044
Wherein,
Figure BDA0002785662710000045
and A in the graph represents a labeled class label and is called a first-level classification label, id-n represents a picture id number, and i represents a class label of a DBSCAN cluster and is called a second-level classification label. Wherein the epsilon parameter of DBSCAN is 0.6 and the minPts parameter is 2.
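A minimal sketch of this per-category clustering step, assuming scikit-learn's DBSCAN as the implementation (the patent names the algorithm and its eps/minPts values but not a library); the function name and output dictionary shape are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_within_categories(features, labels, eps=0.6, min_pts=2):
    """Run DBSCAN separately inside each labeled (first-level) category.

    features: (N, D) array of feature vectors from the preliminary model.
    labels:   (N,)   array of first-level category labels.
    Returns {first_level_label: (sample_indices, second_level_labels)}.
    """
    result = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # second-level labels; DBSCAN marks noise points with -1
        second = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(features[idx])
        result[c] = (idx, second)
    return result
```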
S23, obtaining the internal clustering condition of each category picture according to the data clustering result
Figure BDA0002785662710000051
S24, setting a sampling strategy of the deep learning neural network training process batch: from
Figure BDA0002785662710000052
Extracting batch samples from all the types of pictures, wherein the pictures in each batch meet the data balance of two-level classification: the data volume of each class between different class classification classes needs to meet the balance, generally the phase difference is required to be within 10 percent, and the best quantity is the same; data in the same first-class classification category accords with DBSCAN clustering distribution, data quantity balance among the second-class classification categories is met, phase difference is generally required to be within 10%, and the data quantity is preferably the same, so that data diversity in a single category is guaranteed. For example, there are 4 classes (first-class classes) for the training sample data, each class has 2, 4, and 3 second-class classes after being clustered by DBSCAN, and assuming that each Batch needs 256 samples, the number of samples needed for each class in one Batch is 256/4-64, and the number of samples needed for each class in a single class is 64/2-32, 64/4-16, 64/4-16, and 64/3-21.3 (non-integer, and only one class is rounded up and down), as shown in fig. 2. Preferably, the data amount of each batch is the number of training samplesAccording to the amount of 0.01% to 1%.
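A minimal sketch of this two-level balanced sampler, consuming the dictionary shape produced by the clustering sketch after S22 above; the rounding rule for non-integer quotas and the handling of DBSCAN noise points are illustrative assumptions.

```python
import numpy as np

def sample_balanced_batch(cluster_result, batch_size=256, rng=np.random):
    """Draw one batch that is balanced at both classification levels.

    cluster_result: {first_level_label: (sample_indices, second_level_labels)},
    e.g. the output of cluster_within_categories above. DBSCAN noise points
    (label -1) are treated as one more second-level cluster for simplicity.
    """
    categories = list(cluster_result)
    per_category = batch_size // len(categories)          # e.g. 256 / 4 = 64
    batch = []
    for c in categories:
        idx, second = cluster_result[c]
        clusters = np.unique(second)
        # non-integer quotas are rounded, mirroring the 64/3 ≈ 21 case above
        per_cluster = max(1, round(per_category / len(clusters)))
        for k in clusters:
            pool = idx[second == k]
            # fall back to sampling with replacement if a sub-cluster is tiny
            batch.extend(rng.choice(pool, per_cluster, replace=len(pool) < per_cluster))
    return np.asarray(batch)
```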
And S3, secondary training: using the data processed in S2, continuing iterative training on the basis of the preliminary optimal training model until the neural network training model converges.
Experimental testing
1) Algorithm model: backbone: GoogLeNet plus a fully-connected layer forming a 10-class neural network, the 10 classes being airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck;
2) Training sample data: the data set is CIFAR-10, 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck), with a simulated class imbalance ratio of 4:1 (5000:750), i.e., 5000 pictures for each of 7 classes (airplane, automobile, bird, cat, deer, dog, frog) and 750 pictures for each of 3 classes (horse, ship, truck);
3) test data: 10 categories, 1000 sheets each;
4) experimental hardware: 4 GTX 1080Ti GPU video cards;
5) Experimental process: the batch size is 512, 600 batches are run, and the accuracy (acc) and loss value on the test set are computed;
6) grouping experiments:
Experiment 1: training with the existing neural network training method, randomly splitting the training data into batches of size 512 and running 600 batches;
Experiment 2: training with the neural network training method of the invention for mitigating data class imbalance; the specific training process is as follows:
first stage (preliminary training):
For the first 300 batches, sample the data of each category so that the per-category counts within a batch are equal, i.e., 512/10 ≈ 51 per category, with the remaining 2 samples randomly assigned to 2 of the categories, and save the model with the highest accuracy (on the test set);
second stage (secondary training):
Data processing: take the model with the highest accuracy from the first stage (remove its fully-connected layer and keep only the backbone network used for extracting picture features), extract the picture features of all training samples, and cluster the data of each of the 10 classes separately with DBSCAN. Each batch then guarantees both that the per-class counts are equal, as in the first stage, and that, after the per-class DBSCAN clustering, the data is balanced across the subclasses (second-level classification) inside each class;
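An illustrative sketch of this feature-extraction step, assuming a torchvision GoogLeNet as a stand-in for the stage-one model (in the experiment the trained stage-one weights would be loaded instead of fresh ones); replacing the final fully-connected layer with an identity keeps only the backbone features, as described above.

```python
import torch
import torchvision.models as models

# Stand-in for the stage-one model: a 10-class GoogLeNet without aux heads.
model = models.googlenet(num_classes=10, aux_logits=False)
model.fc = torch.nn.Identity()   # drop the classification head, keep the backbone
model.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)   # a dummy batch of pictures
    features = model(images)               # (8, 1024) backbone feature vectors
```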
Secondary training: using the processed data, training continues for 300 batches based on the best model from the first stage.
7) The experimental results are shown in Table 1:
TABLE 1
(Table 1 is rendered as an image in the original publication; the tabulated accuracy and loss values are not reproduced here.)
As can be seen from Table 1, the method of the present invention can improve the convergence rate and accuracy of model training.
In conclusion, for deep learning neural network training with unbalanced class data, the method of the invention uses the DBSCAN clustering results and the existing labels to guide the data sampling of each batch during neural network training, and improves the convergence speed and generalization performance of the algorithm model through balance of data among classes and diversity of data features within each single class (in particular, ensuring the number and distribution of difficult samples). The training method can be widely applied to training in class-imbalanced AI scenarios and promotes the practical deployment of artificial intelligence in various complex scenarios.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A neural network training method, characterized by comprising the following steps:
S1, preliminary training: performing deep learning neural network training on training sample data with unbalanced category data to obtain a preliminary optimal training model;
S2, processing the training sample data according to the preliminary optimal training model, the specific process being as follows:
S21, extracting, according to the preliminary optimal training model, the feature vectors of all pictures in each category, denoted V_M = {v_(M,id-1), ..., v_(M,id-n)}, where M denotes a labeled category label and id-n denotes a picture id number;
S22, clustering the feature vectors V_M within each label category using the clustering algorithm DBSCAN, obtaining the data clustering result of each category, denoted C_A = {c_(A,i,id-n)}, where A denotes a labeled category label, called the first-level classification label, id-n denotes a picture id number, and i denotes the category label assigned by DBSCAN clustering, called the second-level classification label;
S23, obtaining, from the data clustering results, the internal clustering condition {C_A} of each category of pictures;
S24, setting the batch sampling strategy of the deep learning neural network training process: batch samples are drawn from the pictures of all categories in {C_A}, and the pictures in each batch satisfy two-level classification data balance: the data amount of each category is balanced across the different first-level classification categories; and, within a single first-level classification category, the data follows the DBSCAN clustering distribution and is balanced across the second-level classification categories;
S3, secondary training: using the data processed in S2, continuing iterative training on the basis of the preliminary optimal training model until the neural network training model converges.
2. The neural network training method of claim 1, wherein the data amounts of different categories of the training sample data differ by a factor of 4 or more.
3. The neural network training method of claim 1, wherein the epsilon parameter of DBSCAN is 0.6 and the minPts parameter is 2.
4. The neural network training method of claim 1, wherein the sample data amount of each batch is 0.01% to 1% of the training sample data amount.
5. The neural network training method of claim 4, wherein the sample data amount of each batch is 256 or 512.
6. The neural network training method of claim 1, wherein, within each batch, the data amounts across the categories of the first-level classification and across the categories of the second-level classification each differ by no more than 10%.
CN202011296897.9A 2020-11-18 2020-11-18 Neural network training method Active CN112381161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011296897.9A CN112381161B (en) 2020-11-18 2020-11-18 Neural network training method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011296897.9A CN112381161B (en) 2020-11-18 2020-11-18 Neural network training method

Publications (2)

Publication Number Publication Date
CN112381161A true CN112381161A (en) 2021-02-19
CN112381161B CN112381161B (en) 2022-08-30

Family

ID=74585149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011296897.9A Active CN112381161B (en) 2020-11-18 2020-11-18 Neural network training method

Country Status (1)

Country Link
CN (1) CN112381161B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387457A (en) * 2021-12-27 2022-04-22 腾晖科技建筑智能(深圳)有限公司 Face intra-class interval optimization method based on parameter adjustment
CN115358373A (en) * 2022-08-19 2022-11-18 中国人民解放军战略支援部队信息工程大学 Defense method for resisting attack based on cross entropy
CN116432728A (en) * 2021-12-31 2023-07-14 深圳云天励飞技术股份有限公司 Neural network model training method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921208A (en) * 2018-06-20 2018-11-30 天津大学 The aligned sample and modeling method of unbalanced data based on deep learning
CN109816092A (en) * 2018-12-13 2019-05-28 北京三快在线科技有限公司 Deep neural network training method, device, electronic equipment and storage medium
CN110298451A (en) * 2019-06-10 2019-10-01 上海冰鉴信息科技有限公司 A kind of equalization method and device of the lack of balance data set based on Density Clustering
CN110443281A (en) * 2019-07-05 2019-11-12 重庆信科设计有限公司 Adaptive oversampler method based on HDBSCAN cluster
US20190385045A1 (en) * 2018-06-14 2019-12-19 Dell Products L.P. Systems And Methods For Generalized Adaptive Storage Endpoint Prediction

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190385045A1 (en) * 2018-06-14 2019-12-19 Dell Products L.P. Systems And Methods For Generalized Adaptive Storage Endpoint Prediction
CN108921208A (en) * 2018-06-20 2018-11-30 天津大学 The aligned sample and modeling method of unbalanced data based on deep learning
CN109816092A (en) * 2018-12-13 2019-05-28 北京三快在线科技有限公司 Deep neural network training method, device, electronic equipment and storage medium
CN110298451A (en) * 2019-06-10 2019-10-01 上海冰鉴信息科技有限公司 A kind of equalization method and device of the lack of balance data set based on Density Clustering
CN110443281A (en) * 2019-07-05 2019-11-12 重庆信科设计有限公司 Adaptive oversampler method based on HDBSCAN cluster


Also Published As

Publication number Publication date
CN112381161B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN112381161B (en) Neural network training method
US20190279088A1 (en) Training method, apparatus, chip, and system for neural network model
CN113887480B (en) Burma language image text recognition method and device based on multi-decoder joint learning
WO2020073951A1 (en) Method and apparatus for training image recognition model, network device, and storage medium
CN111429340A (en) Cyclic image translation method based on self-attention mechanism
CN116089883B (en) A training method used to improve the distinction between old and new categories in incremental learning of existing categories
CN114299362B (en) A small sample image classification method based on k-means clustering
CN113643230A (en) Continuous learning method and system for biomacromolecular particle identification by cryo-electron microscopy
CN119670916B (en) Federated learning method and device based on feature comparison optimization and classifier dynamic integration
CN113971644A (en) Image identification method and device based on data enhancement strategy selection
CN116561622A (en) Federal learning method for class unbalanced data distribution
CN113961725A (en) A kind of label automatic labeling method and system, equipment and storage medium
CN112818941A (en) Cultural relic fragment microscopic image classification method, system, equipment and storage medium based on transfer learning
CN111860601B (en) Method and device for predicting type of large fungi
CN117593600B (en) Long-tail learning data enhancement method based on Mosaic fusion
CN116451124B (en) A method for identifying unbalanced radiation source signals based on decoupled representation learning
CN115797779B (en) Unknown ship target type recognition method in remote sensing images under cross-domain scenarios
CN117422942A (en) Model training method, image classification device, and storage medium
CN112488188A (en) Feature selection method based on deep reinforcement learning
CN114897051B (en) Citrus disease degree identification method, device, equipment and storage medium
KR20190078710A (en) Image classfication system and mehtod
CN115861699A (en) Long-tail distribution image classification method based on multi-objective optimization
CN109978058A (en) Determine the method, apparatus, terminal and storage medium of image classification
CN114005009B (en) Training method and device of target detection model based on RS loss
CN114706971A (en) Biomedical document type determination method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 361000 Fujian Province Xiamen City Torch High-tech Industrial Development Zone Software Park Phase II Qianpu East Road 188, 19th Floor

Patentee after: Guotou Intelligent Information Technology Co.,Ltd.

Country or region after: China

Address before: Unit 102-402, No. 12, guanri Road, phase II, Xiamen Software Park, Fujian Province, 361000

Patentee before: XIAMEN MEIYA PICO INFORMATION Co.,Ltd.

Country or region before: China