WO2025194748A1 - Method and apparatus for facial beauty evaluation based on meta-learning, device and medium - Google Patents
Method and apparatus for facial beauty evaluation based on meta-learning, device and medium
- Publication number
- WO2025194748A1 (PCT/CN2024/124262)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- meta
- learner
- facial
- auxiliary
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0985—Hyperparameter optimisation; Meta-learning; Learning-to-learn
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- the embodiments of the present application relate to the field of image recognition, and in particular to a method, apparatus, device, and medium for evaluating facial beauty based on meta-learning.
- Facial beauty assessment technology uses AI models to evaluate facial images and generate beauty scores.
- training facial beauty assessment models often suffers from insufficient labeled data, resulting in poor model fit and insufficient generalization capabilities.
- The purpose of this application is to solve, at least to a certain extent, one of the technical problems existing in the related art.
- the embodiments of this application provide a face beauty evaluation method, device, equipment and medium based on meta-learning, which provides richer shallow features for the main task of face beauty evaluation through auxiliary tasks, thereby improving the performance of the main task.
- An embodiment of the first aspect of the present application is a method for evaluating facial beauty based on meta-learning, comprising:
- the first meta-learner and the second meta-learner are based on the same meta-learner framework.
- Before the step of extracting features from the facial image to obtain a plurality of local texture features, the method further includes:
- the face image is dimensionality reduced according to the mean and the projection matrix to obtain a reduced-dimensional face image.
- The feature extraction of the face image to obtain a plurality of local texture features includes:
- the LBP feature, the HOG feature, and the SIFT feature are normalized to obtain a plurality of local texture features.
- fusing the plurality of local texture features to obtain fused feature data includes:
- the sum of the first hyperparameter, the second hyperparameter and the third hyperparameter is 1.
- the first support set and the first query set constitute a first subtask set, and the sampled data of the first subtask set are used to train a first meta-learner to obtain a first meta-learner after a meta-training phase;
- the training of a second meta-learner for the auxiliary task using the auxiliary feature data to obtain a second meta-learner with updated second parameters includes:
- obtaining a target meta-learner according to the target parameters includes:
- the meta-learner framework after adjusting the parameters is used as the target meta-learner.
- An embodiment of the second aspect of the present application is a meta-learning-based facial beauty evaluation device, comprising:
- An input unit used for acquiring a face image and an auxiliary image
- a first feature extraction unit configured to extract features from the face image to obtain a plurality of local texture features, and fuse the plurality of local texture features to obtain fused feature data
- a second feature extraction unit configured to extract features from the auxiliary image to obtain auxiliary feature data for the auxiliary task
- a first training unit is configured to train a first meta-learner for a facial beauty assessment task using the fused facial features to obtain a first meta-learner with updated first parameters;
- a second training unit configured to train a second meta-learner for the auxiliary task using the auxiliary feature data to obtain a second meta-learner with updated second parameters
- a meta-learner output unit configured to obtain target parameters according to the first parameters and the second parameters, and obtain a target meta-learner according to the target parameters
- An embodiment of the fourth aspect of the present application is a computer storage medium storing computer-executable instructions, wherein the computer-executable instructions are used to execute the meta-learning-based face beauty evaluation method as described above.
- The above scheme has at least the following beneficial effects: training the different tasks of the meta-learner with different types of feature information and combining multiple features enables the model to better capture the diversity and complexity of facial images, overcoming the poor performance of models trained on a single feature and improving the overall robustness of the model; auxiliary tasks provide richer shallow features for the main task of facial beauty evaluation, thereby improving the performance of the main task; and meta-learning helps the model quickly learn common features from a small number of samples, so that it adapts to classification tasks more quickly, improving generalization and reducing overfitting.
- Figure 1 is a step diagram of the face beauty evaluation method based on meta-learning;
- Figure 2 is a schematic diagram of the face beauty evaluation method based on meta-learning;
- Figure 3 is a structural diagram of the face beauty evaluation device based on meta-learning.
- the embodiments of the present application provide a method for evaluating facial beauty based on meta-learning.
- the face beauty evaluation method based on meta-learning includes the following steps:
- Step S100: acquiring a face image and an auxiliary image;
- Step S200: extracting features from the face image to obtain multiple local texture features, and fusing the multiple local texture features to obtain fused feature data;
- Step S300: extracting features from the auxiliary image to obtain auxiliary feature data for the auxiliary task;
- Step S400: training a first meta-learner for a face beauty assessment task using the fused facial features to obtain a first meta-learner with updated first parameters;
- Step S500: training a second meta-learner for the auxiliary task using the auxiliary feature data to obtain a second meta-learner with updated second parameters;
- Step S600: obtaining a target parameter according to the first parameter and the second parameter, and obtaining a target meta-learner according to the target parameter;
- Step S700: evaluating the beauty of a face based on the target meta-learner model to obtain a face beauty result.
- In step S100, a face image and an auxiliary image are acquired.
- face images are obtained from large image databases on the Internet, such as the large-scale Asian face beauty database.
- face images are obtained from pictures taken in real time.
- Auxiliary images are images used to train auxiliary tasks.
- Auxiliary tasks are tasks related to facial beauty assessment, such as face recognition tasks and gender recognition tasks. For example, if the auxiliary task is face recognition, the auxiliary images are face images; if the auxiliary task is gender recognition, the auxiliary images are gender recognition data or images of people.
- The facial image is preprocessed, for example by rotation, cropping, and dimensionality reduction.
- the dimensionality reduction of the face image includes the following steps:
- the facial image is averaged to obtain a mean; the covariance of the facial image is calculated, and the covariance is eigenvalue decomposed to obtain eigenvalues and eigenvectors, and a projection matrix is formed by the eigenvalues and eigenvectors that meet preset conditions; the facial image is dimensionality reduced according to the mean and the projection matrix to obtain a reduced-dimensional facial image.
- This dimensionality reduction method mainly reduces the data from high-dimensional space to low-dimensional space by finding the principal component features in the face image, while retaining the maximum variance.
- Let $S$ be a set of face images, $\{S_1, S_2, S_3, \dots, S_n\} \subseteq S$, where $S_1, \dots, S_n$ represent $n$ face images.
- The face images are averaged to obtain the mean $\bar{S} = \frac{1}{n}\sum_{i=1}^{n} S_i$, where $\bar{S}$ represents the mean.
- The face image is dimensionality-reduced according to the mean and the projection matrix to obtain a reduced-dimensional face image:
- $Y = W^{T} B_i$, where $Y$ is the reduced-dimensional face image, $W$ is the projection matrix, and $B_i = S_i - \bar{S}$ is the mean-centered face image.
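To make the reduction concrete, here is a minimal NumPy sketch of the steps above (mean, covariance, eigenvalue decomposition, projection). It assumes each face is a flattened row vector and reads the "preset conditions" as a retained-variance threshold; both are assumptions, not details fixed by the text.

```python
import numpy as np

def pca_reduce(images, variance_kept=0.95):
    """PCA-style dimensionality reduction of flattened face images."""
    X = np.asarray(images, dtype=np.float64)   # shape (n, d): one face per row
    mean = X.mean(axis=0)                      # the mean face
    B = X - mean                               # mean-centered faces B_i
    cov = np.cov(B, rowvar=False)              # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalue decomposition
    order = np.argsort(eigvals)[::-1]          # sort by descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Keep the leading eigenvectors that retain the desired share of variance.
    ratios = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratios, variance_kept)) + 1
    W = eigvecs[:, :k]                         # projection matrix
    Y = B @ W                                  # row-wise Y = W^T B_i
    return Y, mean, W
```

For raw pixel vectors the covariance is d by d, so real implementations usually take the SVD of the centered data matrix instead; the eigenvalue-decomposition form is kept here only to mirror the text.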
- the feature extraction of the face image to obtain multiple local texture features includes the following steps:
- Extract LBP features of the face image; extract HOG features of the face image; extract SIFT features of the face image; normalize the LBP, HOG, and SIFT features to obtain multiple local texture features.
- The local texture features of the dimensionality-reduced face image are extracted through LBP, HOG, and SIFT to obtain multiple feature matrices, namely the LBP feature $P_A$, the HOG feature $P_B$, and the SIFT feature $P_C$.
- The LBP features, HOG features, and SIFT features are normalized to obtain multiple local texture features.
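A sketch of the extraction and normalization with scikit-image and OpenCV follows. The LBP/HOG parameter values, the mean-pooling of SIFT descriptors into a fixed-length vector, and min-max normalization are illustrative assumptions; the text does not fix any of them.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern, hog

def local_texture_features(gray):
    """LBP, HOG, and SIFT features of one uint8 grayscale face, normalized."""
    # LBP: histogram of uniform local binary patterns (values 0..P+1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    p_a, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # HOG: histogram-of-oriented-gradients descriptor.
    p_b = hog(gray, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))
    # SIFT: mean-pool the 128-d keypoint descriptors into one vector.
    _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    p_c = desc.mean(axis=0) if desc is not None else np.zeros(128)
    # Min-max normalization so the three features share a common scale.
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-8)
    return norm(p_a), norm(p_b), norm(p_c)
```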
- the fusion of multiple local texture features to obtain fused feature data includes the following steps: obtaining a first hyperparameter obtained by pre-training the LBP feature; obtaining a second hyperparameter obtained by pre-training the HOG feature; obtaining a third hyperparameter obtained by pre-training the SIFT feature; performing weighted cascade fusion of the LBP feature, the HOG feature, and the SIFT feature according to the first hyperparameter, the second hyperparameter, and the third hyperparameter to obtain fused feature data; wherein the sum of the first hyperparameter, the second hyperparameter, and the third hyperparameter is 1.
- The weighted cascade fusion is $P = [\,w_1 P_A,\ w_2 P_B,\ w_3 P_C\,]$, where $w_1$ is the first hyperparameter, $w_2$ is the second hyperparameter, $w_3$ is the third hyperparameter, $w_1 + w_2 + w_3 = 1$, and $P$ is the fused feature data.
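Reading "weighted cascade fusion" as weighting followed by concatenation, the fusion reduces to a few lines; the default weights are placeholders, since in the method they come from pre-training.

```python
import numpy as np

def fuse_features(p_a, p_b, p_c, w1=0.4, w2=0.3, w3=0.3):
    """Weighted cascade fusion: scale each normalized feature, then concatenate."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-6, "hyperparameters must sum to 1"
    return np.concatenate([w1 * p_a, w2 * p_b, w3 * p_c])
```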
- In step S300, feature extraction is performed on the auxiliary image to obtain auxiliary feature data for the auxiliary task.
- Auxiliary feature data can be obtained by extracting features from the auxiliary image through feature extractors such as convolutional neural networks.
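Since the text only requires "feature extractors such as convolutional neural networks", any standard backbone works; the sketch below uses a pretrained torchvision ResNet-18 truncated before its classifier, which is an illustrative choice rather than the patent's.

```python
import torch
from torchvision import models, transforms

# Truncate a pretrained ResNet-18 before its classifier head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def auxiliary_features(pil_image):
    """512-d auxiliary feature vector for one RGB PIL image."""
    x = preprocess(pil_image).unsqueeze(0)     # (1, 3, 224, 224)
    return extractor(x).flatten(1).squeeze(0)  # global-pooled features
```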
- the first and second meta-learners are based on the same meta-learner framework.
- a deep learning model based on the Vision Transformer, pre-trained on ImageNet, is used as the baseline model.
- Model-Agnostic Meta-Learning (MAML) is used as the basic meta-learning framework.
- the training principle of the meta-learner is as follows.
- The data set is divided into three subsets, namely the meta-training set $D_{\text{train}}$, the meta-test set $D_{\text{test}}$, and the meta-validation set $D_{\text{val}}$.
- The meta-training stage consists of multiple few-shot classification subtasks; each task is trained separately and the corresponding hyperparameters are updated. The meta-training tasks are divided as follows: let $T$ be the set of all tasks, with $T_i$ representing the $i$-th subtask.
- First, $N$ categories are randomly selected from the meta-training set, and $K$ samples are randomly selected from each category to form a support sample set (Support Set); then some samples are extracted from the remaining data of the $N$ categories to form a query sample set (Query Set), thus forming a meta-training task in $N$-way $K$-shot form.
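Episode construction in the N-way K-shot form can be sketched as below; `dataset` is assumed to be an iterable of (features, label) pairs and the default counts are placeholders.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=15):
    """Sample one N-way K-shot subtask: a support set and a query set."""
    by_class = defaultdict(list)
    for features, label in dataset:
        by_class[label].append(features)
    chosen = random.sample(sorted(by_class), n_way)  # N random categories
    support, query = [], []
    for new_label, c in enumerate(chosen):
        # Each chosen class must contain at least k_shot + n_query samples.
        picks = random.sample(by_class[c], k_shot + n_query)
        support += [(x, new_label) for x in picks[:k_shot]]  # K per category
        query += [(x, new_label) for x in picks[k_shot:]]    # from the rest
    return support, query
```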
- N categories are randomly sampled from each meta-test set, and K samples are randomly sampled from each category to form a training subset; some samples are sampled from the remaining data of the N categories to form a test subset.
- the meta-learner is trained with the training subset to fine-tune the parameters of the meta-learner; the fine-tuned meta-learner is tested with the test subset; it can be understood that the test subset here is the type of data that the model is expected to be able to actually use for classification.
- The meta-validation set $D_{\text{val}}$ is used to verify the training effect of the current meta-learner's basic parameters $\theta$. When it is necessary to adapt to a new learning task, only fine-tuning of the meta-learner is needed to achieve the effect of gradient descent, so that the basic parameters of the learner are updated from $\theta$ to $\theta'$. Repeating this step quickly converges to the optimal initialization parameters.
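The $\theta$ to $\theta'$ update described above matches MAML's inner/outer structure. Here is a minimal single-inner-step sketch, assuming PyTorch >= 2.0 for `torch.func.functional_call`; it illustrates the mechanism rather than reproducing the patent's exact training loop.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def maml_meta_step(model, meta_opt, tasks, inner_lr=0.01):
    """One MAML outer update over a batch of (support, query) subtasks."""
    meta_loss = 0.0
    for (xs, ys), (xq, yq) in tasks:
        params = dict(model.named_parameters())
        # Inner loop: one gradient step on the support set.
        loss = F.cross_entropy(functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(loss, params.values(), create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}
        # Outer objective: loss of the adapted parameters on the query set.
        meta_loss = meta_loss + F.cross_entropy(
            functional_call(model, adapted, (xq,)), yq)
    meta_opt.zero_grad()
    meta_loss.backward()   # differentiates through the inner step
    meta_opt.step()        # updates the initialization theta -> theta'
    return float(meta_loss)
```

The same function can train both learners: once with facial-beauty episodes for the first meta-learner and once with auxiliary-task episodes for the second.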
- In step S400, the first meta-learner for the face beauty assessment task is trained using the fused facial features to obtain the first meta-learner with updated first parameters.
- The fused feature data are divided into a first training set and a first test set; a plurality of first categories are selected from the first training set, and a plurality of samples are selected from each selected first category to form a first support set, with the fused feature data in the first training set excluding the support set forming a first query set; the first support set and the first query set constitute a first subtask set, and the sampled data of the first subtask set are used to train the first meta-learner to obtain a first meta-learner after the meta-training phase; a plurality of first categories are selected from the first test set, and a plurality of samples are selected from each selected first category to form a first training subset, with the remaining data in the first test set forming a first testing subset; the first parameters of the first meta-learner after the meta-training stage are adjusted according to the first training subset, and the adjusted first meta-learner is tested with the first testing subset.
- The loss value $L_1$ of the first meta-learner with updated first parameters is calculated.
- In step S500, the second meta-learner for the auxiliary task is trained using the auxiliary feature data to obtain a second meta-learner with updated second parameters.
- The auxiliary feature data are divided into a second training set and a second test set; a plurality of second categories are selected from the second training set, and a plurality of auxiliary feature data are selected from each selected second category to form a second support set, with the auxiliary feature data in the second training set excluding the support set forming a second query set; the second support set and the second query set constitute a second subtask set, and the sampled data of the second subtask set are used to train a second meta-learner to obtain a second meta-learner after the meta-training stage; a plurality of second categories are selected from the second test set, and a plurality of auxiliary feature data are selected from each selected second category to form a second training subset, with the remaining auxiliary feature data in the second test set forming a second testing subset; the second parameters of the second meta-learner after the meta-training stage are adjusted according to the second training subset, and the adjusted second meta-learner is tested with the second testing subset.
- The loss value $L_2$ of the second meta-learner with updated second parameters is calculated.
- As for auxiliary tasks, there can be multiple auxiliary tasks at the same time, and correspondingly multiple second meta-learners for the auxiliary tasks.
- In step S600, a target parameter is obtained according to the first parameter and the second parameter, and a target meta-learner is obtained according to the target parameter.
- The first parameter and the second parameter are fused by weighted cascade fusion to obtain the target parameter; it should be noted that the weight of the first parameter is greater than the weight of the second parameter.
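The text does not spell out how the first and second parameters are combined; one plausible reading, shown purely as an assumption, is an element-wise weighted combination in which the main-task weight exceeds the auxiliary weight (the value 0.7 is a placeholder).

```python
def fuse_parameters(first_params, second_params, alpha=0.7):
    """Element-wise weighted fusion of the two learners' parameter dicts."""
    assert alpha > 0.5, "first (main-task) weight must exceed the second"
    return {name: alpha * first_params[name] + (1 - alpha) * second_params[name]
            for name in first_params}
```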
- The meta-learner framework after the parameter adjustment is used as the target meta-learner. Specifically, if the loss value L corresponding to the adjusted meta-learner framework is less than the set loss threshold, the adjusted framework is used as the target meta-learner; if the loss value L is greater than or equal to the threshold, training of the first meta-learner and the second meta-learner continues.
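The loss-threshold rule can then drive a simple training loop, sketched below with the helpers from the earlier sketches; `evaluate_loss` is a hypothetical stand-in for computing the loss L of the fused framework, and the threshold and round count are placeholders.

```python
def train_target_learner(first, second, opt1, opt2, beauty_tasks, aux_tasks,
                         loss_threshold=0.1, max_rounds=1000):
    """Alternate meta-updates until the fused framework's loss L is small enough."""
    for _ in range(max_rounds):
        maml_meta_step(first, opt1, beauty_tasks)   # update first parameters
        maml_meta_step(second, opt2, aux_tasks)     # update second parameters
        target = fuse_parameters(dict(first.named_parameters()),
                                 dict(second.named_parameters()))
        if evaluate_loss(target) < loss_threshold:  # hypothetical helper for L
            return target   # adjusted framework becomes the target meta-learner
    return target
```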
- In step S700, facial beauty evaluation is performed based on the model built on the target meta-learner to obtain a facial beauty result.
- a facial beauty evaluation model is constructed based on the target meta-learner, and the image to be evaluated is input into the facial beauty evaluation model for facial beauty evaluation to obtain a facial beauty score, which is used as the facial beauty result.
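Tying the sketches together, scoring a new face could look like the following; `target_model` is assumed to be a network loaded with the fused target parameters and accepting the fused feature vector, and the argmax readout of a discrete grade is an assumption, since the text only says a score is produced.

```python
import torch

@torch.no_grad()
def evaluate_beauty(target_model, gray_face):
    """Extract, fuse, and score one face with the target meta-learner model."""
    p_a, p_b, p_c = local_texture_features(gray_face)      # sketch above
    fused = torch.as_tensor(fuse_features(p_a, p_b, p_c),
                            dtype=torch.float32)
    scores = target_model(fused.unsqueeze(0)).squeeze(0)   # forward pass
    return int(scores.argmax())                            # e.g. a beauty grade
```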
- auxiliary tasks provide richer shallow features for the main task of facial beauty assessment, improving the performance of the main task.
- Meta-learning can help the model quickly learn common features from a small number of samples, allowing the model to adapt to classification tasks more quickly, thereby improving the model's generalization ability and reducing overfitting.
- An embodiment of the present application provides a face beauty evaluation device based on meta-learning.
- The apparatus for evaluating facial beauty includes an input unit 110, a first feature extraction unit 120, a second feature extraction unit 130, a first training unit 140, a second training unit 150, a meta-learner output unit 160, and an evaluation unit 170.
- the input unit 110 is used to obtain a facial image and an auxiliary image
- the first feature extraction unit 120 is used to perform feature extraction on the facial image to obtain multiple local texture features, and fuse the multiple local texture features to obtain fused feature data
- the second feature extraction unit 130 is used to perform feature extraction on the auxiliary image to obtain auxiliary feature data for the auxiliary task
- the first training unit 140 is used to use the fused facial features to train the first meta-learner for the facial beauty assessment task to obtain a first meta-learner that updates the first parameter
- the second training unit 150 is used to use the auxiliary feature data to train the second meta-learner for the auxiliary task to obtain a second meta-learner that updates the second parameter
- the meta-learner output unit 160 is used to obtain target parameters based on the first parameter and the second parameter, and obtain a target meta-learner based on the target parameter
- The evaluation unit 170 is used to perform facial beauty evaluation based on a model built on the target meta-learner to obtain a facial beauty result.
- the facial beauty evaluation device in this embodiment adopts the above-mentioned facial beauty evaluation method, and each unit of the facial beauty evaluation device corresponds one-to-one to each step of the facial beauty evaluation method.
- the facial beauty evaluation device and the facial beauty evaluation method have the same technical solution, solve the same technical problems, and have the same technical effect.
- An embodiment of the present application provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above-described meta-learning-based face beauty assessment method when executing the computer program.
- the electronic device may be any intelligent terminal including a computer.
- The processor can be a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for executing relevant programs to implement the technical solutions provided in the embodiments of the present application.
- the memory can be implemented in the form of a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
- the memory can store an operating system and other application programs.
- the relevant program code is stored in the memory and is called by the processor to execute the methods of the embodiments of this application.
- the input/output interface is used to realize information input and output.
- the communication interface is used to realize the communication interaction between this device and other devices. Communication can be achieved through wired means (such as USB, network cable, etc.) or wireless means (such as mobile network, WIFI, Bluetooth, etc.).
- the bus transmits information between the various components of the device (such as the processor, memory, input/output interface, and communication interface).
- the processor, memory, input/output interface, and communication interface communicate with each other within the device through the bus.
- An embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for executing the above-mentioned meta-learning-based face beauty evaluation method.
- Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information, such as computer-readable instructions, data structures, program modules, or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory, or other memory technology, CD-ROM, digital versatile disks (DVD), or other optical disk storage, magnetic cassettes, magnetic tapes, disk storage, or other magnetic storage devices, or any other medium that can be used to store desired information and can be accessed by a computer.
- communication media generally contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.
- the reference terms "one embodiment/example”, “another embodiment/example” or “certain embodiments/examples” and the like are intended to mean that the specific features, structures, materials or characteristics described in conjunction with the embodiment or example are included in at least one embodiment or example of the present application.
- the schematic representation of the above terms does not necessarily refer to the same embodiment or example.
- the specific features, structures, materials or characteristics described may be combined in any one or more embodiments or examples in a suitable manner.
- the units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of these units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- the functional units in the various embodiments of the present application may be integrated into a single processing unit, or each unit may exist physically separately, or two or more units may be integrated into a single unit.
- the aforementioned integrated units may be implemented in the form of hardware or software functional units.
- If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the computer software product is stored in a storage medium, including multiple instructions for enabling a computer device (which can be a personal computer, server, or network device, etc.) to execute all or part of the steps of the methods of various embodiments of the present application.
- The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store programs.
- the disclosed devices and methods can be implemented in other ways.
- the device embodiments described above are merely schematic.
- the division of the above-mentioned units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
- In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to a meta-learning-based facial beauty evaluation method and apparatus, a device, and a medium. The method comprises the steps of: performing feature extraction on a face image and fusing local texture features to obtain fused feature data; performing feature extraction on an auxiliary image to obtain auxiliary feature data; training a first meta-learner, used for a facial beauty evaluation task, with the fused facial features so as to update a first parameter; training a second meta-learner, used for an auxiliary task, with the auxiliary feature data so as to update a second parameter; obtaining a target meta-learner on the basis of the first and second parameters; and performing facial beauty evaluation from a model based on the target meta-learner to obtain a facial beauty result. The shortcoming of poor performance of a model trained with a single feature is overcome, which improves the overall robustness of the model. Through an auxiliary task, richer shallow features are provided for the main facial beauty evaluation task, which improves the performance of the main task.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410321911.8A CN118212499A (zh) | 2024-03-20 | 2024-03-20 | 基于元学习的人脸美丽度评价方法、装置、设备及介质 |
| CN202410321911.8 | 2024-03-20 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025194748A1 (fr) | 2025-09-25 |
Family
ID=91456803
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/124262 Pending WO2025194748A1 (fr) | 2024-03-20 | 2024-10-11 | Procédé et appareil d'évaluation esthétique d'un visage sur la base d'un méta-apprentissage, dispositif et support |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN118212499A (fr) |
| WO (1) | WO2025194748A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118212499A (zh) * | 2024-03-20 | 2024-06-18 | 五邑大学 | 基于元学习的人脸美丽度评价方法、装置、设备及介质 |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109978836A (zh) * | 2019-03-06 | 2019-07-05 | 华南理工大学 | 基于元学习的用户个性化图像美感评价方法、系统、介质和设备 |
| CN110689523A (zh) * | 2019-09-02 | 2020-01-14 | 西安电子科技大学 | 基于元学习个性化图像信息评价方法、信息数据处理终端 |
| US20200019758A1 (en) * | 2018-07-16 | 2020-01-16 | Adobe Inc. | Meta-learning for facial recognition |
| CN112419270A (zh) * | 2020-11-23 | 2021-02-26 | 深圳大学 | 元学习下的无参考图像质量评价方法、装置及计算机设备 |
| CN114973316A (zh) * | 2022-05-12 | 2022-08-30 | 平安科技(深圳)有限公司 | 一种基于元任务池优化设计的主动元学习方法与装置 |
| CN118212499A (zh) * | 2024-03-20 | 2024-06-18 | 五邑大学 | 基于元学习的人脸美丽度评价方法、装置、设备及介质 |
- 2024
- 2024-03-20: CN application CN202410321911.8A, published as CN118212499A (zh), status Pending
- 2024-10-11: PCT application PCT/CN2024/124262, published as WO2025194748A1 (fr), status Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200019758A1 (en) * | 2018-07-16 | 2020-01-16 | Adobe Inc. | Meta-learning for facial recognition |
| CN109978836A (zh) * | 2019-03-06 | 2019-07-05 | 华南理工大学 | 基于元学习的用户个性化图像美感评价方法、系统、介质和设备 |
| CN110689523A (zh) * | 2019-09-02 | 2020-01-14 | 西安电子科技大学 | 基于元学习个性化图像信息评价方法、信息数据处理终端 |
| CN112419270A (zh) * | 2020-11-23 | 2021-02-26 | 深圳大学 | 元学习下的无参考图像质量评价方法、装置及计算机设备 |
| CN114973316A (zh) * | 2022-05-12 | 2022-08-30 | 平安科技(深圳)有限公司 | 一种基于元任务池优化设计的主动元学习方法与装置 |
| CN118212499A (zh) * | 2024-03-20 | 2024-06-18 | 五邑大学 | 基于元学习的人脸美丽度评价方法、装置、设备及介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118212499A (zh) | 2024-06-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10747989B2 (en) | Systems and/or methods for accelerating facial feature vector matching with supervised machine learning | |
| Yan et al. | Ranking with uncertain labels | |
| CN113569895B (zh) | 图像处理模型训练方法、处理方法、装置、设备及介质 | |
| JP5506722B2 (ja) | マルチクラス分類器をトレーニングするための方法 | |
| CN110837846B (zh) | 一种图像识别模型的构建方法、图像识别方法及装置 | |
| CN110659665B (zh) | 一种异维特征的模型构建方法及图像识别方法、装置 | |
| CN109919252B (zh) | 利用少数标注图像生成分类器的方法 | |
| CN108288051B (zh) | 行人再识别模型训练方法及装置、电子设备和存储介质 | |
| CN114913303B (zh) | 虚拟形象生成方法及相关装置、电子设备、存储介质 | |
| WO2019015246A1 (fr) | Acquisition de caractéristiques d'image | |
| JP6029041B2 (ja) | 顔印象度推定方法、装置、及びプログラム | |
| US20240330690A1 (en) | Point-of-interest recommendation method and system based on brain-inspired spatiotemporal perceptual representation | |
| JP2868078B2 (ja) | パターン認識方法 | |
| WO2007117448A2 (fr) | Établissement de connexions entre des collections d'images | |
| CN111814620A (zh) | 人脸图像质量评价模型建立方法、优选方法、介质及装置 | |
| CN113704528B (zh) | 聚类中心确定方法、装置和设备及计算机存储介质 | |
| WO2025194748A1 (fr) | Procédé et appareil d'évaluation esthétique d'un visage sur la base d'un méta-apprentissage, dispositif et support | |
| CN113762019B (zh) | 特征提取网络的训练方法、人脸识别方法和装置 | |
| Yan et al. | A parameter-free framework for general supervised subspace learning | |
| CN120105202A (zh) | 基于多模态的生成式广义零样本学习方法 | |
| CN118537900A (zh) | 人脸识别方法、装置、电子设备及存储介质 | |
| CN116824237A (zh) | 一种基于两阶段主动学习的图像识别分类方法 | |
| CN116109888A (zh) | 一种语义引导增广数据生成的少样本图像识别方法 | |
| CN116012899A (zh) | 人脸识别模型训练方法、装置和计算机设备 | |
| CN116363236A (zh) | 基于文本的配图方法、装置、设备及存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24930406; Country of ref document: EP; Kind code of ref document: A1 |