
CN120108016A - Expression processing method, smart AI glasses and storage medium - Google Patents


Info

Publication number
CN120108016A
Authority
CN
China
Prior art keywords
target
feature
expression
glasses
feature set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510107294.6A
Other languages
Chinese (zh)
Inventor
Name withheld at the applicant's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pinsheng Technology Co ltd
Original Assignee
Shenzhen Pinsheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pinsheng Technology Co ltd filed Critical Shenzhen Pinsheng Technology Co ltd
Priority to CN202510107294.6A priority Critical patent/CN120108016A/en
Publication of CN120108016A publication Critical patent/CN120108016A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


The present application discloses an expression processing method, smart AI glasses, and a storage medium, applied to smart AI glasses that include a sensor and a communication module. The method includes: capturing a facial expression image of a target object through the sensor, wherein the target object is a user wearing the smart AI glasses, or the target object is a subject photographed by the smart AI glasses; performing image extraction on the facial expression image to obtain a target feature set; generating a target expression image in a preset format according to the target feature set; and sending the target expression image to a designated device through the communication module. Applying the embodiments of the present application can improve the intelligence of smart AI glasses.

Description

Expression processing method, intelligent AI glasses and storage medium
Technical Field
The application relates to the technical field of wearable devices and artificial intelligence, and in particular to an expression processing method, intelligent AI (artificial intelligence) glasses, and a storage medium.
Background
Smart AI glasses are also referred to as AI glasses, smart glasses, or AI smart glasses. With the rapid development of technology, intelligent AI glasses are becoming increasingly popular; at present, however, their functionality remains limited, so how to improve the intelligence of intelligent AI glasses is a problem to be solved.
Disclosure of Invention
The embodiment of the application provides an expression processing method, intelligent AI glasses and a storage medium, which can improve the intelligence of the intelligent AI glasses.
In a first aspect, an embodiment of the present application provides an expression processing method, which is applied to intelligent AI glasses, where the intelligent AI glasses include a sensor and a communication module, and the method includes:
capturing a facial expression image of a target object through the sensor, wherein the target object is a user wearing the intelligent AI glasses or is a shooting object shot by the intelligent AI glasses;
extracting the facial expression image to obtain a target feature set;
generating a target expression image in a preset format according to the target feature set;
and sending the target expression image to a designated device through the communication module.
In a second aspect, an embodiment of the present application provides an expression processing apparatus applied to intelligent AI glasses, where the intelligent AI glasses include a sensor and a communication module, and the apparatus includes an acquisition unit, an extraction unit, a generation unit, and an interaction unit, wherein,
The acquisition unit is used for capturing facial expression images of a target object through the sensor, wherein the target object is a user wearing the intelligent AI glasses or is a shooting object shot by the intelligent AI glasses;
The extraction unit is used for extracting the facial expression image to obtain a target feature set;
the generating unit is used for generating a target expression image in a preset format according to the target feature set;
The interaction unit is used for sending the target expression image to the appointed equipment through the communication module.
In a third aspect, an embodiment of the present application provides smart AI glasses, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the expression processing method, intelligent AI glasses, and storage medium described in the embodiments of the present application are applied to intelligent AI glasses that include a sensor and a communication module. A facial expression image of a target object is captured by the sensor, where the target object is a user wearing the intelligent AI glasses or a subject photographed by the glasses. The facial expression image is extracted to obtain a target feature set, a target expression image in a preset format is generated from that feature set, and the target expression image is sent to a designated device through the communication module. In this way, a user's facial expression image can be collected, converted into a target expression image in the corresponding preset format, and shared with other users (friends), thereby improving the intelligence and interest of the intelligent AI glasses and improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an expression processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a structure of an intelligent AI glasses according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another smart AI glasses according to an embodiment of the present application;
fig. 4 is a functional unit composition block diagram of an intelligent AI glasses provided by an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the embodiments of the present application, the designated device may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone), a palmtop computer, a tablet computer, a Bluetooth speaker, a smart television, a smart refrigerator, a smart robot, a dashcam, a notebook computer, a mobile internet device (MID), or a wearable device (such as smart AI glasses, a smart bracelet, a smart watch, or Bluetooth earphones). These are merely examples and are not limiting; the designated device may also include a server, such as a cloud server.
Embodiments of the present application are described in detail below.
Referring to fig. 1, fig. 1 is a flow chart of an expression processing method provided by an embodiment of the present application, which is applied to intelligent AI glasses, wherein the intelligent AI glasses include a sensor and a communication module, and as shown in the figure, the expression processing method includes:
101. and capturing facial expression images of a target object through the sensor, wherein the target object is a user wearing the intelligent AI glasses or is a shooting object shot by the intelligent AI glasses.
The target object may be a user wearing the intelligent AI glasses, or the target object may be a shooting object shot by the intelligent AI glasses.
As shown in fig. 2, the smart AI glasses include a sensor and a communication module. The sensor is used for capturing a facial expression to obtain a facial expression image, and the communication module is used for implementing a communication function. The communication module may include at least one of a mobile communication module (2G, 3G, 4G, 5G, etc.), a wireless fidelity (Wi-Fi) module, a Bluetooth communication module, an infrared communication module, a millimeter wave communication module, a radar communication module, and the like, which are not limited herein.
In the embodiment of the application, the sensor can comprise one or more sensors, and the sensor can comprise at least one of a camera, a temperature sensor, a humidity sensor, a substance detection sensor, a myoelectric sensor and the like, and is not limited herein.
Wherein the substance detection sensor may be used to detect facial skin parameters of the target object and the myoelectric sensor may be used to detect facial muscle movement parameters of the target object.
In a specific implementation, when the target object is a user wearing the intelligent AI glasses, the sensor may capture a facial expression image of the target object, for example, the facial expression image may be obtained by photographing the face of the target object, or when the target object is a photographing object photographed by the intelligent AI glasses, the sensor may capture a facial expression image of the target object, for example, the facial expression image may be obtained by photographing the face of the photographing object.
102. And extracting the facial expression image to obtain a target feature set.
In a specific implementation, the target feature set may include at least one feature, such as feature points, feature textures, feature regions, feature values, or feature vectors, which are not limited herein.
Specifically, the facial expression image can be extracted to obtain a target feature set, and the target feature set can be used for realizing expression recognition.
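The extraction in step 102 can be sketched as follows. The patent does not prescribe concrete feature types, so the landmark positions and row-band intensity values below are purely illustrative assumptions.

```python
# Illustrative sketch of step 102: extract a target feature set from a
# facial expression image. The feature types (anchor points, per-row
# intensity values) are assumptions; the patent leaves them open.

def extract_target_feature_set(image):
    """Return a dict of features usable for expression recognition."""
    h, w = len(image), len(image[0])
    # Hypothetical feature points: coarse eye/mouth anchor positions.
    feature_points = {
        "left_eye": (w // 3, h // 3),
        "right_eye": (2 * w // 3, h // 3),
        "mouth": (w // 2, 2 * h // 3),
    }
    # Hypothetical feature values: mean intensity per image row.
    feature_values = [sum(row) / w for row in image]
    return {"points": feature_points, "values": feature_values}

# A tiny 4x4 grayscale "image" stands in for the captured frame.
demo_image = [[10, 20, 30, 40] for _ in range(4)]
target_feature_set = extract_target_feature_set(demo_image)
```

In practice the sensor frame would be a real camera image and the extractor a trained model; the dict-of-features shape is the only point this sketch is making.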
103. And generating a target expression image in a preset format according to the target feature set.
The preset format may be preconfigured or a system default. For example, the preset format may be any emoticon pack (sticker) format, which may be static or animated; for instance, it may include an emoji format.
In a specific implementation, an emoticon generator may be used to generate a target expression image in the preset format from the target feature set; for example, the user's facial expression may be captured, converted into a corresponding emoji, and sent to a friend. The emoticon generator may be preconfigured or a system default; for example, it may include a machine learning model, a neural network model, a large model, and the like, which are not limited herein.
104. And sending the target expression image to a designated device through the communication module.
The designated device may be preset or default, and the designated device may be a part of the smart AI glasses or may also be a device that communicates with other than the smart AI glasses.
Wherein the designated device may comprise one or more devices.
In the specific implementation, the target expression image can be sent to the appointed equipment through the communication module, so that the expression image corresponding to the user can be generated through the intelligent AI glasses, the target expression image is obtained and then sent to the appointed equipment, and the intelligence and the interestingness of the intelligent AI glasses can be improved.
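Step 104 can be sketched as below. The patent names no wire format, so the JSON-over-bytes framing and the `device_id` field are assumptions made for illustration only.

```python
# Sketch of step 104: prepare the target expression image for sending to
# a designated device through the communication module. The JSON payload
# shape is an assumption; the patent specifies no serialization format.
import json

def encode_for_send(target_expression_image, device_id):
    """Serialize the expression image and its destination into bytes."""
    payload = {"device": device_id, "image": target_expression_image}
    return json.dumps(payload).encode("utf-8")

# A 2x2 grid stands in for the generated target expression image.
message = encode_for_send([[0, 5], [7, 0]], "phone-01")
```

The resulting bytes would then be handed to whichever transport the communication module provides (Wi-Fi, Bluetooth, etc.).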
Optionally, the step 102 of performing image extraction on the facial expression image to obtain a target feature set may include the following steps:
Extracting features of the facial expression image to obtain a first feature set;
Determining a target expression type corresponding to the first feature set;
And determining the characteristics corresponding to the target expression type according to the first characteristic set to obtain the target characteristic set.
In a specific implementation, the facial expression image may be subjected to feature extraction to obtain a first feature set, and the first feature set may be input into a classification network to obtain a corresponding expression type. The expression type may include at least one of smiling, laughing, crying, frowning, glaring, and the like, which are not limited herein. The classification network may be preconfigured or a system default; for example, it may include at least one of a convolutional neural network, a large model, and the like, without limitation.
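The mapping from first feature set to target expression type can be illustrated with a minimal stand-in for the classification network. A nearest-prototype rule is used here purely for demonstration; the prototype vectors and labels are assumptions, not values from the patent.

```python
# Sketch: classify a first feature set into a target expression type.
# A nearest-prototype rule stands in for the patent's unspecified
# classification network; the prototypes and labels are assumed.

EXPRESSION_PROTOTYPES = {
    "smiling": [0.8, 0.2, 0.1],
    "laughing": [0.9, 0.7, 0.3],
    "frowning": [0.1, 0.1, 0.8],
}

def classify_expression(first_feature_set):
    """Return the expression type whose prototype is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(EXPRESSION_PROTOTYPES,
               key=lambda label: dist(first_feature_set,
                                      EXPRESSION_PROTOTYPES[label]))

target_expression_type = classify_expression([0.85, 0.65, 0.25])
```

A real implementation would replace the prototype table with the trained convolutional network or large model the text mentions; only the feature-set-in, expression-type-out contract is the point here.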
In a specific implementation, the target expression type corresponding to the first feature set can be determined based on the classification network. Different expression types require different features when generating the target expression image; therefore, the features corresponding to the target expression type can be determined from the first feature set to obtain the target feature set. In this way, the intelligence and interest of the intelligent AI glasses can be improved based on the features corresponding to the preset format and the target expression type.
Optionally, the step of determining the feature corresponding to the target expression type according to the first feature set to obtain the target feature set may include the following steps:
Acquiring a feature selection rule set corresponding to the preset format to obtain a plurality of feature selection rule sets, wherein each feature selection rule set corresponds to one expression type;
determining a target feature selection rule set corresponding to the target expression type from the feature selection rule sets;
And screening the first feature set according to the target feature selection rule set to obtain the target feature set.
In a specific implementation, a feature selection rule set corresponding to a preset format may be obtained to obtain a plurality of feature selection rule sets, where each feature selection rule set corresponds to one expression type. The feature selection rule set may include at least one feature selection rule, and the feature selection rule may include at least one of which type of feature (e.g., feature point, feature value, feature vector, feature texture, etc.), which location or region of the feature is selected, which degree of feature is selected, and so on, without limitation.
Then, a target feature selection rule set corresponding to the target expression type can be determined from the feature selection rule sets, and then the first feature set is screened according to the target feature selection rule set to obtain a target feature set.
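The rule-based filtering above can be sketched as follows. Which features each rule set keeps is not specified in the patent, so the rule contents and feature names below are illustrative assumptions.

```python
# Sketch: filter the first feature set with the feature selection rule
# set matching the target expression type. Rule contents and feature
# names are assumptions; the patent only fixes the rule-per-type shape.

FEATURE_SELECTION_RULES = {
    # For each expression type, which feature names to keep.
    "smiling": {"mouth_corner", "cheek"},
    "frowning": {"brow", "mouth_corner"},
}

def select_features(first_feature_set, expression_type):
    """Screen the first feature set down to the rule set's features."""
    rules = FEATURE_SELECTION_RULES[expression_type]
    return {name: value for name, value in first_feature_set.items()
            if name in rules}

first_features = {"mouth_corner": 0.9, "cheek": 0.4, "brow": 0.1}
target_feature_set = select_features(first_features, "smiling")
```

Real rule sets could also select by feature type, position, or degree, as the description notes; a name-based filter is the simplest instance of the same idea.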
Optionally, the step 103 of generating the target expression image in the preset format according to the target feature set may include the following steps:
acquiring a reference expression template set corresponding to the preset format, wherein the reference expression template set comprises a plurality of reference expression templates, and each reference expression template corresponds to one expression type;
determining a target reference expression template corresponding to the target expression type from the reference expression template set;
and determining the target expression image according to the target feature set and the target reference expression template.
In a specific implementation, a reference expression template set corresponding to the preset format can be obtained; the set may include a plurality of reference expression templates, each corresponding to one expression type. A target reference expression template corresponding to the target expression type can then be determined from the set, and the target expression image is determined according to the target feature set and the target reference expression template. In this way, a reference template matching the target expression type can be selected, and the corresponding expression image can be synthesized from that template together with the features corresponding to the preset format and the target expression type, which helps improve the intelligence and interest of the intelligent AI glasses.
Optionally, the step of determining the target expression image according to the target feature set and the target reference expression template may include the steps of:
acquiring a feature position area corresponding to each feature in the target feature set to obtain at least one feature position area;
Marking corresponding areas in the target reference expression template according to the at least one characteristic position area to obtain at least one area;
And filling the target feature set into a corresponding region in the at least one region in the target reference expression template to obtain the target expression image.
In a specific implementation, the feature position area corresponding to each feature in the target feature set can be obtained, yielding at least one feature position area. The corresponding areas in the target reference expression template are then marked according to these feature position areas, yielding at least one area. Finally, the target feature set is filled into the corresponding areas of the target reference expression template to obtain the target expression image. That is, the target feature set can be fused into the corresponding positions of the target reference expression template, so that the target expression image incorporates some features of the target object and is deeply related to the target object's facial features, which helps improve the intelligence and interest of the intelligent AI glasses.
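The mark-and-fill procedure can be sketched with the template as a small 2D grid. Representing position areas as single `(row, col)` cells is a simplifying assumption; the patent's areas could be arbitrary regions.

```python
# Sketch: mark regions of the target reference expression template from
# each feature's position area, then fill the features into them. The
# grid template and single-cell position areas are assumptions.

def fill_template(template, feature_set):
    """Fuse each feature's value into its position area of the template."""
    result = [row[:] for row in template]  # copy; keep the template intact
    for (row, col), value in feature_set.items():
        result[row][col] = value  # fill the feature at its marked area
    return result

reference_template = [[0, 0], [0, 0]]
# Feature position areas map (row, col) cells to feature values.
target_features = {(0, 1): 5, (1, 0): 7}
target_expression_image = fill_template(reference_template, target_features)
```

With image data, the "fill" would be a blend or warp of pixel regions rather than a cell assignment, but the control flow — locate area, mark, fuse — is the same.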
Optionally, in the step 102, feature extraction is performed on the facial expression image to obtain a first feature set, which may include the following steps:
Acquiring target shooting parameters of the target expression image;
acquiring target skin color parameters of the target expression image;
determining a target feature extraction algorithm corresponding to the target skin color parameters;
determining target algorithm control parameters of the target feature extraction algorithm corresponding to the target shooting parameters;
and carrying out feature extraction on the facial expression image according to the target feature extraction algorithm and the target algorithm control parameter to obtain the first feature set.
The target shooting parameters may include at least one of shooting angle, shooting distance, sensitivity, exposure time, etc., which are not limited herein.
The target skin tone parameters may include at least one of skin tone type, skin tone degree, skin tone age, etc., without limitation herein.
In specific implementation, the target shooting parameters of the target expression image can be obtained, the target skin color parameters of the target expression image can be obtained, the mapping relation between the preset skin color parameters and the feature extraction algorithm can be stored in advance, the target feature extraction algorithm corresponding to the target skin color parameters can be determined based on the mapping relation, the mapping relation between the preset shooting parameters and the algorithm control parameters of the target feature extraction algorithm can be stored in advance, and the target algorithm control parameters of the target feature extraction algorithm corresponding to the target shooting parameters can be determined based on the mapping relation.
The target algorithm control parameter may be used to control a feature extraction effect of the target feature extraction algorithm, where the feature extraction effect may include at least one of a feature type, a feature extraction degree, a feature extraction speed, a feature extraction area, and the like, and is not limited herein.
The target feature extraction algorithm may include one or more feature extraction algorithms, each of which may be used to extract one type of feature.
Then, feature extraction can be performed on the facial expression image according to the target feature extraction algorithm and the target algorithm control parameters to obtain the first feature set. On the one hand, the target feature extraction algorithm can be determined based on the skin color parameters of the target object, which preliminarily ensures the completeness and accuracy of feature extraction; on the other hand, the algorithm control parameters can be deeply optimized based on the shooting parameters, so that the final control parameters fit the characteristics of the image. Together these help ensure a target expression image that closely matches the target object, thereby improving the intelligence and interest of the intelligent AI glasses.
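The two pre-stored mappings described above can be sketched as simple lookup tables. The table keys and values below (skin-color buckets, angle/distance buckets, algorithm names, intensity numbers) are all illustrative assumptions; the patent only fixes the mapping structure.

```python
# Sketch of the two pre-stored mappings: skin color parameters -> feature
# extraction algorithm, and shooting parameters -> algorithm control
# parameters. All keys, names, and values here are assumptions.

SKIN_TO_ALGORITHM = {
    "light": "edge_based_extractor",
    "dark": "contrast_normalized_extractor",
}

SHOOTING_TO_CONTROL = {
    # (angle bucket, distance bucket) -> extraction intensity
    ("frontal", "near"): 1.0,
    ("frontal", "far"): 1.5,
    ("oblique", "near"): 1.2,
}

def choose_extraction(skin_param, angle_bucket, distance_bucket):
    """Look up the extraction algorithm and its control parameter."""
    algorithm = SKIN_TO_ALGORITHM[skin_param]
    control = SHOOTING_TO_CONTROL[(angle_bucket, distance_bucket)]
    return algorithm, control

algo, control = choose_extraction("dark", "frontal", "far")
```

In a deployed system the tables would be populated offline (the "pre-stored mapping relations" the text mentions) and the control parameter might be a vector rather than a scalar.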
Optionally, the smart AI glasses further communicate with the wearable device of the target object, and the step of determining the target algorithm control parameter of the target feature extraction algorithm corresponding to the target shooting parameter may include the following steps:
acquiring the capturing moment of the facial expression image;
Acquiring, by the communication module, a target physiological state parameter of the target object acquired by the wearable device;
Determining a first algorithm control parameter of the target feature extraction algorithm corresponding to the target shooting parameter;
determining a first optimization parameter corresponding to the target physiological state parameter;
And determining the target algorithm control parameter according to the first optimization parameter and the first algorithm control parameter.
Wherein the target physiological state parameter may include at least one of blood pressure, muscle movement parameter, blood sugar, blood fat, brain wave parameter, vein parameter, electrocardiogram, etc., without limitation. The target physiological state parameter reflects to some extent the emotional condition of the target subject.
In a specific implementation, the capturing moment of the facial expression image can be obtained, the target physiological state parameter of the target object acquired by the wearable device can be obtained through the communication module, the mapping relation between the preset physiological state parameter and the optimization parameter can be stored in advance, and further, the first optimization parameter corresponding to the target physiological state parameter can be determined based on the mapping relation.
A mapping relation between preset shooting parameters and the algorithm control parameters of the target feature extraction algorithm can be stored in advance, and the first algorithm control parameter corresponding to the target shooting parameters can be determined based on it. The target algorithm control parameter can then be determined from the first optimization parameter and the first algorithm control parameter, namely: target algorithm control parameter = (1 + first optimization parameter) × first algorithm control parameter. In this way, the algorithm control parameter can be deeply optimized based on the user's physiological features, so that the final control parameter further conforms to the emotion of the target object. This helps ensure the completeness and accuracy of feature extraction, and in turn a target expression image that closely matches the target object, improving the intelligence and interest of the intelligent AI glasses.
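The combination rule above is a one-line formula; the sketch below applies it with an assumed physiological-state-to-optimization-parameter table (the table contents are illustrative, the formula itself is from the description).

```python
# Sketch of: target control = (1 + first optimization) * first control.
# The physiological-state table is an assumption; the formula is the
# one given in the description.

PHYSIOLOGY_TO_OPTIMIZATION = {
    "calm": 0.0,      # no adjustment when the wearer is relaxed
    "excited": 0.2,   # boost extraction intensity for strong emotion
}

def target_control_parameter(physiological_state, first_control):
    """Optimize the base control parameter with the physiological state."""
    first_optimization = PHYSIOLOGY_TO_OPTIMIZATION[physiological_state]
    return (1 + first_optimization) * first_control

result = target_control_parameter("excited", 1.5)
```

With `first_control = 1.5` and an optimization parameter of `0.2`, the target control parameter comes out to `1.8`; a "calm" state leaves the base value unchanged.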
It can be seen that the expression processing method described in the embodiments of the present application is applied to intelligent AI glasses that include a sensor and a communication module. A facial expression image of the target object is captured through the sensor, where the target object is a user wearing the intelligent AI glasses or a subject photographed by the glasses. Image extraction is performed on the facial expression image to obtain a target feature set, a target expression image in a preset format is generated from that feature set, and the target expression image is sent to a designated device through the communication module. In this way, a user's facial expression image can be collected, converted into a corresponding target expression image in the preset format, and shared with other users (friends), thereby improving the intelligence and interest of the intelligent AI glasses and improving the user experience.
In accordance with the above embodiment, please refer to fig. 3, fig. 3 is a schematic structural diagram of another smart AI glasses provided in an embodiment of the present application, as shown in the drawing, the smart AI glasses include a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and in the embodiment of the present application, the smart AI glasses include a sensor and a communication module, and the programs include instructions for executing the following steps:
capturing a facial expression image of a target object through the sensor, wherein the target object is a user wearing the intelligent AI glasses or is a shooting object shot by the intelligent AI glasses;
extracting the facial expression image to obtain a target feature set;
generating a target expression image in a preset format according to the target feature set;
and sending the target expression image to a designated device through the communication module.
Optionally, in the aspect of performing image extraction on the facial expression image to obtain a target feature set, the program includes instructions for performing the following steps:
extracting features from the facial expression image to obtain a first feature set;
determining a target expression type corresponding to the first feature set;
and determining the features corresponding to the target expression type according to the first feature set, to obtain the target feature set.
Optionally, in the aspect of determining the features corresponding to the target expression type according to the first feature set to obtain the target feature set, the program includes instructions for executing the following steps:
acquiring feature selection rule sets corresponding to the preset format to obtain a plurality of feature selection rule sets, where each feature selection rule set corresponds to one expression type;
determining a target feature selection rule set corresponding to the target expression type from the plurality of feature selection rule sets;
and screening the first feature set according to the target feature selection rule set to obtain the target feature set.
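The rule-set screening described above (one feature selection rule set per expression type, then filtering the first feature set with the set matching the target type) can be sketched as follows; the rule contents and feature names are illustrative assumptions, not taken from the application:

```python
# Hypothetical feature selection rule sets, one per expression type.
FEATURE_RULES = {
    "smile": {"mouth_curve", "eye_openness"},
    "surprise": {"brow_raise", "eye_openness"},
}

def screen_features(first_feature_set, target_expression_type):
    """Screen the first feature set with the rule set for the target type."""
    rule_set = FEATURE_RULES[target_expression_type]
    # Keep only the features named by the target feature selection rule set.
    return {name: value for name, value in first_feature_set.items()
            if name in rule_set}
```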
Optionally, in the aspect of generating the target expression image in a preset format according to the target feature set, the program includes instructions for executing the following steps:
acquiring a reference expression template set corresponding to the preset format, wherein the reference expression template set comprises a plurality of reference expression templates, and each reference expression template corresponds to one expression type;
determining a target reference expression template corresponding to the target expression type from the reference expression template set;
and determining the target expression image according to the target feature set and the target reference expression template.
Optionally, in the aspect of determining the target expression image according to the target feature set and the target reference expression template, the program includes instructions for performing the following steps:
acquiring a feature position area corresponding to each feature in the target feature set to obtain at least one feature position area;
marking corresponding areas in the target reference expression template according to the at least one feature position area to obtain at least one area;
and filling the target feature set into the corresponding areas among the at least one area in the target reference expression template to obtain the target expression image.
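The marking-and-filling steps can be sketched as a simple mapping from features to regions of the template; the template contents, region names, and feature-to-region mapping below are hypothetical:

```python
def fill_template(template, target_features, feature_regions):
    """Fill target features into the marked regions of a reference template.

    template:        region name -> default content (the reference template)
    target_features: feature name -> feature value (the target feature set)
    feature_regions: feature name -> region name (the marked areas)
    """
    result = dict(template)  # copy so the reference template itself is unchanged
    for feature, value in target_features.items():
        region = feature_regions[feature]  # mark the corresponding area
        result[region] = value             # fill the feature into that area
    return result
```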
Optionally, in the aspect of performing feature extraction on the facial expression image to obtain a first feature set, the program includes instructions for:
acquiring target shooting parameters of the target expression image;
acquiring target skin color parameters of the target expression image;
determining a target feature extraction algorithm corresponding to the target skin color parameters;
determining target algorithm control parameters of the target feature extraction algorithm corresponding to the target shooting parameters;
and performing feature extraction on the facial expression image according to the target feature extraction algorithm and the target algorithm control parameters to obtain the first feature set.
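A minimal sketch of choosing the feature extraction algorithm from the skin color parameter and its control parameters from the shooting parameters; all thresholds, parameter names, and algorithm names here are invented for illustration and are not part of the application:

```python
def pick_algorithm(skin_tone):
    """Map a skin color parameter (0..1) to a feature extraction algorithm."""
    # Darker tones get a higher-contrast extractor in this toy mapping.
    return "algo_high_contrast" if skin_tone < 0.5 else "algo_low_contrast"

def pick_control_params(exposure, iso):
    """Derive algorithm control parameters from shooting parameters."""
    return {
        # A brighter exposure tolerates a tighter edge threshold here.
        "edge_threshold": 0.2 if exposure > 0.7 else 0.4,
        # High ISO images are noisier, so enable denoising.
        "denoise": iso > 800,
    }
```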
Optionally, the smart AI glasses further communicate with the wearable device of the target object, and the step of determining the target algorithm control parameter of the target feature extraction algorithm corresponding to the target shooting parameter may include the following steps:
acquiring the capturing moment of the facial expression image;
acquiring, through the communication module, a target physiological state parameter of the target object acquired by the wearable device;
determining a first algorithm control parameter of the target feature extraction algorithm corresponding to the target shooting parameters;
determining a first optimization parameter corresponding to the target physiological state parameter;
and determining the target algorithm control parameter according to the first optimization parameter and the first algorithm control parameter.
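Combining the first algorithm control parameter with a physiology-derived first optimization parameter can be sketched as a simple scaling; the heart-rate threshold, the 0.9 factor, and the parameter names are illustrative assumptions:

```python
def optimization_from_physiology(heart_rate):
    """Derive a first optimization parameter from a physiological state value."""
    # An elevated heart rate slightly relaxes the extractor in this toy rule.
    return 0.9 if heart_rate > 100 else 1.0

def final_control_params(base_params, optimization):
    """Scale numeric control parameters by the optimization factor."""
    return {
        name: (value * optimization
               if isinstance(value, (int, float)) and not isinstance(value, bool)
               else value)  # non-numeric / boolean parameters pass through
        for name, value in base_params.items()
    }
```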
It can be seen that the smart AI glasses described in the embodiments of the present application include a sensor and a communication module. The sensor captures a facial expression image of a target object, where the target object is a user wearing the smart AI glasses or a shooting object photographed by the smart AI glasses; image extraction is performed on the facial expression image to obtain a target feature set; a target expression image in a preset format is generated according to the target feature set; and the communication module sends the target expression image to a designated device. In this way, the facial expression image of a user can be acquired, and a corresponding target expression image in the preset format can be generated and shared with other users (friends), which improves the intelligence and interest of the smart AI glasses and enhances the user experience.
Fig. 4 is a block diagram of functional units of a smart AI glasses 400 according to an embodiment of the application. The smart AI glasses 400 include an acquisition unit 401, an extraction unit 402, a generation unit 403, and an interaction unit 404, wherein,
The acquiring unit 401 is configured to capture, by using the sensor, a facial expression image of a target object, where the target object is a user wearing the smart AI glasses, or the target object is a shooting object shot by the smart AI glasses;
the extracting unit 402 is configured to perform image extraction on the facial expression image to obtain a target feature set;
the generating unit 403 is configured to generate a target expression image in a preset format according to the target feature set;
The interaction unit 404 is configured to send, through the communication module, the target expression image to a specified device.
Optionally, in the aspect of performing image extraction on the facial expression image to obtain a target feature set, the extracting unit 402 is specifically configured to:
extracting features from the facial expression image to obtain a first feature set;
determining a target expression type corresponding to the first feature set;
and determining the features corresponding to the target expression type according to the first feature set, to obtain the target feature set.
Optionally, in the aspect of determining the features corresponding to the target expression type according to the first feature set to obtain the target feature set, the extracting unit 402 is specifically configured to:
acquiring feature selection rule sets corresponding to the preset format to obtain a plurality of feature selection rule sets, where each feature selection rule set corresponds to one expression type;
determining a target feature selection rule set corresponding to the target expression type from the plurality of feature selection rule sets;
and screening the first feature set according to the target feature selection rule set to obtain the target feature set.
Optionally, in the aspect of generating the target expression image in the preset format according to the target feature set, the generating unit 403 is specifically configured to:
acquiring a reference expression template set corresponding to the preset format, wherein the reference expression template set comprises a plurality of reference expression templates, and each reference expression template corresponds to one expression type;
determining a target reference expression template corresponding to the target expression type from the reference expression template set;
and determining the target expression image according to the target feature set and the target reference expression template.
Optionally, in the aspect of determining the target expression image according to the target feature set and the target reference expression template, the generating unit 403 is specifically configured to:
acquiring a feature position area corresponding to each feature in the target feature set to obtain at least one feature position area;
marking corresponding areas in the target reference expression template according to the at least one feature position area to obtain at least one area;
and filling the target feature set into the corresponding areas among the at least one area in the target reference expression template to obtain the target expression image.
Optionally, in the aspect of performing feature extraction on the facial expression image to obtain a first feature set, the extracting unit 402 is specifically configured to:
acquiring target shooting parameters of the target expression image;
acquiring target skin color parameters of the target expression image;
determining a target feature extraction algorithm corresponding to the target skin color parameters;
determining target algorithm control parameters of the target feature extraction algorithm corresponding to the target shooting parameters;
and carrying out feature extraction on the facial expression image according to the target feature extraction algorithm and the target algorithm control parameter to obtain the first feature set.
Optionally, the smart AI glasses further communicate with a wearable device of the target object, and in the aspect of determining a target algorithm control parameter of the target feature extraction algorithm corresponding to the target shooting parameter, the extraction unit 402 is specifically configured to:
acquiring the capturing moment of the facial expression image;
acquiring, through the communication module, a target physiological state parameter of the target object acquired by the wearable device;
determining a first algorithm control parameter of the target feature extraction algorithm corresponding to the target shooting parameters;
determining a first optimization parameter corresponding to the target physiological state parameter;
and determining the target algorithm control parameter according to the first optimization parameter and the first algorithm control parameter.
It can be seen that the smart AI glasses described in the embodiments of the present application include a sensor and a communication module. The sensor captures a facial expression image of a target object, where the target object is a user wearing the smart AI glasses or a shooting object photographed by the smart AI glasses; image extraction is performed on the facial expression image to obtain a target feature set; a target expression image in a preset format is generated according to the target feature set; and the communication module sends the target expression image to a designated device. In this way, the facial expression image of a user can be acquired, and a corresponding target expression image in the preset format can be generated and shared with other users (friends), which improves the intelligence and interest of the smart AI glasses and enhances the user experience.
It may be understood that the functions of each program module of the smart AI glasses of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the present application also provides a computer storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to execute some or all of the steps of any one of the methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware, where the program may be stored in a computer-readable memory, and the memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An expression processing method, applied to smart AI glasses, wherein the smart AI glasses comprise a sensor and a communication module, and the method comprises:
capturing a facial expression image of a target object through the sensor, wherein the target object is a user wearing the smart AI glasses, or the target object is a shooting object photographed by the smart AI glasses;
performing image extraction on the facial expression image to obtain a target feature set;
generating a target expression image in a preset format according to the target feature set; and
sending the target expression image to a designated device through the communication module.
2. The method according to claim 1, wherein the performing image extraction on the facial expression image to obtain a target feature set comprises:
performing feature extraction on the facial expression image to obtain a first feature set;
determining a target expression type corresponding to the first feature set; and
determining features corresponding to the target expression type according to the first feature set, to obtain the target feature set.
3. The method according to claim 2, wherein the determining features corresponding to the target expression type according to the first feature set to obtain the target feature set comprises:
acquiring feature selection rule sets corresponding to the preset format to obtain a plurality of feature selection rule sets, wherein each feature selection rule set corresponds to one expression type;
determining a target feature selection rule set corresponding to the target expression type from the plurality of feature selection rule sets; and
screening the first feature set according to the target feature selection rule set to obtain the target feature set.
4. The method according to claim 3, wherein the generating a target expression image in a preset format according to the target feature set comprises:
acquiring a reference expression template set corresponding to the preset format, wherein the reference expression template set comprises a plurality of reference expression templates, and each reference expression template corresponds to one expression type;
determining a target reference expression template corresponding to the target expression type from the reference expression template set; and
determining the target expression image according to the target feature set and the target reference expression template.
5. The method according to claim 4, wherein the determining the target expression image according to the target feature set and the target reference expression template comprises:
acquiring a feature position area corresponding to each feature in the target feature set to obtain at least one feature position area;
marking corresponding areas in the target reference expression template according to the at least one feature position area to obtain at least one area; and
filling the target feature set into the corresponding areas among the at least one area in the target reference expression template to obtain the target expression image.
6. The method according to claim 2, wherein the performing feature extraction on the facial expression image to obtain a first feature set comprises:
acquiring target shooting parameters of the target expression image;
acquiring target skin color parameters of the target expression image;
determining a target feature extraction algorithm corresponding to the target skin color parameters;
determining target algorithm control parameters of the target feature extraction algorithm corresponding to the target shooting parameters; and
performing feature extraction on the facial expression image according to the target feature extraction algorithm and the target algorithm control parameters to obtain the first feature set.
7. Smart AI glasses, wherein the smart AI glasses comprise a sensor and a communication module, and further comprise an acquiring unit, an extracting unit, a generating unit, and an interacting unit, wherein:
the acquiring unit is configured to capture, through the sensor, a facial expression image of a target object, wherein the target object is a user wearing the smart AI glasses, or the target object is a shooting object photographed by the smart AI glasses;
the extracting unit is configured to perform image extraction on the facial expression image to obtain a target feature set;
the generating unit is configured to generate a target expression image in a preset format according to the target feature set; and
the interacting unit is configured to send the target expression image to a designated device through the communication module.
8. The smart AI glasses according to claim 7, wherein, in the aspect of performing image extraction on the facial expression image to obtain a target feature set, the extracting unit is specifically configured to:
perform feature extraction on the facial expression image to obtain a first feature set;
determine a target expression type corresponding to the first feature set; and
determine features corresponding to the target expression type according to the first feature set, to obtain the target feature set.
9. Smart AI glasses, comprising a processor and a memory, wherein the memory is configured to store one or more programs configured to be executed by the processor, and the programs comprise instructions for executing the steps of the method according to any one of claims 1-6.
10. A computer-readable storage medium storing a computer program, wherein the computer program comprises program instructions which, when executed by a processor, cause the processor to execute the method according to any one of claims 1-6.
CN202510107294.6A 2025-01-23 2025-01-23 Expression processing method, smart AI glasses and storage medium Pending CN120108016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510107294.6A CN120108016A (en) 2025-01-23 2025-01-23 Expression processing method, smart AI glasses and storage medium


Publications (1)

Publication Number Publication Date
CN120108016A true CN120108016A (en) 2025-06-06

Family

ID=95873087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510107294.6A Pending CN120108016A (en) 2025-01-23 2025-01-23 Expression processing method, smart AI glasses and storage medium

Country Status (1)

Country Link
CN (1) CN120108016A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150332088A1 (en) * 2014-05-16 2015-11-19 Verizon Patent And Licensing Inc. Generating emoticons based on an image of a face
WO2018128996A1 (en) * 2017-01-03 2018-07-12 Clipo, Inc. System and method for facilitating dynamic avatar based on real-time facial expression detection
CN108876877A (en) * 2017-05-16 2018-11-23 苹果公司 Emoticon dualization
US20200219295A1 (en) * 2010-06-07 2020-07-09 Affectiva, Inc. Emoji manipulation using machine learning
CN115546361A (en) * 2021-06-30 2022-12-30 腾讯科技(深圳)有限公司 Three-dimensional cartoon image processing method and device, computer equipment and storage medium
CN115953512A (en) * 2022-12-29 2023-04-11 北京百度网讯科技有限公司 Expression generation method, neural network training method, device, equipment and medium


Similar Documents

Publication Publication Date Title
CN110956691B (en) A three-dimensional face reconstruction method, device, equipment and storage medium
CN113313085B (en) Image processing method and device, electronic equipment and storage medium
CN104519263B (en) The method and electronic equipment of a kind of image acquisition
CN109145788A (en) Attitude data method for catching and system based on video
CN108198130B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109242940B (en) Method and device for generating three-dimensional dynamic image
CN114092678A (en) Image processing method, device, electronic device and storage medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN114007099A (en) Video processing method and device for video processing
CN114360018B (en) Rendering method and device of three-dimensional facial expression, storage medium and electronic device
CN108388889B (en) Method and device for analyzing face image
CN109978640A (en) Apparel try-on method, device, storage medium and mobile terminal
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
KR101995411B1 (en) Device and method for making body model
US20250182368A1 (en) Method and application for animating computer generated images
CN113705311A (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN113920023A (en) Image processing method and device, computer readable medium and electronic device
CN113822976A (en) Training method and device of generator, storage medium and electronic device
CN120108016A (en) Expression processing method, smart AI glasses and storage medium
CN115984943B (en) Facial expression capturing and model training method, device, equipment, medium and product
CN113176827B (en) AR interaction method and system based on expressions, electronic device and storage medium
CN113298731B (en) Image color migration method and device, computer readable medium and electronic equipment
CN114078082B (en) A training and image generation method and device for a gender conversion model of a person image
CN111738087B (en) Method and device for generating face model of game character
CN111461005B (en) Gesture recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination