WO2020155984A1 - Facial expression image processing method and apparatus, and electronic device - Google Patents
Facial expression image processing method and apparatus, and electronic device
- Publication number
- WO2020155984A1 (PCT/CN2019/129140, CN2019129140W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- facial expression
- facial
- processing
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Definitions
- the present disclosure relates to the field of image processing, and in particular to a method, device, electronic device, and computer-readable storage medium for processing facial expression images.
- smart terminals can be used to listen to music, play games, chat online, and take photos.
- with the development of smart-terminal camera technology, camera resolution has exceeded 10 megapixels, offering high definition and image quality comparable to professional cameras.
- embodiments of the present disclosure provide a method for processing facial expression images, including: acquiring a first image, where the first image includes a face image; recognizing the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect.
- optionally, the acquiring of the first image, where the first image includes a face image, includes: acquiring a first video, where at least one video frame in the first video includes a face image.
- optionally, the recognizing of the facial expression of the face image includes: recognizing the face image in the first image; extracting facial expression features from the face image; and recognizing the facial expression according to the facial expression features.
- optionally, the performing of first processing on the face image to obtain the first face image includes: in response to recognizing that the facial expression is the first facial expression, acquiring a processing configuration file corresponding to the first facial expression; and performing first processing on the face image according to the processing configuration file to obtain the first face image.
- optionally, the acquiring of a processing configuration file corresponding to the first facial expression includes: recognizing the facial expression as the first facial expression; and, when the level of the first facial expression reaches a preset level, acquiring a processing configuration file corresponding to the first facial expression.
- optionally, the acquiring of a processing configuration file corresponding to the first facial expression includes: recognizing the facial expression as the first facial expression; acquiring a processing configuration file corresponding to the first facial expression; determining the level of the first facial expression; and setting the processing parameters in the processing configuration file according to the level of the first facial expression.
- optionally, the performing of first processing on the face image according to the processing configuration file to obtain the first face image includes: segmenting the face image from the first image; and enlarging the segmented face image according to the processing configuration file to obtain an enlarged face image.
- optionally, the covering of the first face image on the position of the face image to obtain the first image effect includes: acquiring a first positioning feature point on the first face image and a second positioning feature point on the face image; and overlaying the first face image on the face image such that the first positioning feature point coincides with the second positioning feature point, to obtain a first image effect.
- optionally, the acquiring of the first image includes: acquiring a first image that includes at least two face images.
- optionally, the recognizing of the facial expression of the face image includes: recognizing the facial expression of each of the at least two face images.
- optionally, the performing of first processing on the face image to obtain the first face image includes: in response to recognizing that at least one of the facial expressions is the first facial expression, performing first processing on the face image corresponding to the first facial expression to obtain the first face image.
- optionally, the covering of the first face image on the position of the face image to obtain the first image effect includes: overlaying the at least one first face image on the position of the face image corresponding to the first face image to obtain a first image effect.
- a facial expression image processing device including:
- the first image acquisition module is configured to acquire a first image, and the first image includes a face image
- the facial expression recognition module is used to recognize the facial expressions of the facial image
- the first processing module is configured to perform first processing on the face image in response to recognizing that the facial expression is the first facial expression to obtain the first facial image;
- the facial expression image processing module is used to overlay the first facial image on the position of the facial image to obtain the first image effect.
- the first image acquisition module further includes:
- the first video acquisition module is configured to acquire a first video, and at least one video frame in the first video includes a face image.
- the facial expression recognition module further includes:
- a face recognition module for recognizing a face image in the first image
- An expression feature extraction module for extracting facial expression features from the face image
- the facial expression recognition sub-module is used to recognize facial expressions according to the facial expression features.
- the first processing module further includes:
- a processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is a first facial expression
- the first face image processing module is configured to perform first processing on the face image according to the processing configuration file to obtain a first face image.
- the processing configuration file obtaining module further includes:
- the first facial expression recognition module is used to recognize the facial expression as the first facial expression
- the first processing configuration file obtaining module is configured to obtain a processing configuration file corresponding to the first facial expression when the level of the first facial expression reaches a preset level.
- the processing configuration file obtaining module further includes:
- the second facial expression recognition module is used to recognize the facial expression as the first facial expression
- a second processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression
- An expression level judgment module configured to determine the level of the first facial expression
- the processing parameter setting module is configured to set the processing parameters in the processing configuration file according to the level of the first facial expression.
- the first face image processing module further includes:
- a face segmentation module configured to segment the face image from the first image
- the magnification module is configured to perform magnification processing on the segmented face image according to the processing configuration file to obtain the magnified face image.
- the facial expression image processing module further includes:
- a positioning feature point acquisition module configured to acquire a first positioning feature point on the first face image and a second positioning feature point on the face image
- the covering module is used for covering the first face image on the face image, and making the first positioning feature point coincide with the second positioning feature point to obtain a first image effect.
- a facial expression image processing device including:
- the second image acquisition module is configured to acquire a first image, and the first image includes at least two face images;
- the third facial expression recognition module is used to recognize the facial expression of each of the at least two facial images
- the second processing module is configured to, in response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain the first face image;
- the first facial expression image processing module is configured to overlay the at least one first facial image on the position of the facial image corresponding to the first facial image to obtain a first image effect.
- the second processing module further includes:
- a corresponding processing configuration file obtaining module configured to obtain a first processing configuration file corresponding to the first facial expression of the face image in response to recognizing that at least one of the facial expressions is a first facial expression
- the second processing submodule is configured to perform first processing on the face image corresponding to the first facial expression according to the first processing configuration file to obtain a first face image.
- an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the device to execute any of the facial expression image processing methods described in the foregoing first aspect.
- embodiments of the present disclosure provide a non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to execute any of the facial expression image processing methods described in the foregoing first aspect.
- the present disclosure discloses a method, device, electronic equipment and computer-readable storage medium for processing facial expression images.
- the method for processing a facial expression image includes: acquiring a first image, the first image including a face image; recognizing the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect.
- the embodiments of the present disclosure control the generation of the face image effect through the facial expression, which solves the technical problems in the prior art of complex image-effect production, fixed processing effects, and the inability to configure processing effects flexibly.
- FIG. 1 is a flowchart of Embodiment 1 of a facial expression image processing method provided by an embodiment of the disclosure
- FIGS. 2a-2e are schematic diagrams of specific examples of facial expression image processing methods provided by embodiments of the disclosure.
- FIG. 3 is a flowchart of Embodiment 2 of a method for processing facial expression images provided by an embodiment of the disclosure
- FIG. 4 is a schematic structural diagram of Embodiment 1 of a facial expression image processing apparatus provided by an embodiment of the disclosure;
- FIG. 5 is a schematic structural diagram of Embodiment 2 of a facial expression image processing apparatus provided by an embodiment of the disclosure;
- FIG. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
- FIG. 1 is a flowchart of the first implementation of a facial expression image processing method provided by an embodiment of the present disclosure.
- the facial expression image processing method provided in this embodiment may be executed by a facial expression image processing device.
- the device can be implemented as software, or as a combination of software and hardware.
- the facial expression image processing device can be integrated in a certain device in a facial expression image processing system, such as a facial expression image processing server or a facial expression image processing terminal device. As shown in FIG. 1, the method includes the following steps:
- Step S101: Obtain a first image, where the first image includes a face image;
- the obtaining of the first image includes obtaining the first image from a local storage space or from a network storage space. Regardless of where the first image comes from, the storage address of the first image is first obtained, and the first image is then read from that storage address.
- the first image may be a video image or a picture, or a picture with dynamic effects, which will not be repeated here.
- the acquiring the first image includes acquiring the first video, and at least one video frame in the first video includes a face image.
- the first video can be obtained through an image sensor, which refers to any device that can collect images; typical image sensors include video cameras, still cameras, webcams, and so on.
- optionally, the image sensor may be a camera on a mobile terminal, such as the front or rear camera of a smartphone, and the video image collected by the camera can be displayed directly on the phone's screen. In this step, the image or video captured by the image sensor is obtained for further image recognition in the next step.
- the first image includes a face image, which is the basis for facial expression recognition.
- if the first image is a picture, the picture includes at least one face image; if the first image is a video, at least one video frame in the video includes at least one face image.
- Step S102: Recognize the facial expression of the face image;
- recognizing the facial expression of the face image includes: recognizing the face image in the first image; extracting facial expression features from the face image; and recognizing the facial expression according to the facial expression features.
- Face detection is the process of searching an arbitrarily given image or image sequence according to a certain strategy in order to determine the positions and regions of all faces it contains.
- face detection methods can be divided into four categories: (1) knowledge-based methods, which encode typical faces as a rule base and locate faces through the relationships between facial features; (2) feature-invariance methods, which look for features that remain stable when pose, viewing angle, or lighting conditions change, and then use these features to determine the face; (3) template matching methods, which store several standard face patterns describing the whole face and the facial features separately, and then compute the correlation between the input image and the stored patterns for detection; and (4) appearance-based methods, which, in contrast to template matching, learn models from a set of training images and use these models for detection.
- An implementation of method (4) can be used here to illustrate the process of face detection. First, features need to be extracted to build the model. This embodiment uses Haar features as the key features for judging a face; a Haar feature is a simple rectangular feature that can be extracted quickly.
- the feature template used to compute Haar features is a simple combination of two or more congruent rectangles, where the template contains both black and white rectangles.
- the AdaBoost algorithm is then used to select a subset of key features from the large pool of Haar features and to generate an effective classifier from them.
- the constructed classifier can detect the face in the image.
- in addition, multiple facial feature points can be detected; typically, 106 feature points can be used to identify a face.
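- As a concrete illustration of this detection pipeline (a minimal sketch, not the specific implementation of this disclosure), the following example uses OpenCV's pretrained frontal-face Haar cascade, which is built with exactly this AdaBoost-over-Haar-features approach; the input path and detector parameters are assumptions for the example.

```python
import cv2

# OpenCV ships pretrained Haar cascades trained with AdaBoost over Haar features.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("input.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, w, h) rectangle marking one face region.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

- A landmark model (such as the 106-point layout mentioned above) would then be run inside each detected rectangle to locate the individual facial feature points.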
- Face image preprocessing mainly includes denoising, normalization of scale and gray level, etc.
- the input image usually contains a complex scene, and the face images obtained by face detection usually differ in size, aspect ratio, lighting conditions, partial occlusion, and head pose; preprocessing is therefore required.
- after preprocessing, the facial expression features are extracted.
- motion-based feature extraction methods mainly describe expression changes based on changes in the relative positions and distances of facial feature points across sequence images, and include optical flow, motion models, and feature point tracking; these methods are relatively robust. Deformation-based feature extraction methods are mainly used to extract features from static images.
- model features are obtained by comparison with the appearance or texture of a natural-expression (neutral) model. Typical algorithms are based on the active appearance model (AAM) and the point distribution model (PDM), and, for texture features, on the Gabor transform and local binary patterns (LBP).
- AAM: active appearance model
- PDM: point distribution model
- facial expression classification sends the expression features extracted in the previous stage to a trained classifier or regressor, which outputs a predicted value used to judge the expression category corresponding to the features.
- common expression classification algorithms include linear classifiers, neural network classifiers, support vector machines (SVM), hidden Markov models, and other classification and recognition methods.
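- To make the classification stage concrete, here is a minimal sketch of one of the options listed above: an SVM over LBP texture features. The feature choice, parameters, and label set are illustrative assumptions, not the specific pipeline of this disclosure.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(face_gray: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Describe a grayscale face crop as a normalized uniform-LBP histogram."""
    lbp = local_binary_pattern(face_gray, points, radius, method="uniform")
    # Uniform LBP yields integer codes in [0, points + 1], hence points + 2 bins.
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def train_expression_classifier(face_crops, labels) -> SVC:
    """Fit an SVM on LBP features; labels might be 'smile', 'sad', 'angry', ..."""
    features = np.stack([lbp_histogram(f) for f in face_crops])
    clf = SVC(kernel="rbf", probability=True)  # probability=True exposes confidences
    clf.fit(features, labels)
    return clf
```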
- Step S103: In response to recognizing that the facial expression is the first facial expression, perform first processing on the face image to obtain the first face image;
- optionally, the performing of first processing on the face image to obtain the first face image includes: in response to recognizing that the facial expression is a first facial expression, acquiring a processing configuration file corresponding to the first facial expression; and performing first processing on the face image according to the processing configuration file to obtain the first face image.
- the facial expression may be, for example, a smile, sadness, or anger.
- different processing configuration files can be set for each facial expression, so that each expression can be processed differently.
- optionally, when the facial expression is recognized as a smile, the face is enlarged to obtain an enlarged face; optionally, when the facial expression is recognized as sad, a teardrop sticker or a dark-clouds-and-lightning sticker is added to the face to obtain a face with a sticker; optionally, when the facial expression is recognized as angry, the face is rendered red and the nostrils are enlarged.
- optionally, the acquiring of a processing configuration file corresponding to the first facial expression includes: recognizing that the facial expression is the first facial expression; and, when the level of the first facial expression reaches a preset level, acquiring a processing configuration file corresponding to the first facial expression.
- the level represents the intensity of the facial expression. Taking a smile as an example, a slight smile is a low-level smile and a big laugh is a high-level smile; the same applies to other expressions.
- the judging of the level of the facial expression includes: comparing the facial expression with preset template expressions; and taking the level of the template expression with the highest matching degree as the level of the facial expression.
- in one example, the expression is a smile.
- the smile can be divided into multiple levels, such as 100 levels, each with a corresponding standard template facial expression image.
- the recognized facial expression is compared with the 100 levels of template facial expression images, and the level corresponding to the template image with the highest matching degree is used as the level of the facial expression.
- optionally, the judging of the level of the facial expression includes: comparing the facial expression with a preset template expression; and using the similarity between the facial expression and the preset template expression as the level of the facial expression.
- in this case there may be only one template facial expression image; the recognized facial expression is compared with it, and the result of the comparison is a similarity percentage. For example, if the similarity between the facial expression and the template image is 90%, the level of the facial expression is 90.
- an expression level is preset as the condition for triggering the first processing.
- for example, if smile level 50 is set as the preset expression level, then when the first facial expression is recognized as a smile at level 50 or above, a processing configuration file corresponding to the smile is obtained, as in the sketch below.
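- A minimal sketch of this level-matching and triggering logic, assuming each template level is represented by a precomputed expression feature vector and that matching degree is measured by a simple (hypothetical) Euclidean distance:

```python
import numpy as np

def expression_level(features: np.ndarray, templates: dict[int, np.ndarray]) -> int:
    """Return the level whose template feature vector best matches `features`.

    `templates` maps each level (e.g. 1..100) to that level's template features.
    """
    return min(templates, key=lambda lvl: np.linalg.norm(features - templates[lvl]))

def should_trigger(features: np.ndarray,
                   templates: dict[int, np.ndarray],
                   preset_level: int = 50) -> bool:
    # First processing is triggered only when the recognized level reaches
    # the preset level (e.g. a smile at level 50 or above).
    return expression_level(features, templates) >= preset_level
```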
- optionally, the acquiring of a processing configuration file corresponding to the first facial expression includes: recognizing that the facial expression is the first facial expression; acquiring a processing configuration file corresponding to the first facial expression; determining the level of the first facial expression; and setting the processing parameters in the processing configuration file according to the level of the first facial expression.
- the manner of determining the level of the first facial expression may be the same as the manner in the foregoing embodiment, and will not be repeated here.
- the level of the first facial expression is used as a reference when setting the processing parameters in the processing configuration file, so that the expression can be used to control the strength of the processing effect.
- in one example, the first facial expression is a smile.
- a processing configuration file corresponding to the smile is acquired; the processing configuration file specifies that the face is to be cut out and enlarged, and a magnification factor also needs to be set.
- the magnification factor controls the degree of magnification.
- the level of the smile can be used to control the magnification.
- the level can be used directly as the magnification factor, or a correspondence between levels and magnification factors can be defined.
- for example, the face is magnified 1× for smile levels 1-10, 1.1× for levels 11-20, and so on; in this way, the more strongly the face smiles, the more the face is magnified.
- it can be understood that the above expressions, levels, and processing parameters are all examples and are not intended to limit the present disclosure. In fact, the expression level can be used to control any processing parameter to form a variety of control effects, which will not be repeated here.
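- The banded level-to-magnification correspondence in the example above could be written as the following sketch (the band width and factors come from the example; extending the 0.1 step to higher bands is an assumption):

```python
def magnification_for_level(level: int) -> float:
    """Map a smile level to a magnification factor.

    Levels 1-10 -> 1.0x, levels 11-20 -> 1.1x, and each further band of
    10 levels adds another 0.1x.
    """
    band = (max(level, 1) - 1) // 10  # 0 for levels 1-10, 1 for 11-20, ...
    return 1.0 + 0.1 * band
```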
- optionally, the performing of first processing on the face image according to the processing configuration file to obtain the first face image includes: segmenting the face image from the first image; and, according to the processing configuration file, enlarging the segmented face image to obtain an enlarged face image.
- the face can be segmented from the first image according to the face contour recognized in step S102 to form a matting effect.
- preprocessing can also be performed on the segmented face image.
- the preprocessing can blur the edges of the face image; any blurring method can be used, one option being Gaussian blur, which will not be detailed here.
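- One way to realize this optional edge blurring is to feather the segmentation mask with a Gaussian blur before compositing. A minimal sketch, assuming the face contour from step S102 is available as a polygon and the input is a color (H, W, 3) image:

```python
import cv2
import numpy as np

def feathered_face(image: np.ndarray, contour: np.ndarray, ksize: int = 21):
    """Cut the face out along `contour` and soften its edges.

    Returns the masked face and a feathered alpha mask in [0, 1].
    `ksize` (the Gaussian kernel size, odd) controls how soft the edge is.
    """
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [contour.astype(np.int32)], 255)
    # Gaussian blur turns the hard mask boundary into a gradual falloff.
    alpha = cv2.GaussianBlur(mask, (ksize, ksize), 0).astype(np.float32) / 255.0
    face = (image.astype(np.float32) * alpha[..., None]).astype(np.uint8)
    return face, alpha
```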
- during enlargement, the position of each pixel in the enlarged image is mapped back to a position in the original image, and the color value of the enlarged pixel is then obtained by interpolation. Specifically, assuming that the position of a pixel in the original image is (x, y) and the position of the corresponding pixel in the enlarged image is (u, v), the position corresponding to (u, v) can be calculated by Formula 1:
- x = u / λ₁, y = v / λ₂  (Formula 1)
- where λ₁ is the magnification factor of the pixel in the X-axis direction and λ₂ is the magnification factor of the pixel in the Y-axis direction. λ₁ and λ₂ may be equal, for example when a 100×100 image is enlarged to 200×200; they may also differ, for example when a 100×100 image is enlarged to 200×300.
- for example, with λ₁ = λ₂ = 2, the pixel (10, 20) in the enlarged image corresponds to the pixel (5, 10) in the original image, so the color value of pixel (5, 10) in the original image is assigned to pixel (10, 20) in the enlarged image.
- optionally, the color value assigned to the enlarged pixel can be smoothed first: for example, the average color of the 2×2 pixels surrounding the point (x, y) in the original image is used as the color value of the pixel corresponding to (x, y) in the enlarged image.
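- A direct, unoptimized sketch of this backward-mapping enlargement, including the optional 2×2 smoothing; in practice a library resize would be used, but this mirrors Formula 1 literally (the function name and dtype handling are choices made for the example):

```python
import numpy as np

def enlarge(image: np.ndarray, lam1: float, lam2: float) -> np.ndarray:
    """Enlarge `image` by lam1 along X (width) and lam2 along Y (height)."""
    h, w = image.shape[:2]
    out_h, out_w = int(h * lam2), int(w * lam1)
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    for v in range(out_h):
        for u in range(out_w):
            # Formula 1: map the enlarged-image pixel (u, v) back to (x, y).
            x = min(int(u / lam1), w - 1)
            y = min(int(v / lam2), h - 1)
            # Optional smoothing: average the 2x2 neighborhood around (x, y).
            patch = image[y:min(y + 2, h), x:min(x + 2, w)]
            out[v, u] = patch.mean(axis=(0, 1))
    return out
```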
- Step S104: Overlay the first face image on the position of the face image to obtain a first image effect.
- the first face image obtained through the first processing in step S103 is overlaid to the position where the face image is located to obtain the first image effect.
- the covering of the first face image on the position of the face image to obtain the first image effect includes: acquiring a first positioning feature point on the first face image and a second positioning feature point on the face image; and overlaying the first face image on the face image such that the first positioning feature point coincides with the second positioning feature point, to obtain the first image effect.
- the first positioning feature point and the second positioning feature point may be central feature points of the face, for example the nose-tip feature point on the first face image and the nose-tip feature point on the face image. In this way, the first face image can completely cover and fit its corresponding face image.
- the first locating feature point and the second locating feature point can also be feature points set according to specific needs to achieve other coverage effects, which are not limited here.
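- A sketch of this covering step, assuming the positioning feature points (for example the nose-tip landmarks) are already known for both images and the processed face carries an alpha channel from the feathering step:

```python
from PIL import Image

def overlay_aligned(base: Image.Image, face: Image.Image,
                    anchor_base: tuple, anchor_face: tuple) -> Image.Image:
    """Paste `face` onto `base` so that the two anchor points coincide."""
    out = base.copy()
    # Shift the paste box so anchor_face lands exactly on anchor_base.
    top_left = (anchor_base[0] - anchor_face[0], anchor_base[1] - anchor_face[1])
    out.paste(face, top_left, mask=face if face.mode == "RGBA" else None)
    return out
```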
- a first image is acquired, and the first image includes a face image.
- optionally, the first image is a video image frame collected by an image sensor, and the video image frame includes a face image.
- the facial expression of the face image is recognized; in response to recognizing that the facial expression is the first facial expression, first processing is performed on the face image to obtain a first face image; the first face image is then overlaid on the position of the face image to obtain a first image effect.
- in this example, the facial expression is a smile, and the effect of magnifying the face is generated according to the recognized smile.
- at first, the face does not smile and the image does not change.
- then the face smiles, but the smile is not strong enough to trigger the generation of the image effect.
- as the smile level of the face increases further, the face-zooming effect is triggered: the enlarged face is superimposed on the position of the original face, strengthening and highlighting the smile, as shown in FIGS. 2d-2e.
- when the smile disappears, the big-head effect gradually fades and the image returns to its original state.
- FIG. 3 is a flowchart of the second embodiment of a facial expression image processing method provided by an embodiment of the disclosure.
- the facial expression image processing method provided in this embodiment may be executed by a facial expression image processing device.
- the processing device may be implemented as software, or as a combination of software and hardware.
- the facial expression image processing device may be integrated in a device in an image processing system, such as an image processing server or an image processing terminal device. As shown in FIG. 3, the method includes the following steps:
- Step S301: Acquire a first image, where the first image includes at least two face images;
- Step S302: Recognize the facial expression of each of the at least two face images;
- Step S303: In response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain a first face image;
- Step S304: Overlay the at least one first face image on the position of the face image corresponding to the first face image to obtain a first image effect.
- this embodiment involves the recognition of multiple faces, that is, the first image includes multiple face images. Each face image is processed as described in Embodiment 1, and separate image effects can be achieved in the first image for different faces and different expressions.
- optionally, a first processing configuration file corresponding to the first facial expression is acquired, and first processing is performed on the face image corresponding to the first facial expression according to the first processing configuration file to obtain the first face image.
- a processing configuration file is separately set for each different expression of each face, so that each different expression of each face is processed independently without interfering with each other.
- an independent processing configuration file is generated for each expression of each face.
- because each configuration file is independent, the expression of each face can be configured independently, producing different image effects for multiple expressions of multiple faces; a minimal sketch of such a configuration table follows.
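- All names and parameter values in this sketch are illustrative assumptions, keyed by face and expression:

```python
from typing import Optional

# Illustrative per-face, per-expression configuration table; in the described
# system each entry would be backed by its own processing configuration file.
CONFIGS = {
    ("face_0", "smile"): {"effect": "enlarge", "magnification": 1.5},
    ("face_0", "angry"): {"effect": "tint", "color": "red"},
    ("face_1", "smile"): {"effect": "enlarge", "magnification": 1.2},
}

def get_config(face_id: str, expression: str) -> Optional[dict]:
    # Each face/expression pair resolves independently, so the effect applied
    # to one face never interferes with the effects applied to another.
    return CONFIGS.get((face_id, expression))
```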
- FIG. 4 is a schematic structural diagram of Embodiment 1 of a facial expression image processing apparatus provided by an embodiment of the present disclosure. As shown in FIG. 4, the apparatus 400 includes: a first image acquisition module 401, a facial expression recognition module 402, a first processing module 403, and a facial expression image processing module 404. Among them:
- the first image acquisition module 401 is configured to acquire a first image, and the first image includes a face image;
- the facial expression recognition module 402 is used to recognize the facial expression of the facial image
- the first processing module 403 is configured to, in response to recognizing that the facial expression is the first facial expression, perform first processing on the facial image to obtain the first facial image;
- the facial expression image processing module 404 is configured to overlay the first facial image on the position of the facial image to obtain a first image effect.
- the first image acquisition module 401 further includes:
- the first video acquisition module is configured to acquire a first video, and at least one video frame in the first video includes a face image.
- the facial expression recognition module 402 further includes:
- a face recognition module for recognizing a face image in the first image
- An expression feature extraction module for extracting facial expression features from the face image
- the facial expression recognition sub-module is used to recognize facial expressions according to the facial expression features.
- the first processing module 403 further includes:
- a processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is a first facial expression
- the first face image processing module is configured to perform first processing on the face image according to the processing configuration file to obtain a first face image.
- the processing configuration file obtaining module further includes:
- the first facial expression recognition module is used to recognize the facial expression as the first facial expression
- the first processing configuration file obtaining module is configured to obtain a processing configuration file corresponding to the first facial expression when the level of the first facial expression reaches a preset level.
- the processing configuration file obtaining module further includes:
- the second facial expression recognition module is used to recognize the facial expression as the first facial expression
- a second processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression
- An expression level judgment module configured to determine the level of the first facial expression
- the processing parameter setting module is configured to set the processing parameters in the processing configuration file according to the level of the first facial expression.
- the first face image processing module further includes:
- a face segmentation module configured to segment the face image from the first image
- the enlargement module is configured to perform enlargement processing on the segmented face image according to the processing configuration file to obtain an enlarged face image.
- the facial expression image processing module 404 further includes:
- a positioning feature point acquisition module configured to acquire a first positioning feature point on the first face image and a second positioning feature point on the face image
- the covering module is used for covering the first face image on the face image, and making the first positioning feature point coincide with the second positioning feature point to obtain a first image effect.
- the device shown in FIG. 4 can execute the method of the embodiment shown in FIG. 1. For parts not described in detail here, refer to the description of the embodiment shown in FIG. 1; the execution process and technical effects of this technical solution are likewise described in that embodiment and will not be repeated here.
- FIG. 5 is a schematic structural diagram of Embodiment 2 of a facial expression image processing apparatus provided by an embodiment of the disclosure.
- as shown in FIG. 5, the apparatus 500 includes: a second image acquisition module 501, a third facial expression recognition module 502, a second processing module 503, and a first facial expression image processing module 504. Among them:
- the second image acquisition module 501 is configured to acquire a first image, and the first image includes at least two face images;
- the third facial expression recognition module 502 is configured to recognize the facial expression of each of the at least two facial images
- the second processing module 503 is configured to, in response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain the first face image;
- the first facial expression image processing module 504 is configured to overlay the at least one first facial image on the position of the facial image corresponding to the first facial image to obtain a first image effect.
- the second processing module 503 further includes:
- a corresponding processing configuration file obtaining module configured to obtain a first processing configuration file corresponding to the first facial expression of the facial image in response to recognizing that at least one of the facial expressions is a first facial facial expression
- the second processing submodule is configured to perform first processing on the face image corresponding to the first facial expression according to the first processing configuration file to obtain a first face image.
- the device shown in FIG. 5 can execute the method of the embodiment shown in FIG. 3.
- FIG. 6 shows a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure.
- the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 6 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
- the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
- the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
- An input/output (I/O) interface 605 is also connected to the bus 604.
- the following devices can be connected to the I/O interface 605: input devices 606 such as a touch screen, touch panel, keyboard, mouse, image sensor, microphone, accelerometer, and gyroscope; output devices 607 such as a liquid crystal display (LCD), speakers, and vibrators; storage devices 608 such as magnetic tapes and hard disks; and a communication device 609.
- the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
- although FIG. 6 shows an electronic device 600 having various components, it should be understood that it is not required to implement or include all of the illustrated components; more or fewer components may alternatively be implemented or provided.
- the process described above with reference to the flowchart can be implemented as a computer software program.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
- when the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a first image, the first image including a face image; recognize the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, perform first processing on the face image to obtain a first face image; and overlay the first face image on the position of the face image to obtain a first image effect.
- the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
- the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- LAN: local area network
- WAN: wide area network
- each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
- the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure can be implemented in software or hardware; the name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (16)
- A facial expression image processing method, characterized by comprising: acquiring a first image, the first image including a face image; recognizing a facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect.
- The facial expression image processing method according to claim 1, characterized in that the acquiring of a first image, the first image including a face image, comprises: acquiring a first video, at least one video frame of the first video including a face image.
- The facial expression image processing method according to claim 1, characterized in that the recognizing of the facial expression of the face image comprises: recognizing the face image in the first image; extracting facial expression features from the face image; and recognizing the facial expression according to the facial expression features.
- The facial expression image processing method according to claim 1, characterized in that the performing, in response to recognizing that the facial expression is a first facial expression, of first processing on the face image to obtain a first face image comprises: in response to recognizing that the facial expression is a first facial expression, acquiring a processing configuration file corresponding to the first facial expression; and performing first processing on the face image according to the processing configuration file to obtain a first face image.
- The facial expression image processing method according to claim 4, characterized in that the acquiring, in response to recognizing that the facial expression is a first facial expression, of a processing configuration file corresponding to the first facial expression comprises: recognizing the facial expression as a first facial expression; and, when the level of the first facial expression reaches a preset level, acquiring the processing configuration file corresponding to the first facial expression.
- The facial expression image processing method according to claim 4, characterized in that the acquiring, in response to recognizing that the facial expression is a first facial expression, of a processing configuration file corresponding to the first facial expression comprises: recognizing the facial expression as a first facial expression; acquiring the processing configuration file corresponding to the first facial expression; determining the level of the first facial expression; and setting the processing parameters in the processing configuration file according to the level of the first facial expression.
- The facial expression image processing method according to claim 4, characterized in that the performing of first processing on the face image according to the processing configuration file to obtain a first face image comprises: segmenting the face image from the first image; and enlarging the segmented face image according to the processing configuration file to obtain an enlarged face image.
- The facial expression image processing method according to claim 1, characterized in that the overlaying of the first face image on the position of the face image to obtain a first image effect comprises: acquiring a first positioning feature point on the first face image and a second positioning feature point on the face image; and overlaying the first face image on the face image such that the first positioning feature point coincides with the second positioning feature point, to obtain a first image effect.
- The facial expression image processing method according to claim 1, characterized in that the acquiring of a first image, the first image including a face image, comprises: acquiring a first image, the first image including at least two face images.
- The facial expression image processing method according to claim 9, characterized in that the recognizing of the facial expression of the face image comprises: recognizing the facial expression of each of the at least two face images.
- The facial expression image processing method according to claim 10, characterized in that the performing, in response to recognizing that the facial expression is a first facial expression, of first processing on the face image to obtain a first face image comprises: in response to recognizing that at least one of the facial expressions is a first facial expression, performing first processing on the face image corresponding to the first facial expression to obtain a first face image.
- The facial expression image processing method according to claim 11, characterized in that the performing, in response to recognizing that at least one of the facial expressions is a first facial expression, of first processing on the face image corresponding to the first facial expression to obtain a first face image comprises: in response to recognizing that at least one of the facial expressions is a first facial expression, acquiring a first processing configuration file corresponding to the first facial expression of the face image; and performing first processing on the face image corresponding to the first facial expression according to the first processing configuration file to obtain a first face image.
- The facial expression image processing method according to claim 11 or 12, characterized in that the overlaying of the first face image on the position of the face image to obtain a first image effect comprises: overlaying the at least one first face image on the position of the face image corresponding to the first face image to obtain a first image effect.
- A facial expression image processing apparatus, characterized by comprising: a first image acquisition module configured to acquire a first image, the first image including a face image; a facial expression recognition module configured to recognize the facial expression of the face image; a first processing module configured to, in response to recognizing that the facial expression is a first facial expression, perform first processing on the face image to obtain a first face image; and a facial expression image processing module configured to overlay the first face image on the position of the face image to obtain a first image effect.
- An electronic device, comprising: a memory configured to store non-transitory computer-readable instructions; and a processor configured to run the computer-readable instructions such that, when the instructions are executed, the processor implements the facial expression image processing method according to any one of claims 1-13.
- A computer-readable storage medium configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the facial expression image processing method according to any one of claims 1-13.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/426,840 US12327299B2 (en) | 2019-01-31 | 2019-12-27 | Facial expression image processing method and apparatus, and electronic device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910101335.5A CN111507142A (zh) | 2019-01-31 | 2019-01-31 | Facial expression image processing method and apparatus, and electronic device |
| CN201910101335.5 | 2019-01-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020155984A1 (zh) | 2020-08-06 |
Family
ID=71841614
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/129140 WO2020155984A1 (zh) Ceased | Facial expression image processing method and apparatus, and electronic device | 2019-01-31 | 2019-12-27 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US12327299B2 (zh) |
| CN (1) | CN111507142A (zh) |
| WO (1) | WO2020155984A1 (zh) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112307942A (zh) * | 2020-10-29 | 2021-02-02 | 广东富利盛仿生机器人股份有限公司 | Method, system, and medium for quantitative representation of facial expressions |
| CN115565220A (zh) * | 2022-09-01 | 2023-01-03 | 桂林电子科技大学 | Air-conditioner control apparatus and method based on facial expression recognition, and storage medium |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112532896A (zh) * | 2020-10-28 | 2021-03-19 | 北京达佳互联信息技术有限公司 | Video production method and apparatus, electronic device, and storage medium |
| US12494056B2 (en) | 2021-09-30 | 2025-12-09 | Lemon Inc. | Social networking based on asset items |
| US11763496B2 (en) * | 2021-09-30 | 2023-09-19 | Lemon Inc. | Social networking based on asset items |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104780339A (zh) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method for loading expression special-effect animation in an instant video, and electronic device |
| CN104780458A (zh) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method for loading special effects in an instant video, and electronic device |
| US20180115746A1 (en) * | 2015-11-17 | 2018-04-26 | Tencent Technology (Shenzhen) Company Limited | Video calling method and apparatus |
| CN108229269A (zh) * | 2016-12-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Face detection method and apparatus, and electronic device |
| CN108495049A (zh) * | 2018-06-15 | 2018-09-04 | Oppo广东移动通信有限公司 | Shooting control method and related products |
| CN108734126A (zh) * | 2018-05-21 | 2018-11-02 | 深圳市梦网科技发展有限公司 | Beautification method, beautification apparatus, and terminal device |
Family Cites Families (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101733246B1 (ko) * | 2010-11-10 | 2017-05-08 | 삼성전자주식회사 | Apparatus and method for screen composition for video calls using face pose |
| CN106371551A (zh) * | 2015-07-20 | 2017-02-01 | 深圳富泰宏精密工业有限公司 | Facial expression operating system and method, and electronic apparatus |
| CN105184249B (zh) * | 2015-08-28 | 2017-07-18 | 百度在线网络技术(北京)有限公司 | Method and apparatus for face image processing |
| KR20170096904A (ko) | 2016-02-17 | 2017-08-25 | 삼성전자주식회사 | Electronic device and method for providing content according to a user's skin type |
| US10600226B2 (en) * | 2016-09-07 | 2020-03-24 | The University Of Hong Kong | System and method for manipulating a facial image and a system for animating a facial image |
| CN106372622A (zh) * | 2016-09-30 | 2017-02-01 | 北京奇虎科技有限公司 | Facial expression classification method and apparatus |
| CN107545163B (zh) * | 2017-08-14 | 2022-04-26 | Oppo广东移动通信有限公司 | Unlocking control method and related products |
| CN107705356A (zh) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and apparatus |
| KR101968723B1 (ko) * | 2017-10-18 | 2019-04-12 | 네이버 주식회사 | Method and system for providing camera effects |
| CN108022206A (zh) * | 2017-11-30 | 2018-05-11 | 广东欧珀移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
| CN108198159A (zh) * | 2017-12-28 | 2018-06-22 | 努比亚技术有限公司 | Image processing method, mobile terminal, and computer-readable storage medium |
| CN108257091B (zh) * | 2018-01-16 | 2022-08-05 | 北京小米移动软件有限公司 | Imaging processing method for a smart mirror, and smart mirror |
| JP6834998B2 (ja) * | 2018-01-25 | 2021-02-24 | 日本電気株式会社 | Driving condition monitoring apparatus, driving condition monitoring system, driving condition monitoring method, and program |
| TWI711980B (zh) * | 2018-02-09 | 2020-12-01 | 國立交通大學 | Facial expression recognition training system and method |
| CN108830784A (zh) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, and computer storage medium |
| CN108985241B (zh) * | 2018-07-23 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, computer device, and storage medium |
| CN109034063A (zh) * | 2018-07-27 | 2018-12-18 | 北京微播视界科技有限公司 | Multi-face tracking method and apparatus for face special effects, and electronic device |
| CN108958610A (zh) * | 2018-07-27 | 2018-12-07 | 北京微播视界科技有限公司 | Face-based special effect generation method and apparatus, and electronic device |
| CN110019893B (zh) * | 2018-08-28 | 2021-10-15 | 京东方科技集团股份有限公司 | Painting acquisition method and apparatus |
| CN109618183B (zh) * | 2018-11-29 | 2019-10-25 | 北京字节跳动网络技术有限公司 | Video special-effect adding method and apparatus, terminal device, and storage medium |
| CN110072047B (zh) * | 2019-01-25 | 2020-10-09 | 北京字节跳动网络技术有限公司 | Image deformation control method and apparatus, and hardware apparatus |
| CN111507143B (zh) * | 2019-01-31 | 2023-06-02 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and apparatus, and electronic device |
| CN112749603B (zh) * | 2019-10-31 | 2024-09-17 | 上海商汤智能科技有限公司 | Liveness detection method and apparatus, electronic device, and storage medium |
2019
- 2019-01-31: CN application CN201910101335.5A published as CN111507142A (pending)
- 2019-12-27: US application US17/426,840 published as US12327299B2 (active)
- 2019-12-27: PCT application PCT/CN2019/129140 published as WO2020155984A1 (ceased)
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104780339A (zh) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method for loading expression special-effect animation in an instant video, and electronic device |
| CN104780458A (zh) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method for loading special effects in an instant video, and electronic device |
| US20180115746A1 (en) * | 2015-11-17 | 2018-04-26 | Tencent Technology (Shenzhen) Company Limited | Video calling method and apparatus |
| CN108229269A (zh) * | 2016-12-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Face detection method and apparatus, and electronic device |
| CN108734126A (zh) * | 2018-05-21 | 2018-11-02 | 深圳市梦网科技发展有限公司 | Beautification method, beautification apparatus, and terminal device |
| CN108495049A (zh) * | 2018-06-15 | 2018-09-04 | Oppo广东移动通信有限公司 | Shooting control method and related products |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112307942A (zh) * | 2020-10-29 | 2021-02-02 | 广东富利盛仿生机器人股份有限公司 | Method, system, and medium for quantitative representation of facial expressions |
| CN115565220A (zh) * | 2022-09-01 | 2023-01-03 | 桂林电子科技大学 | Air-conditioner control apparatus and method based on facial expression recognition, and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111507142A (zh) | 2020-08-07 |
| US12327299B2 (en) | 2025-06-10 |
| US20220207917A1 (en) | 2022-06-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109359538B (zh) | Convolutional neural network training method, gesture recognition method, apparatus, and device | |
| US10599914B2 (en) | Method and apparatus for human face image processing | |
| US12327299B2 (en) | Facial expression image processing method and apparatus, and electronic device | |
| US11409794B2 (en) | Image deformation control method and device and hardware device | |
| JP7383714B2 (ja) | Image processing method and apparatus for animal faces | |
| US20120154638A1 (en) | Systems and Methods for Implementing Augmented Reality | |
| US11042259B2 (en) | Visual hierarchy design governed user interface modification via augmented reality | |
| CN110084204B (zh) | Image processing method and apparatus based on target object pose, and electronic device | |
| CN110619656B (zh) | Face detection and tracking method and apparatus based on a binocular camera, and electronic device | |
| EP4276754A1 (en) | Image processing method and apparatus, device, storage medium, and computer program product | |
| US20240095886A1 (en) | Image processing method, image generating method, apparatus, device, and medium | |
| WO2020192195A1 (zh) | Image processing method and apparatus, and electronic device | |
| CN114360044A (zh) | Gesture recognition method and apparatus, terminal device, and computer-readable storage medium | |
| US12020469B2 (en) | Method and device for generating image effect of facial expression, and electronic device | |
| CN110069996A (zh) | Head action recognition method and apparatus, and electronic device | |
| CN111199169A (zh) | Image processing method and apparatus | |
| KR20220124593A (ko) | Clothing transformation method based on clothing-removed images | |
| WO2020215854A1 (zh) | Image rendering method and apparatus, electronic device, and computer-readable storage medium | |
| CN117746502B (zh) | Image annotation method, action recognition method, apparatus, and electronic device | |
| CN110222576B (zh) | Boxing action recognition method and apparatus, and electronic device | |
| CN111507139A (zh) | Image effect generation method and apparatus, and electronic device | |
| CN111598002B (zh) | Multi-facial-expression capture method and apparatus, electronic device, and computer storage medium | |
| CN111489769B (zh) | Image processing method and apparatus, and hardware apparatus | |
| US20240193851A1 (en) | Generation of a 360-degree object view by leveraging available images on an online platform | |
| CN111899154A (zh) | Comic video generation method, comic generation method, apparatus, device, and medium | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19913354 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/12/2021) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19913354 Country of ref document: EP Kind code of ref document: A1 |
|
| WWG | Wipo information: grant in national office |
Ref document number: 17426840 Country of ref document: US |