
WO2022199102A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2022199102A1
WO2022199102A1 · PCT/CN2021/134644 · CN2021134644W
Authority
WO
WIPO (PCT)
Prior art keywords
face
point
key point
face key
special effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2021/134644
Other languages
French (fr)
Chinese (zh)
Inventor
孟维遮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Publication of WO2022199102A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to an image processing method, an apparatus, an electronic device, and a storage medium.
  • the current image special effect processing process may include: the user clicks on a set special effect template (e.g., a set animal image template or a set decoration template). After receiving the click input for the set special effect template, the terminal may perform special effect fusion on the selected set special effect template and the user's face image, and display the fused special effect image. Afterwards, the terminal may receive the movement track input by the user on the special effect image, and draw a line pattern indicating the movement track at a fixed position of the movement track.
  • the present disclosure provides an image processing method, apparatus, electronic device and storage medium.
  • an image processing method comprising:
  • according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory, determining the relative position of each trajectory point relative to the face image;
  • repeatedly performing image special effect processing, where the image special effect processing includes: converting the relative position according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the first absolute position of each trajectory point on the display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line.
  • the at least two face key points include a first face key point, a second face key point, and a target face key point; the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.
  • determining the relative position of each trajectory point relative to the face image according to the first position information of the at least two face key points and the trajectory position information of the at least one trajectory point in the movement trajectory includes:
  • determining a first rotation matrix and a first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point; and
  • obtaining the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix, and a translation vector.
  • both the first position information and the track position information include absolute coordinates on the display screen
  • determining the first rotation matrix and the first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point includes:
  • obtaining the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of a first vector pointing from the first face key point to the second face key point; and
  • obtaining the first scaling matrix according to a reference length and the first length, where the reference length is the first length set for the face in the front-facing posture in the face image.
  • obtaining the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector includes:
  • the relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, where the first formula includes:
  • Q represents the relative position
  • M_s1 represents the first scaling matrix
  • M_r1 represents the first rotation matrix
  • converting the relative position according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the first absolute position of each trajectory point on the display screen, includes:
  • for each trajectory point, determining a second rotation matrix and a second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point; and
  • obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the second position information includes absolute coordinates on the display screen, and determining the second rotation matrix and the second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point includes:
  • obtaining the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of a second vector pointing from the first face key point to the second face key point; and
  • obtaining the second scaling matrix according to a reference length and the second length, where the reference length is the second length set for the face in the front-facing posture in the face image.
  • obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position includes:
  • obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position, and the second formula, where the second formula includes:
  • R represents the first absolute position
  • M_r2 represents the second rotation matrix
  • M_s2 represents the second scaling matrix
  • (x_q, y_q) represents the relative position
  • (x_c, y_c) represents the second position information of the target face key point
  • T represents the transposition process.
  • the image special effect processing further includes: generating, according to the first special effect line, a second special effect line symmetrical with the first special effect line, where the second special effect line and the first special effect line are left-right symmetrical with the face in the face image as the reference; and displaying the second special effect line in the display page.
  • the method further includes: determining, according to the relative position of each trajectory point relative to the face image, the relative position of the symmetry point of each trajectory point relative to the face image, where the symmetry point and the trajectory point are left-right symmetrical with the face as the reference;
  • generating a second special effect line symmetrical with the first special effect line according to the first special effect line includes: determining the second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point; and connecting the symmetry points located at the second absolute positions to generate the second special effect line.
  • the relative position includes relative coordinates relative to the face image, and determining the relative position of the symmetry point of each trajectory point relative to the face image according to the relative position of each trajectory point relative to the face image includes:
  • performing sign inversion on the coordinate value, in a first direction, of the relative coordinates of the trajectory point to obtain a processed coordinate value, where the first direction is perpendicular to the symmetry axis of the face image;
  • updating the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and
  • determining the updated relative position as the relative position of the symmetry point.
  • an image processing apparatus comprising:
  • an acquisition module configured to acquire, in response to the special effect display instruction, the movement track input by the user in the display page including the face image;
  • a determining module configured to determine the relative position of each trajectory point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory; and
  • an image special effect processing module configured to repeatedly perform image special effect processing, where the image special effect processing includes: converting the relative position according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the first absolute position of each trajectory point on the display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line.
  • the at least two face key points include a first face key point, a second face key point, and a target face key point; the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.
  • the determining module is further configured to:
  • determine a first rotation matrix and a first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point; and
  • obtain the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix, and a translation vector.
  • both the first position information and the trajectory position information include absolute coordinates on the display screen, and the determining module is further configured to:
  • obtain the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of a first vector pointing from the first face key point to the second face key point; and
  • obtain the first scaling matrix according to a reference length and the first length, where the reference length is the first length set for the face in the front-facing posture in the face image.
  • the determining module is further configured to:
  • the relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, where the first formula includes:
  • Q represents the relative position
  • M_s1 represents the first scaling matrix
  • M_r1 represents the first rotation matrix
  • the image special effect processing module is further configured to:
  • the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the second position information includes absolute coordinates on the display screen
  • the image special effect processing module is further configured to:
  • obtain the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of a second vector pointing from the first face key point to the second face key point; and
  • obtain the second scaling matrix according to a reference length and the second length, where the reference length is the second length set for the face in the front-facing posture in the face image.
  • the image special effect processing module is further configured to:
  • the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position, and the second formula, where the second formula includes:
  • R represents the first absolute position
  • M_r2 represents the second rotation matrix
  • M_s2 represents the second scaling matrix
  • (x_q, y_q) represents the relative position
  • (x_c, y_c) represents the second position information of the target face key point
  • T represents the transposition process.
  • the image special effect processing further includes: generating, according to the first special effect line, a second special effect line symmetrical with the first special effect line, where the second special effect line and the first special effect line are left-right symmetrical with the face in the face image as the reference; and displaying the second special effect line in the display page.
  • the determining module is further configured to:
  • determine, according to the relative position of each trajectory point relative to the face image, the relative position of the symmetry point of each trajectory point relative to the face image, where the symmetry point and the trajectory point are left-right symmetrical with the face as the reference;
  • the relative position includes relative coordinates relative to the face image
  • the determining module is further configured to:
  • the updated relative position is determined as the relative position of the symmetry point.
  • an electronic device including:
  • one or more processors;
  • one or more memories for storing instructions executable by the one or more processors;
  • the one or more processors are configured to execute the image processing method described in the first aspect or any possible implementation manner of the first aspect.
  • a non-volatile computer-readable storage medium, where, when the instructions in the non-volatile computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described in the first aspect or any possible implementation manner of the first aspect.
  • a computer program product including a computer program, where, when the computer program is executed by a processor, the image processing method described in the first aspect or any possible implementation manner of the first aspect is implemented.
  • the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page, can be obtained to determine the relative position of each track point relative to the face image.
  • Then, the relative position is converted according to the second position information of the face key points in the face image displayed after the current moment, so that the first absolute position of each track point on the display screen is obtained.
  • In this way, the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user drawing the special effect independently. Moreover, after the relative position of each track point in the movement trajectory with respect to the current face image is obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each track point can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after the track points located at the first absolute positions are connected. As a result, the display position of the generated first special effect line changes with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face and enriching the special effect display effect.
  • Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.
  • FIG. 2 is a schematic diagram of a human face image according to an exemplary embodiment.
  • Fig. 3 is a flow chart of a method for determining relative positions of track points according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram of a display page of a face image according to an exemplary embodiment.
  • Fig. 5 is a schematic diagram of a display page of a face image according to an exemplary embodiment.
  • Fig. 6 is a flow chart of a method for determining the first absolute position of a track point according to an exemplary embodiment.
  • Fig. 7 is a flowchart of a method for generating a second special effect line according to an exemplary embodiment.
  • Fig. 8 is a flowchart of another method for generating a second special effect line according to an exemplary embodiment.
  • Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.
  • the image processing method can be applied to electronic equipment.
  • the electronic device may be a terminal with a display screen, and the terminal may be installed with an application program for performing image special effect processing on the face image.
  • the embodiments of the present application are described by taking an electronic device as a terminal as an example.
  • the image processing method may include the following steps 101-103:
  • step 101 in response to the special effect display instruction, the movement track input by the user in the display page including the face image is acquired.
  • the user may perform image special effect processing on the face image when using the terminal to take photos, record videos, or conduct live webcasts and other processes of capturing faces.
  • the face image may include not only the face but also the background.
  • the background may be a building or a landscape or the like.
  • the user may operate the terminal to open an application program with an image special effect processing function, and display a display page including a face image in the application program on the terminal. After receiving the special effect display instruction, the terminal may acquire the movement track input by the user in the display page including the face image in response to the special effect display instruction.
  • the special effect display instruction may be triggered after the terminal receives and executes the setting operation on the display page.
  • the special effect display instruction may be triggered after the user performs a setting operation on the self-drawn control.
  • the setting operation may include input in the form of click, long press, swipe, or voice for the self-drawn control.
  • the display page including the face image may be a shooting interface, a live broadcast interface, or a short (long) video shooting interface, and the like.
  • the movement trajectory input by the user may be the trajectory along which the user moves an input member.
  • the input member may be a user's finger or a stylus or the like.
  • the movement track may include at least one track point arranged in a movement order.
  • the at least one trajectory point refers to one or more trajectory points.
  • acquiring the movement trajectory input by the user by the terminal may refer to: acquiring the trajectory position information of at least one trajectory point input by the user by the terminal.
  • the track position information of the track point refers to the absolute position of the track point on the display screen of the terminal.
  • the position information of the track point may be the absolute coordinates of the track point, where the absolute coordinates refer to position coordinates relative to a specific point (e.g., the center point) of the display screen taken as the origin.
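  • As an illustration of the above (not part of the original disclosure), the acquired movement track can be held as an ordered list of track points, each storing the absolute coordinates of the point on the display screen; the class and field names below are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackPoint:
    x: float  # absolute x coordinate on the display screen
    y: float  # absolute y coordinate on the display screen

# The movement track keeps the track points in the order in which they were input.
movement_track: List[TrackPoint] = [
    TrackPoint(120.0, 80.0),
    TrackPoint(128.0, 64.0),
    TrackPoint(140.0, 52.0),
]
```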
  • the user wants to add a rabbit ear special effect to his face during the webcast.
  • the user can operate the terminal to open the webcast application, and display a display page including the user's face image on the terminal.
  • the user performs a click operation on the self-drawn icon on the display page, and can use a finger to slide and draw a line in the shape of a left rabbit ear at the upper left position of the face of the face image displayed on the display page.
  • the terminal can generate a special effect display instruction, and respond to the special effect display instruction.
  • the human face in the human face image included in the display page can have a rabbit ear special effect.
  • step 102 according to the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the movement trajectory, determine the relative position of each trajectory point relative to the face image.
  • the terminal may acquire the first position information of at least two face key points in the face image displayed at the current moment, and the track position information of each track point in the movement track.
  • the relative position of each track point relative to the face image is determined according to the first position information of the at least two face key points and the track position information of at least one track point.
  • the relative position of the track point relative to the face image may be represented by a vector that points to the track point from the target face key point.
  • the relative coordinate representation of the track point relative to the face image can also be used.
  • the embodiments of the present disclosure use the relative coordinates of the track points to the face image to represent the relative positions of the track points to the face image.
  • the terminal may obtain at least two face key points in the face image by performing face key point detection processing on the face image displayed on the display page.
  • the terminal may use an artificial intelligence (Artificial Intelligence, AI) technology to implement face key point detection processing on a face image.
  • the at least two face key points may include: a first face key point, a second face key point, and a target face key point.
  • the target face key point can be any face key point on the symmetry axis of the face image.
  • the first face key point and the second face key point may be symmetrical with respect to the target face key point.
  • the target face key point is the anchor point of the line connecting the first face key point and the second face key point. The connection line between the first face key point and the second face key point can follow the movement of the target face key point.
  • since the first face key point and the second face key point are two face key points symmetrical about the target key point, and the target key point is a key point on the symmetry axis of the face image, the inclination angle of the line connecting the first face key point and the second face key point can better reflect the rotation angle of the face in the face image displayed on the display page.
  • the first position information of the midpoint of the connection line, which is located on the symmetry axis of the face image, can reflect the position information of the face image, so that the position and posture information of the current face in the face image are taken into account and the relative position or the first absolute position of each track point has higher accuracy.
  • the first position information of the face key point may be absolute position information of the face key point on the display screen.
  • the first position information of the face key points may be absolute coordinates of the face key points.
  • the absolute coordinates refer to the position coordinates relative to the specific point on the display screen with a specific point (eg, a center point) of the display screen as the origin.
  • Fig. 2 shows a schematic diagram of a human face image according to an exemplary embodiment.
  • the target face key point C may be a point at the tip of the nose of the face, located on the symmetry axis of the face image.
  • the first face key point A and the second face key point B may be two symmetrical points located on both sides of the edge of the face.
  • the inclination angle of the line between the first face key point A and the second face key point B can be used to reflect the rotation angle of the face.
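  • A minimal sketch of the idea above: the in-plane rotation angle of the face can be estimated from the inclination of the line connecting the first face key point A and the second face key point B. The concrete coordinate values and the helper name are illustrative assumptions.

```python
import math

def face_rotation_angle(a, b):
    """Inclination angle (radians) of the line from key point A to key point B."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

# Example key points: A and B on the two sides of the face edge, C at the nose tip.
A = (100.0, 200.0)   # first face key point
B = (220.0, 210.0)   # second face key point
C = (160.0, 230.0)   # target face key point, on the symmetry axis of the face
print(face_rotation_angle(A, B))  # about 0.083 rad, i.e. the face is slightly rotated
```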
  • the terminal may determine the relative position of each trajectory point relative to the face image by performing spatial transformation processing on the absolute position of each trajectory point on the display screen.
  • the process in which the terminal determines the relative position of each trajectory point relative to the face image, according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory, may include the following steps 1021 to 1023.
  • step 1021 for each track point, according to the first position information of the target face key point and the track position information of the track point, a translation vector of the track point pointing to the target face key point is determined.
  • the terminal may calculate the translation vector of the track point pointing to the target face key point according to the first position information of the target face key point and the track position information of the track point, so as to perform a translation operation on the track point.
  • the translation vector represents the translation gesture information from the track point to the target face key point, that is, the relative position of the track point and the target face key point.
  • For example, assume that the absolute coordinates of the target face key point C in the face image displayed on the display page at the current moment are (x_c1, y_c1), and the absolute coordinates of a track point P in the finger movement track are (x_p, y_p); then the translation vector is (x_p - x_c1, y_p - y_c1).
  • step 1022 a first rotation matrix and a first scaling matrix of the current face pose in the face image are determined according to the first position information of the first face key point and the second face key point.
  • the first rotation matrix may represent the rotation attitude information of the current face attitude in the face image.
  • the first scaling matrix may represent scaling pose information of the current face pose in the face image.
  • in the case that the first position information of the face key points and the trajectory position information of the track points are both absolute coordinates on the display screen of the terminal, the process in which the terminal determines the first rotation matrix and the first scaling matrix of the current face pose in the face image according to the first position information of the first face key point and the second face key point may include the following steps:
  • the first rotation matrix is obtained according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of a first vector pointing from the first face key point to the second face key point; and
  • the first scaling matrix is obtained according to a reference length and the first length, where the reference length is the first length set for the face in the front-facing posture in the face image.
  • the terminal may obtain a first vector pointing from the first face key point to the second face key point according to the first position information of the first face key point and the second face key point.
  • a first length of the first vector is determined.
  • the first rotation matrix may be obtained according to the first position information of the first face key point, the first position information of the second face key point, and the first length.
  • For example, assume that the first position information of the first face key point A is (x_a1, y_a1) and the first position information of the second face key point B is (x_b1, y_b1). The terminal calculates the first vector as (x_a1 - x_b1, y_a1 - y_b1), then calculates the first length of the first vector as sqrt((x_a1 - x_b1)^2 + (y_a1 - y_b1)^2), and the first rotation matrix M_r1 is obtained.
  • the first rotation matrix M_r1 can be used to perform rotation processing on the translation vector, that is, to rotate the translation vector according to the rotation attitude information of the current face posture in the face image.
  • Continuing with the face key points and the trajectory point P assumed in step 1021 as an example:
  • the first length of the first vector is sqrt((x_a1 - x_b1)^2 + (y_a1 - y_b1)^2), and the first scaling matrix M_s1 is obtained according to the reference length and the first length.
  • the first scaling matrix M_s1 can be used to perform scaling processing on the translation vector, that is, to scale the translation vector at the set ratio according to the scaling pose information of the current face pose in the face image.
  • the set ratio may be D:1. In some embodiments, D may be 100.
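  • The matrix expressions themselves are not reproduced in the text above (they appear as figures in the original publication), so the following is only a plausible sketch under stated assumptions: M_r1 is taken as the rotation that maps the first vector onto the x-axis, and M_s1 as a uniform scaling by reference_length / first_length, with the reference length playing the role of the set ratio D (e.g., 100).

```python
import numpy as np

def first_rotation_and_scaling(a1, b1, reference_length=100.0):
    """Assumed construction of M_r1 and M_s1 from the first position information
    of key points A and B. M_r1 rotates the first vector onto the x-axis; M_s1
    scales uniformly so that the first length becomes the reference length
    (the D:1 ratio mentioned above)."""
    v = np.asarray(b1, dtype=float) - np.asarray(a1, dtype=float)  # first vector
    first_length = float(np.hypot(v[0], v[1]))                     # first length
    cos_t, sin_t = v[0] / first_length, v[1] / first_length
    m_r1 = np.array([[cos_t, sin_t],
                     [-sin_t, cos_t]])                    # aligns the first vector with the x-axis
    m_s1 = (reference_length / first_length) * np.eye(2)  # uniform D:1 scaling
    return m_r1, m_s1
```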
  • Since the inclination angle can better reflect the rotation angle of the face in the face image displayed on the display page, the first rotation matrix determined according to the first length of the first vector pointing from the first face key point to the second face key point indicates the rotation attitude information of the current face in the face image with higher accuracy. At the same time, by using the length set for the connection line between the first face key point and the second face key point when the face is in the front-facing posture, together with the real first length of that connection line in the current face image, the first scaling matrix used to indicate the scaling posture information of the current face in the face image is determined on the basis of reusing the relevant information of the connection between the first face key point and the second face key point, thereby reducing the amount of calculation of the terminal.
  • step 1023 the relative position of the trajectory point relative to the face image is obtained according to the first rotation matrix, the first scaling matrix and the translation vector.
  • the process in which the terminal obtains the relative position of the trajectory point with respect to the face image according to the first rotation matrix, the first scaling matrix and the translation vector may include: obtaining the relative position according to the first scaling matrix, the first rotation matrix, the translation vector and the first formula.
  • the first formula includes:
  • Q represents the relative position of the trajectory point relative to the face image
  • M_s1 represents the first scaling matrix
  • M_r1 represents the first rotation matrix
  • Since the translation vector of the track point pointing to the target face key point can reflect the relative distance of the track point with respect to the face image, the first rotation matrix can reflect the rotation attitude information of the current face in the face image, and the first scaling matrix can reflect the scaling posture information of the current face in the face image.
  • The formula factors of the first formula include the first scaling matrix, the first rotation matrix and the translation vector; therefore, using the first formula to calculate the relative position of the track point with respect to the face image can take into account the various kinds of posture information of the current face in the face image, so that the calculated relative position of the trajectory point with respect to the face image has high accuracy.
  • In the scheme of calculating the relative position of the trajectory point with respect to the face image according to the first rotation matrix and the first scaling matrix of the current face posture in the face image and the translation vector of the trajectory point pointing to the target face key point, the relative position of the trajectory point and the face image can be obtained more realistically while various posture information of the current face in the face image is considered. Therefore, the relative position calculated according to the first rotation matrix and the first scaling matrix of the current face posture in the face image has high accuracy.
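  • The first formula is likewise only described through its factors above. One natural reading, Q = M_s1 · M_r1 · v with v the translation vector, is sketched below; the direction of v is taken as the track point minus the target key point, matching the worked example (x_p - x_c1, y_p - y_c1) given earlier. This is an assumption, not a verbatim reproduction of the formula.

```python
import numpy as np

def to_relative(p, c1, m_r1, m_s1):
    """Assumed form of the first formula: Q = M_s1 * M_r1 * v, where v is the
    translation vector between the track point P and the target face key
    point C at the moment the track was input."""
    v = np.asarray(p, dtype=float) - np.asarray(c1, dtype=float)  # translation vector
    return m_s1 @ m_r1 @ v                                        # relative position Q
```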
  • step 103 image special effect processing is repeatedly performed.
  • the image special effect processing includes: converting the relative position according to the second position information of the face key points in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen; connecting the track points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line in the display page.
  • the terminal after the terminal acquires the movement trajectory input by the user and before the terminal draws the first special effect line, the user's face may also undergo posture changes such as tilt and side head. Therefore, when the terminal calculates the relative position of the trajectory point relative to the face image, and when the terminal performs image special effect processing, the face pose in the face image displayed on the display interface may be different.
  • the terminal needs to calculate the absolute position of the trajectory point on the display screen according to the face image displayed in real time after acquiring the movement trajectory input by the user, so as to generate and display the first special effect line.
  • the special effect line (the first special effect line and the subsequent second special effect line collectively) displayed on the terminal has a refresh rate.
  • the image special effect processing needs to be performed repeatedly, so that the terminal can display the latest special effect line drawn, and the special effect line keeps the same position relative to the face in each face image; therefore, it can visually be regarded as the same line that moves with the face.
  • the user's face may also change in posture during each refresh interval. Therefore, the image special effect processing is performed by using the face key points of the face image displayed by the terminal in real time.
  • FIG. 4 shows a schematic diagram of a face image of a display page provided by an embodiment of the present disclosure.
  • the shown face image is the face image when the user inputs the movement track.
  • the broken line drawn by the dotted line in the shape of an ear is the movement trajectory L0 input by the user.
  • P is a point on the moving trajectory L0.
  • FIG. 5 shows a schematic diagram of a face image of a display page provided by an embodiment of the present disclosure.
  • the face image shown is the face image displayed on the display page by the terminal after the current moment, in the process of executing the image special effect processing.
  • in this face image, the face has undergone a posture change in which the head is tilted.
  • the broken line L1 shown by the dotted line at the upper left of the head in the face image in FIG. 5 is the special effect line corresponding to the movement track input by the user shown in FIG. 4 .
  • the P1 point on the special effect line corresponds to the P point on the movement track.
  • FIG. 4 and FIG. 5 show face images of the same user at different times.
  • the face key points are the same target face key point C, the same first face key point A, and the same second face key point B.
  • the terminal may repeatedly perform image special effect processing until receiving the special effect closing instruction.
  • the display position of the first special effect line, that is, its absolute position on the display screen, changes with the change of the display position of the face image displayed on the display page.
  • the special effect closing instruction may be triggered after performing a setting operation on the display page.
  • the special effect closing instruction may be triggered by the user performing a setting input for the special effect triggering control.
  • the special effect triggering control may also be a special effect button in the display page.
  • the setting input may include input in the form of click, long press, swipe, or voice for the special effect trigger control.
  • the image special effect processing process includes the following steps: the relative position is converted to obtain the first absolute position of each track point on the display screen; the track points located at the first absolute positions are connected to generate the first special effect line; and the first special effect line is displayed.
  • the relative position is converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen.
  • the terminal may acquire the second position information of the key points of the face in the face image displayed on the display page after the current moment. After acquiring the second position information, the terminal can convert the relative position of each track point relative to the face image according to the second position information of the face key point to obtain the first absolute position corresponding to each track point on the display screen.
  • the acquired face key points in the face image are the same as at least two face key points in step 102 .
  • the first absolute position of each track point on the display screen may be represented by coordinates of the track point in the pixel coordinate system of the display screen.
  • the process in which the terminal converts the relative position according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the first absolute position of each track point on the display screen, may include the following steps 1031 to 1032.
  • step 1031 for each track point, a second rotation matrix and a second scaling matrix of the current face pose in the face image are determined according to the second position information of the first face key point and the second face key point.
  • the second rotation matrix may represent the rotation posture information of the current face posture in the face image.
  • the second scaling matrix may represent the scaling pose information of the current face pose in the face image.
  • for each trajectory point in the at least one trajectory point, the terminal may perform the following steps:
  • a second rotation matrix is obtained according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of a second vector pointing from the first face key point to the second face key point; a second scaling matrix is obtained according to a reference length and the second length; and
  • the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • a second rotation matrix is obtained according to the second position information of the first face key point and the second face key point, and the second length, and the second length is that the first face key point points to the second The length of the second vector of face keypoints.
  • the terminal may obtain a second vector pointing from the first face key point to the second face key point according to the second position information of the first face key point and the second face key point.
  • a second length of the second vector is determined.
  • a second rotation matrix may be obtained according to the second position information of the first face key point, the second position information of the second face key point, and the second length.
  • For example, after the terminal acquires the movement trajectory input by the user, and before the terminal performs image special effect processing and generates and displays the first special effect line, the posture of the user's head changes; at this time, the face position in the face image displayed by the terminal on the display page changes. For example, the user's head changes from the posture shown in FIG. 4 to the posture shown in FIG. 5 .
  • the terminal obtains the second vector (x_a2 - x_b2, y_a2 - y_b2) according to the second position information (x_a2, y_a2) of the first face key point A and the second position information (x_b2, y_b2) of the second face key point B, determines the second length of the second vector as sqrt((x_a2 - x_b2)^2 + (y_a2 - y_b2)^2), and obtains the second rotation matrix M_r2.
  • the second rotation matrix M_r2 can be used to perform rotation processing on the relative positions of the trajectory points, that is, to rotate the relative positions of the trajectory points according to the rotation attitude information of the current face posture in the face image.
  • the second scaling matrix is obtained according to the reference length of the line connecting the first face key point and the second face key point, and the second length.
  • the reference length is the second length set for the face in the face-up posture in the face image.
  • the first length and the second length set for the face in the front-facing posture in the face image are equal; that is, the same reference length is used.
  • the second length of the second vector is sqrt((x_a2 - x_b2)^2 + (y_a2 - y_b2)^2), and the second scaling matrix M_s2 is obtained according to the reference length and the second length.
  • the second scaling matrix M_s2 can be used to perform scaling processing and conversion on the relative positions of the trajectory points, that is, to scale the relative positions of the trajectory points according to the scaling posture information of the current face posture in the face image.
  • the set ratio may be D:1. In some embodiments, D may be 100.
  • Since the inclination angle can better reflect the rotation angle of the face in the face image displayed on the display page, the second rotation matrix determined according to the second length of the second vector pointing from the first face key point to the second face key point indicates the rotation attitude information of the current face in the face image with higher accuracy.
  • By using the length set for the connection line between the first face key point and the second face key point when the face is in the front-facing posture, together with the real length of that connection line in the current face image, the second scaling matrix used to indicate the scaling posture information of the current face in the face image is determined on the basis of reusing the relevant information of the connection between the first face key point and the second face key point, thereby reducing the amount of calculation of the terminal.
  • step 1032 the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the process in which the terminal obtains the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position may include: the terminal obtains the first absolute position of the track point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position, and the second formula.
  • the second formula includes:
  • R represents the first absolute position of the trajectory point on the display screen
  • M_r2 represents the second rotation matrix
  • M_s2 represents the second scaling matrix
  • (x_q, y_q) represents the relative position of the trajectory point relative to the face image
  • (x_c, y_c) represents the second position information of the target face key point
  • T represents the transposition processing.
  • the second rotation matrix can reflect the rotation posture information of the current face in the face image.
  • the second scaling matrix can reflect the scaling posture information of the current face in the face image. Therefore, using the second formula to determine the first absolute position of the track point not only makes the first absolute position of the track point change with the change of the display position of the face image, but also makes it change with the rotation posture information and the scaling posture information of the current face in the face image. In this way, the first special effect line generated by connecting the trajectory points located at the first absolute positions can not only follow the movement of the human face in the face image, but can also rotate and zoom following the human face in the face image, thereby enriching the special effect display effect.
  • If the facial posture in the face image displayed by the terminal when performing the aforementioned step 102 changes from the facial posture in the face image displayed by the terminal when performing step 103, then, for the same trajectory point, the relative position of the trajectory point obtained in step 102 with respect to the face image differs from the absolute position of the trajectory point obtained in step 103 with respect to the display screen. If the facial posture does not change between step 102 and step 103, then, for the same trajectory point, the relative position of the track point obtained in step 102 with respect to the face image is the same as the absolute position of the track point obtained in step 103 with respect to the display screen; that is, the two positions coincide.
  • the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position, because the second rotation matrix can reflect the rotation pose information of the current face in the face image.
  • the second scaling matrix can reflect the scaling posture information of the current face in the face image. Therefore, using the second rotation matrix, the second scaling matrix and the second position information of the target face key point to convert the relative position of the trajectory point into the first absolute position of the trajectory point not only makes the first absolute position of the trajectory point change with the change of the display position of the face image, but also makes it change with the rotation attitude information and the scaling attitude information of the current face in the face image. In this way, the first special effect line generated by connecting the trajectory points located at the first absolute positions can not only follow the movement of the human face in the face image, but can also rotate and zoom following the human face in the face image, thereby enriching the special effect display effect.
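  • As with the first formula, only the variable list of the second formula survives in the extracted text. Under the assumption that it inverts the earlier transform, R = M_r2 · M_s2 · (x_q, y_q)^T + (x_c, y_c)^T can be sketched as below, where M_r2 rotates the x-axis back onto the current direction of the line A->B and M_s2 undoes the D:1 scaling; with an unchanged face pose this maps the relative position back onto the original track point, consistent with the coincidence noted above.

```python
import numpy as np

def to_absolute(q, a2, b2, c2, reference_length=100.0):
    """Assumed form of the second formula:
    R = M_r2 * M_s2 * (x_q, y_q)^T + (x_c, y_c)^T.

    a2, b2, c2 are the second position information of key points A, B and C in
    the face image displayed after the current moment; q is the relative
    position obtained earlier. M_r2 and M_s2 are built here as the inverses of
    the first rotation and scaling, which is an assumption of this sketch."""
    v = np.asarray(b2, dtype=float) - np.asarray(a2, dtype=float)  # second vector
    second_length = float(np.hypot(v[0], v[1]))                    # second length
    cos_t, sin_t = v[0] / second_length, v[1] / second_length
    m_r2 = np.array([[cos_t, -sin_t],
                     [sin_t,  cos_t]])                     # rotates the x-axis back onto A->B
    m_s2 = (second_length / reference_length) * np.eye(2)  # undoes the D:1 scaling
    return m_r2 @ m_s2 @ np.asarray(q, dtype=float) + np.asarray(c2, dtype=float)
```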
  • the trajectory points located at the first absolute positions are connected to generate a first special effect line.
  • the terminal may generate the first special effect line by connecting the trajectory points of the first absolute positions corresponding to the trajectory points according to the arrangement order of the trajectory points in the movement trajectory input by the user.
  • the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence.
  • the first absolute position of the trajectory point X1 is Y1.
  • the first absolute position of the trajectory point X2 is Y2.
  • the first absolute position of the trajectory point X3 is Y3.
  • the terminal sequentially connects the trajectory point located at the first absolute position Y1, the trajectory point located at the first absolute position Y2 and the trajectory point located at the first absolute position Y3 according to the sorting order of the trajectory point X1, the trajectory point X2 and the trajectory point X3, to generate the first special effect line.
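  • A small sketch of the connection step: the track points are joined at their first absolute positions in the order in which they were input. OpenCV's polylines is used here only as one illustrative drawing backend; any 2D line renderer would do.

```python
import numpy as np
import cv2  # illustrative choice of drawing backend

def draw_special_effect_line(frame, first_absolute_positions, color=(0, 255, 255)):
    """Connect the track points located at their first absolute positions,
    in input order, to form the first special effect line on the frame."""
    pts = np.asarray(first_absolute_positions, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=False, color=color, thickness=2)
    return frame
```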
  • the first special effect line is displayed.
  • the terminal may display the generated first special effect line on the display page currently displayed by the terminal.
  • In the above method, the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page, can be obtained to determine the relative position of each track point relative to the face image.
  • Then, the relative position is converted according to the second position information of the face key points in the face image displayed after the current moment, so that the first absolute position of each track point on the display screen is obtained.
  • the first special effect line is drawn according to the movement trajectory input by the user, so that the user can draw the special effect independently.
  • Moreover, after the relative position of each track point in the movement trajectory with respect to the current face image is obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each track point can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after the track points located at the first absolute positions are connected.
  • the display position of the generated first special effect line will change with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face, and enriching the special effect display effect.
  • the terminal may not only draw, according to the movement trajectory input by the user, the first special effect line corresponding to the independently drawn movement trajectory.
  • a second special effect line that is symmetrical to the first special effect line may also be drawn according to the first special effect line.
  • the second special effect line and the first special effect line are left-right symmetrical on the basis of the face in the face image.
  • the terminal can not only draw the special effect line L1 in the shape of an ear, as shown by the dotted line at the upper left of the head in the face image in FIG. 5, but can also draw a special effect line L2 in the shape of an ear, as shown by the dotted line at the upper right of the head in the face image in FIG. 5 .
  • the special effect line L1 and the special effect line L2 are left-right symmetrical on the basis of the face in the face image.
  • the point P1 on the special effect line L1 and the point P2 on the special effect line L2 are left-right symmetrical on the basis of the face in the face image.
  • the image special effect processing may further include the following:
  • a second special effect line symmetrical with the first special effect line is generated, and the second special effect line and the first special effect line are left-right symmetrical based on the face in the face image;
  • a second special effect line symmetrical to the first special effect line is generated according to the first special effect line.
  • the second special effect line and the first special effect line are left and right symmetrical on the basis of the face in the face image.
  • the terminal may generate, according to the first special effect line, a left-right symmetrical second special effect line based on the face in the face image currently displayed by the terminal.
  • the terminal may generate a second special effect line symmetrical to the first special effect line according to the first special effect line.
  • the embodiments of the present disclosure are described below by taking the following two implementations as examples.
  • the process for the terminal to generate a second special effect line symmetrical to the first special effect line according to the first special effect line may include the following steps 701 to 702 .
  • step 701 the second absolute position of each symmetrical point on the display screen is determined according to the second position information of the face key point and the relative position of each symmetrical point.
  • after the terminal performs the above step 102 and determines the relative position of each track point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory, the terminal can also determine the relative position of the symmetry point of each track point with respect to the face image according to the relative position of each track point with respect to the face image.
  • the symmetrical point and the trajectory point are left and right symmetrical with the face as the reference.
  • the process in which the terminal determines the relative position of the symmetry point of each trajectory point relative to the face image may include: the terminal performs sign inversion processing on the coordinate value, in a first direction, of the relative coordinates of the trajectory point to obtain a processed coordinate value, where the first direction is perpendicular to the symmetry axis of the face image.
  • the relative position of the track point is updated, so that the coordinate value of the first direction in the updated relative position is the processed coordinate value.
  • the updated relative position is determined as the relative position of the symmetry point.
  • the first direction may be a direction perpendicular to the symmetry axis of the left-right symmetry of the face image.
  • For example, assume that the relative coordinates of the trajectory point P1 determined by the terminal relative to the face image are (x_q, y_q).
  • the first direction is the direction perpendicular to the symmetry axis of the left-right symmetry of the face image, that is, the x-axis direction.
  • the terminal performs sign inversion processing on the coordinate value in the first direction in the relative coordinates of the trajectory point, and obtains the processed coordinate value -x_q.
  • the relative position of the symmetry point is therefore (-x_q, y_q).
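  • A minimal sketch of the sign inversion described above: the coordinate of the relative position in the first direction (perpendicular to the face symmetry axis, here the x component) is negated to obtain the relative position of the symmetry point.

```python
def mirror_relative_position(q):
    """Relative position of the symmetry point: the coordinate value in the
    first direction (the x component of the relative coordinates) is
    sign-inverted, while the y component is kept unchanged."""
    x_q, y_q = q
    return (-x_q, y_q)

print(mirror_relative_position((3.5, -1.2)))  # (-3.5, -1.2)
```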
  • the terminal determines the second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point; that is, the terminal converts the relative position of each symmetry point according to the second position information of the face key points in the face image displayed on the display page after the current moment, and obtains the second absolute position of each symmetry point on the display screen.
  • for the process in which the terminal determines the second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point, reference may be made to step A in the aforementioned image special effect processing process, in which the relative positions are converted according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each track point on the display screen; details are not repeated in this embodiment of the present disclosure.
  • step 702 the symmetrical points located at the second absolute positions are connected to generate a second special effect line.
  • the terminal may, according to the arrangement order of the trajectory points in the movement trajectory input by the user, connect the symmetry points located at the second absolute positions corresponding to the trajectory points, to generate the second special effect line.
  • the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence.
  • the second absolute position of the symmetrical point X4 corresponding to the trajectory point X1 is Y4.
  • the second absolute position of the symmetrical point X5 corresponding to the trajectory point X2 is Y5.
  • the second absolute position of the symmetrical point X6 corresponding to the trajectory point X3 is Y6.
  • According to the sorting order of the trajectory point X1, the trajectory point X2 and the trajectory point X3, the terminal sequentially connects the symmetrical point located at the second absolute position Y4, the symmetrical point located at the second absolute position Y5 and the symmetrical point located at the second absolute position Y6, so as to generate the second special effect line.
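  • A minimal sketch of this connection step, assuming the second absolute positions are already available as (x, y) screen coordinates, is shown below; the data layout and function name are illustrative only.

```python
def build_polyline(ordered_points):
    """Join points in their given order into consecutive line segments.

    ordered_points: (x, y) screen positions in the order of the original
    track points, e.g. the second absolute positions Y4, Y5, Y6.
    Returns segment pairs a renderer could draw as the special effect line.
    """
    return list(zip(ordered_points, ordered_points[1:]))

segments = build_polyline([(120, 80), (130, 95), (145, 100)])
print(segments)  # [((120, 80), (130, 95)), ((130, 95), (145, 100))]
```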
  • the process of generating the second special effect line symmetrical to the first special effect line by the terminal according to the first special effect line may include the following steps 801 to 802 .
  • In step 801, the second absolute position of the symmetrical point of each trajectory point on the display screen is determined according to the second position information of the face key points and the first absolute position of each trajectory point.
  • the trajectory point and the symmetry point are left and right symmetrical based on the face.
  • The process in which the terminal determines the second absolute position of the symmetrical point of each trajectory point on the display screen according to the second position information of the face key points and the first absolute position of each trajectory point may include the following steps:
  • according to the second position information of the first face key point and the second face key point, a second vector pointing from the first face key point to the second face key point is obtained, and a third vector perpendicular to the second vector is obtained;
  • according to the second position information of the target face key point and the first absolute position of the trajectory point, the fourth vector pointing from the target face key point to the track point is obtained;
  • according to the second vector, the third vector, the fourth vector and the second position information of the target face key point, the second absolute position of the symmetrical point is obtained.
  • The terminal obtains the second vector (x a2 -x b2 , y a2 -y b2 ) according to the second position information (x a2 , y a2 ) of the first face key point A and the second position information (x b2 , y b2 ) of the second face key point B, and the third vector perpendicular to the second vector is (y b2 -y a2 , x a2 -x b2 ).
  • a fourth vector pointing from the target face key point to the track point is obtained.
  • Continuing with the face key points assumed in step 1021 as an example, assume a trajectory point P for schematic illustration.
  • the first absolute position of the trajectory point P is (x r , y r ).
  • The terminal obtains the fourth vector pointing from the target face key point to the trajectory point, which is (x r -x c , y r -y c ).
  • the second absolute position of the symmetrical point is obtained according to the second vector, the third vector, the fourth vector and the second position information of the target face key point.
  • the terminal obtains the second absolute position of the symmetry point according to the second vector, the third vector, the fourth vector, the second position information of the target face key point, and the third formula.
  • In the third formula, M represents the second absolute position of the symmetry point, and (x c , y c ) represents the second position information of the target face key point.
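  • The third formula itself is not reproduced in this text, so the sketch below shows only one standard way of reflecting a track point about the face symmetry axis using the second, third and fourth vectors described above; the normalization step and the sign conventions are assumptions rather than the formula from the source.

```python
import numpy as np

def reflect_about_face_axis(a2, b2, c2, p_abs):
    """Reflect a track point about the face symmetry axis.

    a2, b2 : second position information of the first and second face key points
    c2     : second position information of the target face key point (on the axis)
    p_abs  : first absolute position (x_r, y_r) of the track point
    Returns the second absolute position M of the symmetric point.
    """
    a2, b2, c2, p = (np.asarray(v, dtype=float) for v in (a2, b2, c2, p_abs))
    v2 = a2 - b2                           # second vector between the key points
    v3 = np.array([-v2[1], v2[0]])         # third vector, perpendicular to v2
    v4 = p - c2                            # fourth vector, target key point -> track point
    axis = v3 / np.linalg.norm(v3)         # unit direction of the symmetry axis
    mirrored = 2.0 * np.dot(v4, axis) * axis - v4
    return c2 + mirrored                   # second absolute position M

# Illustrative coordinates only.
print(reflect_about_face_axis((220, 300), (180, 300), (200, 320), (230, 340)))  # -> [170. 340.]
```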
  • In the first implementation manner, since the symmetrical point and the trajectory point are directly left-right symmetrical with the face as the reference, the coordinate value of the trajectory point's relative coordinates in the first direction, which is perpendicular to the symmetry axis of the face image, can be converted directly, and the obtained relative coordinates are determined as the relative coordinates of the symmetrical point of the trajectory point.
  • The relative positions of the symmetrical points are then converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the second absolute positions of the symmetrical points on the display screen. Compared with the second implementation manner, the process of determining the second absolute position of the symmetrical point on the display screen is simplified, and the calculation efficiency of the second absolute position of the symmetrical point is improved.
  • In step 802, the symmetrical points located at the second absolute positions are connected to generate a second special effect line.
  • The terminal may, according to the arrangement order of the trajectory points in the movement trajectory input by the user, connect the symmetrical points corresponding to those trajectory points at their second absolute positions, and generate the second special effect line.
  • the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence.
  • the second absolute position of the symmetrical point X4 corresponding to the trajectory point X1 is Y4.
  • the second absolute position of the symmetrical point X5 corresponding to the trajectory point X2 is Y5.
  • the second absolute position of the symmetrical point X6 corresponding to the trajectory point X3 is Y6.
  • According to the sorting order of the trajectory point X1, the trajectory point X2 and the trajectory point X3, the terminal sequentially connects the symmetrical point located at the second absolute position Y4, the symmetrical point located at the second absolute position Y5 and the symmetrical point located at the second absolute position Y6, so as to generate the second special effect line.
  • the second effect line is displayed in the display page.
  • the terminal may display the generated second special effect line on the display page currently displayed by the terminal.
  • In this way, the terminal may generate, according to the first special effect line, a second special effect line that is left-right symmetrical with the first special effect line based on the human face in the face image, and display the second special effect line in the display page. This realizes the function of the user independently drawing special effect lines that are left-right symmetrical based on the face in the face image.
  • Moreover, after acquiring the relative positions of the symmetrical points of the trajectory points in the movement trajectory input by the user, the terminal uses the second position information of the face key points in the face image displayed in real time on the display page and the relative positions of the symmetrical points to determine the second absolute position of the symmetrical point of each track point on the display screen, and then generates the second special effect line by connecting the symmetrical points located at the second absolute positions. Therefore, the display position of the generated second special effect line will change with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the second special effect line follows the movement of the face. In this way, the left-right symmetrical special effect lines drawn on the basis of the human face can move with the human face, enriching the special effect display effect.
  • In the embodiments of the present disclosure, the relative position of each track point relative to the face image can be determined according to the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page.
  • The image special effect processing is then repeatedly performed: according to the second position information of the face key points in the face image displayed on the display page after the current moment, the relative positions are converted to obtain the first absolute position of each track point on the display screen, and the first special effect line formed by connecting the track points located at the first absolute positions is displayed on the display page.
  • In the above technical solution, the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user drawing the special effect independently. Moreover, after the relative positions of the track points in the movement trajectory with respect to the current face image are obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each track point can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after connecting the track points located at the first absolute positions. In this way, the display position of the generated first special effect line will change with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face and enriching the special effect display effect.
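  • The overall flow can be summarized in the hedged Python sketch below: the relative positions are computed once from the frame in which the user drew the movement trajectory, and every later frame converts them back to screen coordinates before drawing, so the line follows the face. The four callables are placeholders for the key-point detection and the relative/absolute conversions described above, not real library APIs.

```python
def run_face_following_effect(frames, track_points, detect_key_points,
                              to_relative, to_absolute, draw_polyline):
    """Per-frame loop for a special effect line that follows the face.

    frames           : iterable of frames shown on the display page
    track_points     : screen positions of the user's movement trajectory,
                       captured at the "current moment" frame
    detect_key_points, to_relative, to_absolute, draw_polyline:
                       placeholder callables for the steps described above
    """
    frames = iter(frames)
    first_frame = next(frames)
    key_points_now = detect_key_points(first_frame)            # first position information
    relative = [to_relative(p, key_points_now) for p in track_points]

    for frame in frames:                                       # repeated image special effect processing
        key_points_later = detect_key_points(frame)            # second position information
        absolute = [to_absolute(q, key_points_later) for q in relative]
        draw_polyline(frame, absolute)                         # draw the first special effect line
```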
  • Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • the image processing apparatus 900 includes: an acquisition module 901 , a determination module 902 and an image special effect processing module 903 .
  • an acquisition module 901 configured to acquire the movement track input by the user in the display page including the face image in response to the special effect display instruction
  • the determining module 902 is configured to determine the relative position of each track point relative to the human face according to the first position information of at least two face key points and the track position information of at least one track point in the moving track in the face image displayed on the display page at the current moment. the relative position of the face image;
  • the image special effect processing module 903 is used to repeatedly perform image special effect processing, and the image special effect processing includes:
  • converting the relative position according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each track point on the display screen; connecting the track points located at the first absolute positions to generate the first special effect line; and displaying the first special effect line in the display page.
  • In a possible implementation manner, the at least two face key points include: a first face key point, a second face key point, and a target face key point, where the first face key point and the second face key point are symmetrical about the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.
  • the determining module 902 is further configured to:
  • for each of the trajectory points, determine, according to the first position information of the target face key point and the trajectory position information of the trajectory point, a translation vector pointing from the trajectory point to the target face key point; determine, according to the first position information of the first face key point and the second face key point, a first rotation matrix and a first scaling matrix of the current face posture in the face image; and obtain, according to the first rotation matrix, the first scaling matrix and the translation vector, the relative position of the trajectory point relative to the face image.
  • both the first position information and the track position information include absolute coordinates on the display screen, and the determining module 902 is further configured to:
  • obtain the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of the first vector pointing from the first face key point to the second face key point;
  • obtain the first scaling matrix according to a reference length of the line connecting the first face key point and the second face key point and the first length, where the reference length is the first length set for a face in a front-facing posture in the face image.
  • the determining module 902 is further configured to:
  • the relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and the first formula, where Q represents the relative position, Ms 1 represents the first scaling matrix, and Mr 1 represents the first rotation matrix.
  • the image special effect processing module 903 is further configured to:
  • for each of the trajectory points, determine, according to the second position information of the first face key point and the second face key point, a second rotation matrix and a second scaling matrix of the current face posture in the face image; and obtain the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the second position information includes absolute coordinates on the display screen
  • the image special effect processing module 903 is further configured to:
  • obtain the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of the second vector pointing from the first face key point to the second face key point;
  • obtain the second scaling matrix according to the reference length of the line connecting the first face key point and the second face key point and the second length, where the reference length is the second length set for a face in a front-facing posture in the face image.
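  • The exact definitions of the rotation and scaling matrices are given by formulas that are not fully reproduced here, so the following is only a plausible sketch of one conventional construction: the rotation is taken from the angle of the vector between the two symmetric face key points, and the scaling is the ratio of the current key-point distance to the reference length. The function name, the A-to-B direction convention and the isotropic scaling are assumptions.

```python
import numpy as np

def pose_matrices(a, b, reference_length):
    """Build a plausible rotation and scaling matrix for the current face pose
    from two left-right symmetric face key points A and B.

    a, b             : (x, y) positions of the two key points
    reference_length : length of the A-B line for a front-facing face
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    v = b - a                                   # vector from A to B
    length = np.linalg.norm(v)                  # current A-B length
    angle = np.arctan2(v[1], v[0])              # in-plane rotation of the face
    rotation = np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])
    scaling = (length / reference_length) * np.eye(2)
    return rotation, scaling
```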
  • the image special effect processing module 903 is further configured to:
  • according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and the second formula, the first absolute position of the trajectory point is obtained, and the second formula includes:
  • R=Mr 2 ·Ms 2 ·(x q ,y q ) T +(x c ,y c ) T ;
  • where R represents the first absolute position, Mr 2 represents the second rotation matrix, Ms 2 represents the second scaling matrix, (x q , y q ) represents the relative position, (x c , y c ) represents the second position information of the target face key point, and T represents the transposition process.
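  • Because the second formula is stated explicitly here, a direct numerical sketch is possible; the example rotation and scaling matrices below are illustrative values only (they could, for instance, come from a construction such as the pose_matrices sketch above).

```python
import numpy as np

def relative_to_absolute(q, c2, rotation2, scaling2):
    """Second formula: R = Mr2 · Ms2 · (x_q, y_q)^T + (x_c, y_c)^T."""
    q = np.asarray(q, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    return rotation2 @ scaling2 @ q + c2

# Illustrative values: target face key point at (200, 320), face slightly
# rotated and scaled up relative to the reference pose.
rotation2 = np.array([[0.98, -0.17], [0.17, 0.98]])
scaling2 = 1.2 * np.eye(2)
print(relative_to_absolute((10.0, -5.0), (200.0, 320.0), rotation2, scaling2))
```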
  • the image special effect processing further includes:
  • a second special effect line symmetrical with the first special effect line is generated, and the second special effect line and the first special effect line are left-right symmetrical based on the face in the face image;
  • the determining module 902 is further configured to:
  • according to the relative position of each track point relative to the face image, determine the relative position of the symmetry point of each track point relative to the face image, where the symmetry point and the track point are left-right symmetrical with the face as the reference;
  • the image special effect processing module 903 is also used for: determining, according to the second position information of the face key points and the relative positions of the symmetrical points, the second absolute position of each symmetrical point on the display screen; and connecting the symmetrical points located at the second absolute positions to generate the second special effect line.
  • In a possible implementation manner, the relative position includes relative coordinates relative to the face image, and the determining module 902 is further configured to: perform positive/negative sign conversion processing on the coordinate value in the first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, where the first direction is perpendicular to the symmetry axis of the face image; update the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and determine the updated relative position as the relative position of the symmetry point.
  • the determination module can be used to determine the first position information of at least two face key points in the face image displayed on the display page at the current moment, and at least one trajectory point in the movement trajectory input by the user on the display page.
  • the position information of the trajectory is determined, and the relative position of each trajectory point relative to the face image is determined.
  • The image special effect processing module repeatedly performs the image special effect processing: according to the second position information of the face key points in the face image displayed on the display page after the current moment, the relative positions are converted to obtain the first absolute position of each track point on the display screen, and the first special effect line formed by connecting the track points located at the first absolute positions is displayed on the display page.
  • In the above technical solution, the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user drawing the special effect independently. Moreover, after the relative positions of the track points in the movement trajectory with respect to the current face image are obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each track point can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after connecting the track points located at the first absolute positions. In this way, the display position of the generated first special effect line will change with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face and enriching the special effect display effect.
  • Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device may be a terminal.
  • the electronic device 1000 can be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, a moving picture expert compression standard Audio Layer 3), MP4 (Moving Picture Experts Group Audio Layer IV, a moving picture expert compression standard Audio Layer 4) Player, Laptop or Desktop.
  • Electronic device 1000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • the electronic device 1000 includes: a processor 1001 and a memory 1002 .
  • the processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1001 can use at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array, programmable logic array) accomplish.
  • the processor 1001 may also include a main processor and a coprocessor.
  • The main processor is a processor used to process data in the wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1001 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1001 may further include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 1002 may include one or more non-volatile computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1001 to implement the image processing method provided by the method embodiments in this application.
  • the electronic device 1000 may further include: a peripheral device interface 1003 and at least one peripheral device.
  • the processor 1001, the memory 1002 and the peripheral device interface 1003 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1003 through a bus, a signal line or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1004 , a display screen 1005 , a camera 1006 , an audio circuit 1007 , a positioning component 1008 and a power supply 1009 .
  • the peripheral device interface 1003 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1001 and the memory 1002 .
  • In some embodiments, the processor 1001, the memory 1002, and the peripheral device interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral device interface 1003 may be implemented on a separate chip or circuit board, which is not limited by the embodiments of the present disclosure.
  • the radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1004 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • radio frequency circuitry 1004 includes: an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and the like.
  • the radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 1004 may further include a circuit related to NFC (Near Field Communication, short-range wireless communication), which is not limited in this application.
  • the display screen 1005 is used for displaying UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 1005 also has the ability to acquire touch signals on or above the surface of the display screen 1005 .
  • the touch signal can be input to the processor 1001 as a control signal for processing.
  • the display screen 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 1005, which is arranged on the front panel of the electronic device 1000; in other embodiments, there may be at least two display screens 1005, which are respectively arranged on different surfaces of the electronic device 1000 or in a folded design; in still other embodiments, the display screen 1005 may be a flexible display screen disposed on a curved surface or a folding surface of the electronic device 1000. The display screen 1005 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 1005 can be prepared by using materials such as LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light emitting diode).
  • the camera assembly 1006 is used to capture images or video.
  • camera assembly 1006 includes a front-facing camera and a rear-facing camera.
  • the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal.
  • In some embodiments, there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize the background blur function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions.
  • the camera assembly 1006 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • Audio circuitry 1007 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or to the radio frequency circuit 1004 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1001 or the radio frequency circuit 1004 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1007 may also include a headphone jack.
  • the positioning component 1008 is used to locate the current geographic location of the electronic device 1000 to implement navigation or LBS (Location Based Service).
  • The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • Power supply 1009 is used to power various components in electronic device 1000 .
  • the power source 1009 may be alternating current, direct current, disposable batteries or rechargeable batteries.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the electronic device 1000 also includes one or more sensors 1010 .
  • the one or more sensors 1010 include, but are not limited to, an acceleration sensor 1011, a gyro sensor 1012, a pressure sensor 1013, a fingerprint sensor 1014, an optical sensor 1015, and a proximity sensor 1016.
  • the acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the electronic device 1000 .
  • the acceleration sensor 1011 can be used to detect the components of the gravitational acceleration on the three coordinate axes.
  • the processor 1001 can control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011 .
  • the acceleration sensor 1011 can also be used for game or user movement data collection.
  • the gyroscope sensor 1012 can detect the body direction and rotation angle of the electronic device 1000 , and the gyroscope sensor 1012 can cooperate with the acceleration sensor 1011 to collect the 3D actions of the user on the electronic device 1000 .
  • the processor 1001 can implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1013 may be disposed on the side frame of the electronic device 1000 and/or the lower layer of the display screen 1005 .
  • the processor 1001 can perform left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 1013 .
  • the processor 1001 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1005.
  • the operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
  • the fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 1014 may be provided on the front, back, or side of the electronic device 1000 . When the electronic device 1000 is provided with physical buttons or a manufacturer's logo, the fingerprint sensor 1014 can be integrated with the physical buttons or the manufacturer's logo.
  • the optical sensor 1015 is used to collect ambient light intensity.
  • the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015 . In some embodiments, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1015 .
  • a proximity sensor 1016 also called a distance sensor, is usually provided on the front panel of the electronic device 1000 .
  • the proximity sensor 1016 is used to collect the distance between the user and the front of the electronic device 1000 .
  • When the proximity sensor 1016 detects that the distance between the user and the front of the electronic device 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1016 detects that the distance between the user and the front of the electronic device 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the off-screen state to the bright-screen state.
  • The structure shown in FIG. 10 does not constitute a limitation on the electronic device 1000, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • A non-volatile computer-readable storage medium is also provided. When the instructions in the storage medium are executed by the processor of the electronic device, the electronic device can execute the image processing methods provided by the above method embodiments.
  • the computer-readable storage medium can be ROM (Read-Only Memory, read-only memory), RAM (Random Access Memory, random access memory), CD-ROM (Compact Disc Read-Only Memory, read-only optical disk), Tape, floppy disk, and optical data storage devices, etc.
  • A computer program product including a computer program is also provided. When the computer program is executed by the processor, the image processing methods provided by the above method embodiments can be executed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to the technical field of image processing, and relates to an image processing method and device. The method comprises: in response to a special effect display instruction, acquiring a movement trajectory input by a user in a display page comprising a face image; according to first position information of at least two face key points in the face image displayed at the current moment, and trajectory position information of at least one trajectory point in the movement trajectory, determining the relative position of each trajectory point relative to the face image; and repeatedly executing the image special effect processing, the image special effect processing comprising: converting the relative position according to second position information of the face key points in the face image displayed in the display page after the current moment to obtain a first absolute position of each trajectory point on a display screen; and connecting the trajectory points located at the first absolute positions, and generating and displaying a first special effect line.

Description

图像处理方法及装置Image processing method and device

交叉引用cross reference

本申请基于申请号为202110328694.1、申请日为2021年3月26日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。This application is based on the Chinese patent application with the application number of 202110328694.1 and the filing date of March 26, 2021, and claims the priority of the Chinese patent application. The entire content of the Chinese patent application is incorporated herein by reference.

技术领域technical field

本公开涉及图像处理技术领域,尤其涉及一种图像处理方法、装置、电子设备及存储介质。The present disclosure relates to the technical field of image processing, and in particular, to an image processing method, an apparatus, an electronic device, and a storage medium.

背景技术Background technique

随着智能终端的发展,在诸如拍照、视频拍摄和网络直播等图像生成场景中,对人脸图像进行图像特效处理已成为一种主流的图像处理技术。With the development of intelligent terminals, in image generation scenarios such as photography, video shooting, and webcasting, image special effects processing on face images has become a mainstream image processing technology.

目前的图像特效处理过程可以包括:用户点击选定的设定特效模板(例如,设定的动物形象模板、设定的装饰物模板等)。终端在接收到针对设定特效模板的点击输入后,可以将选定的设定特效模板与用户的人脸图像进行特效融合,并显示融合后的特效图像。之后,终端可以接收用户在特效图像上输入的移动轨迹,并在移动轨迹的固定位置上绘制指示移动轨迹的线条图案。The current image special effect processing process may include: the user clicks on a set special effect template (eg, a set animal image template, a set decoration template, etc.). After receiving the click input for setting the special effect template, the terminal may perform special effect fusion on the selected setting special effect template and the user's face image, and display the fused special effect image. Afterwards, the terminal may receive the movement track input by the user on the special effect image, and draw a line pattern indicating the movement track on a fixed position of the movement track.

发明内容SUMMARY OF THE INVENTION

本公开提供一种图像处理方法、装置、电子设备及存储介质。The present disclosure provides an image processing method, apparatus, electronic device and storage medium.

根据本公开实施例的第一方面,提供一种图像处理方法,所述方法包括:According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method, the method comprising:

响应于特效显示指令,获取在包括人脸图像的显示页面中用户输入的移动轨迹;In response to the special effect display instruction, acquiring the movement track input by the user in the display page including the face image;

根据所述显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及所述移动轨迹中至少一个轨迹点的轨迹位置信息,确定各所述轨迹点相对所述人脸图像的相对位置;According to the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the movement trajectory, determine the relative position of each trajectory point relative to all the trajectory points. relative position of the face image;

重复执行图像特效处理,所述图像特效处理包括:The image special effect processing is repeatedly performed, and the image special effect processing includes:

根据所述显示页面在所述当前时刻之后所显示人脸图像中,所述人脸关键点的第二位置信息对所述相对位置进行转换,得到各所述轨迹点在显示屏幕上的第一绝对位置;Convert the relative position according to the second position information of the key points of the face in the face image displayed on the display page after the current time, to obtain the first position of each track point on the display screen. absolute position;

连接位于各所述第一绝对位置的轨迹点,生成第一特效线条;Connect the trajectory points located at each of the first absolute positions to generate a first special effect line;

在所述显示页面中,显示所述第一特效线条。In the display page, the first special effect line is displayed.

在一种可能实现方式中,所述至少两个人脸关键点包括:第一人脸关键点、第二人脸关键点以及目标人脸关键点,所述第一人脸关键点与所述第二人脸关键点关于所述目标人脸关键点对称,所述目标人脸关键点为所述人脸图像的对称轴上的任一人脸关键点。In a possible implementation manner, the at least two face key points include: a first face key point, a second face key point and a target face key point, the first face key point and the first face key point The two face key points are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.

在一种可能实现方式中,所述根据所述显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及所述移动轨迹中至少一个轨迹点的轨迹位置信息,确定每个所述轨迹点相对所述人脸图像的相对位置,包括:In a possible implementation manner, in the face image displayed at the current moment according to the display page, the first position information of at least two face key points, and the track position of at least one track point in the movement track information to determine the relative position of each trajectory point relative to the face image, including:

针对各所述轨迹点,根据所述目标人脸关键点的第一位置信息和所述轨迹点的轨迹位置信息,确定所述轨迹点指向所述目标人脸关键点的平移向量;For each of the track points, according to the first position information of the target face key point and the track position information of the track point, determine the translation vector of the track point pointing to the target face key point;

根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,确定所述人脸图像中当前人脸姿态的第一旋转矩阵以及第一缩放矩阵;According to the first position information of the first face key point and the second face key point, determine the first rotation matrix and the first scaling matrix of the current face posture in the face image;

根据所述第一旋转矩阵、所述第一缩放矩阵以及所述平移向量,得到所述轨迹点相对所述人脸图像的相对位置。According to the first rotation matrix, the first scaling matrix and the translation vector, the relative position of the trajectory point relative to the face image is obtained.

在一种可能实现方式中,所述第一位置信息和所述轨迹位置信息均包括在所述显示屏幕上的绝对坐标,In a possible implementation manner, both the first position information and the track position information include absolute coordinates on the display screen,

所述根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,确定所述人脸图像中当前人脸的第一旋转矩阵以及第一缩放矩阵,包括:Determining the first rotation matrix and the first scaling matrix of the current face in the face image according to the first position information of the first face key point and the second face key point, including:

根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,以及第一长度,得到所述第一旋转矩阵,所述第一长度为所述第一人脸关键点指向所述第二人脸关键点的第一向量的长度;According to the first position information of the first face key point and the second face key point, and the first length, the first rotation matrix is obtained, and the first length is the first face key point to the length of the first vector of the second face key point;

根据所述第一人脸关键点和所述第二人脸关键点连线的参考长度,以及所述第一长度,得到所述第一缩放矩阵,所述参考长度为针对所述人脸图像中处于正视姿态的人脸设定的所述第一长度。According to the reference length of the connecting line between the first face key point and the second face key point, and the first length, the first scaling matrix is obtained, and the reference length is for the face image The first length set by the face in the facing posture.

在一种可能实现方式中,所述根据所述第一旋转矩阵、所述第一缩放矩阵以及所述平移向量,得到所述轨迹点相对所述人脸图像的相对位置,包括:In a possible implementation manner, obtaining the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector includes:

根据所述第一缩放矩阵、所述第一旋转矩阵、所述平移向量以及第一公式,得到所述相对位置,所述第一公式包括:The relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, where the first formula includes:

Figure PCTCN2021134644-appb-000001

其中,Q表示所述相对位置、Ms 1表示所述第一缩放矩阵、Mr 1表示所述第一旋转矩阵、Figure PCTCN2021134644-appb-000002 表示所述平移向量。Wherein, Q represents the relative position, Ms 1 represents the first scaling matrix, Mr 1 represents the first rotation matrix, and Figure PCTCN2021134644-appb-000002 represents the translation vector.

在一种可能实现方式中,所述根据所述显示页面在所述当前时刻之后所显示人脸图像中,所述人脸关键点的第二位置信息对所述相对位置进行转换,得到各所述轨迹点在显示屏幕上的第一绝对位置,包括:In a possible implementation manner, the relative position is converted according to the second position information of the face key points in the face image displayed on the display page after the current time, to obtain each The first absolute position of the trajectory point on the display screen, including:

针对各所述轨迹点,根据所述第一人脸关键点和所述第二人脸关键点的第二位置信息,确定所述人脸图像中当前人脸姿态的第二旋转矩阵以及第二缩放矩阵;For each of the trajectory points, according to the second position information of the first face key point and the second face key point, determine the second rotation matrix and the second rotation matrix of the current face pose in the face image scaling matrix;

根据所述第二旋转矩阵、所述第二缩放矩阵、所述目标人脸关键点的第二位置信息以及所述相对位置,得到所述轨迹点的第一绝对位置。The first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.

在一种可能实现方式中,所述第二位置信息包括在显示屏幕上的绝对坐标,所述根据所述第一人脸关键点和所述第二人脸关键点的第二位置信息,确定所述人脸图像中当前人脸姿态的第二旋转矩阵以及第二缩放矩阵,包括:In a possible implementation manner, the second position information includes absolute coordinates on the display screen, and the determination is determined according to the second position information of the first face key point and the second face key point The second rotation matrix and the second scaling matrix of the current face posture in the face image, including:

根据所述第一人脸关键点和所述第二人脸关键点的第二位置信息,以及第二长度,得到所述第二旋转矩阵,所述第二长度为所述第一人脸关键点指向所述第二人脸关键点的第二向量的长度;According to the second position information of the first face key point and the second face key point, and the second length, the second rotation matrix is obtained, and the second length is the first face key point point to the length of the second vector of the second face key point;

根据所述第一人脸关键点和所述第二人脸关键点连线的参考长度,以及所述第二长度,得到所述第二缩放矩阵,所述参考长度为针对所述人脸图像中处于正视姿态的人脸设定的所述第二长度。According to the reference length of the line connecting the first face key point and the second face key point, and the second length, the second scaling matrix is obtained, and the reference length is for the face image The second length set by the face in the facing posture.

在一种可能实现方式中,所述根据所述第二旋转矩阵、所述第二缩放矩阵、所述目标人脸关键点的第二位置信息以及所述相对位置,得到所述轨迹点的第一绝对位置,包括:In a possible implementation manner, obtaining the first position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position an absolute position, including:

根据所述第二旋转矩阵、所述第二缩放矩阵、所述目标人脸关键点的第二位置信息、所述相对位置以及第二公式,得到所述轨迹点的第一绝对位置,所述第二公式包括:According to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and the second formula, the first absolute position of the trajectory point is obtained, and the The second formula includes:

R=Mr 2·Ms 2·(x q,y q) T+(x c,y c) TR=Mr 2 ·Ms 2 ·(x q , y q ) T +(x c , y c ) T ;

其中,R表示所述第一绝对位置、Mr 2表示所述第二旋转矩阵、Ms 2表示所述第二缩放矩阵、(x q,y q)表示所述相对位置、(x c,y c)表示所述目标人脸关键点的第二位置信息,以及T表示转置处理。 Wherein, R represents the first absolute position, Mr 2 represents the second rotation matrix, Ms 2 represents the second scaling matrix, (x q , y q ) represents the relative position, (x c , y c ) ) represents the second position information of the target face key point, and T represents the transposition process.

在一种可能实现方式中,所述图像特效处理还包括:In a possible implementation manner, the image special effect processing further includes:

根据所述第一特效线条,生成与所述第一特效线条对称的第二特效线条,所述第二特效线条与所述第一特效线条以所述人脸图像中人脸为基准左右对称;generating a second special effect line symmetrical to the first special effect line according to the first special effect line, where the second special effect line and the first special effect line are left-right symmetrical with respect to the face in the face image;

在所述显示页面中显示所述第二特效线条。The second special effect line is displayed in the display page.

在一种可能实现方式中,所述方法还包括:In a possible implementation, the method further includes:

根据各所述轨迹点相对所述人脸图像的相对位置,确定各所述轨迹点的对称点相对所述人脸图像的相对位置,所述对称点与所述轨迹点以所述人脸为基准左右对称;According to the relative position of each track point relative to the face image, determine the relative position of the symmetry point of each track point relative to the face image, and the symmetry point and the track point take the face as the The benchmark is left and right symmetrical;

所述根据所述第一特效线条,生成与所述第一特效线条对称的第二特效线条,包括:The generating a second special effect line symmetrical with the first special effect line according to the first special effect line, comprising:

根据所述人脸关键点的第二位置信息,以及各所述对称点的相对位置,确定各所述对称点在所述显示屏幕上的第二绝对位置;Determine the second absolute position of each of the symmetrical points on the display screen according to the second position information of the face key points and the relative positions of each of the symmetrical points;

连接位于各所述第二绝对位置的对称点,生成所述第二特效线条。Connecting the symmetrical points located at each of the second absolute positions to generate the second special effect line.

在一种可能实现方式中,所述相对位置包括相对所述人脸图像的相对坐标,所述根据各所述轨迹点相对所述人脸图像的相对位置,确定各所述轨迹点的对称点相对所述人脸图像的相对位置,包括:In a possible implementation manner, the relative position includes relative coordinates relative to the face image, and the symmetry point of each trajectory point is determined according to the relative position of each trajectory point relative to the face image The relative position relative to the face image, including:

对所述轨迹点的相对坐标中第一方向的坐标值执行正负数转换处理,得到处理后的坐标值,所述第一方向与所述人脸图像的对称轴垂直;Perform positive and negative conversion processing on the coordinate value of the first direction in the relative coordinates of the trajectory point to obtain the processed coordinate value, and the first direction is perpendicular to the symmetry axis of the face image;

更新所述轨迹点的相对位置,使得更新后的相对位置中第一方向的坐标值为所述处理后的坐标值;updating the relative position of the trajectory point, so that the coordinate value of the first direction in the updated relative position is the processed coordinate value;

确定所述更新后的相对位置为所述对称点的相对位置。The updated relative position is determined as the relative position of the symmetry point.

根据本公开实施例的第二方面,提供一种图像处理装置,所述装置包括:According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus comprising:

获取模块,用于响应于特效显示指令,获取在包括人脸图像的显示页面中用户输入的移动轨迹;an acquisition module, used for acquiring the movement track input by the user in the display page including the face image in response to the special effect display instruction;

确定模块,用于根据所述显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及所述移动轨迹中至少一个轨迹点的轨迹位置信息,确定各所述轨迹点相对所述人脸图像的相对位置;The determining module is configured to determine the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the moving trajectory. the relative position of the trajectory point relative to the face image;

图像特效处理模块,用于重复执行图像特效处理,所述图像特效处理包括:An image special effect processing module, configured to repeatedly perform image special effect processing, wherein the image special effect processing includes:

根据所述显示页面在所述当前时刻之后所显示人脸图像中,所述人脸关键点的第二位置信息对所述相对位置进行转换,得到各所述轨迹点在显示屏幕上的第一绝对位置;Convert the relative position according to the second position information of the key points of the face in the face image displayed on the display page after the current time, to obtain the first position of each track point on the display screen. absolute position;

连接位于各所述第一绝对位置的轨迹点,生成第一特效线条;Connect the trajectory points located at each of the first absolute positions to generate a first special effect line;

在所述显示页面中,显示所述第一特效线条。In the display page, the first special effect line is displayed.

在一种可能实现方式中,所述至少两个人脸关键点包括:第一人脸关键点、第二人脸关键点以及目标人脸关键点,所述第一人脸关键点与所述第二人脸关键点关于所述目标人脸关键点对称,所述目标人脸关键点为所述人脸图像的对称轴上的任一人脸关键点。In a possible implementation manner, the at least two face key points include: a first face key point, a second face key point and a target face key point, the first face key point and the first face key point The two face key points are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.

在一种可能实现方式中,所述确定模块,还用于:In a possible implementation manner, the determining module is further configured to:

针对各所述轨迹点,根据所述目标人脸关键点的第一位置信息和所述轨迹点的轨迹位置信息,确定所述轨迹点指向所述目标人脸关键点的平移向量;For each of the track points, according to the first position information of the target face key point and the track position information of the track point, determine the translation vector of the track point pointing to the target face key point;

根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,确定所述人脸图像中当前人脸姿态的第一旋转矩阵以及第一缩放矩阵;According to the first position information of the first face key point and the second face key point, determine the first rotation matrix and the first scaling matrix of the current face posture in the face image;

根据所述第一旋转矩阵、所述第一缩放矩阵以及所述平移向量,得到所述轨迹点相对所述人脸图像的相对位置。According to the first rotation matrix, the first scaling matrix and the translation vector, the relative position of the trajectory point relative to the face image is obtained.

在一种可能实现方式中,所述第一位置信息和所述轨迹位置信息均包括在所述显示屏幕上的绝对坐标,所述确定模块,还用于:In a possible implementation manner, both the first position information and the track position information include absolute coordinates on the display screen, and the determining module is further configured to:

根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,以及第一长度,得到所述第一旋转矩阵,所述第一长度为所述第一人脸关键点指向所述第二人脸关键点的第一向量的长度;According to the first position information of the first face key point and the second face key point, and the first length, the first rotation matrix is obtained, and the first length is the first face key point to the length of the first vector of the second face key point;

根据所述第一人脸关键点和所述第二人脸关键点连线的参考长度,以及所述第一长度,得到所述第一缩放矩阵,所述参考长度为针对所述人脸图像中处于正视姿态的人脸设定的所述第一长度。According to the reference length of the connecting line between the first face key point and the second face key point, and the first length, the first scaling matrix is obtained, and the reference length is for the face image The first length set by the face in the facing posture.

在一种可能实现方式中,所述确定模块,还用于:In a possible implementation manner, the determining module is further configured to:

根据所述第一缩放矩阵、所述第一旋转矩阵、所述平移向量以及第一公式,得到所述相对位置,所述第一公式包括:The relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, where the first formula includes:

Figure PCTCN2021134644-appb-000003

其中,Q表示所述相对位置、Ms 1表示所述第一缩放矩阵、Mr 1表示所述第一旋转矩阵、Figure PCTCN2021134644-appb-000004 表示所述平移向量。Wherein, Q represents the relative position, Ms 1 represents the first scaling matrix, Mr 1 represents the first rotation matrix, and Figure PCTCN2021134644-appb-000004 represents the translation vector.

在一种可能实现方式中,所述图像特效处理模块,还用于:In a possible implementation manner, the image special effect processing module is further configured to:

针对各所述轨迹点,根据所述第一人脸关键点和所述第二人脸关键点的第二位置信息,确定所述人脸图像中当前人脸姿态的第二旋转矩阵以及第二缩放矩阵;For each of the trajectory points, according to the second position information of the first face key point and the second face key point, determine the second rotation matrix and the second rotation matrix of the current face pose in the face image scaling matrix;

根据所述第二旋转矩阵、所述第二缩放矩阵、所述目标人脸关键点的第二位置信息以及所述相对位置,得到所述轨迹点的第一绝对位置。The first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.

在一种可能实现方式中,所述第二位置信息包括在显示屏幕上的绝对坐标,所述图像特效处理模块,还用于:In a possible implementation manner, the second position information includes absolute coordinates on the display screen, and the image special effect processing module is further configured to:

根据所述第一人脸关键点和所述第二人脸关键点的第二位置信息,以及第二长度,得到所述第二旋转矩阵,所述第二长度为所述第一人脸关键点指向所述第二人脸关键点的第二向量的长度;According to the second position information of the first face key point and the second face key point, and the second length, the second rotation matrix is obtained, and the second length is the first face key point point to the length of the second vector of the second face key point;

根据所述第一人脸关键点和所述第二人脸关键点连线的参考长度,以及所述第二长度,得到所述第二缩放矩阵,所述参考长度为针对所述人脸图像中处于正视姿态的人脸设定的所述第二长度。According to the reference length of the line connecting the first face key point and the second face key point, and the second length, the second scaling matrix is obtained, and the reference length is for the face image The second length set by the face in the facing posture.

在一种可能实现方式中,所述图像特效处理模块,还用于:In a possible implementation manner, the image special effect processing module is further configured to:

根据所述第二旋转矩阵、所述第二缩放矩阵、所述目标人脸关键点的第二位置信息、所述相对位置以及第二公式,得到所述轨迹点的第一绝对位置,所述第二公式包括:According to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and the second formula, the first absolute position of the trajectory point is obtained, and the The second formula includes:

R=Mr 2·Ms 2·(x q,y q) T+(x c,y c) TR=Mr 2 ·Ms 2 ·(x q , y q ) T +(x c , y c ) T ;

其中,R表示所述第一绝对位置、Mr 2表示所述第二旋转矩阵、Ms 2表示所述第二缩放矩阵、(x q,y q)表示所述相对位置、(x c,y c)表示所述目标人脸关键点的第二位置信息,以及T表示转置处理。 Wherein, R represents the first absolute position, Mr 2 represents the second rotation matrix, Ms 2 represents the second scaling matrix, (x q , y q ) represents the relative position, (x c , y c ) ) represents the second position information of the target face key point, and T represents the transposition process.

在一种可能实现方式中,所述图像特效处理还包括:In a possible implementation manner, the image special effect processing further includes:

根据所述第一特效线条,生成与所述第一特效线条对称的第二特效线条,所述第二特效线条与所述第一特效线条以所述人脸图像中人脸为基准左右对称;generating a second special effect line symmetrical to the first special effect line according to the first special effect line, where the second special effect line and the first special effect line are left-right symmetrical with respect to the face in the face image;

在所述显示页面中显示所述第二特效线条。The second special effect line is displayed in the display page.

在一种可能实现方式中,所述确定模块,还用于:In a possible implementation manner, the determining module is further configured to:

根据各所述轨迹点相对所述人脸图像的相对位置,确定各所述轨迹点的对称点相对所述人脸图像的相对位置,所述对称点与所述轨迹点以所述人脸为基准左右对称;According to the relative position of each track point relative to the face image, determine the relative position of the symmetry point of each track point relative to the face image, and the symmetry point and the track point take the face as the The benchmark is left and right symmetrical;

所述图像特效处理模块,还用于:The image special effect processing module is also used for:

根据所述人脸关键点的第二位置信息,以及各所述对称点的相对位置,确定各所述对称点在所述显示屏幕上的第二绝对位置;Determine the second absolute position of each of the symmetrical points on the display screen according to the second position information of the face key points and the relative positions of each of the symmetrical points;

连接位于各所述第二绝对位置的对称点,生成所述第二特效线条。Connecting the symmetrical points located at each of the second absolute positions to generate the second special effect line.

在一种可能实现方式中,所述相对位置包括相对所述人脸图像的相对坐标,所述确定模块,还用于:In a possible implementation manner, the relative position includes relative coordinates relative to the face image, and the determining module is further configured to:

对所述轨迹点的相对坐标中第一方向的坐标值执行正负数转换处理,得到处理后的坐标值,所述第一方向与所述人脸图像的对称轴垂直;Perform positive and negative conversion processing on the coordinate value of the first direction in the relative coordinates of the trajectory point to obtain the processed coordinate value, and the first direction is perpendicular to the symmetry axis of the face image;

更新所述轨迹点的相对位置,使得更新后的相对位置中第一方向的坐标值为所述处理后的坐标值;updating the relative position of the trajectory point, so that the coordinate value of the first direction in the updated relative position is the processed coordinate value;

确定所述更新后的相对位置为所述对称点的相对位置。The updated relative position is determined as the relative position of the symmetry point.

根据本公开实施例的第三方面,提供了一种电子设备,包括:According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:

一个或多个处理器;one or more processors;

用于存储所述一个或多个处理器可执行指令的一个或多个存储器;one or more memories for storing the one or more processor-executable instructions;

其中,所述一个或多个处理器被配置为执行上述第一方面或第一方面的任一种可能实现方式所述的图像处理方法。Wherein, the one or more processors are configured to execute the image processing method described in the first aspect or any possible implementation manner of the first aspect.

根据本公开实施例的第四方面,提供了一种非易失性计算机可读存储介质,当所述非易失性计算机可读存储介质中的指令由电子设备的处理器执行时,使得所述电子设备能够执行上述第一方面或第一方面的任一种可能实现方式所述的图像处理方法。According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-volatile computer-readable storage medium, which, when the instructions in the non-volatile computer-readable storage medium are executed by a processor of an electronic device, causes all the The electronic device can execute the image processing method described in the first aspect or any possible implementation manner of the first aspect.

根据本公开实施例的第五方面,提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现上述第一方面或第一方面的任一种可能实现方式所述的图像处理方法。According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product, including a computer program, and when the computer program is executed by a processor, realizes the image described in the first aspect or any possible implementation manner of the first aspect Approach.

本公开实施例中,可以通过根据显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及用户在显示页面中输入的移动轨迹中至少一个轨迹点的轨迹位置信息,确定各轨迹点相对人脸图像的相对位置。以便重复执行根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置,并在显示页面中,显示位于各第一绝对位置的轨迹点连接成的第一特效线条的过程。上述技术方案中,第一特效线条是根据用户输入的移动轨迹绘制的,实现了用户自主绘制特效的功能。并且,在获取移动轨迹中各轨迹点与当前人脸图像的相对位置之后,可以采用显示页面实时所显示的人脸图像中,人脸关键点的位置信息以及各轨迹点相对位置,确定各轨迹点在显示屏幕上的第一绝对位置,从而在连接位于各第一绝对位置的轨迹点后生成并显示第一特效线条。这样,生成的第一特效线条的显示位置会随着显示页面实时所显示的人脸图像的显示位置的变化而变化,实现了第一特效线条跟随人脸移动的特效,丰富特效显示效果。In the embodiment of the present disclosure, the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory of at least one trajectory point in the movement trajectory input by the user on the display page can be obtained. Position information, to determine the relative position of each track point relative to the face image. In order to repeat the conversion of the relative position according to the second position information of the key points of the face in the face image displayed on the display page after the current moment, the first absolute position of each track point on the display screen is obtained, and the first absolute position of each track point on the display screen is obtained. , the process of displaying the first special effect line formed by connecting the track points located at the first absolute positions. In the above technical solution, the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user to draw the special effect independently. Moreover, after obtaining the relative positions of each track point in the moving track and the current face image, the position information of the key points of the face and the relative position of each track point in the face image displayed in real time on the display page can be used to determine each track. point at the first absolute position on the display screen, so that the first special effect line is generated and displayed after connecting the track points located at the first absolute positions. In this way, the display position of the generated first special effect line will change with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face, and enriching the special effect display effect.

应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.

附图说明Description of drawings

此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description serve to explain the principles of the disclosure.

图1是根据一示例性实施例示出的一种图像处理方法的流程图。Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.

图2是根据一示例性实施例示出的人脸图像的示意图。FIG. 2 is a schematic diagram of a human face image according to an exemplary embodiment.

图3是根据一示例性实施例示出的一种确定轨迹点相对位置方法的流程图。Fig. 3 is a flow chart of a method for determining relative positions of track points according to an exemplary embodiment.

图4是根据一示例性实施例示出的一种人脸图像的显示页面的示意图。Fig. 4 is a schematic diagram of a display page of a face image according to an exemplary embodiment.

图5是根据一示例性实施例示出的一种人脸图像的显示页面的示意图。Fig. 5 is a schematic diagram of a display page of a face image according to an exemplary embodiment.

图6是根据一示例性实施例示出的一种确定轨迹点第一绝对位置方法的流程图。Fig. 6 is a flow chart of a method for determining the first absolute position of a track point according to an exemplary embodiment.

图7是根据一示例性实施例示出的一种生成第二特效线条方法的流程图。Fig. 7 is a flowchart of a method for generating a second special effect line according to an exemplary embodiment.

图8是根据一示例性实施例示出的另一种生成第二特效线条方法的流程图。Fig. 8 is a flowchart of another method for generating a second special effect line according to an exemplary embodiment.

图9是根据一示例性实施例示出的一种图像处理装置的框图。Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment.

图10是根据一示例性实施例示出的一种电子设备的框图。Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment.

具体实施方式Detailed Description

这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另 有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the illustrative examples below are not intended to represent all implementations consistent with this disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.

图1是根据一示例性实施例示出的一种图像处理方法的流程图。该图像处理方法可以应用于电子设备。电子设备可以为具有显示屏幕的终端,该终端可以安装有对人脸图像进行图像特效处理的应用程序。本申请实施例以电子设备为终端为例进行说明。如图1所示,图像处理方法可以包括以下步骤101~103:Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment. The image processing method can be applied to electronic equipment. The electronic device may be a terminal with a display screen, and the terminal may be installed with an application program for performing image special effect processing on the face image. The embodiments of the present application are described by taking an electronic device as a terminal as an example. As shown in FIG. 1, the image processing method may include the following steps 101-103:

在步骤101中,响应于特效显示指令,获取在包括人脸图像的显示页面中用户输入的移动轨迹。In step 101, in response to the special effect display instruction, the movement track input by the user in the display page including the face image is acquired.

本公开实施例中，用户想要在使用终端进行拍照、视频拍摄或者网络直播等拍摄人脸的过程中，可以对人脸图像进行图像特效处理。其中，人脸图像不仅可以包括人脸，也可以还包括背景。例如，背景可以为建筑物或者风景等。在一些实施例中，用户可以操作终端打开具有图像特效处理功能的应用程序，在终端上显示应用程序中包括人脸图像的显示页面。终端可以在接收到特效显示指令后，响应于该特效显示指令，获取在包括人脸图像的显示页面中用户输入的移动轨迹。In the embodiments of the present disclosure, when using the terminal to capture a face, for example when taking photos, shooting videos, or live streaming, the user may apply image special effect processing to the face image. The face image may include not only the face but also the background. For example, the background may be a building or a landscape. In some embodiments, the user may operate the terminal to open an application program with an image special effect processing function, and a display page including a face image in the application program is displayed on the terminal. After receiving the special effect display instruction, the terminal may acquire, in response to the special effect display instruction, the movement track input by the user in the display page including the face image.

其中,特效显示指令可以是终端在显示页面中接收到执行设定操作后触发的。例如,特效显示指令可以是用户针对自主绘制控件执行设定操作后触发。设定操作可以包括针对自主绘制控件的点击、长按、滑动或者语音等形式的输入。包括人脸图像的显示页面可以为拍摄界面、直播界面或者短(长)视频拍摄界面等。The special effect display instruction may be triggered after the terminal receives and executes the setting operation on the display page. For example, the special effect display instruction may be triggered after the user performs a setting operation on the self-drawn control. The setting operation may include input in the form of click, long press, swipe, or voice for the self-drawn control. The display page including the face image may be a shooting interface, a live broadcast interface, or a short (long) video shooting interface, and the like.

在一些实施例中,用户输入的移动轨迹可以是用户移动输入件的轨迹。输入件可以是用户的手指或者触控笔等。移动轨迹可以包括至少一个按移动顺序排布的轨迹点。该至少一个轨迹点指的是一个或多个轨迹点。在一些实施例中,终端获取用户输入的移动轨迹可以指的是:终端获取用户输入的至少一个轨迹点的轨迹位置信息。其中,轨迹点的轨迹位置信息指的是轨迹点在终端的显示屏幕上的绝对位置。示例的,轨迹点的位置信息可以为轨迹点的绝对坐标,该绝对坐标指的是以显示屏幕的特定点(例如,中心点)为原点,在显示屏幕上相对于特定点的位置坐标。In some embodiments, the trajectory of movement of the user input may be the trajectory of the user moving the input. The input member may be a user's finger or a stylus or the like. The movement track may include at least one track point arranged in a movement order. The at least one trajectory point refers to one or more trajectory points. In some embodiments, acquiring the movement trajectory input by the user by the terminal may refer to: acquiring the trajectory position information of at least one trajectory point input by the user by the terminal. The track position information of the track point refers to the absolute position of the track point on the display screen of the terminal. For example, the position information of the track point may be the absolute coordinates of the track point, where the absolute coordinates refer to the position coordinates relative to the specific point on the display screen with a specific point (eg, center point) of the display screen as the origin.
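
As a minimal illustration of the absolute-coordinate convention described above, the following sketch converts a raw touch sample given in pixel coordinates into a coordinate whose origin is a specific point of the display screen, here assumed to be the screen centre; the screen size values and the function name are hypothetical and only serve to make the convention concrete.

    def to_center_origin(pixel_x, pixel_y, screen_w=1080, screen_h=1920):
        # Absolute coordinate of a track point, expressed relative to the screen centre.
        return (pixel_x - screen_w / 2.0, pixel_y - screen_h / 2.0)

    # A movement track is then simply the ordered list of such points, e.g.:
    # track = [to_center_origin(x, y) for (x, y) in touch_samples]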

示例的,用户想要在网络直播过程中,对自身人脸添加兔耳朵特效。用户可以操作终端打开网络直播的应用程序,在终端上显示包括用户人脸图像的显示页面。用户在显示页面针对自主绘制图标执行点击操作,并可以采用手指在显示页面所显示的人脸图像的人脸的头部左上方位置处,滑动绘制左兔耳朵形状线条。终端可以接收到针对自主绘制图标的点击操作后,可以生成特效显示指令,并响应于该特效显示指令。获取用户输入的绘制左兔耳朵形状线条对应的手指移动轨迹。通过后续步骤,可以使得显示页面包括的人脸图像中人脸具有兔耳朵特效。For example, the user wants to add a rabbit ear special effect to his face during the webcast. The user can operate the terminal to open the webcast application, and display a display page including the user's face image on the terminal. The user performs a click operation on the self-drawn icon on the display page, and can use a finger to slide and draw a line in the shape of a left rabbit ear at the upper left position of the face of the face image displayed on the display page. After receiving the click operation on the self-drawn icon, the terminal can generate a special effect display instruction, and respond to the special effect display instruction. Obtain the finger movement trajectory corresponding to the drawn left rabbit ear shape line input by the user. Through the subsequent steps, the human face in the human face image included in the display page can have a rabbit ear special effect.

在步骤102中,根据显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及移动轨迹中至少一个轨迹点的轨迹位置信息,确定各轨迹点相对人脸图像的相对位置。In step 102, according to the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the moving trajectory, determine the relative face of each trajectory point The relative position of the image.

本公开实施例中,终端可以获取在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及移动轨迹中各轨迹点的轨迹位置信息。根据至少两个人脸关键点的第一位置信息以及至少一个轨迹点的轨迹位置信息,确定各轨迹点相对人脸图像的相对位置。在一些实施例中,轨迹点相对人脸图像的相对位置可以采用目标人脸关键点指向轨迹点的向量表征。或者,也可以采用轨迹点相对人脸图像的相对坐标表征。本公开实施例采用轨迹点相对人脸图像的相对坐标表征轨迹点相对人脸图像的相对位置。In the embodiment of the present disclosure, the terminal may acquire the first position information of at least two face key points in the face image displayed at the current moment, and the track position information of each track point in the movement track. The relative position of each track point relative to the face image is determined according to the first position information of the at least two face key points and the track position information of at least one track point. In some embodiments, the relative position of the track point relative to the face image may be represented by a vector that points to the track point from the target face key point. Alternatively, the relative coordinate representation of the track point relative to the face image can also be used. The embodiments of the present disclosure use the relative coordinates of the track points to the face image to represent the relative positions of the track points to the face image.

在一些实施例中,终端可以通过对显示页面中显示的人脸图像进行人脸关键点检测处理,得到人脸图像中的至少两个人脸关键点。示例的,终端可以采用人工智能(Artificial Intelligence,AI)技术实现对人脸图像进行人脸关键点检测处理。In some embodiments, the terminal may obtain at least two face key points in the face image by performing face key point detection processing on the face image displayed on the display page. For example, the terminal may use an artificial intelligence (Artificial Intelligence, AI) technology to implement face key point detection processing on a face image.

在一些实施例中，至少两个人脸关键点可以包括：第一人脸关键点、第二人脸关键点以及目标人脸关键点。目标人脸关键点可以为人脸图像的对称轴上的任一人脸关键点。第一人脸关键点与第二人脸关键点可以根据目标人脸关键点对称。例如，目标人脸关键点为第一人脸关键点与第二人脸关键点之间连线的锚点。第一人脸关键点与第二人脸关键点之间的连线，可以跟随目标人脸关键点的移动而移动。这样，由于第一人脸关键点与第二人脸关键点为关于目标关键点对称的两个人脸关键点，而目标关键点为人脸图像的对称轴上的关键点，因此第一人脸关键点和第二人脸关键点的连线，其倾斜角度可以较好的反映显示页面所显示的人脸图像中人脸的旋转角度。且位于人脸图像的对称轴上的连线中点的第一位置信息可以反映人脸图像的位置信息，从而在考虑到人脸图像中当前人脸的位置、姿态信息的情况下，确定各轨迹点的相对位置或者第一绝对位置的准确性较高。In some embodiments, the at least two face key points may include: a first face key point, a second face key point, and a target face key point. The target face key point may be any face key point on the symmetry axis of the face image. The first face key point and the second face key point may be symmetrical with respect to the target face key point. For example, the target face key point is the anchor point of the line connecting the first face key point and the second face key point. The line connecting the first face key point and the second face key point can follow the movement of the target face key point. In this way, since the first face key point and the second face key point are two face key points that are symmetrical about the target key point, and the target key point is a key point on the symmetry axis of the face image, the inclination angle of the line connecting the first face key point and the second face key point can well reflect the rotation angle of the face in the face image displayed on the display page. In addition, the first position information of the midpoint of this line, which lies on the symmetry axis of the face image, can reflect the position information of the face image, so that the relative position or the first absolute position of each track point is determined with high accuracy while the position and posture information of the current face in the face image are taken into account.

本公开实施例中,人脸关键点的第一位置信息可以是人脸关键点在显示屏幕上的绝对位置信息。 例如,人脸关键点的第一位置信息可以是人脸关键点的绝对坐标。该绝对坐标指的是以显示屏幕的特定点(例如,中心点)为原点,在显示屏幕上相对于特定点的位置坐标。参见图2,其示出了根据一示例性实施例示出的人脸图像的示意图。如图2所示,目标人脸关键点C可以为人脸鼻尖处的点,且位于人脸图像对称轴上。第一人脸关键点A和第二人脸关键点B可以为位于人脸边缘两侧的两个对称点。该第一人脸关键点A和第二人脸关键点B之间连线的倾斜角可以用于反映人脸的旋转角度。In the embodiment of the present disclosure, the first position information of the face key point may be absolute position information of the face key point on the display screen. For example, the first position information of the face key points may be absolute coordinates of the face key points. The absolute coordinates refer to the position coordinates relative to the specific point on the display screen with a specific point (eg, a center point) of the display screen as the origin. Referring to FIG. 2, it shows a schematic diagram of a human face image according to an exemplary embodiment. As shown in FIG. 2 , the key point C of the target face can be a point at the tip of the nose of the face, and is located on the symmetry axis of the face image. The first face key point A and the second face key point B may be two symmetrical points located on both sides of the edge of the face. The inclination angle of the line between the first face key point A and the second face key point B can be used to reflect the rotation angle of the face.

在一些实施例中,终端可以将各轨迹点在显示屏幕上的绝对位置,通过空间变化处理确定各轨迹点相对人脸图像的相对位置。示例的,如图3所示,终端根据显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及移动轨迹中至少一个轨迹点的轨迹位置信息,确定各轨迹点相对人脸图像的相对位置的过程可以包括以下步骤1021至步骤1023。In some embodiments, the terminal may determine the relative position of each trajectory point relative to the face image by processing the absolute position of each trajectory point on the display screen through spatial change processing. Exemplarily, as shown in FIG. 3 , the terminal determines each face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the moving trajectory. The process of the relative position of the track point relative to the face image may include the following steps 1021 to 1023 .

在步骤1021中,针对各轨迹点,根据目标人脸关键点的第一位置信息和轨迹点的轨迹位置信息,确定轨迹点指向目标人脸关键点的平移向量。In step 1021, for each track point, according to the first position information of the target face key point and the track position information of the track point, a translation vector of the track point pointing to the target face key point is determined.

本公开实施例中，针对至少一个轨迹点中各轨迹点，终端可以根据目标人脸关键点的第一位置信息和轨迹点的轨迹位置信息，计算轨迹点指向目标人脸关键点的平移向量，以对轨迹点执行平移操作。该平移向量表征轨迹点至目标人脸关键点的平移姿态信息，也即是轨迹点与目标人脸关键点的相对位置。In the embodiment of the present disclosure, for each of the at least one track point, the terminal may calculate, according to the first position information of the target face key point and the track position information of the track point, the translation vector from the track point to the target face key point, so as to perform a translation operation on the track point. The translation vector represents the translation posture information from the track point to the target face key point, that is, the relative position between the track point and the target face key point.

在一些实施例中，假设显示页面在当前时刻所显示人脸图像中，目标人脸关键点C的绝对坐标为(x_c1, y_c1)，第一人脸关键点A的绝对坐标为(x_a1, y_a1)，第二人脸关键点B的绝对坐标为(x_b1, y_b1)，且假设手指移动轨迹中的一个轨迹点P的绝对坐标为(x_p, y_p)。则针对轨迹点P，终端确定的轨迹点P指向目标人脸关键点C的平移向量为(x_p - x_c1, y_p - y_c1)。In some embodiments, it is assumed that, in the face image displayed on the display page at the current moment, the absolute coordinates of the target face key point C are (x_c1, y_c1), the absolute coordinates of the first face key point A are (x_a1, y_a1), the absolute coordinates of the second face key point B are (x_b1, y_b1), and the absolute coordinates of a track point P in the finger movement track are (x_p, y_p). Then, for the track point P, the translation vector from the track point P to the target face key point C determined by the terminal is (x_p - x_c1, y_p - y_c1).

在步骤1022中,根据第一人脸关键点和第二人脸关键点的第一位置信息,确定人脸图像中当前人脸姿态的第一旋转矩阵以及第一缩放矩阵。In step 1022, a first rotation matrix and a first scaling matrix of the current face pose in the face image are determined according to the first position information of the first face key point and the second face key point.

本公开实施例中,第一旋转矩阵可以表示人脸图像中当前人脸姿态的旋转姿态信息。第一缩放矩阵可以表示人脸图像中当前人脸姿态的缩放姿态信息。In this embodiment of the present disclosure, the first rotation matrix may represent the rotation attitude information of the current face attitude in the face image. The first scaling matrix may represent scaling pose information of the current face pose in the face image.

在一些实施例中,在人脸关键点的第一位置信息以及轨迹点的轨迹位置信息均为在终端的显示屏幕上的绝对坐标的情况下,终端根据第一人脸关键点和第二人脸关键点的第一位置信息,确定人脸图像中当前人脸姿态的第一旋转矩阵以及第一缩放矩阵的过程可以包括以下步骤:In some embodiments, in the case that the first position information of the face key point and the trajectory position information of the track point are both absolute coordinates on the display screen of the terminal, the terminal according to the first face key point and the second person The process of determining the first position information of the face key point, the first rotation matrix and the first scaling matrix of the current face pose in the face image may include the following steps:

根据第一人脸关键点和第二人脸关键点的第一位置信息，以及第一长度，得到第一旋转矩阵，第一长度为第一人脸关键点指向第二人脸关键点的第一向量的长度；According to the first position information of the first face key point and the second face key point, and the first length, the first rotation matrix is obtained, where the first length is the length of a first vector pointing from the first face key point to the second face key point;

根据第一人脸关键点和第二人脸关键点连线的参考长度，以及第一长度，得到第一缩放矩阵，参考长度为针对人脸图像中处于正视姿态的人脸设定的第一长度。According to the reference length of the line connecting the first face key point and the second face key point, and the first length, the first scaling matrix is obtained, where the reference length is the first length set for a face in a front-facing pose in the face image.

本公开实施例中,终端可以根据第一人脸关键点和第二人脸关键点的第一位置信息,得到第一人脸关键点指向第二人脸关键点的第一向量。确定第一向量的第一长度。在确定第一长度之后,可以根据第一人脸关键点的第一位置信息、第二人脸关键点的第一位置信息以及第一长度,得到第一旋转矩阵。In this embodiment of the present disclosure, the terminal may obtain a first vector pointing from the first face key point to the second face key point according to the first position information of the first face key point and the second face key point. A first length of the first vector is determined. After the first length is determined, the first rotation matrix may be obtained according to the first position information of the first face key point, the first position information of the second face key point, and the first length.

示例的，继续以步骤1021中假设的人脸关键点为例，针对假设的轨迹点P进行示意性说明。终端根据第一人脸关键点A的第一位置信息(x_a1, y_a1)，以及第二人脸关键点B的第一位置信息(x_b1, y_b1)，计算确定第一向量为(x_a1 - x_b1, y_a1 - y_b1)，进而计算得到第一向量的第一长度|AB| = √((x_a1 - x_b1)² + (y_a1 - y_b1)²)。Illustratively, the face key points assumed in step 1021 are still taken as an example, and the assumed track point P is used for schematic illustration. According to the first position information (x_a1, y_a1) of the first face key point A and the first position information (x_b1, y_b1) of the second face key point B, the terminal calculates the first vector as (x_a1 - x_b1, y_a1 - y_b1), and further calculates the first length of the first vector, |AB| = √((x_a1 - x_b1)² + (y_a1 - y_b1)²).

则根据第一人脸关键点A的第一位置信息、第二人脸关键点B的第一位置信息以及第一长度|AB|，得到第一旋转矩阵M_r1，即根据(x_a1 - x_b1)/|AB|与(y_a1 - y_b1)/|AB|构成的二维旋转矩阵。Then, according to the first position information of the first face key point A, the first position information of the second face key point B, and the first length |AB|, the first rotation matrix M_r1 is obtained, that is, the two-dimensional rotation matrix formed from (x_a1 - x_b1)/|AB| and (y_a1 - y_b1)/|AB|.

其中，第一旋转矩阵M_r1可以用于对平移向量进行旋转处理，即根据人脸图像中当前人脸姿态的旋转姿态信息，对平移向量进行旋转处理。The first rotation matrix M_r1 can be used to rotate the translation vector, that is, to rotate the translation vector according to the rotation posture information of the current face pose in the face image.

在一些实施例中，继续以步骤1021中假设的人脸关键点为例，针对假设的轨迹点P进行示意性说明。第一向量的第一长度为|AB|。则根据第一人脸关键点和第二人脸关键点连线的参考长度D，以及第一长度|AB|，得到第一缩放矩阵M_s1，即根据参考长度D与第一长度|AB|确定缩放比例的缩放矩阵。In some embodiments, the face key points assumed in step 1021 are still taken as an example, and the assumed track point P is used for schematic illustration. The first length of the first vector is |AB|. Then, according to the reference length D of the line connecting the first face key point and the second face key point, and the first length |AB|, the first scaling matrix M_s1 is obtained, that is, a scaling matrix whose scaling ratio is determined from the reference length D and the first length |AB|.

其中，第一缩放矩阵M_s1可以用于对平移向量进行缩放处理，即根据人脸图像中当前人脸姿态的缩放姿态信息，对平移向量进行设定比例的缩放处理。该设定比例可以为D:1。在一些实施例中，D可以为100。The first scaling matrix M_s1 can be used to scale the translation vector, that is, to scale the translation vector at a set ratio according to the scaling posture information of the current face pose in the face image. The set ratio may be D:1. In some embodiments, D may be 100.
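
The exact entries of M_r1 and M_s1 are given in the original as formula images, so the sketch below is only one plausible reading of them: a two-dimensional rotation matrix built from the tilt of the first vector (x_a1 - x_b1, y_a1 - y_b1) and a uniform scaling by the ratio D/|AB|. The function name and the use of numpy are illustrative assumptions, not the patent's own implementation.

    import numpy as np

    D = 100.0  # reference length of the A-B line for a face in a front-facing pose (D may be 100)

    def first_matrices(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        ab = a - b                                   # first vector (x_a1 - x_b1, y_a1 - y_b1)
        length = float(np.hypot(ab[0], ab[1]))       # first length |AB|
        cos_t, sin_t = ab[0] / length, ab[1] / length
        m_r1 = np.array([[cos_t, sin_t],
                         [-sin_t, cos_t]])           # assumed form: rotation compensating the face tilt
        m_s1 = (D / length) * np.eye(2)              # assumed form: uniform scaling so that |AB| maps to D
        return m_r1, m_s1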

本公开实施例中，由于第一人脸关键点和第二人脸关键点的连线，其倾斜角度可以较好的反映显示页面所显示的人脸图像中人脸的旋转角度。因此，根据第一人脸关键点指向第二人脸关键点的第一向量的第一长度，确定的用于指示人脸图像中当前人脸的旋转姿态信息的第一旋转矩阵的准确性较高。且同时利用第一人脸关键点和第二人脸关键点的连线在人脸图像中处于正视姿态的人脸的长度，以及该连线在当前人脸图像中的真实第一长度，可以在重复利用第一人脸关键点和第二人脸关键点的连线的相关信息的基础上，确定用于指示人脸图像中当前人脸的缩放姿态信息的第一缩放矩阵，减少终端计算量。In the embodiment of the present disclosure, since the inclination angle of the line connecting the first face key point and the second face key point can well reflect the rotation angle of the face in the face image displayed on the display page, the first rotation matrix determined according to the first length of the first vector pointing from the first face key point to the second face key point, which is used to indicate the rotation posture information of the current face in the face image, has high accuracy. Moreover, by using both the length that the line connecting the first face key point and the second face key point has for a face in a front-facing pose in the face image and the real first length of this line in the current face image, the first scaling matrix used to indicate the scaling posture information of the current face in the face image can be determined on the basis of reusing the information about this line, which reduces the amount of computation of the terminal.

在步骤1023中,根据第一旋转矩阵、第一缩放矩阵以及平移向量,得到轨迹点相对人脸图像的相对位置。In step 1023, the relative position of the trajectory point relative to the face image is obtained according to the first rotation matrix, the first scaling matrix and the translation vector.

在一些实施例中，终端根据第一旋转矩阵、第一缩放矩阵以及平移向量，得到轨迹点相对人脸图像的相对位置的过程可以包括：终端可以根据第一缩放矩阵、第一旋转矩阵、平移向量以及第一公式，得到相对位置。第一公式包括：Q = M_s1 · M_r1 · t。In some embodiments, the process in which the terminal obtains the relative position of the track point with respect to the face image according to the first rotation matrix, the first scaling matrix, and the translation vector may include: the terminal obtains the relative position according to the first scaling matrix, the first rotation matrix, the translation vector, and a first formula. The first formula includes: Q = M_s1 · M_r1 · t.

其中，Q表示轨迹点相对人脸图像的相对位置、M_s1表示第一缩放矩阵、M_r1表示第一旋转矩阵、t表示平移向量。这样，由于轨迹点指向目标人脸关键点的平移向量可以反映轨迹点相对人脸图像的相对距离，第一旋转矩阵可以反映人脸图像中当前人脸的旋转姿态信息，第一缩放矩阵可以反映人脸图像中当前人脸的缩放姿态信息，而第一公式的公式因子包括第一缩放矩阵、第一旋转矩阵和平移向量，因此采用第一公式计算轨迹点相对人脸图像的相对位置，可以考虑到人脸图像中当前人脸的各类姿态信息，从而使得计算得到的轨迹点相对人脸图像的相对位置的准确性较高。Here, Q represents the relative position of the track point with respect to the face image, M_s1 represents the first scaling matrix, M_r1 represents the first rotation matrix, and t represents the translation vector. In this way, since the translation vector from the track point to the target face key point can reflect the relative distance between the track point and the face image, the first rotation matrix can reflect the rotation posture information of the current face in the face image, the first scaling matrix can reflect the scaling posture information of the current face in the face image, and the factors of the first formula include the first scaling matrix, the first rotation matrix, and the translation vector, calculating the relative position of the track point with respect to the face image with the first formula takes various kinds of posture information of the current face in the face image into account, so that the calculated relative position of the track point with respect to the face image has high accuracy.

本公开实施例中,根据人脸图像中当前人脸姿态的第一旋转矩阵、第一缩放矩阵以及轨迹点指向目标人脸关键点的平移向量,计算轨迹点相对人脸图像的相对位置的方案,由于轨迹点指向目标人脸关键点的平移向量可以反映轨迹点相对人脸图像的相对距离,第一旋转矩阵可以反映人脸图像中当前人脸的旋转姿态信息。第一缩放矩阵可以反映人脸图像中当前人脸的缩放姿态信息。因此,在考虑到人脸图像中当前人脸的各类姿态信息的情况下,可以得到较为真实地轨迹点与人脸图像的相对位置。因而根据人脸图像中当前人脸姿态的第一旋转矩阵、第一缩放矩阵计算得到的相对位置的准确性较高。In the embodiment of the present disclosure, the scheme of calculating the relative position of the trajectory point relative to the face image according to the first rotation matrix and the first scaling matrix of the current face posture in the face image and the translation vector of the trajectory point pointing to the target face key point , since the translation vector of the track point pointing to the target face key point can reflect the relative distance of the track point relative to the face image, the first rotation matrix can reflect the rotation attitude information of the current face in the face image. The first scaling matrix may reflect the scaling posture information of the current face in the face image. Therefore, in the case of considering various pose information of the current face in the face image, the relative position of the trajectory point and the face image can be obtained more realistically. Therefore, the accuracy of the relative position calculated according to the first rotation matrix and the first scaling matrix of the current face posture in the face image is high.
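
Putting the pieces of step 102 together, a compact sketch of the first formula is given below; it relies on the same assumptions as the sketch above (tilt-angle rotation, uniform D/|AB| scaling) and is only meant to show how the translation vector, the first rotation matrix, and the first scaling matrix combine into the relative position Q.

    import numpy as np

    D = 100.0  # assumed reference length

    def track_point_to_relative(p, a, b, c):
        p, a, b, c = (np.asarray(v, float) for v in (p, a, b, c))
        t = p - c                                    # translation vector (x_p - x_c1, y_p - y_c1)
        ab = a - b
        length = float(np.hypot(ab[0], ab[1]))       # first length |AB|
        cos_t, sin_t = ab[0] / length, ab[1] / length
        m_r1 = np.array([[cos_t, sin_t], [-sin_t, cos_t]])   # assumed rotation matrix M_r1
        m_s1 = (D / length) * np.eye(2)                      # assumed scaling matrix M_s1
        return m_s1 @ m_r1 @ t                       # first formula: Q = M_s1 · M_r1 · t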

在步骤103中,重复执行图像特效处理。图像特效处理包括:根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置;连接位于各第一绝对位置的轨迹点,生成第一特效线条。在显示页面中,显示第一特效线条。In step 103, image special effect processing is repeatedly performed. The image special effect processing includes: converting the relative position according to the second position information of the key points of the face in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen; The trajectory points of each first absolute position generate a first special effect line. In the display page, the first special effect line is displayed.

本公开实施例中,由于在终端获取到用户输入的移动轨迹之后,与终端绘制第一特效线条之前的时间内,用户的人脸还可能会发生诸如倾斜、侧头等姿态变化。因此,终端在计算轨迹点相对人脸图像的相对位置时,与终端执行图像特效处理时,显示界面所显示的人脸图像中的人脸姿态可能并不相同。In the embodiment of the present disclosure, after the terminal acquires the movement trajectory input by the user and before the terminal draws the first special effect line, the user's face may also undergo posture changes such as tilt and side head. Therefore, when the terminal calculates the relative position of the trajectory point relative to the face image, and when the terminal performs image special effect processing, the face pose in the face image displayed on the display interface may be different.

基于此，终端需要根据其获取到用户输入的移动轨迹之后，实时所显示的人脸图像计算轨迹点在显示屏幕上的绝对位置，以生成并显示第一特效线条。相类似的，终端显示的特效线条(第一特效线条以及后续第二特效线条的统称)具有刷新频率。每次刷新的过程中均需要重复执行图像特效处理，以使得终端可以显示最近一次(最新)绘制的特效线条，且该特效线条相对每个人脸图像中人脸的位置均相同，从而在视觉上可以认为是跟随人脸移动的同一线条。其中，每个刷新间隙过程中用户的人脸也可能会发生姿态变化。因而采用终端实时显示的人脸图像的人脸关键点，执行图像特效处理。Based on this, the terminal needs to calculate the absolute position of the track points on the display screen according to the face image displayed in real time after acquiring the movement track input by the user, so as to generate and display the first special effect line. Similarly, the special effect lines displayed by the terminal (a collective term for the first special effect line and the subsequent second special effect line) have a refresh rate. In each refresh, the image special effect processing needs to be performed again, so that the terminal can display the most recently (latest) drawn special effect line, and the position of this special effect line relative to the face is the same in every face image, so that visually it can be regarded as the same line moving with the face. The user's face may also change its posture during each refresh interval. Therefore, the image special effect processing is performed by using the face key points of the face image displayed by the terminal in real time.

示例的,请参考图4,其示出了本公开实施例提供的一种显示页面的人脸图像示意图。如图4所示,其示出的人脸图像为用户输入移动轨迹时的人脸图像。图4中位于人脸图像中头部左上方,虚线绘制的形如耳朵的折线线条为用户输入的移动轨迹L0。P为移动轨迹L0上一点。请参考图5,其示出了本公开实施例提供的一种显示页面的人脸图像示意图。如图5所示,其示出的人脸图像为终端在执行图像特效处理过程中,显示页面当前时刻之后所显示的人脸图像。该人脸图像相对图4所示的人脸图像产生了人脸头部倾斜的姿态变化。图5中位于人脸图像中头部左上方的虚线所示的折线线条L1为:与图4所示的用户输入的移动轨迹对应的特效线条。特效线条上的P1点与移动轨迹上P点对应。其中,图4和图5为同一用户不同时刻的人脸图像。图4和图5所示的人脸图像中,人脸关键点均为同一个目标人脸关键点C、同一个第一人脸关键点A以及同一个第二人脸关键B。For example, please refer to FIG. 4 , which shows a schematic diagram of a face image of a display page provided by an embodiment of the present disclosure. As shown in FIG. 4 , the shown face image is the face image when the user inputs the movement track. In FIG. 4 , located at the upper left of the head in the face image, the broken line drawn by the dotted line in the shape of an ear is the movement trajectory L0 input by the user. P is a point on the moving trajectory L0. Please refer to FIG. 5 , which shows a schematic diagram of a face image of a display page provided by an embodiment of the present disclosure. As shown in FIG. 5 , the shown face image is the face image displayed by the terminal after the current moment of the display page in the process of executing the image special effect processing. With respect to the face image shown in FIG. 4 , the face image produces a posture change in which the head of the face is tilted. The broken line L1 shown by the dotted line at the upper left of the head in the face image in FIG. 5 is the special effect line corresponding to the movement track input by the user shown in FIG. 4 . The P1 point on the special effect line corresponds to the P point on the movement track. 4 and 5 are face images of the same user at different times. In the face images shown in FIG. 4 and FIG. 5 , the face key points are the same target face key point C, the same first face key point A, and the same second face key B.

在一些实施例中,终端可以重复执行图像特效处理,直至接收到特效关闭指令。从而实现第一特效线条的显示位置(即在显示屏幕上的绝对位置)会随着实时显示的人脸图像的显示位置的变化而变化,实现第一特效线条跟随人脸移动的特效,丰富特效显示效果。其中,特效关闭指令可以是在显示页面中执行设定操作后触发的。示例的,特效关闭指令可以是用户针对特效触发控件执行设定输入触发的。该特效触发控件也可以是该显示页面中特效按钮。设定输入可以包括针对特效触发控件的点击、长按、滑动或者语音等形式的输入。In some embodiments, the terminal may repeatedly perform image special effect processing until receiving the special effect closing instruction. Thereby, the display position of the first special effect line (that is, the absolute position on the display screen) will change with the change of the display position of the face image displayed in real time, so as to realize the special effect that the first special effect line moves with the face, and enrich the special effects. display effect. The special effect closing instruction may be triggered after performing a setting operation on the display page. For example, the special effect closing instruction may be triggered by the user performing a setting input for the special effect triggering control. The special effect triggering control may also be a special effect button in the display page. The setting input may include input in the form of click, long press, swipe, or voice for the special effect trigger control.
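
A hypothetical per-frame loop can make this repetition concrete: the relative positions stored in step 102 are re-projected with the face key points detected in the newest displayed face image, so the drawn line keeps following the face until the special effect close instruction arrives. All callables here are placeholders rather than a real rendering or face-tracking API.

    def run_effect(relative_points, get_frame_keypoints, project, draw_polyline, effect_closed):
        # relative_points: relative positions Q of the track points, computed once in step 102
        # project(q, a, b, c): converts a relative position back to a screen position (second formula)
        while not effect_closed():                   # repeat until the special effect close instruction
            a, b, c = get_frame_keypoints()          # key points A, B, C of the currently displayed face image
            absolute = [project(q, a, b, c) for q in relative_points]
            draw_polyline(absolute)                  # connect the points in input order to form the first special effect line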

其中,图像特效处理过程包括以下步骤:Wherein, the image special effect processing process includes the following steps:

根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置;Convert the relative position according to the second position information of the key point of the face in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen;

连接位于各第一绝对位置的轨迹点,生成第一特效线条;Connect the track points located at the first absolute positions to generate the first special effect line;

在显示页面中,显示第一特效线条。In the display page, the first special effect line is displayed.

在一些实施例中,根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置。In some embodiments, the relative position is converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen.

本公开实施例中,终端可以获取显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息。在获取到第二位置信息之后,终端可以根据人脸关键点的第二位置信息对各轨迹点相对人脸图像的相对位置进行转换,得到各轨迹点对应在显示屏幕上的第一绝对位置。其中,获取的该人脸图像中的人脸关键点与步骤102中,至少两个人脸关键点相同。在一些实施例中,各轨迹点在显示屏幕上的第一绝对位置可以采用轨迹点在显示屏幕的像素坐标系中的坐标表征。In the embodiment of the present disclosure, the terminal may acquire the second position information of the key points of the face in the face image displayed on the display page after the current moment. After acquiring the second position information, the terminal can convert the relative position of each track point relative to the face image according to the second position information of the face key point to obtain the first absolute position corresponding to each track point on the display screen. Wherein, the acquired face key points in the face image are the same as at least two face key points in step 102 . In some embodiments, the first absolute position of each track point on the display screen may be represented by coordinates of the track point in the pixel coordinate system of the display screen.

在一些实施例中,如图6所示,终端根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置的过程可以包括以下步骤1031至步骤1032。In some embodiments, as shown in FIG. 6 , the terminal converts the relative position according to the second position information of the key points of the face in the face image displayed on the display page after the current moment, and obtains each track point on the display screen. The process of the first absolute position may include the following steps 1031 to 1032.

在步骤1031中,针对各轨迹点,根据第一人脸关键点和第二人脸关键点的第二位置信息,确定人脸图像中当前人脸姿态的第二旋转矩阵以及第二缩放矩阵。In step 1031, for each track point, a second rotation matrix and a second scaling matrix of the current face pose in the face image are determined according to the second position information of the first face key point and the second face key point.

本公开实施例中,第二旋转矩阵可以表示人脸图像中当前人脸姿态的旋转姿态信息。第二缩放矩阵可以表示人脸图像中当前人脸姿态的缩放姿态信息。In the embodiment of the present disclosure, the second rotation matrix may represent the rotation posture information of the current face posture in the face image. The second scaling matrix may represent the scaling pose information of the current face pose in the face image.

在一些实施例中，在人脸关键点的第二位置信息以及轨迹点的轨迹位置信息均为在终端的显示屏幕上的绝对坐标的情况下，终端针对至少一个轨迹点中各轨迹点，根据第一人脸关键点和第二人脸关键点的第二位置信息，确定人脸图像中当前人脸姿态的第二旋转矩阵以及第二缩放矩阵的过程可以包括以下步骤：In some embodiments, in the case where the second position information of the face key points and the track position information of the track points are both absolute coordinates on the display screen of the terminal, the process in which the terminal determines, for each of the at least one track point, the second rotation matrix and the second scaling matrix of the current face pose in the face image according to the second position information of the first face key point and the second face key point may include the following steps:

根据第一人脸关键点和第二人脸关键点的第二位置信息，以及第二长度，得到第二旋转矩阵，第二长度为第一人脸关键点指向第二人脸关键点的第二向量的长度；According to the second position information of the first face key point and the second face key point, and the second length, the second rotation matrix is obtained, where the second length is the length of a second vector pointing from the first face key point to the second face key point;

根据第二旋转矩阵、第二缩放矩阵、目标人脸关键点的第二位置信息以及相对位置,得到轨迹点的第一绝对位置。The first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information and the relative position of the target face key point.

在一些实施例中，根据第一人脸关键点和第二人脸关键点的第二位置信息，以及第二长度，得到第二旋转矩阵，第二长度为第一人脸关键点指向第二人脸关键点的第二向量的长度。In some embodiments, the second rotation matrix is obtained according to the second position information of the first face key point and the second face key point and the second length, where the second length is the length of the second vector pointing from the first face key point to the second face key point.

本公开实施例中,终端可以根据第一人脸关键点和第二人脸关键点的第二位置信息,得到第一人脸关键点指向第二人脸关键点的第二向量。确定第二向量的第二长度。在确定第二长度之后,可以根据第一人脸关键点的第二位置信息、第二人脸关键点的第二位置信息以及第二长度,得到第二旋转矩阵。In this embodiment of the present disclosure, the terminal may obtain a second vector pointing from the first face key point to the second face key point according to the second position information of the first face key point and the second face key point. A second length of the second vector is determined. After the second length is determined, a second rotation matrix may be obtained according to the second position information of the first face key point, the second position information of the second face key point, and the second length.

示例的,假设在终端获取用户输入的移动轨迹之后,到终端执行图像特效处理,生成并显示第一特效线条之前,用户的头部发生了姿态变化。此时,终端在显示页面中所显示的人脸图像的人脸位置发生了改变。例如,在终端获取用户输入的移动轨迹之后,到终端执行图像特效处理,生成并显示第一特效线条之前,用户的头部从图4所示的姿态,变化为图5所示的姿态。For example, it is assumed that after the terminal acquires the movement trajectory input by the user, the terminal performs image special effect processing, and before the first special effect line is generated and displayed, the posture of the user's head changes. At this time, the face position of the face image displayed by the terminal on the display page is changed. For example, after the terminal acquires the movement track input by the user, and before the terminal performs image special effect processing and generates and displays the first special effect line, the user's head changes from the posture shown in FIG. 4 to the posture shown in FIG. 5 .

继续以步骤1021中假设的人脸关键点为例，针对假设的轨迹点P进行示意性说明。终端根据第一人脸关键点A的第二位置信息(x_a2, y_a2)，以及第二人脸关键点B的第二位置信息(x_b2, y_b2)，得到第二向量为(x_a2 - x_b2, y_a2 - y_b2)，并确定第二向量的第二长度|AB| = √((x_a2 - x_b2)² + (y_a2 - y_b2)²)。根据第一人脸关键点A的第二位置信息、第二人脸关键点B的第二位置信息以及第二长度|AB|，得到第二旋转矩阵M_r2，即根据(x_a2 - x_b2)/|AB|与(y_a2 - y_b2)/|AB|构成的二维旋转矩阵。Continuing to take the face key points assumed in step 1021 as an example, the assumed track point P is used for schematic illustration. According to the second position information (x_a2, y_a2) of the first face key point A and the second position information (x_b2, y_b2) of the second face key point B, the terminal obtains the second vector as (x_a2 - x_b2, y_a2 - y_b2), and determines the second length of the second vector, |AB| = √((x_a2 - x_b2)² + (y_a2 - y_b2)²). According to the second position information of the first face key point A, the second position information of the second face key point B, and the second length |AB|, the second rotation matrix M_r2 is obtained, that is, the two-dimensional rotation matrix formed from (x_a2 - x_b2)/|AB| and (y_a2 - y_b2)/|AB|.

其中，第二旋转矩阵M_r2可以用于对轨迹点相对位置进行旋转处理，即根据人脸图像中当前人脸姿态的旋转姿态信息，对轨迹点相对位置进行旋转处理。The second rotation matrix M_r2 can be used to rotate the relative positions of the track points, that is, to rotate the relative positions of the track points according to the rotation posture information of the current face pose in the face image.

在一些实施例中,根据第一人脸关键点和第二人脸关键点连线的参考长度,以及第二长度,得到第二缩放矩阵。参考长度为针对人脸图像中处于正视姿态的人脸设定的第二长度。In some embodiments, the second scaling matrix is obtained according to the reference length of the line connecting the first face key point and the second face key point, and the second length. The reference length is the second length set for the face in the face-up posture in the face image.

本公开实施例中，针对人脸图像中处于正视姿态的人脸设定的第一长度和第二长度相等。示例的，继续以步骤1021中假设的人脸关键点为例，针对假设的轨迹点P进行示意性说明。第二向量的第二长度为|AB|。根据第一人脸关键点和第二人脸关键点连线的参考长度D，以及第二长度|AB|，得到第二缩放矩阵M_s2，即根据参考长度D与第二长度|AB|确定缩放比例的缩放矩阵。In the embodiment of the present disclosure, the first length and the second length set for a face in a front-facing pose in the face image are equal. Illustratively, the face key points assumed in step 1021 are still taken as an example, and the assumed track point P is used for schematic illustration. The second length of the second vector is |AB|. According to the reference length D of the line connecting the first face key point and the second face key point, and the second length |AB|, the second scaling matrix M_s2 is obtained, that is, a scaling matrix whose scaling ratio is determined from the reference length D and the second length |AB|.

其中,第二缩放矩阵M s2可以用于对轨迹点相对位置进行缩放处理转换,即根据人脸图像中当前人脸姿态的缩放姿态信息,对轨迹点相对位置进行设定比例的缩放处理转换。该设定比例可以为D:1。在一些实施例中,D可以为100。 The second scaling matrix M s2 can be used to perform scaling processing and conversion on the relative position of the trajectory points, that is, performing scaling processing and conversion on the relative position of the trajectory points according to the scaling posture information of the current face posture in the face image. The set ratio may be D:1. In some embodiments, D may be 100.

本公开实施例中，由于第一人脸关键点和第二人脸关键点的连线，其倾斜角度可以较好的反映显示页面所显示的人脸图像中人脸的旋转角度。因此，根据第一人脸关键点指向第二人脸关键点的第二向量的第二长度，确定的用于指示人脸图像中当前人脸的旋转姿态信息的第二旋转矩阵的准确性较高。且同时利用第一人脸关键点和第二人脸关键点的连线在人脸图像中处于正视姿态的人脸的长度，以及该连线在当前人脸图像中的真实长度，可以在重复利用第一人脸关键点和第二人脸关键点的连线的相关信息的基础上，确定用于指示人脸图像中当前人脸的缩放姿态信息的第二缩放矩阵，减少终端计算量。In the embodiment of the present disclosure, since the inclination angle of the line connecting the first face key point and the second face key point can well reflect the rotation angle of the face in the face image displayed on the display page, the second rotation matrix determined according to the second length of the second vector pointing from the first face key point to the second face key point, which is used to indicate the rotation posture information of the current face in the face image, has high accuracy. Moreover, by using both the length that the line connecting the first face key point and the second face key point has for a face in a front-facing pose in the face image and the real length of this line in the current face image, the second scaling matrix used to indicate the scaling posture information of the current face in the face image can be determined on the basis of reusing the information about this line, which reduces the amount of computation of the terminal.

在步骤1032中,根据第二旋转矩阵、第二缩放矩阵、目标人脸关键点的第二位置信息以及相对位置,得到轨迹点的第一绝对位置。In step 1032, the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information and the relative position of the target face key point.

在一些实施例中，终端根据第二旋转矩阵、第二缩放矩阵、目标人脸关键点的第二位置信息以及相对位置，得到轨迹点的第一绝对位置的过程可以包括：终端根据第二旋转矩阵、第二缩放矩阵、目标人脸关键点的第二位置信息、相对位置以及第二公式，得到轨迹点的第一绝对位置。第二公式包括：In some embodiments, the process in which the terminal obtains the first absolute position of the track point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position may include: the terminal obtains the first absolute position of the track point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position, and a second formula. The second formula includes:

R = M_r2 · M_s2 · (x_q, y_q)^T + (x_c, y_c)^T;

其中，R表示轨迹点在显示屏幕上的第一绝对位置、M_r2表示第二旋转矩阵、M_s2表示第二缩放矩阵、(x_q, y_q)表示轨迹点相对人脸图像的相对位置、(x_c, y_c)表示目标人脸关键点的第二位置信息，以及T表示转置处理。Here, R represents the first absolute position of the track point on the display screen, M_r2 represents the second rotation matrix, M_s2 represents the second scaling matrix, (x_q, y_q) represents the relative position of the track point with respect to the face image, (x_c, y_c) represents the second position information of the target face key point, and T represents the transpose operation.
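
A minimal sketch of the second formula follows. It assumes that M_r2 applies the tilt of the current face (the inverse of the rotation used when the relative position was computed) and that M_s2 scales by |AB|/D, the inverse of the D/|AB| factor, which keeps the conversion consistent with the remark below that the two positions coincide when the face pose has not changed; these inverse forms are assumptions, since the matrices themselves are given in the original as formula images.

    import numpy as np

    D = 100.0  # assumed reference length

    def relative_to_screen(q, a, b, c):
        q, a, b, c = (np.asarray(v, float) for v in (q, a, b, c))
        ab = a - b
        length = float(np.hypot(ab[0], ab[1]))       # second length |AB| in the current frame
        cos_t, sin_t = ab[0] / length, ab[1] / length
        m_r2 = np.array([[cos_t, -sin_t],
                         [sin_t, cos_t]])            # assumed inverse of the rotation in the first formula
        m_s2 = (length / D) * np.eye(2)              # assumed inverse of the D/|AB| scaling
        return m_r2 @ m_s2 @ q + c                   # second formula: R = M_r2 · M_s2 · (x_q, y_q)^T + (x_c, y_c)^T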

这样,由于第二旋转矩阵可以反映人脸图像中当前人脸的旋转姿态信息。第二缩放矩阵可以反映 人脸图像中当前人脸的缩放姿态信息。因此,采用第二公式确定轨迹点的第一绝对位置,可以不仅使得轨迹点的第一绝对位置跟随人脸图像的显示位置的变化而变化,还会因人脸图像中当前人脸的旋转姿态信息以及缩放姿态信息的变化而变化。进而使得连接各第一绝对位置的轨迹点,生成的第一特效线条不仅可以跟随人脸图像中的人脸移动,还可以实现跟随人脸图像中的人脸旋转和缩放,丰富特效显示效果。In this way, since the second rotation matrix can reflect the rotation posture information of the current face in the face image. The second scaling matrix can reflect the scaling posture information of the current face in the face image. Therefore, using the second formula to determine the first absolute position of the track point can not only make the first absolute position of the track point change with the change of the display position of the face image, but also change due to the rotation posture of the current face in the face image. information as well as the zoom pose information. In this way, the generated first special effect lines can not only follow the movement of the human face in the face image, but also can rotate and zoom following the human face in the face image, thereby enriching the special effect display effect by connecting the trajectory points of the first absolute positions.

需要说明的是,若终端在执行前述步骤102时其所显示的人脸图像中人脸姿态与执行步骤103时其所显示的人脸图像中人脸姿态发生变化时,则针对同一轨迹点,步骤102得到的轨迹点相对人脸图像的相对位置与步骤103得到的轨迹点相对显示屏幕的绝对位置不同。若终端在执行前述步骤102时其所显示的人脸图像中人脸姿态与执行步骤103时其所显示的人脸图像中人脸姿态未发生变化时,则针对同一轨迹点,步骤102得到的轨迹点相对人脸图像的相对位置与步骤103得到的轨迹点相对显示屏幕的绝对位置相同。即二者位置重合。It should be noted that, if the facial posture in the face image displayed by the terminal when performing the aforementioned step 102 changes from the facial posture in the facial image displayed by the terminal when performing step 103, then for the same trajectory point, The relative position of the trajectory point obtained in step 102 relative to the face image is different from the absolute position of the trajectory point obtained in step 103 relative to the display screen. If the facial posture in the face image displayed by the terminal when performing the aforementioned step 102 does not change from the facial posture in the facial image displayed by the terminal when performing the step 103, then for the same trajectory point, the result obtained in the step 102 The relative position of the track point relative to the face image is the same as the absolute position of the track point obtained in step 103 relative to the display screen. That is, the two positions coincide.

本公开实施例中，根据第二旋转矩阵、第二缩放矩阵、目标人脸关键点的第二位置信息以及相对位置，得到轨迹点的第一绝对位置的方案，由于第二旋转矩阵可以反映人脸图像中当前人脸的旋转姿态信息。第二缩放矩阵可以反映人脸图像中当前人脸的缩放姿态信息。因此，采用第二旋转矩阵、第二缩放矩阵以及目标人脸关键点的第二位置信息将轨迹点的相对位置，转换成轨迹点的第一绝对位置，可以不仅使得轨迹点的第一绝对位置跟随人脸图像的显示位置的变化而变化，还会因人脸图像中当前人脸的旋转姿态信息以及缩放姿态信息的变化而变化。进而使得连接各第一绝对位置的轨迹点，生成的第一特效线条不仅可以跟随人脸图像中的人脸移动，还可以实现跟随人脸图像中的人脸旋转和缩放，丰富特效显示效果。In the embodiment of the present disclosure, in the solution of obtaining the first absolute position of the track point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position, the second rotation matrix can reflect the rotation posture information of the current face in the face image, and the second scaling matrix can reflect the scaling posture information of the current face in the face image. Therefore, by converting the relative position of the track point into the first absolute position of the track point with the second rotation matrix, the second scaling matrix, and the second position information of the target face key point, the first absolute position of the track point changes not only with the change of the display position of the face image, but also with the change of the rotation posture information and the scaling posture information of the current face in the face image. Consequently, the first special effect line generated by connecting the track points located at the first absolute positions can not only follow the movement of the face in the face image, but also rotate and scale with the face in the face image, which enriches the special effect display effect.

在一些实施例中,连接位于各第一绝对位置的轨迹点,生成第一特效线条。In some embodiments, the trajectory points located at the first absolute positions are connected to generate a first special effect line.

在一些实施例中,终端可以根据用户输入的移动轨迹中各轨迹点的排布顺序,连接各轨迹点分别对应的各第一绝对位置的轨迹点,生成第一特效线条。In some embodiments, the terminal may generate the first special effect line by connecting the trajectory points of the first absolute positions corresponding to the trajectory points according to the arrangement order of the trajectory points in the movement trajectory input by the user.

示例的,移动轨迹包括按序排列的轨迹点X1、轨迹点X2以及轨迹点X3。轨迹点X1的第一绝对位置为Y1。轨迹点X2的第一绝对位置为Y2。轨迹点X3的第一绝对位置为Y3。终端按照轨迹点X1、轨迹点X2以及轨迹点X3的排序顺序,依次连接位于第一绝对位置为Y1的轨迹点、位于第一绝对位置为Y2的轨迹点以及位于第一绝对位置为Y3的轨迹点,生成第一特效线条。Exemplarily, the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence. The first absolute position of the trajectory point X1 is Y1. The first absolute position of the trajectory point X2 is Y2. The first absolute position of the trajectory point X3 is Y3. The terminal sequentially connects the trajectory point located at the first absolute position Y1, the trajectory point located at the first absolute position Y2 and the trajectory located at the first absolute position Y3 according to the sorting order of the trajectory point X1, the trajectory point X2 and the trajectory point X3. Click to generate the first special effect line.
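
For illustration only, the ordering described in this example can be expressed as a small helper that turns the projected points, kept in the order in which the user drew them, into the line segments of the first special effect line; the function name is hypothetical.

    def build_segments(points_in_draw_order):
        # e.g. [Y1, Y2, Y3] -> [(Y1, Y2), (Y2, Y3)]: consecutive pairs form the segments of the effect line
        return list(zip(points_in_draw_order[:-1], points_in_draw_order[1:]))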

在一些实施例中,在显示页面中,显示第一特效线条。In some embodiments, in the display page, the first special effect line is displayed.

本公开实施例中,终端可以在其当前所显示的显示页面中,显示生成的第一特效线条。In this embodiment of the present disclosure, the terminal may display the generated first special effect line on the display page currently displayed by the terminal.

本公开实施例中，可以通过根据显示页面在当前时刻所显示人脸图像中，至少两个人脸关键点的第一位置信息，以及用户在显示页面中输入的移动轨迹中至少一个轨迹点的轨迹位置信息，确定各轨迹点相对人脸图像的相对位置。以便重复执行根据显示页面在当前时刻之后所显示人脸图像中，人脸关键点的第二位置信息对相对位置进行转换，得到各轨迹点在显示屏幕上的第一绝对位置，并在显示页面中，显示位于各第一绝对位置的轨迹点连接成的第一特效线条的过程。In the embodiments of the present disclosure, the relative position of each track point with respect to the face image may be determined according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the track position information of at least one track point in the movement track input by the user on the display page, so that the following process can be performed repeatedly: converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each track point on the display screen, and displaying, in the display page, the first special effect line formed by connecting the track points located at the first absolute positions.

上述技术方案中，第一特效线条是根据用户输入的移动轨迹绘制的，实现了用户可以自主绘制特效。并且，在获取移动轨迹中各轨迹点与当前人脸图像的相对位置之后，可以采用显示页面实时所显示的人脸图像中，人脸关键点的位置信息以及各轨迹点相对位置，确定各轨迹点在显示屏幕上的第一绝对位置，从而在连接位于各第一绝对位置的轨迹点后生成并显示第一特效线条。这样，生成的第一特效线条的显示位置会随着显示页面实时所显示的人脸图像的显示位置的变化而变化，实现了第一特效线条跟随人脸移动的特效，丰富特效显示效果。In the above technical solution, the first special effect line is drawn according to the movement track input by the user, so that the user can draw special effects independently. Moreover, after the relative positions of the track points in the movement track with respect to the current face image are obtained, the position information of the face key points in the face image displayed on the display page in real time and the relative positions of the track points can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after the track points located at the first absolute positions are connected. In this way, the display position of the generated first special effect line changes as the display position of the face image displayed on the display page in real time changes, realizing the special effect that the first special effect line moves with the face and enriching the special effect display effect.

在一些实施例中，终端不仅可以根据用户输入的移动轨迹，绘制该移动轨迹对应的第一特效线条，还可以根据第一特效线条绘制与该第一特效线条对称的第二特效线条。第二特效线条与第一特效线条以人脸图像中人脸为基准左右对称。例如，如图5所示，终端不仅可以绘制图5中位于人脸图像中头部左上方的虚线所示的形如耳朵的特效线条L1。还可以绘制图5中位于人脸图像中头部右上方的虚线所示的形如耳朵的特效线条L2。特效线条L1与特效线条L2以人脸图像中人脸为基准左右对称。特效线条L1上的P1点与特效线条L2上的P2点以人脸图像中人脸为基准左右对称。则本公开实施例中，图像特效处理还可以包括以下：In some embodiments, the terminal may not only draw, according to the movement track input by the user, the first special effect line corresponding to the movement track, but may also draw, according to the first special effect line, a second special effect line symmetrical to the first special effect line. The second special effect line and the first special effect line are left-right symmetrical with the face in the face image as the reference. For example, as shown in FIG. 5, the terminal can draw not only the ear-shaped special effect line L1 shown by the dotted line at the upper left of the head in the face image in FIG. 5, but also the ear-shaped special effect line L2 shown by the dotted line at the upper right of the head in the face image in FIG. 5. The special effect line L1 and the special effect line L2 are left-right symmetrical with the face in the face image as the reference. The point P1 on the special effect line L1 and the point P2 on the special effect line L2 are left-right symmetrical with the face in the face image as the reference. In this embodiment of the present disclosure, the image special effect processing may further include the following:

根据第一特效线条,生成与第一特效线条对称的第二特效线条,第二特效线条与第一特效线条以人脸图像中人脸为基准左右对称;According to the first special effect line, a second special effect line symmetrical with the first special effect line is generated, and the second special effect line and the first special effect line are left-right symmetrical based on the face in the face image;

在显示页面中显示第二特效线条。Display the second effect line in the display page.

在一些实施例中,根据第一特效线条,生成与第一特效线条对称的第二特效线条。第二特效线条 与第一特效线条以人脸图像中人脸为基准左右对称。In some embodiments, a second special effect line symmetrical to the first special effect line is generated according to the first special effect line. The second special effect line and the first special effect line are left and right symmetrical on the basis of the face in the face image.

本公开实施例中,终端可以根据第一特效线条生成以终端当前所显示的人脸图像中人脸为基准左右对称的第二特效线条。其中,终端根据第一特效线条生成与该第一特效线条对称的第二特效线条的实现方式有多种。本公开实施例以以下两种为例进行说明。In this embodiment of the present disclosure, the terminal may generate, according to the first special effect line, a left-right symmetrical second special effect line based on the face in the face image currently displayed by the terminal. There are various implementations for the terminal to generate a second special effect line symmetrical to the first special effect line according to the first special effect line. The embodiments of the present disclosure are described by taking the following two examples as examples.

第一种实现方式,如图7所示,终端根据第一特效线条生成与该第一特效线条对称的第二特效线条的过程可以包括以下步骤701至步骤702。In a first implementation manner, as shown in FIG. 7 , the process for the terminal to generate a second special effect line symmetrical to the first special effect line according to the first special effect line may include the following steps 701 to 702 .

在步骤701中,根据人脸关键点的第二位置信息,以及各对称点的相对位置,确定各对称点在显示屏幕上的第二绝对位置。In step 701, the second absolute position of each symmetrical point on the display screen is determined according to the second position information of the face key point and the relative position of each symmetrical point.

本公开实施例中，终端在执行上述步骤102，根据显示页面在当前时刻所显示人脸图像中至少两个人脸关键点的第一位置信息，以及移动轨迹中至少一个轨迹点的轨迹位置信息，确定各轨迹点相对人脸图像的相对位置之后，还可以根据各轨迹点相对人脸图像的相对位置，确定各轨迹点的对称点相对人脸图像的相对位置。其中，对称点与轨迹点以人脸为基准左右对称。In the embodiment of the present disclosure, after performing the above step 102 to determine the relative position of each track point with respect to the face image according to the first position information of the at least two face key points in the face image displayed on the display page at the current moment and the track position information of the at least one track point in the movement track, the terminal may further determine, according to the relative position of each track point with respect to the face image, the relative position of the symmetric point of each track point with respect to the face image. The symmetric point and the track point are left-right symmetrical with the face as the reference.

In some embodiments, when the relative position of a trajectory point with respect to the face image is expressed as relative coordinates with respect to the face image, and one axis of the two-dimensional coordinate system to which the relative coordinates belong is the symmetry axis of the face image, the process in which the terminal determines, according to the relative position of each trajectory point with respect to the face image, the relative position of the symmetrical point of each trajectory point may include the following: the terminal performs sign inversion on the coordinate value, in a first direction, of the relative coordinates of the trajectory point to obtain a processed coordinate value, the first direction being perpendicular to the symmetry axis of the face image; the relative position of the trajectory point is updated so that the coordinate value in the first direction of the updated relative position is the processed coordinate value; and the updated relative position is determined as the relative position of the symmetrical point. When the symmetrical point and the trajectory point are left-right symmetrical about the face, the first direction may be the direction perpendicular to the left-right symmetry axis of the face image.

For example, suppose the relative coordinates, determined by the terminal, of the trajectory point P1 with respect to the face image are (x_q, y_q), and the first direction is the direction perpendicular to the left-right symmetry axis of the face image, that is, the x-axis direction. The terminal performs sign inversion on the coordinate value in the first direction of the relative coordinates of the trajectory point to obtain the processed coordinate value -x_q, so the relative position of the symmetrical point is (-x_q, y_q).
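As an illustration only (not part of the published text), the sign inversion above can be sketched as follows in Python, assuming the relative coordinates are simple (x, y) pairs whose y-axis coincides with the face's left-right symmetry axis:

```python
def mirror_relative_position(relative_pos):
    """Return the relative position of the symmetrical point.

    Assumes the y-axis of the relative coordinate system is the face's
    left-right symmetry axis, so mirroring only inverts the sign of the
    x component (the first direction, perpendicular to that axis).
    """
    x_q, y_q = relative_pos
    return (-x_q, y_q)


# A trajectory point at relative coordinates (x_q, y_q) maps to (-x_q, y_q).
print(mirror_relative_position((0.3, 1.2)))  # -> (-0.3, 1.2)
```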

In the embodiments of the present disclosure, the terminal determines the second absolute position of each symmetrical point on the display screen according to the second position information of the face key points and the relative position of each symmetrical point. That is, the terminal converts the relative position of each symmetrical point according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the second absolute position of each symmetrical point on the display screen. For this conversion, reference may be made to step A of the foregoing image special effect processing, in which the relative positions are converted according to the second position information of the face key points in the face image displayed after the current moment to obtain the first absolute position of each trajectory point on the display screen; details are not repeated here.

In step 702, the symmetrical points located at the second absolute positions are connected to generate the second special effect line.

In some embodiments, the terminal may connect, in the arrangement order of the trajectory points in the movement trajectory input by the user, the symmetrical points that correspond to the trajectory points and are located at the respective second absolute positions, to generate the second special effect line.

For example, the movement trajectory includes trajectory points X1, X2 and X3 arranged in sequence. The second absolute position of the symmetrical point X4 corresponding to the trajectory point X1 is Y4, the second absolute position of the symmetrical point X5 corresponding to the trajectory point X2 is Y5, and the second absolute position of the symmetrical point X6 corresponding to the trajectory point X3 is Y6. Following the order of the trajectory points X1, X2 and X3, the terminal connects, in sequence, the symmetrical point located at the second absolute position Y4, the symmetrical point located at the second absolute position Y5 and the symmetrical point located at the second absolute position Y6, to generate the second special effect line.
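As a minimal sketch of this ordered connection (the drawing primitive actually used by the terminal is not specified in the text, so a plain polyline of consecutive segments is assumed here):

```python
from typing import List, Tuple

Point = Tuple[float, float]


def build_polyline(positions: List[Point]) -> List[Tuple[Point, Point]]:
    """Connect the given points, in order, into consecutive straight segments."""
    return [(positions[i], positions[i + 1]) for i in range(len(positions) - 1)]


# Symmetrical points X4, X5, X6 at second absolute positions Y4, Y5, Y6
# (coordinates are illustrative placeholders).
second_absolute_positions = [(120.0, 80.0), (140.0, 60.0), (165.0, 55.0)]
segments = build_polyline(second_absolute_positions)
# Rendering these segments in order yields the second special effect line.
```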

In a second implementation, as shown in FIG. 8, the process in which the terminal generates, according to the first special effect line, a second special effect line symmetrical to it may include the following steps 801 to 802.

In step 801, the second absolute position, on the display screen, of the symmetrical point of each trajectory point is determined according to the second position information of the face key points and the first absolute position of each trajectory point. The trajectory point and its symmetrical point are left-right symmetrical about the face.

In some embodiments, when the second position information of the face key points is the absolute coordinates of the face key points on the display screen, the process in which the terminal determines the second absolute position of the symmetrical point of each trajectory point on the display screen according to the second position information of the face key points and the first absolute position of each trajectory point may include the following steps:

obtaining, according to the second position information of the first face key point and the second face key point, a second vector pointing from the first face key point to the second face key point, and a third vector perpendicular to the second vector;

obtaining, according to the second position information of the target face key point and the first absolute position of the trajectory point, a fourth vector pointing from the target face key point to the trajectory point;

obtaining the second absolute position of the symmetrical point according to the second vector, the third vector, the fourth vector and the second position information of the target face key point.

In some embodiments, the second vector pointing from the first face key point to the second face key point, and the third vector perpendicular to the second vector, are obtained according to the second position information of the first face key point and the second face key point.

For example, continuing with the face key points assumed in step 1021 and the assumed trajectory point P: according to the second position information (x_a2, y_a2) of the first face key point A and the second position information (x_b2, y_b2) of the second face key point B, the terminal obtains the second vector as (x_a2 - x_b2, y_a2 - y_b2), and the third vector, perpendicular to the second vector, as (y_b2 - y_a2, x_a2 - x_b2).

In some embodiments, the fourth vector pointing from the target face key point to the trajectory point is obtained according to the second position information of the target face key point and the first absolute position of the trajectory point.

For example, continuing with the face key points assumed in step 1021 and the assumed trajectory point P, suppose the first absolute position of the trajectory point P is (x_r, y_r). According to the second position information (x_c, y_c) of the target face key point C and the first absolute position (x_r, y_r) of the trajectory point P, the terminal obtains the fourth vector, pointing from the target face key point to the trajectory point, as (x_r - x_c, y_r - y_c).
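The three vectors above follow directly from the stated coordinates. A minimal sketch, with variable names chosen for illustration rather than taken from the published text:

```python
def second_third_fourth_vectors(a2, b2, c2, p_abs):
    """Compute the second, third and fourth vectors as stated in the text.

    a2, b2, c2 -- second position information of face key points A, B and C
    p_abs      -- first absolute position (x_r, y_r) of the trajectory point P
    """
    x_a2, y_a2 = a2
    x_b2, y_b2 = b2
    x_c, y_c = c2
    x_r, y_r = p_abs

    second = (x_a2 - x_b2, y_a2 - y_b2)   # as given in the example above
    third = (y_b2 - y_a2, x_a2 - x_b2)    # perpendicular to the second vector
    fourth = (x_r - x_c, y_r - y_c)       # from key point C to trajectory point P
    return second, third, fourth
```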

In some embodiments, the second absolute position of the symmetrical point is obtained according to the second vector, the third vector, the fourth vector and the second position information of the target face key point.

In some embodiments, the terminal obtains the second absolute position of the symmetrical point according to the second vector, the third vector, the fourth vector, the second position information of the target face key point and a third formula. The third formula is reproduced only as a formula image (PCTCN2021134644-appb-000030) in the published text; in it, M denotes the second absolute position of the symmetrical point, the symbols shown as images PCTCN2021134644-appb-000031, PCTCN2021134644-appb-000032 and PCTCN2021134644-appb-000033 denote the second vector, the third vector and the fourth vector respectively, and (x_c, y_c) denotes the second position information of the target face key point.

In the embodiments of the present disclosure, the first implementation exploits the fact that the symmetrical point and the trajectory point are left-right symmetrical about the face: the relative coordinates of the symmetrical point are obtained directly by inverting the sign of the coordinate value, in the relative coordinates of the trajectory point, along the first direction perpendicular to the symmetry axis of the face image. Then, following steps similar to the foregoing step 103, the relative position of the symmetrical point is converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the second absolute position of the symmetrical point on the display screen. Compared with the second implementation, this simplifies the process of determining the second absolute position of the symmetrical point on the display screen and improves the efficiency of computing it.

In step 802, the symmetrical points located at the second absolute positions are connected to generate the second special effect line.

In some embodiments, the terminal may connect, in the arrangement order of the trajectory points in the movement trajectory input by the user, the symmetrical points that correspond to the trajectory points and are located at the respective second absolute positions, to generate the second special effect line.

For example, the movement trajectory includes trajectory points X1, X2 and X3 arranged in sequence. The second absolute position of the symmetrical point X4 corresponding to the trajectory point X1 is Y4, the second absolute position of the symmetrical point X5 corresponding to the trajectory point X2 is Y5, and the second absolute position of the symmetrical point X6 corresponding to the trajectory point X3 is Y6. Following the order of the trajectory points X1, X2 and X3, the terminal connects, in sequence, the symmetrical point located at the second absolute position Y4, the symmetrical point located at the second absolute position Y5 and the symmetrical point located at the second absolute position Y6, to generate the second special effect line.

In some embodiments, the second special effect line is displayed in the display page.

The terminal may display the generated second special effect line in the display page it currently displays.

In the embodiments of the present disclosure, the terminal may generate, according to the first special effect line, a second special effect line that is left-right symmetrical to the first special effect line about the face in the face image, and display the second special effect line in the display page. This implements the function of allowing the user to draw, in the face image, special effect lines that are left-right symmetrical about the face.

Moreover, after obtaining the relative positions of the symmetrical points of the trajectory points in the movement trajectory input by the user, the terminal determines the second absolute positions of the symmetrical points on the display screen using the second position information of the face key points in the face image displayed on the page in real time together with the relative positions of the symmetrical points, and connects the symmetrical points at the second absolute positions to generate the second special effect line. The display position of the generated second special effect line therefore changes as the display position of the face image displayed in real time on the display page changes, realizing the effect that the second special effect line follows the movement of the face. As a result, the drawn special effect lines that are left-right symmetrical about the face can all move with the face, enriching the special effect display.

In the embodiments of the present disclosure, the relative position of each trajectory point with respect to the face image can be determined according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory input by the user in the display page. The image special effect processing can then be performed repeatedly: the relative positions are converted according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each trajectory point on the display screen, and the first special effect line formed by connecting the trajectory points at the first absolute positions is displayed in the display page. In the above technical solution, the first special effect line is drawn according to the movement trajectory input by the user, which implements the function of letting the user draw special effects autonomously. Furthermore, after the relative positions of the trajectory points in the movement trajectory with respect to the current face image are obtained, the position information of the face key points in the face image displayed in real time on the display page, together with the relative positions of the trajectory points, can be used to determine the first absolute position of each trajectory point on the display screen, so that the first special effect line is generated and displayed after the trajectory points at the first absolute positions are connected. In this way, the display position of the generated first special effect line changes as the display position of the face image displayed in real time on the display page changes, realizing the effect that the first special effect line follows the movement of the face and enriching the special effect display.
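Putting the pieces together, the repeated image special effect processing can be viewed as a per-frame loop: the relative positions are computed once when the trajectory is captured, and every later frame converts them back to screen positions using that frame's face key points before redrawing. The sketch below assumes hypothetical helpers `to_relative` and `to_absolute` that wrap the first and second formulas; it is an outline of the flow, not the published implementation:

```python
def capture_effect(trajectory_points, key_points_now, to_relative):
    """Store the trajectory once, expressed relative to the face at capture time."""
    return [to_relative(p, key_points_now) for p in trajectory_points]


def render_frame(relative_points, key_points_frame, to_absolute, draw_polyline):
    """Re-anchor the stored trajectory to the face in the current frame and draw it."""
    absolute_points = [to_absolute(q, key_points_frame) for q in relative_points]
    draw_polyline(absolute_points)  # the first special effect line follows the face
```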

图9是根据一示例性实施例示出的一种图像处理装置的框图。参照图9,图像处理装置900包括:获取模块901、确定模块902以及图像特效处理模块903。Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to FIG. 9 , the image processing apparatus 900 includes: an acquisition module 901 , a determination module 902 and an image special effect processing module 903 .

获取模块901,用于响应于特效显示指令,获取在包括人脸图像的显示页面中用户输入的移动轨迹;an acquisition module 901, configured to acquire the movement track input by the user in the display page including the face image in response to the special effect display instruction;

确定模块902,用于根据显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及移动轨迹中至少一个轨迹点的轨迹位置信息,确定各轨迹点相对人脸图像的相对位置;The determining module 902 is configured to determine the relative position of each track point relative to the human face according to the first position information of at least two face key points and the track position information of at least one track point in the moving track in the face image displayed on the display page at the current moment. the relative position of the face image;

图像特效处理模块903,用于重复执行图像特效处理,图像特效处理包括:The image special effect processing module 903 is used to repeatedly perform image special effect processing, and the image special effect processing includes:

根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置;Convert the relative position according to the second position information of the key point of the face in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen;

连接位于各第一绝对位置的轨迹点,生成第一特效线条;Connect the track points located at the first absolute positions to generate the first special effect line;

在显示页面中,显示第一特效线条。In the display page, the first special effect line is displayed.

在一种可能实现方式中,至少两个人脸关键点包括:第一人脸关键点、第二人脸关键点以及目标人脸关键点,第一人脸关键点与第二人脸关键点关于目标人脸关键点对称,目标人脸关键点为人脸图像的对称轴上的任一人脸关键点。In a possible implementation manner, the at least two face key points include: a first face key point, a second face key point, and a target face key point, and the first face key point and the second face key point are related to The target face key point is symmetrical, and the target face key point is any face key point on the symmetry axis of the face image.

在一种可能实现方式中,确定模块902,还用于:In a possible implementation manner, the determining module 902 is further configured to:

针对各轨迹点,根据目标人脸关键点的第一位置信息和轨迹点的轨迹位置信息,确定轨迹点指向目标人脸关键点的平移向量;For each track point, according to the first position information of the target face key point and the track position information of the track point, determine the translation vector of the track point pointing to the target face key point;

根据第一人脸关键点和第二人脸关键点的第一位置信息,确定人脸图像中当前人脸姿态的第一旋转矩阵以及第一缩放矩阵;Determine the first rotation matrix and the first scaling matrix of the current face pose in the face image according to the first position information of the first face key point and the second face key point;

根据第一旋转矩阵、第一缩放矩阵以及平移向量,得到轨迹点相对人脸图像的相对位置。According to the first rotation matrix, the first scaling matrix and the translation vector, the relative position of the trajectory point relative to the face image is obtained.

在一种可能实现方式中,第一位置信息和轨迹位置信息均包括在显示屏幕上的绝对坐标,确定模块902,还用于:In a possible implementation manner, both the first position information and the track position information include absolute coordinates on the display screen, and the determining module 902 is further configured to:

The first rotation matrix is obtained according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of a first vector pointing from the first face key point to the second face key point;

The first scaling matrix is obtained according to a reference length of the line connecting the first face key point and the second face key point and the first length, where the reference length is the first length set for a face in a front-facing pose in the face image.
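The text specifies the inputs to the first rotation and scaling matrices (the direction of the first vector and the ratio between its length and the reference length) but not their explicit entries. A minimal sketch under that reading, using a standard 2D rotation matrix built from the unit direction of the first vector and a uniform scale factor; the exact matrices of the published method may differ:

```python
import math


def pose_matrices(a1, b1, reference_length):
    """Estimate rotation and scaling matrices from key points A and B.

    Assumes the rotation is given by the unit direction of the first vector
    A -> B and the scale by the ratio of its length to the reference length;
    this is an illustrative reading, not the published formula.
    """
    vx, vy = b1[0] - a1[0], b1[1] - a1[1]
    length = math.hypot(vx, vy)
    cos_t, sin_t = vx / length, vy / length
    rotation = [[cos_t, -sin_t],
                [sin_t, cos_t]]
    scale = length / reference_length
    scaling = [[scale, 0.0],
               [0.0, scale]]
    return rotation, scaling
```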

在一种可能实现方式中,确定模块902,还用于:In a possible implementation manner, the determining module 902 is further configured to:

The relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula. The first formula is reproduced only as a formula image (PCTCN2021134644-appb-000034) in the published text; in it, Q denotes the relative position, Ms_1 denotes the first scaling matrix, Mr_1 denotes the first rotation matrix, and the symbol shown as image PCTCN2021134644-appb-000035 denotes the translation vector.

在一种可能实现方式中,图像特效处理模块903,还用于:In a possible implementation manner, the image special effect processing module 903 is further configured to:

针对各轨迹点,根据第一人脸关键点和第二人脸关键点的第二位置信息,确定人脸图像中当前人脸姿态的第二旋转矩阵以及第二缩放矩阵;For each track point, according to the second position information of the first face key point and the second face key point, determine the second rotation matrix and the second scaling matrix of the current face posture in the face image;

根据第二旋转矩阵、第二缩放矩阵、目标人脸关键点的第二位置信息以及相对位置,得到轨迹点的第一绝对位置。The first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information and the relative position of the target face key point.

在一种可能实现方式中,第二位置信息包括在显示屏幕上的绝对坐标,图像特效处理模块903,还用于:In a possible implementation manner, the second position information includes absolute coordinates on the display screen, and the image special effect processing module 903 is further configured to:

The second rotation matrix is obtained according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of a second vector pointing from the first face key point to the second face key point;

The second scaling matrix is obtained according to the reference length of the line connecting the first face key point and the second face key point and the second length, where the reference length is the second length set for a face in a front-facing pose in the face image.

在一种可能实现方式中,图像特效处理模块903,还用于:In a possible implementation manner, the image special effect processing module 903 is further configured to:

The first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and a second formula. The second formula includes:

R = Mr_2 · Ms_2 · (x_q, y_q)^T + (x_c, y_c)^T;

where R denotes the first absolute position, Mr_2 denotes the second rotation matrix, Ms_2 denotes the second scaling matrix, (x_q, y_q) denotes the relative position, (x_c, y_c) denotes the second position information of the target face key point, and T denotes transposition.
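Since the second formula is given explicitly, it can be applied directly. A minimal sketch with 2x2 matrices represented as nested lists; variable names are illustrative:

```python
def mat_vec(m, v):
    """Multiply a 2x2 matrix (nested lists) by a 2D column vector (tuple)."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])


def first_absolute_position(mr2, ms2, relative_q, target_c):
    """R = Mr_2 * Ms_2 * (x_q, y_q)^T + (x_c, y_c)^T."""
    scaled = mat_vec(ms2, relative_q)   # Ms_2 * (x_q, y_q)^T
    rotated = mat_vec(mr2, scaled)      # Mr_2 * Ms_2 * (x_q, y_q)^T
    return (rotated[0] + target_c[0], rotated[1] + target_c[1])
```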

在一种可能实现方式中,图像特效处理还包括:In a possible implementation manner, the image special effect processing further includes:

根据第一特效线条,生成与第一特效线条对称的第二特效线条,第二特效线条与第一特效线条以人脸图像中人脸为基准左右对称;According to the first special effect line, a second special effect line symmetrical with the first special effect line is generated, and the second special effect line and the first special effect line are left-right symmetrical based on the face in the face image;

在显示页面中显示第二特效线条。Display the second effect line in the display page.

在一种可能实现方式中,确定模块902,还用于:In a possible implementation manner, the determining module 902 is further configured to:

根据各轨迹点相对人脸图像的相对位置,确定各轨迹点的对称点相对人脸图像的相对位置,对称点与轨迹点以人脸为基准左右对称;According to the relative position of each track point relative to the face image, determine the relative position of the symmetry point of each track point relative to the face image, and the symmetry point and the track point are left and right symmetrical with the face as the benchmark;

图像特效处理模块903,还用于:The image special effect processing module 903 is also used for:

根据人脸关键点的第二位置信息,以及各对称点的相对位置,确定各对称点在显示屏幕上的第二绝对位置;Determine the second absolute position of each symmetrical point on the display screen according to the second position information of the face key point and the relative position of each symmetrical point;

连接位于各第二绝对位置的对称点,生成第二特效线条。Connect the symmetrical points at each second absolute position to generate a second effect line.

在一种可能实现方式中,相对位置包括相对人脸图像的相对坐标,确定模块902,还用于:In a possible implementation manner, the relative position includes relative coordinates relative to the face image, and the determining module 902 is further configured to:

对轨迹点的相对坐标中第一方向的坐标值执行正负数转换处理,得到处理后的坐标值,第一方向与人脸图像的对称轴垂直;更新轨迹点的相对位置,使得更新后的相对位置中第一方向的坐标值为处理后的坐标值;确定更新后的相对位置为对称点的相对位置。Perform positive and negative conversion processing on the coordinate value of the first direction in the relative coordinates of the trajectory point to obtain the processed coordinate value, and the first direction is perpendicular to the symmetry axis of the face image; update the relative position of the trajectory point, so that the updated The coordinate value of the first direction in the relative position is the processed coordinate value; it is determined that the updated relative position is the relative position of the symmetrical point.

本公开实施例中,可以通过确定模块根据显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及用户在显示页面中输入的移动轨迹中至少一个轨迹点的轨迹位置信息,确定各轨迹点相对人脸图像的相对位置。以便使得图像特效处理模块重复执行根据显示页面在当前时刻之后所显示人脸图像中,人脸关键点的第二位置信息对相对位置进行转换,得到各轨迹点在显示屏幕上的第一绝对位置,并在显示页面中,显示位于各第一绝对位置的轨迹点连接成的第一特效线条的过程。上述技术方案中,第一特效线条是根据用户输入的移动轨迹绘制的,实现了用户自主绘制特效的功能。并且,在获取移动轨迹中各轨迹点与当前人脸图像的相对位置之后,可以采用显示页面实时所显示的人脸图像中,人脸关键点的位置信息以及各轨迹点相对位置,确定各轨迹点在显示屏幕上的第一绝对位置,从而在连接位于各第一绝对位置的轨迹点后生成并显示第一特效线条。这样,生成的第一特效线条的显示位置会随着显示页面实时所显示的人脸图像的显示位置的变化而变化,实现了第一特效线条跟随人脸移动的特效,丰富特效显示效果。In this embodiment of the present disclosure, the determination module can be used to determine the first position information of at least two face key points in the face image displayed on the display page at the current moment, and at least one trajectory point in the movement trajectory input by the user on the display page. The position information of the trajectory is determined, and the relative position of each trajectory point relative to the face image is determined. In order to make the image special effect processing module repeatedly perform the conversion according to the second position information of the face key points in the face image displayed on the display page after the current moment, the relative positions are converted to obtain the first absolute position of each track point on the display screen. , and on the display page, the process of displaying the first special effect line formed by connecting the trajectory points located at the first absolute positions. In the above technical solution, the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user to draw the special effect independently. Moreover, after obtaining the relative positions of each track point in the moving track and the current face image, the position information of the key points of the face and the relative position of each track point in the face image displayed in real time on the display page can be used to determine each track. point at the first absolute position on the display screen, so that the first special effect line is generated and displayed after connecting the track points located at the first absolute positions. In this way, the display position of the generated first special effect line will change with the change of the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face, and enriching the special effect display effect.

图10是根据一示例性实施例示出的一种电子设备的框图。该电子设备可以为终端。该电子设备1000可以是:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。电子设备1000还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment. The electronic device may be a terminal. The electronic device 1000 can be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, a moving picture expert compression standard Audio Layer 3), MP4 (Moving Picture Experts Group Audio Layer IV, a moving picture expert compression standard Audio Layer 4) Player, Laptop or Desktop. Electronic device 1000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.

通常,电子设备1000包括有:处理器1001和存储器1002。Generally, the electronic device 1000 includes: a processor 1001 and a memory 1002 .

处理器1001可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1001可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1001也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1001可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1001还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。The processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1001 can use at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array, programmable logic array) accomplish. The processor 1001 may also include a main processor and a coprocessor. The main processor is a processor used to process data in the wake-up state, also called CPU (Central Processing Unit, central processing unit); the coprocessor is A low-power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.

存储器1002可以包括一个或多个非易失性计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1002还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储 设备、闪存存储设备。在一些实施例中,存储器1002中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1001所执行以实现本申请中方法实施例提供的信息显示方法。Memory 1002 may include one or more non-volatile computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1001 to realize the information display provided by the method embodiments in this application. method.

在一些实施例中,电子设备1000还可包括有:外围设备接口1003和至少一个外围设备。处理器1001、存储器1002和外围设备接口1003之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1003相连。在一些实施例中,外围设备包括:射频电路1004、显示屏1005、摄像头1006、音频电路1007、定位组件1008和电源1009中的至少一种。In some embodiments, the electronic device 1000 may further include: a peripheral device interface 1003 and at least one peripheral device. The processor 1001, the memory 1002 and the peripheral device interface 1003 may be connected through a bus or a signal line. Each peripheral device can be connected to the peripheral device interface 1003 through a bus, a signal line or a circuit board. In some embodiments, the peripheral device includes at least one of a radio frequency circuit 1004 , a display screen 1005 , a camera 1006 , an audio circuit 1007 , a positioning component 1008 and a power supply 1009 .

外围设备接口1003可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1001和存储器1002。在一些实施例中,处理器1001、存储器1002和外围设备接口1003被集成在同一芯片或电路板上;在一些其他实施例中,处理器1001、存储器1002和外围设备接口1003中的任意一个或两个可以在单独的芯片或电路板上实现,本公开的实施例对此不加以限定。The peripheral device interface 1003 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1001 and the memory 1002 . In some embodiments, processor 1001, memory 1002, and peripherals interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one of processor 1001, memory 1002, and peripherals interface 1003 or The two may be implemented on a separate chip or circuit board, which is not limited by the embodiments of the present disclosure.

射频电路1004用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1004通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1004将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。在一些实施例中,射频电路1004包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1004可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:城域网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1004还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。The radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals. The radio frequency circuit 1004 communicates with the communication network and other communication devices through electromagnetic signals. The radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. In some embodiments, radio frequency circuitry 1004 includes: an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and the like. The radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity, wireless fidelity) networks. In some embodiments, the radio frequency circuit 1004 may further include a circuit related to NFC (Near Field Communication, short-range wireless communication), which is not limited in this application.

显示屏1005用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏1005是触摸显示屏时,显示屏1005还具有采集在显示屏1005的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1001进行处理。此时,显示屏1005还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏1005可以为一个,设置电子设备1000的前面板;在另一些实施例中,显示屏1005可以为至少两个,分别设置在电子设备1000的不同表面或呈折叠设计;在再一些实施例中,显示屏1005可以是柔性显示屏,设置在电子设备1000的弯曲表面上或折叠面上。甚至,显示屏1005还可以设置成非矩形的不规则图形,也即异形屏。显示屏1005可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。The display screen 1005 is used for displaying UI (User Interface, user interface). The UI can include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, the display screen 1005 also has the ability to acquire touch signals on or above the surface of the display screen 1005 . The touch signal can be input to the processor 1001 as a control signal for processing. At this time, the display screen 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, there may be one display screen 1005, which is arranged on the front panel of the electronic device 1000; in other embodiments, there may be at least two display screens 1005, which are respectively arranged on different surfaces of the electronic device 1000 or in a folded design. ; In still other embodiments, the display screen 1005 may be a flexible display screen, disposed on a curved surface or a folding surface of the electronic device 1000 . Even, the display screen 1005 can also be set as a non-rectangular irregular figure, that is, a special-shaped screen. The display screen 1005 can be prepared by using materials such as LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light emitting diode).

摄像头组件1006用于采集图像或视频。在一些实施例中,摄像头组件1006包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件1006还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。The camera assembly 1006 is used to capture images or video. In some embodiments, camera assembly 1006 includes a front-facing camera and a rear-facing camera. Usually, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, which are any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize the fusion of the main camera and the depth-of-field camera to realize the background blur function, the main camera It is integrated with the wide-angle camera to achieve panoramic shooting and VR (Virtual Reality, virtual reality) shooting functions or other integrated shooting functions. In some embodiments, the camera assembly 1006 may also include a flash. The flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.

音频电路1007可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器1001进行处理,或者输入至射频电路1004以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在电子设备1000的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器1001或射频电路1004的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路1007还可以包括耳机插孔。Audio circuitry 1007 may include a microphone and speakers. The microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or to the radio frequency circuit 1004 to realize voice communication. For the purpose of stereo acquisition or noise reduction, there may be multiple microphones, which are respectively disposed in different parts of the electronic device 1000 . The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert the electrical signal from the processor 1001 or the radio frequency circuit 1004 into sound waves. The loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for distance measurement and other purposes. In some embodiments, the audio circuit 1007 may also include a headphone jack.

定位组件1008用于定位电子设备1000的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件1008可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统、俄罗斯的格雷纳斯系统或欧盟的伽利略系统的定位组件。The positioning component 1008 is used to locate the current geographic location of the electronic device 1000 to implement navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System, global positioning system) of the United States, the Beidou system of China, the Grenas system of Russia, or the Galileo system of the European Union.

电源1009用于为电子设备1000中的各个组件进行供电。电源1009可以是交流电、直流电、一次性电池或可充电电池。当电源1009包括可充电电池时,该可充电电池可以支持有线充电或无线充电。该可充电电池还可以用于支持快充技术。Power supply 1009 is used to power various components in electronic device 1000 . The power source 1009 may be alternating current, direct current, disposable batteries or rechargeable batteries. When the power source 1009 includes a rechargeable battery, the rechargeable battery can support wired charging or wireless charging. The rechargeable battery can also be used to support fast charging technology.

在一些实施例中,电子设备1000还包括有一个或多个传感器1010。该一个或多个传感器1010包括但不限于:加速度传感器1011、陀螺仪传感器1012、压力传感器1013、指纹传感器1014、光学传 感器1015以及接近传感器1016。In some embodiments, the electronic device 1000 also includes one or more sensors 1010 . The one or more sensors 1010 include, but are not limited to, an acceleration sensor 1011, a gyro sensor 1012, a pressure sensor 1013, a fingerprint sensor 1014, an optical sensor 1015, and a proximity sensor 1016.

加速度传感器1011可以检测以电子设备1000建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器1011可以用于检测重力加速度在三个坐标轴上的分量。处理器1001可以根据加速度传感器1011采集的重力加速度信号,控制显示屏1005以横向视图或纵向视图进行用户界面的显示。加速度传感器1011还可以用于游戏或者用户的运动数据的采集。The acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the electronic device 1000 . For example, the acceleration sensor 1011 can be used to detect the components of the gravitational acceleration on the three coordinate axes. The processor 1001 can control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011 . The acceleration sensor 1011 can also be used for game or user movement data collection.

陀螺仪传感器1012可以检测电子设备1000的机体方向及转动角度,陀螺仪传感器1012可以与加速度传感器1011协同采集用户对电子设备1000的3D动作。处理器1001根据陀螺仪传感器1012采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。The gyroscope sensor 1012 can detect the body direction and rotation angle of the electronic device 1000 , and the gyroscope sensor 1012 can cooperate with the acceleration sensor 1011 to collect the 3D actions of the user on the electronic device 1000 . The processor 1001 can implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.

压力传感器1013可以设置在电子设备1000的侧边框和/或显示屏1005的下层。当压力传感器1013设置在电子设备1000的侧边框时,可以检测用户对电子设备1000的握持信号,由处理器1001根据压力传感器1013采集的握持信号进行左右手识别或快捷操作。当压力传感器1013设置在显示屏1005的下层时,由处理器1001根据用户对显示屏1005的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。The pressure sensor 1013 may be disposed on the side frame of the electronic device 1000 and/or the lower layer of the display screen 1005 . When the pressure sensor 1013 is disposed on the side frame of the electronic device 1000 , the user's holding signal of the electronic device 1000 can be detected, and the processor 1001 can perform left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 1013 . When the pressure sensor 1013 is disposed on the lower layer of the display screen 1005, the processor 1001 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1005. The operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.

指纹传感器1014用于采集用户的指纹,由处理器1001根据指纹传感器1014采集到的指纹识别用户的身份,或者,由指纹传感器1014根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器1001授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器1014可以被设置电子设备1000的正面、背面或侧面。当电子设备1000上设置有物理按键或厂商Logo时,指纹传感器1014可以与物理按键或厂商Logo集成在一起。The fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1014 may be provided on the front, back, or side of the electronic device 1000 . When the electronic device 1000 is provided with physical buttons or a manufacturer's logo, the fingerprint sensor 1014 can be integrated with the physical buttons or the manufacturer's logo.

光学传感器1015用于采集环境光强度。在一个实施例中,处理器1001可以根据光学传感器1015采集的环境光强度,控制显示屏1005的显示亮度。在一些实施例中,当环境光强度较高时,调高显示屏1005的显示亮度;当环境光强度较低时,调低显示屏1005的显示亮度。在另一个实施例中,处理器1001还可以根据光学传感器1015采集的环境光强度,动态调整摄像头组件1006的拍摄参数。The optical sensor 1015 is used to collect ambient light intensity. In one embodiment, the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015 . In some embodiments, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1015 .

接近传感器1016,也称距离传感器,通常设置在电子设备1000的前面板。接近传感器1016用于采集用户与电子设备1000的正面之间的距离。在一个实施例中,当接近传感器1016检测到用户与电子设备1000的正面之间的距离逐渐变小时,由处理器1001控制显示屏1005从亮屏状态切换为息屏状态;当接近传感器1016检测到用户与电子设备1000的正面之间的距离逐渐变大时,由处理器1001控制显示屏1005从息屏状态切换为亮屏状态。A proximity sensor 1016 , also called a distance sensor, is usually provided on the front panel of the electronic device 1000 . The proximity sensor 1016 is used to collect the distance between the user and the front of the electronic device 1000 . In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front of the electronic device 1000 is gradually decreasing, the processor 1001 controls the display screen 1005 to switch from the bright screen state to the off screen state; when the proximity sensor 1016 detects When the distance between the user and the front of the electronic device 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the off-screen state to the bright-screen state.

本领域技术人员可以理解,图10中示出的结构并不构成对电子设备1000的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。Those skilled in the art can understand that the structure shown in FIG. 10 does not constitute a limitation on the electronic device 1000, and may include more or less components than the one shown, or combine some components, or adopt different component arrangements.

在示例性实施例中,还提供了一种非易失性计算机可读存储介质,当该存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行上述各个方法实施例提供的图像处理方法。例如,该计算机可读存储介质可以是ROM(Read-Only Memory,只读内存)、RAM(Random Access Memory,随机存取存储器)、CD-ROM(Compact Disc Read-Only Memory,只读光盘)、磁带、软盘和光数据存储设备等。In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device can execute the methods provided by the above method embodiments. image processing methods. For example, the computer-readable storage medium can be ROM (Read-Only Memory, read-only memory), RAM (Random Access Memory, random access memory), CD-ROM (Compact Disc Read-Only Memory, read-only optical disk), Tape, floppy disk, and optical data storage devices, etc.

在示例性实施例中,还提供了一种计算机程序产品,包括计算机程序。该计算机程序被处理器执行时能够执行上述各个方法实施例提供的图像处理方法。In an exemplary embodiment, there is also provided a computer program product, including a computer program. When the computer program is executed by the processor, the image processing methods provided by the above method embodiments can be executed.

本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or techniques in the technical field not disclosed by the present disclosure . The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。It is to be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围.All the embodiments of the present disclosure can be implemented alone or in combination with other embodiments, which are all regarded as the protection scope required by the present disclosure.

Claims (35)

一种图像处理方法,其中,所述方法包括:An image processing method, wherein the method comprises: 响应于特效显示指令,获取在包括人脸图像的显示页面中用户输入的移动轨迹;In response to the special effect display instruction, acquiring the movement track input by the user in the display page including the face image; 根据所述显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及所述移动轨迹中至少一个轨迹点的轨迹位置信息,确定各所述轨迹点相对所述人脸图像的相对位置;According to the first position information of at least two face key points in the face image displayed on the display page at the current moment, and the trajectory position information of at least one trajectory point in the movement trajectory, determine the relative position of each trajectory point relative to all the trajectory points. relative position of the face image; 重复执行图像特效处理,所述图像特效处理包括:The image special effect processing is repeatedly performed, and the image special effect processing includes: 根据所述显示页面在所述当前时刻之后所显示人脸图像中,所述人脸关键点的第二位置信息对所述相对位置进行转换,得到各所述轨迹点在显示屏幕上的第一绝对位置;Convert the relative position according to the second position information of the key points of the face in the face image displayed on the display page after the current time, to obtain the first position of each track point on the display screen. absolute position; 连接位于各所述第一绝对位置的轨迹点,生成第一特效线条;Connect the trajectory points located at each of the first absolute positions to generate a first special effect line; 在所述显示页面中,显示所述第一特效线条。In the display page, the first special effect line is displayed. 根据权利要求1所述的方法,其中,所述至少两个人脸关键点包括:第一人脸关键点、第二人脸关键点以及目标人脸关键点,所述第一人脸关键点与所述第二人脸关键点关于所述目标人脸关键点对称,所述目标人脸关键点为所述人脸图像的对称轴上的任一人脸关键点。The method according to claim 1, wherein the at least two face key points include: a first face key point, a second face key point and a target face key point, and the first face key point is the same as the The second face key point is symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image. 根据权利要求2所述的方法,其中,所述根据所述显示页面在当前时刻所显示人脸图像中,至少两个人脸关键点的第一位置信息,以及所述移动轨迹中至少一个轨迹点的轨迹位置信息,确定每个所述轨迹点相对所述人脸图像的相对位置,包括:The method according to claim 2, wherein in the face image displayed at the current moment according to the display page, the first position information of at least two face key points and at least one track point in the movement track The position information of the trajectory, determine the relative position of each of the trajectory points relative to the face image, including: 针对各所述轨迹点,根据所述目标人脸关键点的第一位置信息和所述轨迹点的轨迹位置信息,确定所述轨迹点指向所述目标人脸关键点的平移向量;For each of the track points, according to the first position information of the target face key point and the track position information of the track point, determine the translation vector of the track point pointing to the target face key point; 根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,确定所述人脸图像中当前人脸姿态的第一旋转矩阵以及第一缩放矩阵;According to the first position information of the first face key point and the second face key point, determine the first rotation matrix and the first scaling matrix of the current face posture in the face image; 根据所述第一旋转矩阵、所述第一缩放矩阵以及所述平移向量,得到所述轨迹点相对所述人脸图像的相对位置。According to the first rotation matrix, the first scaling matrix and the translation vector, the relative position of the trajectory point relative to the face image is obtained. 
根据权利要求3所述的方法,其中,所述第一位置信息和所述轨迹位置信息均包括在所述显示屏幕上的绝对坐标,The method according to claim 3, wherein the first position information and the track position information both include absolute coordinates on the display screen, 所述根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,确定所述人脸图像中当前人脸的第一旋转矩阵以及第一缩放矩阵,包括:Determining the first rotation matrix and the first scaling matrix of the current face in the face image according to the first position information of the first face key point and the second face key point, including: 根据所述第一人脸关键点和所述第二人脸关键点的第一位置信息,以及第一长度,得到所述第一旋转矩阵,所述第一长度为所述第一人脸关键点指向所述第二人脸关键点的第一向量的长度;According to the first position information of the first face key point and the second face key point, and the first length, the first rotation matrix is obtained, and the first length is the first face key point to the length of the first vector of the second face key point; 根据所述第一人脸关键点和所述第二人脸关键点连线的参考长度,以及所述第一长度,得到所述第一缩放矩阵,所述参考长度为针对所述人脸图像中处于正视姿态的人脸设定的所述第一长度。According to the reference length of the connecting line between the first face key point and the second face key point, and the first length, the first scaling matrix is obtained, and the reference length is for the face image The first length set by the face in the facing posture. 根据权利要求3所述的方法,其中,所述根据所述第一旋转矩阵、所述第一缩放矩阵以及所述平移向量,得到所述轨迹点相对所述人脸图像的相对位置,包括:The method according to claim 3, wherein the obtaining the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector comprises: 根据所述第一缩放矩阵、所述第一旋转矩阵、所述平移向量以及第一公式,得到所述相对位置,所述第一公式包括:The relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, where the first formula includes:
[first formula, reproduced as an image in the original publication]

wherein Q denotes the relative position, Ms_1 denotes the first scaling matrix, Mr_1 denotes the first rotation matrix, and the remaining symbol denotes the translation vector.
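A minimal NumPy sketch of the relative-position computation in claims 3-5. The choice of the two eyes as the symmetric key-point pair, the nose tip as the target key point, and the exact matrix composition are assumptions, not part of the published claims; in particular, the first formula appears only as an image, so the sketch simply builds the transform whose inverse is the second formula of claim 8.

```python
import numpy as np

def pose_matrices(kp_left, kp_right, ref_len):
    """Rotation and scaling matrices of the current face pose, derived from a
    key-point pair that is left-right symmetric about the face's symmetry axis
    (for example the two eye centres). ref_len is the distance between the same
    pair on a front-facing reference face."""
    v = np.asarray(kp_right, float) - np.asarray(kp_left, float)  # first vector
    length = np.linalg.norm(v)                                     # first length
    c, s = v[0] / length, v[1] / length                            # in-plane face angle
    rot = np.array([[c, -s],
                    [s,  c]])                                      # Mr_1
    scale = np.eye(2) * (length / ref_len)                         # Ms_1
    return rot, scale

def to_relative(track_pt, target_kp, rot, scale):
    """Face-anchored coordinates Q of one screen-space trajectory point:
    take the offset from the target key point, then undo the face's rotation
    and scale so the stored value no longer depends on the current pose."""
    offset = np.asarray(track_pt, float) - np.asarray(target_kp, float)
    return np.linalg.inv(rot @ scale) @ offset                     # Q
```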
6. The method according to claim 2, wherein converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the first absolute position of each trajectory point on the display screen, comprises: for each trajectory point, determining, according to second position information of the first face key point and the second face key point, a second rotation matrix and a second scaling matrix of the current face posture in the face image; and obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, second position information of the target face key point and the relative position.

7. The method according to claim 6, wherein the second position information comprises absolute coordinates on the display screen, and determining the second rotation matrix and the second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point comprises: obtaining the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, the second length being a length of a second vector pointing from the first face key point to the second face key point; and obtaining the second scaling matrix according to a reference length of a line connecting the first face key point and the second face key point and the second length, the reference length being the second length set for a face in a front-facing posture in the face image.

8. The method according to claim 6, wherein obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point and the relative position comprises: obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and a second formula, the second formula comprising:

R = Mr_2 · Ms_2 · (x_q, y_q)^T + (x_c, y_c)^T;

wherein R denotes the first absolute position, Mr_2 denotes the second rotation matrix, Ms_2 denotes the second scaling matrix, (x_q, y_q) denotes the relative position, (x_c, y_c) denotes the second position information of the target face key point, and T denotes transposition.
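The second formula of claim 8 maps a stored relative position back to screen space using the pose matrices of the later frame. A short sketch, reusing `pose_matrices` and `to_relative` from the previous sketch; all key-point coordinates below are made up purely for illustration.

```python
import numpy as np

def to_absolute(q, target_kp, rot, scale):
    """Claim 8: R = Mr_2 * Ms_2 * (x_q, y_q)^T + (x_c, y_c)^T."""
    return rot @ scale @ np.asarray(q, float) + np.asarray(target_kp, float)

REF_LEN = 100.0
# Frame on which the stroke was drawn (illustrative eye / nose positions):
rot1, scale1 = pose_matrices((300, 400), (420, 410), REF_LEN)
q = to_relative((380, 330), (360, 460), rot1, scale1)      # stored once

# A later frame in which the face has moved, rotated and shrunk slightly:
rot2, scale2 = pose_matrices((310, 390), (400, 385), REF_LEN)
r = to_absolute(q, (355, 450), rot2, scale2)               # first absolute position this frame
print(np.round(r, 1))
```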
9. The method according to any one of claims 1-8, wherein the image special effect processing further comprises: generating, according to the first special effect line, a second special effect line symmetrical to the first special effect line, the second special effect line and the first special effect line being left-right symmetrical with respect to the face in the face image; and displaying the second special effect line on the display page.

10. The method according to claim 9, wherein the method further comprises: determining, according to the relative position of each trajectory point with respect to the face image, a relative position of a symmetry point of each trajectory point with respect to the face image, the symmetry point and the trajectory point being left-right symmetrical with respect to the face; and wherein generating, according to the first special effect line, the second special effect line symmetrical to the first special effect line comprises: determining a second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point; and connecting the symmetry points located at the second absolute positions to generate the second special effect line.

11. The method according to claim 10, wherein the relative position comprises relative coordinates with respect to the face image, and determining, according to the relative position of each trajectory point with respect to the face image, the relative position of the symmetry point of each trajectory point with respect to the face image comprises: performing sign inversion on a coordinate value in a first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, the first direction being perpendicular to the symmetry axis of the face image; updating the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and determining the updated relative position as the relative position of the symmetry point.
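Claims 10-11 obtain the mirrored line almost for free: because the relative coordinates are anchored on a key point lying on the face's symmetry axis, negating the coordinate along the direction perpendicular to that axis yields the symmetry point, which is then placed on screen with the same second formula. A short sketch, again assuming the helpers from the earlier sketches and assuming the relative frame's x axis is the direction perpendicular to the symmetry axis.

```python
import numpy as np

def mirror_relative(q):
    """Sign inversion of the first-direction (x) coordinate of a relative
    position, giving the symmetry point's relative position (claim 11)."""
    q = np.asarray(q, float)
    return np.array([-q[0], q[1]])

# Second special effect line: mirror every stored point, then reuse to_absolute
# with the same per-frame pose matrices to place and connect the symmetry points.
# mirrored = [to_absolute(mirror_relative(q), target_kp, rot2, scale2) for q in relative]
```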
12. An image processing apparatus, wherein the apparatus comprises: an acquisition module configured to, in response to a special effect display instruction, acquire a movement trajectory input by a user on a display page including a face image; a determining module configured to determine, according to first position information of at least two face key points in the face image displayed on the display page at a current moment and trajectory position information of at least one trajectory point in the movement trajectory, a relative position of each trajectory point with respect to the face image; and an image special effect processing module configured to repeatedly perform image special effect processing, the image special effect processing comprising: converting the relative positions according to second position information of the face key points in the face image displayed on the display page after the current moment, to obtain a first absolute position of each trajectory point on a display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line on the display page.

13. The apparatus according to claim 12, wherein the at least two face key points comprise a first face key point, a second face key point and a target face key point, the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on a symmetry axis of the face image.

14. The apparatus according to claim 13, wherein the determining module is further configured to: for each trajectory point, determine, according to the first position information of the target face key point and the trajectory position information of the trajectory point, a translation vector pointing from the trajectory point to the target face key point; determine, according to the first position information of the first face key point and the second face key point, a first rotation matrix and a first scaling matrix of a current face posture in the face image; and obtain the relative position of the trajectory point with respect to the face image according to the first rotation matrix, the first scaling matrix and the translation vector.

15. The apparatus according to claim 14, wherein the first position information and the trajectory position information each comprise absolute coordinates on the display screen, and the determining module is further configured to: obtain the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, the first length being a length of a first vector pointing from the first face key point to the second face key point; and obtain the first scaling matrix according to a reference length of a line connecting the first face key point and the second face key point and the first length, the reference length being the first length set for a face in a front-facing posture in the face image.
16. The apparatus according to claim 14, wherein the determining module is further configured to: obtain the relative position according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, the first formula comprising:
[first formula, reproduced as an image in the original publication]

wherein Q denotes the relative position, Ms_1 denotes the first scaling matrix, Mr_1 denotes the first rotation matrix, and the remaining symbol denotes the translation vector.
17. The apparatus according to claim 13, wherein the image special effect processing module is further configured to: for each trajectory point, determine, according to second position information of the first face key point and the second face key point, a second rotation matrix and a second scaling matrix of the current face posture in the face image; and obtain the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, second position information of the target face key point and the relative position.

18. The apparatus according to claim 17, wherein the second position information comprises absolute coordinates on the display screen, and the image special effect processing module is further configured to: obtain the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, the second length being a length of a second vector pointing from the first face key point to the second face key point; and obtain the second scaling matrix according to a reference length of a line connecting the first face key point and the second face key point and the second length, the reference length being the second length set for a face in a front-facing posture in the face image.

19. The apparatus according to claim 17, wherein the image special effect processing module is further configured to: obtain the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and a second formula, the second formula comprising:

R = Mr_2 · Ms_2 · (x_q, y_q)^T + (x_c, y_c)^T;

wherein R denotes the first absolute position, Mr_2 denotes the second rotation matrix, Ms_2 denotes the second scaling matrix, (x_q, y_q) denotes the relative position, (x_c, y_c) denotes the second position information of the target face key point, and T denotes transposition.

20. The apparatus according to any one of claims 12-19, wherein the image special effect processing further comprises: generating, according to the first special effect line, a second special effect line symmetrical to the first special effect line, the second special effect line and the first special effect line being left-right symmetrical with respect to the face in the face image; and displaying the second special effect line on the display page.
21. The apparatus according to claim 20, wherein the determining module is further configured to: determine, according to the relative position of each trajectory point with respect to the face image, a relative position of a symmetry point of each trajectory point with respect to the face image, the symmetry point and the trajectory point being left-right symmetrical with respect to the face; and the image special effect processing module is further configured to: determine a second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point; and connect the symmetry points located at the second absolute positions to generate the second special effect line.

22. The apparatus according to claim 21, wherein the relative position comprises relative coordinates with respect to the face image, and the determining module is further configured to: perform sign inversion on a coordinate value in a first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, the first direction being perpendicular to the symmetry axis of the face image; update the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and determine the updated relative position as the relative position of the symmetry point.

23. An electronic device, comprising: one or more processors; and one or more memories for storing instructions executable by the one or more processors; wherein the one or more processors are configured to execute the executable instructions to implement the following steps: in response to a special effect display instruction, acquiring a movement trajectory input by a user on a display page including a face image; determining, according to first position information of at least two face key points in the face image displayed on the display page at a current moment and trajectory position information of at least one trajectory point in the movement trajectory, a relative position of each trajectory point with respect to the face image; and repeatedly performing image special effect processing, the image special effect processing comprising: converting the relative positions according to second position information of the face key points in the face image displayed on the display page after the current moment, to obtain a first absolute position of each trajectory point on a display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line on the display page.
24. The electronic device according to claim 23, wherein the at least two face key points comprise a first face key point, a second face key point and a target face key point, the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on a symmetry axis of the face image.

25. The electronic device according to claim 24, wherein determining, according to the first position information of the at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of the at least one trajectory point in the movement trajectory, the relative position of each trajectory point with respect to the face image comprises: for each trajectory point, determining, according to the first position information of the target face key point and the trajectory position information of the trajectory point, a translation vector pointing from the trajectory point to the target face key point; determining, according to the first position information of the first face key point and the second face key point, a first rotation matrix and a first scaling matrix of a current face posture in the face image; and obtaining the relative position of the trajectory point with respect to the face image according to the first rotation matrix, the first scaling matrix and the translation vector.

26. The electronic device according to claim 25, wherein the first position information and the trajectory position information each comprise absolute coordinates on the display screen, and determining the first rotation matrix and the first scaling matrix of the current face in the face image according to the first position information of the first face key point and the second face key point comprises: obtaining the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, the first length being a length of a first vector pointing from the first face key point to the second face key point; and obtaining the first scaling matrix according to a reference length of a line connecting the first face key point and the second face key point and the first length, the reference length being the first length set for a face in a front-facing posture in the face image.
27. The electronic device according to claim 25, wherein obtaining the relative position of the trajectory point with respect to the face image according to the first rotation matrix, the first scaling matrix and the translation vector comprises: obtaining the relative position according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, the first formula comprising:
[first formula, reproduced as an image in the original publication]

wherein Q denotes the relative position, Ms_1 denotes the first scaling matrix, Mr_1 denotes the first rotation matrix, and the remaining symbol denotes the translation vector.
28. The electronic device according to claim 24, wherein converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the first absolute position of each trajectory point on the display screen, comprises: for each trajectory point, determining, according to second position information of the first face key point and the second face key point, a second rotation matrix and a second scaling matrix of the current face posture in the face image; and obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, second position information of the target face key point and the relative position.

29. The electronic device according to claim 28, wherein the second position information comprises absolute coordinates on the display screen, and determining the second rotation matrix and the second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point comprises: obtaining the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, the second length being a length of a second vector pointing from the first face key point to the second face key point; and obtaining the second scaling matrix according to a reference length of a line connecting the first face key point and the second face key point and the second length, the reference length being the second length set for a face in a front-facing posture in the face image.

30. The electronic device according to claim 28, wherein obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point and the relative position comprises: obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and a second formula, the second formula comprising:

R = Mr_2 · Ms_2 · (x_q, y_q)^T + (x_c, y_c)^T;

wherein R denotes the first absolute position, Mr_2 denotes the second rotation matrix, Ms_2 denotes the second scaling matrix, (x_q, y_q) denotes the relative position, (x_c, y_c) denotes the second position information of the target face key point, and T denotes transposition.
31. The electronic device according to any one of claims 23-30, wherein the image special effect processing further comprises: generating, according to the first special effect line, a second special effect line symmetrical to the first special effect line, the second special effect line and the first special effect line being left-right symmetrical with respect to the face in the face image; and displaying the second special effect line on the display page.

32. The electronic device according to claim 31, wherein the one or more processors are configured to execute the executable instructions to further implement the following steps: determining, according to the relative position of each trajectory point with respect to the face image, a relative position of a symmetry point of each trajectory point with respect to the face image, the symmetry point and the trajectory point being left-right symmetrical with respect to the face; and wherein generating, according to the first special effect line, the second special effect line symmetrical to the first special effect line comprises: determining a second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point; and connecting the symmetry points located at the second absolute positions to generate the second special effect line.

33. The electronic device according to claim 32, wherein the relative position comprises relative coordinates with respect to the face image, and determining, according to the relative position of each trajectory point with respect to the face image, the relative position of the symmetry point of each trajectory point with respect to the face image comprises: performing sign inversion on a coordinate value in a first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, the first direction being perpendicular to the symmetry axis of the face image; updating the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and determining the updated relative position as the relative position of the symmetry point.
34. A non-volatile computer-readable storage medium, wherein instructions in the non-volatile computer-readable storage medium, when executed by a processor of an electronic device, cause the electronic device to implement the following steps: in response to a special effect display instruction, acquiring a movement trajectory input by a user on a display page including a face image; determining, according to first position information of at least two face key points in the face image displayed on the display page at a current moment and trajectory position information of at least one trajectory point in the movement trajectory, a relative position of each trajectory point with respect to the face image; and repeatedly performing image special effect processing, the image special effect processing comprising: converting the relative positions according to second position information of the face key points in the face image displayed on the display page after the current moment, to obtain a first absolute position of each trajectory point on a display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line on the display page.

35. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1-11.
PCT/CN2021/134644 2021-03-26 2021-11-30 Image processing method and device Ceased WO2022199102A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110328694.1A CN113160031B (en) 2021-03-26 2021-03-26 Image processing method, device, electronic equipment and storage medium
CN202110328694.1 2021-03-26

Publications (1)

Publication Number Publication Date
WO2022199102A1 true WO2022199102A1 (en) 2022-09-29

Family

ID=76885649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134644 Ceased WO2022199102A1 (en) 2021-03-26 2021-11-30 Image processing method and device

Country Status (2)

Country Link
CN (1) CN113160031B (en)
WO (1) WO2022199102A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240289919A1 (en) * 2021-09-16 2024-08-29 Beijing Zitiao Network Technology Co., Ltd. Method, apparatus, electronic device, and storage medium for image processing

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160031B (en) * 2021-03-26 2024-05-14 Beijing Dajia Internet Information Technology Co., Ltd. Image processing method, device, electronic equipment and storage medium
CN116567360B (en) * 2023-06-06 2025-09-02 Guangzhou Boguan Information Technology Co., Ltd. Live broadcast special effects processing method, device, computer equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060093309A1 (en) * 2004-10-05 2006-05-04 Magix Ag System and method for creating a photo movie
CN106231434A (en) * 2016-07-25 2016-12-14 Wuhan Douyu Network Technology Co., Ltd. A kind of living broadcast interactive specially good effect realization method and system based on face detection
CN107888845A (en) * 2017-11-14 2018-04-06 Tencent Digital (Tianjin) Co., Ltd. A kind of method of video image processing, device and terminal
CN107948667A (en) * 2017-12-05 2018-04-20 Guangzhou Kugou Computer Technology Co., Ltd. The method and apparatus that special display effect is added in live video
CN111242881A (en) * 2020-01-07 2020-06-05 Beijing ByteDance Network Technology Co., Ltd. Method, device, storage medium and electronic equipment for displaying special effects
CN111753784A (en) * 2020-06-30 2020-10-09 Guangzhou Kugou Computer Technology Co., Ltd. Video special effect processing method and device, terminal and storage medium
CN111954055A (en) * 2020-07-01 2020-11-17 Beijing Dajia Internet Information Technology Co., Ltd. Video special effect display method and device, electronic equipment and storage medium
CN112035041A (en) * 2020-08-31 2020-12-04 Beijing ByteDance Network Technology Co., Ltd. Image processing method and device, electronic equipment and storage medium
CN113160031A (en) * 2021-03-26 2021-07-23 Beijing Dajia Internet Information Technology Co., Ltd. Image processing method, image processing device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1895393A1 (en) * 2006-09-01 2008-03-05 Research In Motion Limited Method for facilitating navigation and selection functionalities of a trackball
CN110809089B (en) * 2019-10-30 2021-11-16 Lenovo (Beijing) Co., Ltd. Processing method and processing apparatus
CN112017254B (en) * 2020-06-29 2023-12-15 Zhejiang University A hybrid ray tracing rendering method and system



Also Published As

Publication number Publication date
CN113160031A (en) 2021-07-23
CN113160031B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN110502954B (en) Method and device for video analysis
CN109166150B (en) Pose acquisition method and device storage medium
CN110427110B (en) Live broadcast method and device and live broadcast server
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN111897429B (en) Image display method, device, computer equipment and storage medium
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN111768454A (en) Pose determination method, device, equipment and storage medium
CN113384880B (en) Virtual scene display method, device, computer equipment and storage medium
CN111565309B (en) Display device and distortion parameter determination method, device and system thereof, and storage medium
WO2022134632A1 (en) Work processing method and apparatus
CN110673944A (en) Method and apparatus for performing tasks
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN111385525B (en) Video monitoring method, device, terminal and system
WO2022199102A1 (en) Image processing method and device
CN110839128A (en) Photographing behavior detection method and device and storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN108848405B (en) Image processing method and device
CN112396076A (en) License plate image generation method and device and computer storage medium
CN110349527B (en) Virtual reality display method, device and system and storage medium
CN109714585B (en) Image transmission method and device, display method and device, and storage medium
CN111723615B (en) Method and device for detecting object matching judgment on detected object image
CN110889391A (en) Method, device, computing device and storage medium for face image processing
CN108881715B (en) Method, device, terminal and storage medium for enabling shooting mode
CN110851435B (en) A method and device for storing data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932710

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.01.2024)

122 Ep: pct application non-entry in european phase

Ref document number: 21932710

Country of ref document: EP

Kind code of ref document: A1