CN108052813A - Unlocking method and device of terminal equipment and mobile terminal - Google Patents
Unlocking method and device of terminal equipment and mobile terminal
- Publication number
- CN108052813A CN108052813A CN201711242470.9A CN201711242470A CN108052813A CN 108052813 A CN108052813 A CN 108052813A CN 201711242470 A CN201711242470 A CN 201711242470A CN 108052813 A CN108052813 A CN 108052813A
- Authority
- CN
- China
- Prior art keywords
- information
- current user
- user
- voiceprint
- facial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
Technical Field
The present application relates to the field of communications technology, and in particular to an unlocking method and apparatus for a terminal device, and a mobile terminal.
Background
With the development of science and technology, terminal devices such as mobile phones and tablet computers have become increasingly powerful, and such devices have become one of the important tools in people's daily life and work.
At present, most terminal devices provide a screen-unlock function. When the screen of a terminal device is locked and a user wants to use the device, the legality of the user's identity must be verified first; only a user verified as legitimate can unlock the screen of the terminal device and use it, thereby ensuring the security of the terminal device.
Common unlocking methods include slide-to-unlock, pattern unlocking, fingerprint unlocking, voice unlocking, and face unlocking. However, any single unlocking method still falls short in ensuring the security of the terminal device. How to better ensure the security of terminal devices has therefore become an urgent technical problem to be solved.
Summary of the Invention
The purpose of the present application is to solve at least one of the above technical problems, at least to a certain extent.
To this end, a first object of the present application is to propose an unlocking method for a terminal device. The method verifies whether a user is a legitimate user by simultaneously recognizing the current user's voiceprint information and three-dimensional facial information; only a legitimate user can unlock the terminal device. Compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
A second object of the present application is to propose an unlocking apparatus for a terminal device.
A third object of the present application is to propose a computer-readable storage medium.
A fourth object of the present application is to propose a mobile terminal.
A fifth object of the present application is to propose a computer program.
The unlocking method for a terminal device according to an embodiment of the first aspect of the present application includes: when an unlocking operation on the terminal device by a user is detected, acquiring the current user's voiceprint information and three-dimensional facial information, the three-dimensional facial information being acquired using structured light; judging whether the current user is legitimate according to a preset voiceprint library and a preset three-dimensional face information library, respectively; and, if legitimate, unlocking the terminal device.
In the unlocking method for a terminal device according to the embodiment of the present application, when an unlocking operation on the terminal device by the user is detected, the current user's voiceprint information and three-dimensional facial information are acquired, the three-dimensional facial information being acquired using structured light; whether the current user is legitimate is judged according to the preset voiceprint library and the preset three-dimensional face information library, respectively; and, if legitimate, the terminal device is unlocked. The method verifies whether the user is a legitimate user by simultaneously recognizing the current user's voiceprint information and three-dimensional facial information; only a legitimate user can unlock the terminal device. Compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
The unlocking apparatus for a terminal device according to an embodiment of the second aspect of the present application includes: a first acquiring module, configured to acquire the current user's voiceprint information when an unlocking operation on the terminal device by a user is detected; a second acquiring module, configured to acquire the current user's three-dimensional facial information when an unlocking operation on the terminal device by the user is detected, the three-dimensional facial information being acquired using structured light; a judging module, configured to judge whether the current user is legitimate according to a preset voiceprint library and a preset three-dimensional face information library, respectively; and an unlocking module, configured to unlock the terminal device if the current user is legitimate.
In the unlocking apparatus for a terminal device according to the embodiment of the present application, when an unlocking operation on the terminal device by the user is detected, the current user's voiceprint information and three-dimensional facial information are acquired, the three-dimensional facial information being acquired using structured light; whether the current user is legitimate is judged according to the preset voiceprint library and the preset three-dimensional face information library, respectively; and, if legitimate, the terminal device is unlocked. The apparatus verifies whether the user is a legitimate user by simultaneously recognizing the current user's voiceprint information and three-dimensional facial information; only a legitimate user can unlock the terminal device. Compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
An embodiment of the third aspect of the present application provides one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the unlocking method for a terminal device according to the embodiment of the first aspect of the present application.
The mobile terminal according to an embodiment of the fourth aspect of the present application includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the unlocking method for a terminal device according to the embodiment of the first aspect of the present application.
In the mobile terminal according to the embodiment of the present application, when an unlocking operation on the terminal device by the user is detected, the current user's voiceprint information and three-dimensional facial information are acquired, the three-dimensional facial information being acquired using structured light; whether the current user is legitimate is judged according to the preset voiceprint library and the preset three-dimensional face information library, respectively; and, if legitimate, the terminal device is unlocked. The mobile terminal verifies whether the user is a legitimate user by simultaneously recognizing the current user's voiceprint information and three-dimensional facial information; only a legitimate user can unlock the terminal device. Compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
An embodiment of the fifth aspect of the present application provides a computer program product; when the instructions in the computer program product are executed by a processor, the unlocking method for a terminal device according to the embodiment of the first aspect of the present application is performed.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the present application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and easier to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of an unlocking method for a terminal device according to an embodiment of the present application;
FIG. 2 is an exemplary flowchart of acquiring the current user's three-dimensional facial information in the present application;
FIG. 3 is an exemplary flowchart of acquiring a depth image of the current user's head in the present application;
FIG. 4 is an exemplary flowchart of demodulating the phase information corresponding to each pixel of a structured light image to obtain the depth image of the current user's head in the present application;
FIGS. 5(a) to 5(e) are schematic diagrams of a structured light measurement scene according to an embodiment of the present application;
FIGS. 6(a) and 6(b) are schematic diagrams of a structured light measurement scene according to an embodiment of the present application;
FIG. 7 is an exemplary flowchart of processing a scene image and a depth image to acquire the current user's three-dimensional facial information in the present application;
FIG. 8 is an exemplary flowchart of judging whether the current user is legitimate according to the present application;
FIG. 9 is a schematic structural diagram of an unlocking apparatus for a terminal device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an image processing circuit according to an embodiment of the present application.
Detailed Description of the Embodiments
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present application; they should not be construed as limiting the present application.
The unlocking method and apparatus for a terminal device, the mobile terminal, and the computer-readable storage medium according to the embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of an unlocking method for a terminal device according to an embodiment of the present application. The unlocking method of this embodiment is applied to a terminal device, which may include a mobile phone, a tablet computer, a smart wearable device, or another hardware device running any of various operating systems.
As shown in FIG. 1, the unlocking method for the terminal device includes the following steps:
S1: when an unlocking operation on the terminal device by a user is detected, acquire the current user's voiceprint information and three-dimensional facial information, the three-dimensional facial information being acquired using structured light.
Specifically, a voiceprint is the spectrum of a sound wave carrying speech information as displayed by an electro-acoustic instrument. Modern scientific research shows that a voiceprint is not only distinctive but also relatively stable, so it can be regarded as an identity card of the human body. Meanwhile, face recognition is natural and can be performed without being noticed by the measured individual, which effectively guards against spoofing by disguise. In this embodiment, the terminal device can extract the user's voiceprint information through a configured speech recognition module and can extract the user's three-dimensional facial information through a configured image capturing device.
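The specification does not spell out how the speech recognition module turns captured speech into voiceprint information. As a purely illustrative sketch (not the claimed method), the snippet below derives a fixed-length voiceprint vector from an audio file using MFCC features; the function name extract_voiceprint, the use of the librosa library, and the simple time-averaging are all assumptions made for illustration.

```python
# Minimal sketch: derive a fixed-length "voiceprint" vector from an audio clip.
# Assumption: an MFCC-mean embedding stands in for whatever model the
# configured speech recognition module actually uses.
import numpy as np
import librosa  # third-party audio analysis library, assumed available


def extract_voiceprint(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Return an L2-normalized vector summarizing the speaker's voice."""
    signal, sr = librosa.load(wav_path, sr=sr, mono=True)
    # 20 Mel-frequency cepstral coefficients per frame, shape (20, n_frames).
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    embedding = mfcc.mean(axis=1)  # average over time -> shape (20,)
    return embedding / (np.linalg.norm(embedding) + 1e-9)
```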
For example, when the user lifts the terminal device or presses the POWER button on the terminal device, a random dynamic password is displayed on the screen of the terminal device in real time. When the user reads the random dynamic password aloud, the terminal device captures the user's voice while the password is being read and uses the configured speech recognition model to extract the user's voiceprint information. At the same time, the image capturing device on the terminal device first acquires the user's facial image and then obtains the user's three-dimensional facial information through image processing of that facial image. It should be noted that the way the user performs the unlocking operation on the terminal device is not limited to the illustrated examples of lifting the terminal device or pressing the POWER button.
FIG. 2 is an exemplary flowchart of acquiring the current user's three-dimensional facial information in the present application. As shown in FIG. 2, in one possible implementation, "acquiring the current user's three-dimensional facial information" is implemented as follows:
S10: acquire a scene image of the current user's head.
For example, the image capturing device of the terminal device includes a visible-light camera 123, and the scene image of the current user's head is acquired through the visible-light camera 123. The visible-light camera 123 may be an RGB camera, and the captured image may be a color image.
S11: project structured light onto the current user's head to acquire a depth image of the current user's head.
FIG. 3 is an exemplary flowchart of acquiring the depth image of the current user's head in the present application. As shown in FIG. 3, in one possible implementation, step S11 is implemented as follows:
S110: project structured light onto the current user's head.
S111: capture a structured light image modulated by the current user's head.
S112: demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image of the current user's head.
For example, the image capturing device of the terminal device includes a structured light projector. When an unlocking operation on the terminal device by the user is detected, the structured light projector in the terminal device can project structured light onto the user's head; the structured light camera in the terminal device can then capture the structured light image modulated by the user's head and demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image of the current user's head.
Specifically, after the structured light projector projects structured light of a certain pattern onto the user's head, a structured light image modulated by the user's head is formed on the surface of the user's head. The structured light camera captures the modulated structured light image and then demodulates it to obtain the depth image of the current user's head.
The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, or the like.
FIG. 4 is an exemplary flowchart of demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image of the current user's head in the present application. As shown in FIG. 4, in one possible implementation, step S112 is implemented as follows:
S1120: demodulate the phase information corresponding to each pixel in the structured light image.
S1121: convert the phase information into depth information.
S1122: generate the depth image of the current user's head according to the depth information.
Specifically, compared with unmodulated structured light, the phase information of the modulated structured light has changed: the structured light presented in the structured light image is structured light that has been distorted, and the change in phase information characterizes the depth information of the object. Therefore, the structured light camera first demodulates the phase information corresponding to each pixel in the structured light image and then calculates the depth information from the phase information, thereby obtaining the depth image of the current user's head.
To help those skilled in the art understand more clearly the process of collecting depth images of the user's face and body using structured light, a widely used grating projection technique (fringe projection technique) is taken as an example below to explain its specific principle. Grating projection belongs to surface structured light in a broad sense.
As shown in FIG. 5(a), when surface structured light projection is used, sinusoidal fringes are first generated by computer programming and projected onto the measured object through the structured light projector; the structured light camera then captures how the fringes are bent after being modulated by the object, the bent fringes are demodulated to obtain the phase, and the phase is converted into depth information to obtain the depth image. To avoid errors or error coupling, the parameters of the structured light camera and the structured light projector need to be calibrated before structured light is used to acquire depth information. The calibration includes calibration of geometric parameters (for example, the relative position between the structured light camera and the structured light projector), of the intrinsic parameters of the structured light camera, and of the intrinsic parameters of the structured light projector.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the distorted fringes are subsequently used to obtain the phase, for example by the four-step phase-shift method, four fringe patterns with a phase difference of π/2 are generated here, and the structured light projector projects the four fringe patterns onto the measured object (the mask shown in FIG. 5(a)) in a time-division manner. The structured light camera captures the image shown on the left of FIG. 5(b) and at the same time reads the fringes of the reference plane shown on the right of FIG. 5(b).
In the second step, phase recovery is performed. The structured light camera calculates the modulated phase from the four captured modulated fringe patterns (i.e., the structured light images); the phase map obtained at this point is a truncated (wrapped) phase map. Because the result of the four-step phase-shift algorithm is computed by the arctangent function, the phase of the modulated structured light is confined to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around and starts again. The resulting principal phase values are shown in FIG. 5(c).
During phase recovery, de-wrapping is required, i.e., the truncated phase is restored to a continuous phase. As shown in FIG. 5(d), the modulated continuous phase map is on the left and the reference continuous phase map is on the right.
In the third step, the phase difference (i.e., the phase information) between the modulated continuous phase and the reference continuous phase is obtained by subtraction. The phase difference characterizes the depth information of the measured object relative to the reference plane. Substituting the phase difference into the phase-to-depth conversion formula (the parameters involved in the formula have been calibrated) yields the three-dimensional model of the object to be measured shown in FIG. 5(e).
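By way of illustration only, the three steps above can be sketched as follows. The fringe arrays, the linear phase-to-depth factor k, and the use of scikit-image's phase unwrapping are assumptions standing in for the calibrated conversion formula mentioned in the text.

```python
import numpy as np
from skimage.restoration import unwrap_phase  # 2-D phase unwrapping


def depth_from_four_step(I0, I1, I2, I3, ref_phase, k=1.0):
    """Four-step phase-shift demodulation followed by phase-to-depth conversion.

    I0..I3    : images captured with fringe phase shifts of 0, pi/2, pi, 3*pi/2
    ref_phase : continuous (unwrapped) phase map of the flat reference plane
    k         : calibrated linear phase-to-depth factor (placeholder value here)
    """
    # Wrapped phase in [-pi, pi], from the arctangent relation of the 4 images.
    wrapped = np.arctan2(I3 - I1, I0 - I2)
    # Remove the 2*pi jumps to recover a continuous phase map (de-wrapping).
    continuous = unwrap_phase(wrapped)
    # The phase difference with respect to the reference plane encodes depth.
    phase_diff = continuous - ref_phase
    return k * phase_diff  # depth map, same shape as the input images
```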
It should be understood that, in practical applications, depending on the specific application scenario, the structured light used in the embodiments of the present application may have any other pattern besides the grating described above.
As a possible implementation, the present application may also use speckle structured light to collect the depth image of the current user's head.
Specifically, the method of obtaining depth information with speckle structured light uses a substantially flat diffractive element having a relief diffractive structure with a specific phase distribution, whose cross section is a stepped relief structure with two or more levels. The thickness of the substrate in the diffractive element is approximately 1 micron, and the heights of the steps are non-uniform, ranging from 0.5 micron to 0.9 micron. The structure shown in FIG. 6(a) is a partial diffractive structure of the collimating beam-splitting element of this embodiment. FIG. 6(b) is a cross-sectional side view along section A-A, with both the abscissa and the ordinate in microns. The speckle pattern generated by speckle structured light is highly random and changes with distance. Therefore, before speckle structured light is used to obtain depth information, the speckle patterns in space first need to be calibrated; for example, within a range of 0 to 4 meters from the structured light camera, a reference plane is taken every 1 cm, so that 400 speckle images are saved after calibration. The smaller the calibration spacing, the higher the accuracy of the obtained depth information. Subsequently, the structured light projector projects the speckle structured light onto the measured object (i.e., the current user), and the height variations of the surface of the measured object change the speckle pattern of the speckle structured light projected onto it. After the structured light camera captures the speckle pattern projected onto the measured object (i.e., the structured light image), a cross-correlation operation is performed between the captured speckle pattern and each of the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space appears as a peak in the correlation images; superimposing these peaks and performing interpolation yields the depth information of the measured object.
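The reference-plane matching described above can be sketched as follows. Matching a local patch at the same image location against each calibrated reference plane with a simple normalized correlation is a deliberate simplification of the peak-superposition-and-interpolation procedure in the text, and the patch size and 1 cm plane spacing are illustrative.

```python
import numpy as np


def depth_at_patch(captured, references, y, x, half=16, plane_spacing_m=0.01):
    """Estimate depth at pixel (y, x) by matching a local speckle patch against
    reference speckle images recorded at planes spaced plane_spacing_m apart.

    captured   : 2-D speckle image of the scene
    references : sequence of 2-D reference speckle images, one per plane
    """
    patch = captured[y - half:y + half, x - half:x + half].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)

    scores = []
    for ref in references:
        rp = ref[y - half:y + half, x - half:x + half].astype(float)
        rp = (rp - rp.mean()) / (rp.std() + 1e-9)
        # Normalized cross-correlation score between the two patches.
        scores.append(float((patch * rp).mean()))

    best = int(np.argmax(scores))   # index of the best-matching reference plane
    return best * plane_spacing_m   # depth in meters (plane 0 = camera origin)
```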
An ordinary diffractive element diffracts a beam into many diffracted beams, but the intensity differences among the diffracted beams are large, and the risk of harming human eyes is therefore also large; even if the diffracted light is diffracted a second time, the uniformity of the resulting beams is low. Consequently, projecting beams diffracted by an ordinary diffractive element onto the measured object gives a poor result. In this embodiment, a collimating beam-splitting element is used; this element not only collimates the non-collimated beam but also splits the light. That is, the non-collimated light reflected by the mirror exits the collimating beam-splitting element as multiple collimated beams at different angles, the cross-sectional areas of the emitted collimated beams are approximately equal, and their energy fluxes are approximately equal, so the effect of projecting the scattered spots obtained after diffracting these beams is better. At the same time, the laser output is dispersed across the individual beams, which further reduces the risk of harming human eyes; and, compared with other uniformly arranged structured light, speckle structured light consumes less power while achieving the same collection effect.
S12: process the scene image and the depth image to acquire the current user's three-dimensional facial information.
Since both images are captured of the current user, the scene range of the scene image is basically consistent with that of the depth image, and each pixel in the scene image can find its corresponding depth information in the depth image.
FIG. 7 is an exemplary flowchart of processing the scene image and the depth image to acquire the current user's three-dimensional facial information in the present application. As shown in FIG. 7, in one possible implementation, step S12 is implemented as follows:
S120: identify the face region in the scene image;
S121: acquire depth information corresponding to the face region from the depth image;
S122: generate the current user's three-dimensional facial information according to the depth information.
A trained deep learning model is used to identify the face region in the scene image, and the depth information of the face region can then be determined according to the correspondence between the scene image and the depth image. Since the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to the individual features of the face region differ in the depth image; for example, when the face directly faces the structured light camera, the depth data corresponding to the nose may be small while the depth data corresponding to the ears may be large in the depth image captured by the structured light camera. Therefore, the depth information of the face region may be a single value or a range of values. When the depth information of the face region is a single value, that value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region. Generating the corresponding three-dimensional information from the depth information may follow the prior art and is not repeated here.
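Steps S121 and S122 can be sketched as follows, assuming the face region detected in S120 is available as a boolean mask aligned with the depth image and that pinhole camera intrinsics (fx, fy, cx, cy) are known from calibration; both assumptions are for illustration only.

```python
import numpy as np


def face_depth_summary(depth_image: np.ndarray, face_mask: np.ndarray,
                       use_median: bool = False) -> float:
    """Collapse the per-pixel depth data of the face region into one value,
    using either the mean or the median as described in the text."""
    face_depths = depth_image[face_mask]  # depth samples inside the face region
    if face_depths.size == 0:
        raise ValueError("face region is empty")
    return float(np.median(face_depths) if use_median else face_depths.mean())


def face_point_cloud(depth_image, face_mask, fx, fy, cx, cy):
    """Back-project the face pixels into 3-D points with a pinhole model
    (fx, fy, cx, cy are assumed to come from camera calibration)."""
    ys, xs = np.nonzero(face_mask)
    z = depth_image[ys, xs]
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # N x 3 array of 3-D face points
```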
S2: judge whether the current user is legitimate according to a preset voiceprint library and a preset three-dimensional face information library, respectively.
Specifically, the preset voiceprint library stores the voiceprint information of legitimate users. If a voiceprint matching the current user's voiceprint information exists in the preset voiceprint library, the current user's voiceprint is a legitimate voiceprint. The preset three-dimensional face information library stores the three-dimensional facial information of legitimate users; if three-dimensional facial information matching the current user's exists in the preset three-dimensional face information library, the current user's three-dimensional facial information is legitimate. In this embodiment, only when the current user's voiceprint is determined to be a legitimate voiceprint and the current user's three-dimensional facial information is determined to be legitimate is the current user judged to be a legitimate user. This avoids, as far as possible, the low security caused by a single unlocking method: with a single unlocking method, the terminal device might be unlocked by a disguised fake voiceprint or a disguised fake face, which carries a certain security risk.
Further, after step S1 and before step S2, the method may also include the following step:
S0: determine that the time interval between the first moment at which the voiceprint information is acquired and the second moment at which the three-dimensional facial information is acquired is within a preset range.
In practice, unintended touches of the terminal device occur frequently; for example, the user may accidentally touch the POWER button and speak, and only lift the terminal device after a relatively long time. Therefore, the user's touch on the terminal device needs to be analyzed.
In this embodiment, if the time interval between the first moment at which the voiceprint information is acquired and the second moment at which the three-dimensional facial information is acquired is determined to be within the preset range, the user intends to unlock the terminal device; otherwise, the user merely touched the terminal device unintentionally and has no intention of unlocking it.
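A minimal sketch of step S0 follows; the 3-second window is an illustrative value, since the text only speaks of a "preset range".

```python
import time

PRESET_WINDOW_S = 3.0  # illustrative value; the text only says "preset range"


def within_preset_range(t_voiceprint: float, t_face3d: float,
                        window_s: float = PRESET_WINDOW_S) -> bool:
    """True if the two acquisition moments are close enough to indicate
    an actual unlock intention rather than an accidental touch."""
    return abs(t_face3d - t_voiceprint) <= window_s


# Example: timestamps taken when each piece of information was acquired.
t1 = time.monotonic()          # first moment: voiceprint acquired
t2 = t1 + 1.2                  # second moment: 3-D face information acquired
print(within_preset_range(t1, t2))  # True -> proceed to the legality check S2
```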
FIG. 8 is an exemplary flowchart of judging whether the current user is legitimate according to the present application. As shown in FIG. 8, in one possible implementation, step S2 is implemented as follows:
S21: calculate the first similarities between the current user's voiceprint information and each preset voiceprint in the preset voiceprint library.
For example, multiple users may have permission to use the terminal device, so multiple preset voiceprints exist in the preset voiceprint library. The features of the current user's voiceprint information are compared with each preset voiceprint one by one, and the first similarity between the current user's voiceprint information and each preset voiceprint is calculated. For example, if five preset voiceprints are stored in the preset voiceprint library, five first similarities are calculated.
S22: if the first similarity between the current user's voiceprint information and any preset voiceprint is greater than a first threshold, determine the first user identifier corresponding to that preset voiceprint.
For example, the first similarities obtained in step S21 are sorted, and the largest one is selected and compared with the first threshold. If it is higher than the first threshold, the current user's voiceprint is a legitimate voiceprint; if it is lower than the first threshold, the current user's voiceprint is an illegitimate voiceprint. It should be noted that the first threshold is set according to actual requirements.
It should also be noted that the preset voiceprint library stores the correspondence between preset voiceprints and user identifiers. After the current user's voiceprint is determined to be a legitimate voiceprint, the first user identifier corresponding to the matched preset voiceprint is determined according to this correspondence.
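Steps S21 and S22 can be sketched as follows, assuming voiceprints are stored as fixed-length vectors and that cosine similarity serves as the "first similarity"; the similarity measure and the 0.8 threshold are illustrative assumptions.

```python
import numpy as np

FIRST_THRESHOLD = 0.8  # illustrative; set according to actual requirements


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def match_voiceprint(current_vp: np.ndarray, voiceprint_db: dict,
                     threshold: float = FIRST_THRESHOLD):
    """voiceprint_db maps user_id -> preset voiceprint vector.

    Returns the user identifier of the best-matching preset voiceprint if its
    first similarity exceeds the threshold (S22), otherwise None.
    """
    if not voiceprint_db:
        return None
    # S21: one first similarity per preset voiceprint in the library.
    sims = {uid: cosine(current_vp, vp) for uid, vp in voiceprint_db.items()}
    best_uid, best_sim = max(sims.items(), key=lambda kv: kv[1])
    # S22: keep only a match above the first threshold.
    return best_uid if best_sim > threshold else None
```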
S23: according to the first user identifier, acquire the three-dimensional facial information corresponding to the first user identifier from the preset three-dimensional face information library.
In this embodiment, the preset three-dimensional face information library stores the correspondence between three-dimensional facial information and user identifiers. According to the determined first user identifier, the three-dimensional facial information corresponding to the first user identifier is extracted from the preset three-dimensional face information library.
It should be noted that multiple users may have permission to use the terminal device. Suppose user A and user B both have permission to use the terminal device, user A's face is facing the image capturing device of the terminal device, and user B speaks the random dynamic password displayed on the terminal screen. Since user A and user B are both legitimate users, a terminal device that judged the voiceprint and the three-dimensional facial information independently would find a legitimate voiceprint and legitimate three-dimensional facial information, and would unlock successfully.
In this embodiment, after the user's voiceprint is determined to be a legitimate voiceprint, the three-dimensional facial information corresponding to the user identifier is extracted from the preset face information library according to that identifier. This ensures that the terminal device can be unlocked only when the voiceprint information and the three-dimensional facial information of the same user are both matched successfully, thereby preventing different legitimate users from unlocking the terminal device in combination, further ensuring the security of the terminal device and improving the user experience.
S24: judge whether the second similarity between the current user's three-dimensional facial information and the three-dimensional facial information corresponding to the first user identifier is greater than a second threshold.
Specifically, the features of the current user's three-dimensional facial information are compared with the three-dimensional facial information corresponding to the first user identifier, and the second similarity between the two is calculated. If the second similarity is higher than the second threshold, the current user's three-dimensional facial information is legitimate; if the second similarity is lower than the second threshold, it is illegitimate. It should be noted that the second threshold is set according to actual requirements.
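Steps S23 and S24 can be sketched in the same style, assuming the preset three-dimensional face information is keyed by user identifier and that the "second similarity" is again a cosine similarity over a fixed-length face descriptor; the descriptor form and the 0.9 threshold are illustrative assumptions.

```python
import numpy as np

SECOND_THRESHOLD = 0.9  # illustrative; set according to actual requirements


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def verify_face_for_user(current_face_3d: np.ndarray, first_user_id,
                         face_db: dict) -> bool:
    """S23: fetch the face template bound to the user identifier found via the
    voiceprint; S24: compare it with the current user's 3-D face information.

    Binding the lookup to first_user_id is what prevents two different
    legitimate users from unlocking the device in combination.
    """
    template = face_db.get(first_user_id)  # preset 3-D face info for that user
    if template is None:
        return False
    return cosine(current_face_3d, template) > SECOND_THRESHOLD
```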
S3: if legitimate, unlock the terminal device.
Specifically, if the current user is a legitimate user, the current user has permission to use the terminal device, and the terminal device is unlocked for the current user to use; otherwise, if the current user is an illegitimate user, the current user does not have permission to use the terminal device, and the terminal device remains locked.
In the unlocking method for a terminal device provided by this embodiment, when an unlocking operation on the terminal device by the user is detected, the current user's voiceprint information and three-dimensional facial information are acquired, the three-dimensional facial information being acquired using structured light; whether the current user is legitimate is judged according to the preset voiceprint library and the preset three-dimensional face information library, respectively; and, if legitimate, the terminal device is unlocked. The method verifies whether the user is a legitimate user by simultaneously recognizing the current user's voiceprint information and three-dimensional facial information; only a legitimate user can unlock the terminal device. Compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
To implement the foregoing embodiments, the present application further proposes an unlocking apparatus for a terminal device according to an embodiment of the present application.
FIG. 9 is a schematic structural diagram of an unlocking apparatus for a terminal device according to an embodiment of the present application.
As shown in FIG. 9, the unlocking apparatus for a terminal device according to this embodiment of the present application may include a first acquiring module, a second acquiring module, a judging module, and an unlocking module.
The first acquiring module is configured to acquire the current user's voiceprint information when an unlocking operation on the terminal device by a user is detected.
The second acquiring module is configured to acquire the current user's three-dimensional facial information when an unlocking operation on the terminal device by the user is detected, the three-dimensional facial information being acquired using structured light.
The judging module is configured to judge whether the current user is legitimate according to a preset voiceprint library and a preset three-dimensional face information library, respectively.
The unlocking module is configured to unlock the terminal device if the current user is legitimate.
Further, the apparatus also includes a determining module configured to determine, before the judging module judges whether the current user is legitimate, that the time interval between the first moment at which the voiceprint information is acquired and the second moment at which the three-dimensional facial information is acquired is within a preset range.
Further, the judging module is specifically configured to:
calculate the first similarities between the current user's voiceprint information and each preset voiceprint in the preset voiceprint library;
if the first similarity between the current user's voiceprint information and any preset voiceprint is greater than a first threshold, determine the first user identifier corresponding to that preset voiceprint;
according to the first user identifier, acquire the three-dimensional facial information corresponding to the first user identifier from the preset three-dimensional face information library; and
judge whether the second similarity between the current user's three-dimensional facial information and the three-dimensional facial information corresponding to the first user identifier is greater than a second threshold.
Further, the second acquiring module includes a first image acquisition unit, a second image acquisition unit, and a processing unit.
The first image acquisition unit is configured to project visible light onto the current user's head to acquire a scene image of the current user's head.
The second image acquisition unit is configured to project structured light onto the current user's head to acquire a depth image of the current user's head.
The processing unit is configured to process the scene image and the depth image to acquire the current user's three-dimensional facial information.
Further, the processing unit is specifically configured to:
identify the face region in the scene image;
acquire depth information corresponding to the face region from the depth image; and
generate the current user's three-dimensional facial information according to the depth information.
It should be noted that the foregoing explanation of the embodiment of the unlocking method for a terminal device also applies to the unlocking apparatus for a terminal device of this embodiment; the implementation principles are similar and are not repeated here.
In the unlocking apparatus for a terminal device provided by this embodiment, when an unlocking operation on the terminal device by the user is detected, the current user's voiceprint information and three-dimensional facial information are acquired, the three-dimensional facial information being acquired using structured light; whether the current user is legitimate is judged according to the preset voiceprint library and the preset three-dimensional face information library, respectively; and, if legitimate, the terminal device is unlocked. The apparatus verifies whether the user is a legitimate user by simultaneously recognizing the current user's voiceprint information and three-dimensional facial information; only a legitimate user can unlock the terminal device. Compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
To implement the foregoing embodiments, the present application further proposes a mobile terminal.
A mobile terminal includes the unlocking apparatus for a terminal device according to the embodiment of the second aspect of the present application.
In the mobile terminal according to the embodiment of the present application, when an unlocking operation on the terminal device by the user is detected, the current user's voiceprint information and three-dimensional facial information are acquired, the three-dimensional facial information being acquired using structured light; whether the current user is legitimate is judged according to the preset voiceprint library and the preset three-dimensional face information library, respectively; and, if legitimate, the terminal device is unlocked. By simultaneously recognizing the current user's voiceprint information and three-dimensional facial information to verify whether the user is a legitimate user, only a legitimate user can unlock the terminal device; compared with voiceprint-only or face-only unlocking, this better ensures the security of the terminal device and thus improves the user experience.
An embodiment of the present application also provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the foregoing unlocking method for a terminal device.
To implement the foregoing embodiments, the present application further proposes a mobile terminal.
The above mobile terminal includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit according to an embodiment of the present application. As shown in FIG. 10, for ease of description, only the aspects of the image processing technology related to the embodiment of the present application are shown.
As shown in FIG. 10, the image processing circuit of the mobile terminal 1200 includes an imaging device 10, an ISP processor 30, and a control logic 40. The imaging device 10 may include a visible-light camera 123, a structured light projector 121, and a structured light camera 122.
The imaging device 10 includes a visible-light camera 11 and a depth image acquisition component 12.
Specifically, the visible-light camera 123 includes an image sensor 1231 and a lens 1232. The visible-light camera 123 can be used to capture color information of the current user to obtain the scene image; the image sensor 1231 includes a color filter array (such as a Bayer filter array), and the number of lenses 1232 may be one or more. When the visible-light camera 123 acquires the scene image, each imaging pixel in the image sensor 1231 senses the light intensity and wavelength information from the captured scene to generate a set of raw image data. The image sensor 1231 sends this set of raw image data to the ISP processor 30, and the ISP processor 30 performs denoising, interpolation, and other operations on the raw image data to obtain the color scene image. The ISP processor 30 can process each image pixel of the raw image data one by one in a variety of formats; for example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 30 may process each image pixel with the same or a different bit depth.
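The interpolation step that turns raw color-filter-array data into a color scene image can be illustrated with OpenCV's demosaicing; the Bayer layout, the 10-bit-to-8-bit shift, and the random stand-in frame below are assumptions for illustration only and do not reflect the actual ISP pipeline of the embodiment.

```python
import cv2
import numpy as np

# Stand-in raw frame: a single-channel Bayer mosaic as produced by a
# color-filter-array sensor (values here are random placeholders, 10-bit range).
raw = np.random.randint(0, 1024, (480, 640), dtype=np.uint16)

# Demosaic (debayer) the raw frame into a color scene image. The exact Bayer
# layout (BG/GB/RG/GR) depends on the sensor and is an assumption here; the
# right shift simply maps the 10-bit samples into 8-bit for this sketch.
scene_bgr = cv2.cvtColor((raw >> 2).astype(np.uint8), cv2.COLOR_BayerBG2BGR)
print(scene_bgr.shape)  # (480, 640, 3) color scene image
```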
Specifically, the structured light projector 121 projects structured light onto the current user's head, where the structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, or a randomly arranged speckle pattern, among others. The structured light camera 122 may include an image sensor 1221 and a lens 1222, where the number of lenses 1222 may be one or more. The image sensor 1221 is used to capture the structured light image projected by the structured light projector 121 onto the current user's head. The structured light image may be sent by the image acquisition component 120 to the ISP processor 30 for demodulation, phase recovery, phase information calculation, and other processing to obtain the current user's depth information.
The image sensor 1221 is also used to capture the structured light image projected by the structured light projector 121 onto the current user's head and to send the structured light image to the ISP processor 30, which demodulates the structured light image to obtain the depth information of the measured object (in this embodiment, the measured object is the current user's head). At the same time, the image sensor 1221 can also capture the color information of the measured object. Of course, the structured light image and the color information of the measured object may also be captured by two separate image sensors 1221.
Taking speckle structured light as an example, the ISP processor 30 demodulates the structured light image as follows: the speckle image of the measured object is collected from the structured light image, image data calculation is performed between the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, and the displacement of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle point in the reference speckle image is obtained. The depth value of each speckle point of the speckle image is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
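The triangulation mentioned above can be written, under generic pinhole and baseline assumptions that are not taken from the specification, as a conversion from the measured speckle displacement d to a depth value Z:

```latex
% Depth from speckle displacement by triangulation (generic form):
%   Z   : depth of the surface point
%   Z_0 : depth of the reference plane used during calibration
%   b   : projector-camera baseline
%   f   : camera focal length (in pixels)
%   d   : displacement of the speckle relative to the reference speckle image
%         (its sign depends on the chosen shift convention)
\[
  d \;=\; f\,b\left(\frac{1}{Z_0}-\frac{1}{Z}\right)
  \quad\Longrightarrow\quad
  Z \;=\; \frac{f\,b\,Z_0}{f\,b - d\,Z_0}
\]
```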
当然,还可以通过双目视觉的方法或基于飞行时差TOF的方法来获取该深度图像信息等,在此不做限定,只要能够获取或通过计算得到被测物的深度信息的方法都属于本实施方式包含的范围。Of course, the depth image information can also be obtained through binocular vision or a method based on time-of-flight TOF, and there is no limitation here, as long as the method that can obtain or calculate the depth information of the measured object belongs to this implementation The range covered by the method.
在ISP处理器30接收到图像传感器1221捕捉到的被测物的色彩信息之后,可被测物的色彩信息对应的图像数据进行处理。ISP处理器30对图像数据进行分析以获取可用于确定和/或成像设备10的一个或多个控制参数的图像统计信息。图像传感器1221可包括色彩滤镜阵列(如Bayer滤镜),图像传感器1221可获取用图像传感器1221的每个成像像素捕捉的光强度和波长信息,并提供可由ISP处理器30处理的一组原始图像数据。After the ISP processor 30 receives the color information of the object under test captured by the image sensor 1221 , it can process the image data corresponding to the color information of the object under test. ISP processor 30 analyzes the image data to obtain image statistics that may be used to determine and/or control one or more parameters of imaging device 10 . The image sensor 1221 can include a color filter array (such as a Bayer filter), and the image sensor 1221 can obtain light intensity and wavelength information captured by each imaging pixel of the image sensor 1221, and provide a set of raw images that can be processed by the ISP processor 30. image data.
ISP处理器30按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,ISP处理器30可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的图像统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。The ISP processor 30 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 30 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
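As a small illustration of working with raw data of different bit depths at a single precision, here is a hedged sketch; a real ISP typically performs such operations in fixed point, and the helper below is an assumption made for explanation only, not part of the patent.

```python
import numpy as np

def normalize_raw(raw, bit_depth):
    """Scale 8/10/12/14-bit raw sensor samples to floating point in [0, 1] so that
    later processing can run at one working precision (illustrative only)."""
    if bit_depth not in (8, 10, 12, 14):
        raise ValueError("unsupported bit depth")
    max_code = (1 << bit_depth) - 1
    return np.asarray(raw, dtype=np.float32) / max_code

# Example: a small 10-bit Bayer frame scaled to [0, 1]
frame = normalize_raw(np.random.randint(0, 1024, size=(4, 4)), bit_depth=10)
```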
ISP处理器30还可从图像存储器20接收像素数据。图像存储器20可为存储器装置的一部分、存储设备、或电子设备内的独立的专用存储器,并可包括DMA(Direct Memory Access,直接存储器存取)特征。The ISP processor 30 may also receive pixel data from the image memory 20. The image memory 20 may be a part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
当接收到原始图像数据时,ISP处理器30可进行一个或多个图像处理操作。When raw image data is received, ISP processor 30 may perform one or more image processing operations.
在ISP处理器30获取到被测物的色彩信息和深度信息后,可对其进行融合,得到三维图像。其中,可通过外观轮廓提取方法或轮廓特征提取方法中的至少一种提取相应的被测物的特征,例如通过主动形状模型法ASM、主动外观模型法AAM、主成分分析法PCA、离散余弦变换法DCT等方法提取被测物的特征,在此不做限定。再将分别从深度信息中提取到的被测物的特征以及从色彩信息中提取到的被测物的特征进行配准和特征融合处理。这里所指的融合处理可以是将深度信息以及色彩信息中提取出的特征直接组合,也可以是将不同图像中相同的特征进行权重设定后组合,也可以有其他融合方式,最终根据融合后的特征,生成三维图像。After the ISP processor 30 acquires the color information and the depth information of the measured object, they can be fused to obtain a three-dimensional image. The features of the measured object may be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example by the active shape model (ASM) method, the active appearance model (AAM) method, principal component analysis (PCA), the discrete cosine transform (DCT) method, or other methods, which are not limited here. The features of the measured object extracted from the depth information and the features extracted from the color information are then registered and fused. The fusion processing referred to here may directly combine the features extracted from the depth information and from the color information, or combine the same features from different images after setting weights, or use other fusion methods; finally, a three-dimensional image is generated according to the fused features.
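The two fusion options mentioned above (direct combination, and weighted combination of corresponding features) can be sketched as follows. The function name, the registered/equal-length feature assumption, and the 0.5 default weight are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fuse_features(depth_feats, color_feats, depth_weight=0.5):
    """Fuse registered feature vectors extracted from depth and color information."""
    depth_feats = np.asarray(depth_feats, dtype=np.float64)
    color_feats = np.asarray(color_feats, dtype=np.float64)
    if depth_feats.shape != color_feats.shape:
        raise ValueError("features must be registered to the same shape before fusion")
    # Option 1: directly combine (concatenate) the two feature sets.
    concatenated = np.concatenate([depth_feats.ravel(), color_feats.ravel()])
    # Option 2: weight and combine corresponding (same) features from the two sources.
    weighted = depth_weight * depth_feats + (1.0 - depth_weight) * color_feats
    return concatenated, weighted
```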
三维图像的图像数据可发送给图像存储器20,以便在被显示之前进行另外的处理。ISP处理器30从图像存储器20接收处理数据,并对处理数据进行原始域中以及RGB和YCbCr颜色空间中的图像数据处理。三维图像的图像数据可输出给显示器60,以供用户观看和/或由图形引擎或GPU(Graphics Processing Unit,图形处理器)进一步处理。此外,ISP处理器30的输出还可发送给图像存储器20,且显示器60可从图像存储器20读取图像数据。在一个实施例中,图像存储器20可被配置为实现一个或多个帧缓冲器。此外,ISP处理器30的输出可发送给编码器/解码器50,以便编码/解码图像数据。编码的图像数据可被保存,并在显示于显示器60上之前解压缩。编码器/解码器50可由CPU或GPU或协处理器实现。The image data of the three-dimensional image may be sent to the image memory 20 for additional processing before being displayed. The ISP processor 30 receives the processed data from the image memory 20 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image can be output to the display 60 for viewing by the user and/or for further processing by a graphics engine or a GPU (Graphics Processing Unit). In addition, the output of the ISP processor 30 can also be sent to the image memory 20, and the display 60 can read image data from the image memory 20. In one embodiment, the image memory 20 may be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 30 may be sent to an encoder/decoder 50 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 60. The encoder/decoder 50 may be implemented by a CPU, a GPU, or a coprocessor.
ISP处理器30确定的图像统计信息可发送给控制逻辑器40单元。控制逻辑器40可包括执行一个或多个例程(如固件)的处理器和/或微控制器,一个或多个例程可根据接收的图像统计信息,确定成像设备10的控制参数。The image statistics determined by the ISP processor 30 may be sent to the control logic 40 unit. Control logic 40 may include a processor and/or microcontroller executing one or more routines (eg, firmware) that may determine control parameters for imaging device 10 based on received image statistics.
以下为运用图10中图像处理技术实现终端设备的解锁方法的步骤(各步骤之后附有一段示意性代码草图):The following are the steps for implementing the unlocking method of the terminal device by using the image processing technology in Fig. 10 (an illustrative code sketch follows the steps):
S1',在检测到用户对终端设备进行解锁操作时,获取当前用户的声纹信息以及脸部三维信息,其中所述脸部三维信息是利用结构光获取的;S1', when it is detected that the user unlocks the terminal device, acquire voiceprint information and three-dimensional facial information of the current user, wherein the three-dimensional facial information is acquired by using structured light;
S2',分别依据预设的声纹库及预设的人脸三维信息库,判断所述当前用户是否合法;S2', judging whether the current user is legal according to the preset voiceprint database and the preset three-dimensional face information database respectively;
S3',若合法,则对所述终端设备进行解锁。S3', if legal, unlock the terminal device.
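A minimal sketch of steps S1'-S3' as a single flow is given below. The helper callables (capturing the voiceprint and the structured-light 3D face, and the matching score), the databases, and the 0.8 threshold are hypothetical placeholders, not part of the patent; only the step structure follows the text above.

```python
def try_unlock(device, voiceprint_db, face_3d_db,
               capture_voiceprint, capture_face_3d, match_score,
               threshold=0.8):
    # S1': on an unlock operation, collect the current user's voiceprint and
    # structured-light 3D facial information.
    voiceprint = capture_voiceprint()
    face_3d = capture_face_3d()

    # S2': check both modalities against the preset databases; the current user
    # is treated as legitimate only if both checks pass.
    voice_ok = any(match_score(voiceprint, ref) >= threshold for ref in voiceprint_db)
    face_ok = any(match_score(face_3d, ref) >= threshold for ref in face_3d_db)

    # S3': unlock the terminal device only for a legitimate user.
    if voice_ok and face_ok:
        device.unlock()
        return True
    return False
```

Requiring both checks to pass reflects the dual-factor intent: neither a matching voiceprint alone nor a matching 3D face alone unlocks the device.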
其中,需要说明的是,前述对终端设备的解锁方法实施例的解释说明也适用于该实施例的移动终端,其实现原理类似,此处不再赘述。It should be noted that the foregoing explanations of the embodiments of the method for unlocking a terminal device are also applicable to the mobile terminal of this embodiment; the implementation principles are similar and will not be repeated here.
一种计算机程序产品,当计算机程序产品中的指令由处理器执行时,执行前述的终端设备的解锁方法。A computer program product, wherein when the instructions in the computer program product are executed by a processor, the aforementioned method for unlocking a terminal device is performed.
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of different embodiments or examples, provided they do not contradict each other.
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。In addition, the terms "first" and "second" are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, the features defined as "first" and "second" may explicitly or implicitly include at least one of these features. In the description of the present application, "plurality" means at least two, such as two, three, etc., unless otherwise specifically defined.
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved. This should be understood by those skilled in the art to which the embodiments of the present application belong.
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得程序,然后将其存储在计算机存储器中。The logic and/or steps represented in the flowchart or otherwise described herein, for example, may be considered as a sequenced listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically by, for example, optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
应当理解,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。It should be understood that each part of the present application may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。Those of ordinary skill in the art can understand that all or part of the steps of the methods of the above embodiments can be completed by instructing relevant hardware through a program, and the program can be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.
此外,在本申请各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。In addition, each functional unit in each embodiment of the present application may be integrated into one processing module, each unit may exist separately physically, or two or more units may be integrated into one module. The above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. If the integrated modules are realized in the form of software function modules and sold or used as independent products, they can also be stored in a computer-readable storage medium.
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施例进行变化、修改、替换和变型。The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art can make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711242470.9A CN108052813A (en) | 2017-11-30 | 2017-11-30 | Unlocking method and device of terminal equipment and mobile terminal |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711242470.9A CN108052813A (en) | 2017-11-30 | 2017-11-30 | Unlocking method and device of terminal equipment and mobile terminal |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108052813A true CN108052813A (en) | 2018-05-18 |
Family
ID=62121634
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711242470.9A Pending CN108052813A (en) | 2017-11-30 | 2017-11-30 | Unlocking method and device of terminal equipment and mobile terminal |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108052813A (en) |
- 2017-11-30: CN application CN201711242470.9A (published as CN108052813A), status: Pending
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104834849A (en) * | 2015-04-14 | 2015-08-12 | 时代亿宝(北京)科技有限公司 | Dual-factor identity authentication method and system based on voiceprint recognition and face recognition |
| CN107404381A (en) * | 2016-05-19 | 2017-11-28 | 阿里巴巴集团控股有限公司 | A kind of identity identifying method and device |
| CN107277053A (en) * | 2017-07-31 | 2017-10-20 | 广东欧珀移动通信有限公司 | Identity verification method, device and mobile terminal |
| CN107368730A (en) * | 2017-07-31 | 2017-11-21 | 广东欧珀移动通信有限公司 | Unlock verification method and device |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108848266A (en) * | 2018-06-27 | 2018-11-20 | Oppo广东移动通信有限公司 | Control method, electronic device, storage medium, and computer apparatus |
| CN108920928A (en) * | 2018-09-14 | 2018-11-30 | 算丰科技(北京)有限公司 | Personal identification method, device, electronic equipment and computer readable storage medium |
| WO2020051971A1 (en) * | 2018-09-14 | 2020-03-19 | 福建库克智能科技有限公司 | Identity recognition method, apparatus, electronic device, and computer-readable storage medium |
| WO2020088483A1 (en) * | 2018-10-31 | 2020-05-07 | 华为技术有限公司 | Audio control method and electronic device |
| CN113360869A (en) * | 2020-03-04 | 2021-09-07 | 北京嘉诚至盛科技有限公司 | Method for starting application, electronic equipment and computer readable medium |
| CN113360869B (en) * | 2020-03-04 | 2024-12-20 | 北京嘉诚至盛科技有限公司 | Method for launching application, electronic device and computer readable medium |
| CN113921016A (en) * | 2021-10-15 | 2022-01-11 | 阿波罗智联(北京)科技有限公司 | Voice processing method, device, electronic equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107368730B (en) | Unlock verification method and device | |
| CN107480613B (en) | Face recognition method, device, mobile terminal and computer-readable storage medium | |
| CN107895110A (en) | Unlocking method, device and the mobile terminal of terminal device | |
| CN107563304B (en) | Terminal device unlocking method and device, and terminal device | |
| CN107623817B (en) | Video background processing method, device and mobile terminal | |
| CN107682607A (en) | Image acquisition method, device, mobile terminal and storage medium | |
| CN108052813A (en) | Unlocking method and device of terminal equipment and mobile terminal | |
| CN107437019A (en) | Identity verification method and device for lip recognition | |
| CN107277053A (en) | Identity verification method, device and mobile terminal | |
| CN107623832A (en) | Video background replacement method, device and mobile terminal | |
| CN107657652A (en) | Image processing method and device | |
| CN107592490A (en) | Video background replacing method and device and mobile terminal | |
| CN107590435A (en) | Palmprint recognition method, device and terminal equipment | |
| CN107491675A (en) | Information security processing method and device and terminal | |
| CN107491744A (en) | Human body personal identification method, device, mobile terminal and storage medium | |
| CN107463659A (en) | Object search method and device thereof | |
| CN107707838A (en) | Image processing method and device | |
| CN107613239B (en) | Video communication background display method and device | |
| CN107527335A (en) | Image processing method and device, electronic device, and computer-readable storage medium | |
| CN107590828B (en) | Blurring processing method and device for shot image | |
| CN107622496A (en) | Image processing method and device | |
| CN107610078A (en) | Image processing method and device | |
| CN107592491A (en) | Video communication background display method and device | |
| CN107392545A (en) | Identity verification method and device for receiving express delivery | |
| CN107680034A (en) | Image processing method and device, electronic device, and computer-readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | CB02 | Change of applicant information | Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180518 |