
WO2019179033A1 - Speaker authentication method, server and computer-readable storage medium - Google Patents

Speaker authentication method, server and computer-readable storage medium

Info

Publication number
WO2019179033A1
WO2019179033A1 (PCT application PCT/CN2018/102203)
Authority
WO
WIPO (PCT)
Prior art keywords
speaker
neural network
convolutional neural
network architecture
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/102203
Other languages
English (en)
French (fr)
Inventor
王义文
王健宗
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Publication of WO2019179033A1 publication Critical patent/WO2019179033A1/zh

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 - Speaker identification or verification techniques
    • G10L 17/02 - Preprocessing operations, e.g. segment selection; pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; feature selection or extraction
    • G10L 17/04 - Training, enrolment or model building
    • G10L 17/18 - Artificial neural networks; connectionist approaches
    • G10L 17/22 - Interactive procedures; man-machine interfaces
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]

Definitions

  • The present application relates to the field of identity authentication, and in particular to a speaker authentication method, a server, and a computer-readable storage medium.
  • In intelligent hardware, most smart devices are protected by an authentication password for information security.
  • The usual authentication password is a fingerprint, a numeric password, or a pattern password, but pressing keys or using a touch screen is often not the most efficient input method; voice input is more convenient.
  • Current voice recognition mainly requires the user to speak a specific text; when the smart device recognizes the corresponding content, identity verification succeeds. A specific utterance used as a password, however, is easy to crack and poses a security risk.
  • In view of this, the present application provides a speaker authentication method, a server, and a computer-readable storage medium.
  • The present application provides a speaker authentication method applied to a server, the method including: acquiring the voice information of a preset speaker, where the content of the voice information is not restricted; constructing a 3D convolutional neural network architecture; inputting the speaker's voice information into the 3D convolutional neural network architecture; and creating and storing the speaker's voice model through the architecture.
  • When a test utterance is received, the test utterance information is compared with the stored voice model of the speaker.
  • The similarity between the test utterance information and the speaker's voice model is then calculated: when the similarity is greater than a preset value, speaker authentication succeeds, and when the similarity is less than the preset value, speaker authentication fails.
  • The present application further provides a server, which includes a memory and a processor. The memory stores a speaker authentication system operable on the processor; when executed by the processor, the speaker authentication system implements the steps of the speaker authentication method described above, including: when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker.
  • The step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture includes: generating a vector for each word of the audio stack frames, and generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  • The present application further provides a computer-readable storage medium storing a speaker authentication system, executable by at least one processor, so that the at least one processor performs the steps of the speaker authentication method described above.
  • FIG. 1 is a schematic diagram of an optional hardware architecture of the server of the present application.
  • FIG. 2 is a schematic diagram of the program modules of the first embodiment of the speaker authentication system of the present application.
  • FIG. 3 is a schematic diagram of parsing speaker speech into stacked audio-stream frames.
  • FIG. 4 is a schematic flowchart of the first embodiment of the speaker authentication method of the present application.
  • FIG. 5 is a schematic diagram of the specific flow of step S303 in the first embodiment of the speaker authentication method of the present application.
  • Reference numerals: server 2; memory 11; processor 12; network interface 13; speaker authentication system 200; acquisition module 201; construction module 202; input module 203; comparison module 204; calculation module 205; parsing module 206.
  • The server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 communicably connected to one another through a system bus. Note that FIG. 1 shows only the server 2 with components 11-13; not all illustrated components are required, and more or fewer components may be implemented instead.
  • The memory 11 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and the like.
  • In some embodiments, the memory 11 may be an internal storage unit of the server 2, such as a hard disk or main memory of the server 2.
  • In other embodiments, the memory 11 may also be an external storage device of the server 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the server 2.
  • Of course, the memory 11 may also include both the internal storage unit of the server 2 and its external storage device.
  • In this embodiment, the memory 11 is generally used to store the operating system installed on the server 2 and various types of application software, such as the program code of the speaker authentication system 200. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
  • In some embodiments, the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
  • The processor 12 is typically used to control the overall operation of the server 2, for example to perform control and processing related to data interaction or communication with a terminal device.
  • In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to run the speaker authentication system 200.
  • The network interface 13 may comprise a wireless network interface or a wired network interface, and is typically used to establish a communication connection between the server 2 and other electronic devices.
  • The present application proposes a speaker authentication system 200.
  • FIG. 2 is a program block diagram of the first embodiment of the speaker authentication system 200 of the present application.
  • In this embodiment, the speaker authentication system 200 comprises a series of computer program instructions stored in the memory 11; when these instructions are executed by the processor 12, the speaker authentication operations of the embodiments of the present application can be implemented.
  • The speaker authentication system 200 can be divided into one or more modules based on the particular operations implemented by the respective portions of the computer program instructions. For example, in FIG. 2, the speaker authentication system 200 is divided into an acquisition module 201, a construction module 202, an input module 203, a comparison module 204, and a calculation module 205, wherein:
  • The acquisition module 201 is configured to acquire the voice information of a preset speaker, where the content of the voice information is not restricted.
  • Specifically, there are two ways to use acoustic features for speaker authentication: one is to compute long-term statistics of the acoustic feature parameters, and the other is to analyze several specific sounds.
  • Long-term statistics of the acoustic feature parameters do not depend on what the speaker says, i.e. they are unrelated to the text; this is called text-independent speaker recognition.
  • Restricting the content of the speech and analyzing specific sounds requires the speaker to utter certain specific words, so it is related to the text; this is called text-dependent speaker recognition.
  • When voice is used as the password of the server 2, a specific utterance used as the password is easy to crack and poses a security risk. Therefore, in the present embodiment, text-independent speaker verification is employed.
  • In detail, the server 2 acquires the speaker's voice information through the acquisition module 201; the voice information does not restrict the content, that is, it is independent of the text.
  • Take the application of text-dependent versus text-independent voice passwords as an example.
  • Text-dependent means the content of the voice is predefined. For example, if the content is limited to "study hard", the password is only considered correct if the user says "study hard".
  • Text-independent does not limit the voice content: whether the user says "study hard" or "make progress every day", the password is considered correct as long as the voice matches the speaker's voice model stored by the server.
  • Storage of the speaker's voice model is detailed below.
  • The construction module 202 is configured to construct a 3D (three-dimensional) convolutional neural network architecture, and the speaker's voice information is input into the 3D convolutional neural network architecture through the input module 203.
  • Specifically, the server 2 constructs the 3D convolutional neural network architecture through the construction module 202.
  • In this embodiment, the 3D convolutional neural network architecture (3D-CNN) comprises, in order from the input end, a hardwired layer H1, a convolutional layer, a downsampling layer, a convolutional layer, a downsampling layer, a convolutional layer, a fully connected layer, and a classification layer.
  • The speaker's voice information is fed to the input end of the 3D convolutional neural network.
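  • As a rough, non-authoritative sketch of that layer ordering, assuming PyTorch: channel counts, kernel sizes, the embedding width, and the identity stand-in for the hardwired layer H1 are illustrative assumptions, not values taken from the application.

```python
import torch
import torch.nn as nn

class Speaker3DCNN(nn.Module):
    """Illustrative 3D-CNN following the layer order described above."""

    def __init__(self, n_speakers: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Identity(),                             # stand-in for the hardwired layer H1
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5)),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),       # downsampling layer
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5)),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),       # downsampling layer
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3)),  # convolutional layer
            nn.ReLU(),
        )
        self.fc = nn.LazyLinear(128)                   # fully connected layer -> d-vector
        self.classifier = nn.Linear(128, n_speakers)   # classification layer

    def forward(self, x: torch.Tensor):
        # x: (batch, 1, n, 80, 40) -- n stacked utterances of 80 frames x 40 features;
        # with these kernel sizes the sketch assumes n >= 7.
        h = self.features(x).flatten(1)
        d = self.fc(h)                                 # speaker embedding ("d-vector")
        return d, self.classifier(d)
```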
  • The construction module 202 is further configured to create and store the speaker's voice model through the 3D convolutional neural network architecture.
  • Specifically, when the server 2 needs to confirm a person's identity, for example whether that person is an administrator or someone authorized to unlock the server, the internal storage of the server 2 must hold that speaker's voice model. That is, the server 2 must collect the speaker's voice and build his model, also called the target model.
  • In this embodiment, the construction module 202 creates the speaker's voice model from the acquired voice information through the 3D convolutional neural network architecture and stores it in the internal storage of the server 2.
  • The 3D convolutional neural network architecture analyzes the speaker's voiceprint information. A voiceprint can be recognized because every person's oral cavity, nasal cavity, and vocal tract structure differ in a unique way; by analyzing the voiceprint information in the acquired voice, the differences of the vocal organs are analyzed indirectly, so as to determine the speaker's identity.
  • The comparison module 204 is configured to compare the test utterance with the stored voice model of the speaker when test utterance information is received.
  • Specifically, when the server 2 sets a voice password, only a verified administrator or a person authorized to unlock the server can unlock it.
  • In this embodiment, when the server 2 receives test utterance information, for example the utterance information of user A, the server 2 acquires A's voice information through the comparison module 204 and extracts the voiceprint information from it. A's voiceprint information is then compared with the speaker's voice model stored in the server 2, to verify whether A is an administrator or a person authorized to unlock the server.
  • The calculation module 205 is configured to calculate the similarity between the test utterance information and the speaker's voice model. When the similarity is greater than a preset value, speaker authentication succeeds; when the similarity is less than the preset value, speaker authentication fails.
  • Specifically, the server 2 obtains a similarity score, i.e. the similarity, through the calculation module 205 by computing the cosine similarity between the speaker's voice model and the test utterance information. From this similarity it is judged whether the current speaker is an administrator or a person authorized to unlock the server.
  • In this embodiment, the speaker authentication system 200 further includes a parsing module 206, wherein:
  • the parsing module 206 is configured to parse the acquired voice information of the speaker into stacked audio frames.
  • Please refer to FIG. 3, which is a schematic diagram of parsing speaker speech into stacked audio-stream frames according to the present application.
  • MFCC (Mel-frequency cepstral coefficient) features could serve as the data representation of the speech input, but the DCT operation at the end of MFCC generation turns these features into non-local features, in sharp contrast to the local features exploited by convolution operations.
  • Therefore, in this embodiment, log-energy features, i.e. MFECs, are used. The features extracted as MFECs are similar to MFCCs with the DCT operation discarded. The temporal features are computed over overlapping 20 ms windows with a span (hop) of 10 ms to generate the spectral features (the audio stack).
  • From a 0.8-second sound sample, 80 temporal feature sets (each comprising 40 MFEC features) can be obtained from the input speech feature map.
  • The dimension of each input feature is n x 80 x 40, composed of 80 input frames and the corresponding spectral features, where n represents the number of utterances used in the 3D convolutional neural network architecture.
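  • A hedged sketch of this feature extraction, assuming librosa; MFECs are taken here to be log mel-filterbank energies, i.e. the MFCC pipeline with the final DCT step dropped, and the file path and sample rate are illustrative assumptions.

```python
import librosa
import numpy as np

def mfec_stack(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Return one 80 x 40 stack of log mel-filterbank energies (MFEC-style)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_mels=40,
        n_fft=int(0.020 * sr),       # 20 ms window
        hop_length=int(0.010 * sr),  # 10 ms span between windows (overlapping)
    )
    logmel = np.log(mel + 1e-8).T    # log energy, no DCT, so features stay local
    return logmel[:80]               # 80 frames, roughly 0.8 s of speech
```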
  • The input module 203 is further configured to input the audio stack frames into the 3D convolutional neural network architecture.
  • The construction module 202 is further configured to generate a vector for each word of the audio stack frames, and to generate the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  • Specifically, in this embodiment, the server 2 parses the acquired speaker speech into stacked frames of the audio stream through the parsing module 206 and inputs the audio stack frames into the 3D convolutional neural network architecture through the input module 203.
  • Finally, through the construction module 202, each utterance directly produces a d-vector, and the average d-vector of the utterances belonging to the speaker generates the speaker model.
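  • A minimal sketch of the enrollment step just described, assuming NumPy; the d-vectors are assumed to come from the network's fully connected layer.

```python
import numpy as np

def enroll_speaker(utterance_d_vectors: list) -> np.ndarray:
    """Speaker model = average of the per-utterance d-vectors."""
    return np.mean(np.stack(utterance_d_vectors), axis=0)
```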
  • In other embodiments of the present application, the server 2 may also acquire several different voice samples of the same speaker and parse them into feature maps that are stacked together.
  • The stacked feature maps are then converted into a vector and input into the convolutional neural network architecture to generate the speaker's voice model.
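  • A small sketch of that stacking step, reusing the hypothetical mfec_stack helper from the earlier sketch; utterance_paths is an assumed list of recordings of the same speaker.

```python
import numpy as np

def stacked_input(utterance_paths: list) -> np.ndarray:
    """Stack per-utterance feature maps into one n x 80 x 40 input tensor."""
    return np.stack([mfec_stack(p) for p in utterance_paths])  # shape: (n, 80, 40)
```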
  • In this embodiment, the similarity is calculated with the cosine-similarity formula sim(D1, D2) = (D1 · D2) / (|D1| |D2|), where D1 represents the vector of the test utterance information, D2 represents the vector of the speaker's voice model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
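  • A minimal numeric sketch of this decision rule, assuming NumPy; the preset threshold value is an illustrative assumption, not one given in the application.

```python
import numpy as np

def cosine_similarity(d1: np.ndarray, d2: np.ndarray) -> float:
    # dot product of the two vectors over the product of their moduli
    return float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))

def authenticate(test_vector: np.ndarray, speaker_model: np.ndarray,
                 preset_value: float = 0.8) -> bool:
    # authentication succeeds when the similarity exceeds the preset value
    return cosine_similarity(test_vector, speaker_model) > preset_value
```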
  • In this embodiment, the server 2 presets a threshold value. When the calculated similarity is greater than the preset value, speaker verification succeeds, i.e. A is an administrator or a person authorized to unlock the server; likewise, when the calculated similarity is less than the preset value, speaker authentication fails.
  • In other embodiments of the present application, when speaker authentication fails, the server 2 locks itself or issues an alarm, improving the security of server use.
  • Through the program modules 201-205 described above, the speaker authentication system 200 proposed by the present application first acquires the voice information of a preset speaker, where the content of the voice information is not restricted; then constructs a 3D convolutional neural network architecture; further inputs the speaker's voice information into that architecture; next creates and stores the speaker's voice model through the architecture; then, when a test utterance is received, compares the test utterance information with the stored voice model of the speaker; and finally calculates the similarity between the test utterance information and the speaker's voice model.
  • When the similarity is greater than a preset value, speaker authentication succeeds; when the similarity is less than the preset value, speaker authentication fails.
  • In addition, the present application also proposes a speaker authentication method.
  • FIG. 4 is a schematic flowchart of the first embodiment of the speaker authentication method of the present application.
  • In this embodiment, the order of execution of the steps in the flowchart shown in FIG. 4 may be changed according to different requirements, and some steps may be omitted.
  • Step S301: acquire the voice information of a preset speaker, where the content of the voice information is not restricted.
  • Specifically, there are two ways to use acoustic features for speaker authentication: one is to compute long-term statistics of the acoustic feature parameters, and the other is to analyze several specific sounds.
  • Long-term statistics of the acoustic feature parameters do not depend on what the speaker says, i.e. they are unrelated to the text; this is called text-independent speaker recognition.
  • Restricting the content of the speech and analyzing specific sounds requires the speaker to utter certain specific words, so it is related to the text; this is called text-dependent speaker recognition.
  • When voice is used as the password of the server, a specific utterance used as the password is easy to crack and poses a security risk. Therefore, in the present embodiment, text-independent speaker verification is employed.
  • In detail, the server 2 acquires the speaker's voice information; the voice information does not restrict the content, that is, it is independent of the text.
  • Text-dependent means the content of the voice is predefined. For example, if the content is limited to "study hard", the password is only considered correct if the user says "study hard".
  • Text-independent does not limit the voice content: whether the user says "study hard" or "make progress every day", the password is considered correct as long as the voice matches the speaker's voice model stored by the server.
  • Storage of the speaker's voice model is detailed below.
  • Step S302: construct a 3D convolutional neural network architecture, and input the speaker's voice information into the 3D convolutional neural network architecture through the input module 203.
  • Specifically, the server 2 constructs the 3D convolutional neural network architecture.
  • In this embodiment, the 3D convolutional neural network architecture (3D-CNN) comprises, in order from the input end, a hardwired layer H1, a convolutional layer, a downsampling layer, a convolutional layer, a downsampling layer, a convolutional layer, a fully connected layer, and a classification layer.
  • The speaker's voice information is fed to the input end of the 3D convolutional neural network.
  • Step S303: create and store the speaker's voice model through the 3D convolutional neural network architecture.
  • Specifically, when the server 2 needs to confirm a person's identity, for example whether that person is an administrator or someone authorized to unlock the server, the internal storage of the server 2 must hold that speaker's voice model. That is, the server 2 must collect the speaker's voice and build his model, also called the target model. In this embodiment, the server 2 creates the speaker's voice model from the acquired voice information through the 3D convolutional neural network architecture and stores it in its internal storage.
  • Referring to FIG. 5, step S303 of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically includes steps S401-S403.
  • Step S401: parse the acquired voice information of the speaker into stacked audio frames.
  • Specifically, please refer to FIG. 3, which is a schematic diagram of parsing speaker speech into stacked audio-stream frames according to the present application.
  • MFCC (Mel-frequency cepstral coefficient) features could serve as the data representation of the speech input, but the DCT operation at the end of MFCC generation turns these features into non-local features, in sharp contrast to the local features exploited by convolution operations.
  • Therefore, in this embodiment, log-energy features, i.e. MFECs, are used. The features extracted as MFECs are similar to MFCCs with the DCT operation discarded. The temporal features are computed over overlapping 20 ms windows with a span (hop) of 10 ms to generate the spectral features (the audio stack).
  • From a 0.8-second sound sample, 80 temporal feature sets (each comprising 40 MFEC features) can be obtained from the input speech feature map.
  • The dimension of each input feature is n x 80 x 40, composed of 80 input frames and the corresponding spectral features, where n represents the number of utterances used in the 3D convolutional neural network architecture.
  • Step S402: input the audio stack frames into the 3D convolutional neural network architecture.
  • Step S403: generate a vector for each word of the audio stack frames, and generate the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  • Specifically, in this embodiment, the server 2 parses the acquired speaker speech into stacked frames of the audio stream and inputs the audio stack frames into the 3D convolutional neural network architecture; finally, each utterance directly produces a d-vector, and the average d-vector of the utterances belonging to the speaker generates the speaker model.
  • In other embodiments of the present application, the server 2 may also acquire several different voice samples of the same speaker and parse them into feature maps that are stacked together.
  • The stacked feature maps are then converted into a vector and input into the convolutional neural network architecture to generate the speaker's voice model.
  • Step S304: when test utterance information is received, compare the test utterance with the stored voice model of the speaker.
  • Specifically, for example, when the server 2 sets a voice password, only a verified administrator or a person authorized to unlock the server can unlock it.
  • In this embodiment, when the server 2 receives test utterance information, for example the utterance information of user A, it extracts the voiceprint information from A's voice information and compares A's voiceprint information with the speaker's voice model stored inside the server 2, to verify whether A is an administrator or a person authorized to unlock the server.
  • Step S305: calculate the similarity between the test utterance information and the speaker's voice model. When the similarity is greater than a preset value, speaker authentication succeeds; when the similarity is less than the preset value, speaker authentication fails.
  • Specifically, the server 2 computes the cosine similarity between the speaker's voice model and the test utterance information to obtain a similarity score, i.e. the similarity. From this similarity it is judged whether the current speaker is an administrator or a person authorized to unlock the server.
  • In this embodiment, the similarity is calculated using the cosine-similarity formula sim(D1, D2) = (D1 · D2) / (|D1| |D2|), where D1 represents the vector of the test utterance information, D2 represents the vector of the speaker's voice model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
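  • Tying the sketches above together, a hypothetical end-to-end enrollment-and-verification flow might look as follows; d_vector stands for a forward pass through the trained network's fully connected layer and, like the file names, is assumed glue code rather than part of the application.

```python
# Enrollment: average the d-vectors of several utterances of the target speaker.
enrollment = [d_vector(mfec_stack(p)) for p in ["enroll_1.wav", "enroll_2.wav"]]
speaker_model = enroll_speaker(enrollment)

# Verification: compare a test utterance against the stored speaker model.
test_vec = d_vector(mfec_stack("test_utterance.wav"))
print("authenticated:", authenticate(test_vec, speaker_model))
```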
  • In this embodiment, the server 2 presets a threshold value. When the calculated similarity is greater than the preset value, speaker verification succeeds, i.e. A is an administrator or a person authorized to unlock the server; likewise, when the calculated similarity is less than the preset value, speaker authentication fails.
  • In other embodiments of the present application, when speaker authentication fails, the server 2 locks itself or issues an alarm, improving the security of server use.
  • Through steps S301-S305 above, the speaker authentication method proposed by the present application first acquires the voice information of a preset speaker, where the content of the voice information is not restricted; then constructs a 3D convolutional neural network architecture; further inputs the speaker's voice information into that architecture; next creates and stores the speaker's voice model through the architecture; then, when a test utterance is received, compares the test utterance information with the stored voice model of the speaker; and finally calculates the similarity between the test utterance information and the speaker's voice model. When the similarity is greater than a preset value, speaker authentication succeeds; when the similarity is less than the preset value, speaker authentication fails.
  • By creating a text-independent voice model of the speaker as the password, the password is hard to crack, which improves server security.
  • From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware; in many cases, however, the former is the better implementation.
  • Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A speaker authentication method, comprising: acquiring the voice information of a preset speaker, where the content of the voice information is not restricted (S301); constructing a 3D convolutional neural network architecture and inputting the speaker's voice information into the 3D convolutional neural network architecture (S302); creating and storing the speaker's voice model through the 3D convolutional neural network architecture (S303); when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker (S304); and calculating the similarity between the test utterance information and the speaker's voice model, where speaker authentication succeeds when the similarity is greater than a preset value and fails when the similarity is less than the preset value (S305). Also disclosed are a server and a computer-readable storage medium.

Description

Speaker authentication method, server, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on March 23, 2018, with application number 201810246497.3 and invention title "Speaker authentication method, server and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of identity authentication, and in particular to a speaker authentication method, a server, and a computer-readable storage medium.
Background
With the development of Internet information technology, intelligent hardware such as smart TVs, smartphones, and smart robots is applied more and more widely. In intelligent hardware, most smart devices are protected by an authentication password for information security. The usual authentication password is a fingerprint, a numeric password, or a pattern password, but pressing keys or using a touch screen is often not the most efficient input method; voice input is more convenient. Current voice recognition mainly requires the user to speak a specific text; when the smart device recognizes the corresponding content, identity verification succeeds. A specific utterance used as a password, however, is easy to crack and poses a security risk.
Summary
In view of this, the present application proposes a speaker authentication method, a server, and a computer-readable storage medium. By creating a text-independent voice model of the speaker as the password, the password is hard to crack, which improves server security.
First, to achieve the above objective, the present application proposes a speaker authentication method applied to a server, the method including:
acquiring the voice information of a preset speaker, where the content of the voice information is not restricted;
constructing a 3D convolutional neural network architecture;
inputting the speaker's voice information into the 3D convolutional neural network architecture;
creating and storing the speaker's voice model through the 3D convolutional neural network architecture;
when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker; and
calculating the similarity between the test utterance information and the speaker's voice model, where speaker authentication succeeds when the similarity is greater than a preset value and fails when the similarity is less than the preset value.
In addition, to achieve the above objective, the present application further provides a server including a memory and a processor, the memory storing a speaker authentication system operable on the processor, the speaker authentication system, when executed by the processor, implementing the following steps:
acquiring the voice information of a preset speaker, where the content of the voice information is not restricted;
constructing a 3D convolutional neural network architecture;
inputting the speaker's voice information into the 3D convolutional neural network architecture;
creating and storing the speaker's voice model through the 3D convolutional neural network architecture;
when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker; and
calculating the similarity between the test utterance information and the speaker's voice model, where speaker authentication succeeds when the similarity is greater than a preset value and fails when the similarity is less than the preset value, and where the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically includes:
generating a vector for each word of the audio stack frames; and
generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
Further, to achieve the above objective, the present application also provides a computer-readable storage medium storing a speaker authentication system, the speaker authentication system being executable by at least one processor, so that the at least one processor performs the steps of the speaker authentication method described above.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an optional hardware architecture of the server of the present application;
FIG. 2 is a schematic diagram of the program modules of the first embodiment of the speaker authentication system of the present application;
FIG. 3 is a schematic diagram of parsing speaker speech into stacked audio-stream frames according to the present application;
FIG. 4 is a schematic flowchart of the first embodiment of the speaker authentication method of the present application;
FIG. 5 is a schematic diagram of the specific flow of step S303 in the first embodiment of the speaker authentication method of the present application.
Reference numerals:
Server 2
Memory 11
Processor 12
Network interface 13
Speaker authentication system 200
Acquisition module 201
Construction module 202
Input module 203
Comparison module 204
Calculation module 205
Parsing module 206
The realization of the objectives, the functional features, and the advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
It should be noted that descriptions involving "first", "second", and the like in the present application are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with each other, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and to fall outside the protection scope claimed by the present application.
Referring to FIG. 1, it is a schematic diagram of an optional hardware architecture of the server 2. In this embodiment, the server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 communicably connected to one another through a system bus. Note that FIG. 1 shows only the server 2 with components 11-13; not all illustrated components are required, and more or fewer components may be implemented instead.
The memory 11 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and the like. In some embodiments, the memory 11 may be an internal storage unit of the server 2, such as a hard disk or main memory of the server 2. In other embodiments, the memory 11 may also be an external storage device of the server 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the server 2. Of course, the memory 11 may also include both the internal storage unit of the server 2 and its external storage device. In this embodiment, the memory 11 is generally used to store the operating system installed on the server 2 and various types of application software, such as the program code of the speaker authentication system 200. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 12 is typically used to control the overall operation of the server 2, for example to perform control and processing related to data interaction or communication with a terminal device. In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to run the speaker authentication system 200.
The network interface 13 may comprise a wireless network interface or a wired network interface, and is typically used to establish a communication connection between the server 2 and other electronic devices.
So far, the application environment and the hardware structures and functions of the related devices of the various embodiments of the present application have been introduced in detail. Hereinafter, the various embodiments of the present application are proposed based on the above application environment and related devices.
First, the present application proposes a speaker authentication system 200.
Referring to FIG. 2, it is a program block diagram of the first embodiment of the speaker authentication system 200 of the present application.
In this embodiment, the speaker authentication system 200 comprises a series of computer program instructions stored in the memory 11; when these instructions are executed by the processor 12, the speaker authentication operations of the various embodiments of the present application can be implemented. In some embodiments, based on the particular operations implemented by the respective portions of the computer program instructions, the speaker authentication system 200 can be divided into one or more modules. For example, in FIG. 2, the speaker authentication system 200 can be divided into an acquisition module 201, a construction module 202, an input module 203, a comparison module 204, and a calculation module 205, wherein:
The acquisition module 201 is configured to acquire the voice information of a preset speaker, where the content of the voice information is not restricted.
Specifically, there are two ways to use acoustic features for speaker authentication: one is to compute long-term statistics of the acoustic feature parameters, and the other is to analyze several specific sounds. Long-term statistics of the acoustic feature parameters do not depend on what the speaker says, i.e. they are unrelated to the text; this is called text-independent speaker recognition. Restricting the content of the speech and analyzing specific sounds requires the speaker to utter certain specific words, so it is related to the text; this is called text-dependent speaker recognition. When voice is used as the password of the server 2, a specific utterance used as the password is easy to crack and poses a security risk; therefore, in this embodiment, text-independent speaker verification is employed. In detail, the server 2 acquires the speaker's voice information through the acquisition module 201; the voice information does not restrict the content, that is, it is independent of the text. Take the application of text-dependent versus text-independent voice passwords as an example: text-dependent means the content of the voice is predefined; for example, if the content is limited to "study hard", the password is only considered correct if the user says "study hard". Text-independent does not limit the voice content: whether the user says "study hard" or "make progress every day", the password is considered correct as long as the voice matches the speaker's voice model stored by the server. Storage of the speaker's voice model is detailed below.
The construction module 202 is configured to construct a 3D (three-dimensional) convolutional neural network architecture, and the speaker's voice information is input into the 3D convolutional neural network architecture through the input module 203.
Specifically, the server 2 constructs the 3D convolutional neural network architecture through the construction module 202. In this embodiment, the 3D convolutional neural network architecture (3D-CNN) comprises, in order from the input end, a hardwired layer H1, a convolutional layer, a downsampling layer, a convolutional layer, a downsampling layer, a convolutional layer, a fully connected layer, and a classification layer. The speaker's voice information is fed to the input end of the 3D convolutional neural network.
The construction module 202 is further configured to create and store the speaker's voice model through the 3D convolutional neural network architecture.
Specifically, when the server 2 needs to confirm a person's identity, for example whether that person is an administrator or someone authorized to unlock the server, the internal storage of the server 2 must hold that speaker's voice model. That is, the server 2 must collect the speaker's voice and build his model, also called the target model. In this embodiment, the construction module 202 creates the speaker's voice model from the acquired voice information through the 3D convolutional neural network architecture and stores it in the internal storage of the server 2. In this embodiment, the 3D convolutional neural network architecture analyzes the speaker's voiceprint information; a voiceprint can be recognized because every person's oral cavity, nasal cavity, and vocal tract structure differ in a unique way. By analyzing the voiceprint information in the acquired voice, the differences of the vocal organs are analyzed indirectly, so as to determine the speaker's identity.
The comparison module 204 is configured to compare the test utterance with the stored voice model of the speaker when test utterance information is received.
Specifically, for example, when the server 2 sets a voice password, only a verified administrator or a person authorized to unlock the server can unlock it. In this embodiment, when the server 2 receives test utterance information, for example the utterance information of user A, the server 2 acquires A's voice information through the comparison module 204, extracts the voiceprint information from it, and compares A's voiceprint information with the speaker's voice model stored inside the server 2, to verify whether A is an administrator or a person authorized to unlock the server.
The calculation module 205 is configured to calculate the similarity between the test utterance information and the speaker's voice model. When the similarity is greater than a preset value, speaker authentication succeeds; when the similarity is less than the preset value, speaker authentication fails.
Specifically, the server 2 obtains a similarity score, i.e. the similarity, through the calculation module 205 by computing the cosine similarity between the speaker's voice model and the test utterance information. From this similarity it is judged whether the current speaker is an administrator or a person authorized to unlock the server.
In this embodiment, the speaker authentication system 200 further includes a parsing module 206, wherein:
The parsing module 206 is configured to parse the acquired voice information of the speaker into stacked audio frames.
Specifically, please refer to FIG. 3, which is a schematic diagram of parsing speaker speech into stacked audio-stream frames according to the present application. As shown in FIG. 3, MFCC (Mel-frequency cepstral coefficient) features could serve as the data representation of the speech input, but the DCT operation at the end of MFCC generation turns these features into non-local features, in sharp contrast to the local features exploited by convolution operations. Therefore, in this embodiment, log-energy features, i.e. MFECs, are used; the features extracted as MFECs are similar to MFCCs with the DCT operation discarded, and the temporal features are computed over overlapping 20 ms windows with a span (hop) of 10 ms to generate the spectral features (the audio stack). From a 0.8-second sound sample, 80 temporal feature sets (each comprising 40 MFEC features) can be obtained from the input speech feature map. The dimension of each input feature is n x 80 x 40, composed of 80 input frames and the corresponding spectral features, where n represents the number of utterances used in the 3D convolutional neural network architecture.
The input module 203 is further configured to input the audio stack frames into the 3D convolutional neural network architecture.
The construction module 202 is further configured to generate a vector for each word of the audio stack frames, and to generate the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
Specifically, in this embodiment, the server 2 parses the acquired speaker speech into stacked frames of the audio stream through the parsing module 206 and inputs the audio stack frames into the 3D convolutional neural network architecture through the input module 203; finally, through the construction module 202, each utterance directly produces a d-vector, and the average d-vector of the utterances belonging to the speaker generates the speaker model.
Generally, the emphasis of a person's speech changes across different periods of time; for example, different emotions produce different tones of speech, and the tone also changes when one is ill. Different words spoken by the same person might thus be inferred to come from different people. Therefore, in other embodiments of the present application, the server 2 may also acquire several different voice samples of the same speaker, parse them into feature maps that are stacked together, and finally convert the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model. By stacking the feature maps of several different utterances of the same speaker and generating the speaker model from the vector converted from the stacked feature maps, the speaker model can extract speaker-discriminative features and capture the variation between speakers.
In this embodiment, the similarity is calculated using the cosine-similarity formula:
sim(D1, D2) = (D1 · D2) / (|D1| |D2|)
where D1 represents the vector of the test utterance information, D2 represents the vector of the speaker's voice model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
In this embodiment, the server 2 presets a threshold value. When the calculated similarity is greater than the preset value, speaker verification succeeds, i.e. A is an administrator or a person authorized to unlock the server; likewise, when the calculated similarity is less than the preset value, speaker authentication fails.
In other embodiments of the present application, when speaker authentication fails, the server 2 locks itself or issues an alarm, improving the security of server use.
Through the program modules 201-205 described above, the speaker authentication system 200 proposed by the present application first acquires the voice information of a preset speaker, where the content of the voice information is not restricted; then constructs a 3D convolutional neural network architecture; further inputs the speaker's voice information into that architecture; next creates and stores the speaker's voice model through the architecture; then, when a test utterance is received, compares the test utterance information with the stored voice model of the speaker; and finally calculates the similarity between the test utterance information and the speaker's voice model: when the similarity is greater than a preset value, speaker authentication succeeds, and when the similarity is less than the preset value, speaker authentication fails. By creating a text-independent voice model of the speaker as the password, the password is hard to crack, which improves server security.
In addition, the present application also proposes a speaker authentication method.
Referring to FIG. 4, it is a schematic flowchart of the first embodiment of the speaker authentication method of the present application. In this embodiment, the order of execution of the steps in the flowchart shown in FIG. 4 may be changed according to different requirements, and some steps may be omitted.
Step S301: acquire the voice information of a preset speaker, where the content of the voice information is not restricted.
Specifically, there are two ways to use acoustic features for speaker authentication: one is to compute long-term statistics of the acoustic feature parameters, and the other is to analyze several specific sounds. Long-term statistics of the acoustic feature parameters do not depend on what the speaker says, i.e. they are unrelated to the text; this is called text-independent speaker recognition. Restricting the content of the speech and analyzing specific sounds requires the speaker to utter certain specific words, so it is related to the text; this is called text-dependent speaker recognition. When voice is used as the password of the server, a specific utterance used as the password is easy to crack and poses a security risk; therefore, in this embodiment, text-independent speaker verification is employed. In detail, the server 2 acquires the speaker's voice information; the voice information does not restrict the content, that is, it is independent of the text. Take the application of text-dependent versus text-independent voice passwords as an example: text-dependent means the content of the voice is predefined; for example, if the content is limited to "study hard", the password is only considered correct if the user says "study hard". Text-independent does not limit the voice content: whether the user says "study hard" or "make progress every day", the password is considered correct as long as the voice matches the speaker's voice model stored by the server. Storage of the speaker's voice model is detailed below.
Step S302: construct a 3D convolutional neural network architecture, and input the speaker's voice information into the 3D convolutional neural network architecture through the input module 203.
Specifically, the server 2 constructs the 3D convolutional neural network architecture. In this embodiment, the 3D convolutional neural network architecture (3D-CNN) comprises, in order from the input end, a hardwired layer H1, a convolutional layer, a downsampling layer, a convolutional layer, a downsampling layer, a convolutional layer, a fully connected layer, and a classification layer. The speaker's voice information is fed to the input end of the 3D convolutional neural network.
Step S303: create and store the speaker's voice model through the 3D convolutional neural network architecture.
Specifically, when the server 2 needs to confirm a person's identity, for example whether that person is an administrator or someone authorized to unlock the server, the internal storage of the server 2 must hold that speaker's voice model. That is, the server 2 must collect the speaker's voice and build his model, also called the target model. In this embodiment, the server 2 creates the speaker's voice model from the acquired voice information through the 3D convolutional neural network architecture and stores it in its internal storage.
Referring to FIG. 5, step S303 of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically includes steps S401-S403.
Step S401: parse the acquired voice information of the speaker into stacked audio frames.
Specifically, please refer to FIG. 3, which is a schematic diagram of parsing speaker speech into stacked audio-stream frames according to the present application. As shown in FIG. 3, MFCC (Mel-frequency cepstral coefficient) features could serve as the data representation of the speech input, but the DCT operation at the end of MFCC generation turns these features into non-local features, in sharp contrast to the local features exploited by convolution operations. Therefore, in this embodiment, log-energy features, i.e. MFECs, are used; the features extracted as MFECs are similar to MFCCs with the DCT operation discarded, and the temporal features are computed over overlapping 20 ms windows with a span (hop) of 10 ms to generate the spectral features (the audio stack). From a 0.8-second sound sample, 80 temporal feature sets (each comprising 40 MFEC features) can be obtained from the input speech feature map. The dimension of each input feature is n x 80 x 40, composed of 80 input frames and the corresponding spectral features, where n represents the number of utterances used in the 3D convolutional neural network architecture.
Step S402: input the audio stack frames into the 3D convolutional neural network architecture.
Step S403: generate a vector for each word of the audio stack frames, and generate the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
Specifically, in this embodiment, the server 2 parses the acquired speaker speech into stacked frames of the audio stream and inputs the audio stack frames into the 3D convolutional neural network architecture; finally, each utterance directly produces a d-vector, and the average d-vector of the utterances belonging to the speaker generates the speaker model.
Generally, the emphasis of a person's speech changes across different periods of time; for example, different emotions produce different tones of speech, and the tone also changes when one is ill. Different words spoken by the same person might thus be inferred to come from different people. Therefore, in other embodiments of the present application, the server 2 may also acquire several different voice samples of the same speaker, parse them into feature maps that are stacked together, and finally convert the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model. By stacking the feature maps of several different utterances of the same speaker and generating the speaker model from the vector converted from the stacked feature maps, the speaker model can extract speaker-discriminative features and capture the variation between speakers.
Step S304: when test utterance information is received, compare the test utterance with the stored voice model of the speaker.
Specifically, for example, when the server 2 sets a voice password, only a verified administrator or a person authorized to unlock the server can unlock it. In this embodiment, when the server 2 receives test utterance information, for example the utterance information of user A, it extracts the voiceprint information from A's voice information and compares A's voiceprint information with the speaker's voice model stored inside the server 2, to verify whether A is an administrator or a person authorized to unlock the server.
Step S305: calculate the similarity between the test utterance information and the speaker's voice model. When the similarity is greater than a preset value, speaker authentication succeeds; when the similarity is less than the preset value, speaker authentication fails.
Specifically, the server 2 computes the cosine similarity between the speaker's voice model and the test utterance information to obtain a similarity score, i.e. the similarity. From this similarity it is judged whether the current speaker is an administrator or a person authorized to unlock the server. In this embodiment, the similarity is calculated using the following formula:
sim(D1, D2) = (D1 · D2) / (|D1| |D2|)
where D1 represents the vector of the test utterance information, D2 represents the vector of the speaker's voice model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
In this embodiment, the server 2 presets a threshold value. When the calculated similarity is greater than the preset value, speaker verification succeeds, i.e. A is an administrator or a person authorized to unlock the server; likewise, when the calculated similarity is less than the preset value, speaker authentication fails.
In other embodiments of the present application, when speaker authentication fails, the server 2 locks itself or issues an alarm, improving the security of server use.
Through steps S301-S305 above, the speaker authentication method proposed by the present application first acquires the voice information of a preset speaker, where the content of the voice information is not restricted; then constructs a 3D convolutional neural network architecture; further inputs the speaker's voice information into that architecture; next creates and stores the speaker's voice model through the architecture; then, when a test utterance is received, compares the test utterance information with the stored voice model of the speaker; and finally calculates the similarity between the test utterance information and the speaker's voice model: when the similarity is greater than a preset value, speaker authentication succeeds, and when the similarity is less than the preset value, speaker authentication fails. By creating a text-independent voice model of the speaker as the password, the password is hard to crack, which improves server security.
From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware; in many cases, however, the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and are not intended to limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. A speaker authentication method, applied to a server, wherein the method comprises:
    acquiring the voice information of a preset speaker, wherein the content of the voice information is not restricted;
    constructing a 3D convolutional neural network architecture;
    inputting the speaker's voice information into the 3D convolutional neural network architecture;
    creating and storing the speaker's voice model through the 3D convolutional neural network architecture;
    when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker; and
    calculating the similarity between the test utterance information and the speaker's voice model, wherein when the similarity is greater than a preset value, speaker authentication succeeds, and when the similarity is less than the preset value, speaker authentication fails.
  2. The speaker authentication method according to claim 1, wherein the step of inputting the speaker's voice information into the 3D convolutional neural network architecture specifically comprises:
    parsing the acquired voice information of the speaker into audio stack frames; and
    inputting the audio stack frames into the 3D convolutional neural network architecture.
  3. The speaker authentication method according to claim 1, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    generating a vector for each word of the audio stack frames; and
    generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  4. The speaker authentication method according to claim 2, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    generating a vector for each word of the audio stack frames; and
    generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  5. The speaker authentication method according to claim 1, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    acquiring a plurality of different voice samples of the same speaker;
    parsing the plurality of different voice samples into feature maps and stacking them together; and
    converting the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model.
  6. The speaker authentication method according to claim 2, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    acquiring a plurality of different voice samples of the same speaker;
    parsing the plurality of different voice samples into feature maps and stacking them together; and
    converting the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model.
  7. The speaker authentication method according to claim 5 or 6, wherein the similarity between the test utterance and the speaker's voice model is calculated with the formula:
    sim(D1, D2) = (D1 · D2) / (|D1| |D2|)
    where D1 represents the vector of the test utterance, D2 represents the vector of the speaker model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
  8. A server, wherein the server comprises a memory and a processor, the memory storing a speaker authentication system operable on the processor, and the speaker authentication system, when executed by the processor, implementing the following steps:
    acquiring the voice information of a preset speaker, wherein the content of the voice information is not restricted;
    constructing a 3D convolutional neural network architecture;
    inputting the speaker's voice information into the 3D convolutional neural network architecture;
    creating and storing the speaker's voice model through the 3D convolutional neural network architecture;
    when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker; and
    calculating the similarity between the test utterance information and the speaker's voice model, wherein when the similarity is greater than a preset value, speaker authentication succeeds, and when the similarity is less than the preset value, speaker authentication fails.
  9. The server according to claim 8, wherein the step of inputting the speaker's voice information into the 3D convolutional neural network architecture specifically comprises:
    parsing the acquired voice information of the speaker into audio stack frames; and
    inputting the audio stack frames into the 3D convolutional neural network architecture.
  10. The server according to claim 8, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    generating a vector for each word of the audio stack frames; and
    generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  11. The server according to claim 9, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    generating a vector for each word of the audio stack frames; and
    generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  12. The server according to claim 8, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    acquiring a plurality of different voice samples of the same speaker;
    parsing the plurality of different voice samples into feature maps and stacking them together; and
    converting the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model.
  13. The server according to claim 9, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    acquiring a plurality of different voice samples of the same speaker;
    parsing the plurality of different voice samples into feature maps and stacking them together; and
    converting the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model.
  14. The server according to claim 12 or 13, wherein the similarity between the test utterance and the speaker's voice model is calculated with the formula:
    sim(D1, D2) = (D1 · D2) / (|D1| |D2|)
    where D1 represents the vector of the test utterance, D2 represents the vector of the speaker model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
  15. A computer-readable storage medium storing a speaker authentication system, the speaker authentication system being executable by at least one processor, so that the at least one processor performs the following steps:
    acquiring the voice information of a preset speaker, wherein the content of the voice information is not restricted;
    constructing a 3D convolutional neural network architecture;
    inputting the speaker's voice information into the 3D convolutional neural network architecture;
    creating and storing the speaker's voice model through the 3D convolutional neural network architecture;
    when a test utterance is received, comparing the test utterance information with the stored voice model of the speaker; and
    calculating the similarity between the test utterance information and the speaker's voice model, wherein when the similarity is greater than a preset value, speaker authentication succeeds, and when the similarity is less than the preset value, speaker authentication fails.
  16. The computer-readable storage medium according to claim 15, wherein the step of inputting the speaker's voice information into the 3D convolutional neural network architecture specifically comprises:
    parsing the acquired voice information of the speaker into audio stack frames; and
    inputting the audio stack frames into the 3D convolutional neural network architecture.
  17. The computer-readable storage medium according to claim 15, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    generating a vector for each word of the audio stack frames; and
    generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  18. The computer-readable storage medium according to claim 16, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    generating a vector for each word of the audio stack frames; and
    generating the speaker's voice model from the average vector of the audio stack frames belonging to the speaker.
  19. The computer-readable storage medium according to claim 15 or 16, wherein the step of creating and storing the speaker's voice model through the 3D convolutional neural network architecture specifically comprises:
    acquiring a plurality of different voice samples of the same speaker;
    parsing the plurality of different voice samples into feature maps and stacking them together; and
    converting the stacked feature maps into a vector that is input into the convolutional neural network architecture to generate the speaker's voice model.
  20. The computer-readable storage medium according to claim 19, wherein the similarity between the test utterance and the speaker's voice model is calculated with the formula:
    sim(D1, D2) = (D1 · D2) / (|D1| |D2|)
    where D1 represents the vector of the test utterance, D2 represents the vector of the speaker model, the numerator represents the dot product of the two vectors, and the denominator represents the product of the moduli of the two vectors.
PCT/CN2018/102203 2018-03-23 2018-08-24 Speaker authentication method, server and computer-readable storage medium Ceased WO2019179033A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810246497.3A CN108597523B (zh) 2018-03-23 2018-03-23 Speaker authentication method, server and computer-readable storage medium
CN201810246497.3 2018-03-23

Publications (1)

Publication Number Publication Date
WO2019179033A1 (zh) 2019-09-26

Family

ID=63627358

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102203 WO2019179033A1 (zh) 2018-03-23 2018-08-24 Speaker authentication method, server and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN108597523B (zh)
WO (1) WO2019179033A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109771944B * 2018-12-19 2022-07-12 武汉西山艺创文化有限公司 Game sound effect generation method, apparatus, device and storage medium
CN109979467B * 2019-01-25 2021-02-23 出门问问信息科技有限公司 Human voice filtering method, apparatus, device and storage medium
CN110415708A * 2019-07-04 2019-11-05 平安科技(深圳)有限公司 Neural-network-based speaker verification method, apparatus, device and storage medium
CN111048097B * 2019-12-19 2022-11-29 中国人民解放军空军研究院通信与导航研究所 Siamese-network voiceprint recognition method based on 3D convolution
CN112562685A * 2020-12-10 2021-03-26 上海雷盎云智能技术有限公司 Voice interaction method and apparatus for a service robot


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104485102A (zh) * 2014-12-23 2015-04-01 智慧眼(湖南)科技发展有限公司 Voiceprint recognition method and apparatus
CN106971724A (zh) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 Anti-interference voiceprint recognition method and system
CN107220237A (zh) * 2017-05-24 2017-09-29 南京大学 Method for extracting enterprise entity relations based on a convolutional neural network
CN107357875B (zh) * 2017-07-04 2021-09-10 北京奇艺世纪科技有限公司 Voice search method, apparatus and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150294670A1 (en) * 2014-04-09 2015-10-15 Google Inc. Text-dependent speaker identification
CN105575388A (zh) * 2014-07-28 2016-05-11 索尼电脑娱乐公司 Emotional speech processing
US20170069327A1 (en) * 2015-09-04 2017-03-09 Google Inc. Neural Networks For Speaker Verification
CN107358951A (zh) * 2017-06-29 2017-11-17 阿里巴巴集团控股有限公司 Voice wake-up method, apparatus and electronic device
CN107404381A (zh) * 2016-05-19 2017-11-28 阿里巴巴集团控股有限公司 Identity authentication method and apparatus
CN107464568A (zh) * 2017-09-25 2017-12-12 四川长虹电器股份有限公司 Text-independent speaker recognition method and system based on a three-dimensional convolutional neural network

Also Published As

Publication number Publication date
CN108597523B (zh) 2019-05-17
CN108597523A (zh) 2018-09-28

Similar Documents

Publication Publication Date Title
US10853676B1 (en) Validating identity and/or location from video and/or audio
  • JP6621536B2 (ja) Electronic device, identity authentication method, system and computer-readable storage medium
  • WO2019179033A1 (zh) Speaker authentication method, server and computer-readable storage medium
US10013985B2 (en) Systems and methods for audio command recognition with speaker authentication
  • KR102210775B1 (ko) Techniques for using the capability to speak as a human interactive proof
US11979398B2 (en) Privacy-preserving voiceprint authentication apparatus and method
  • WO2018166187A1 (zh) Server, identity verification method and system, and computer-readable storage medium
US20160248768A1 (en) Joint Speaker Authentication and Key Phrase Identification
US20120143608A1 (en) Audio signal source verification system
  • WO2018113243A1 (zh) Speech segmentation method, apparatus, device and computer storage medium
  • WO2017197953A1 (zh) Voiceprint-based identity recognition method and device
US9947323B2 (en) Synthetic oversampling to enhance speaker identification or verification
WO2014186255A1 (en) Systems, computer medium and computer-implemented methods for authenticating users using voice streams
  • CN102201055A (zh) Information processing device, information processing method and program
  • WO2021042537A1 (zh) Speech recognition authentication method and system
  • WO2006109515A1 (ja) Operator recognition device, operator recognition method and operator recognition program
  • KR20210050884A (ko) Registration method and apparatus for speaker recognition
EP4184355A1 (en) Methods and systems for training a machine learning model and authenticating a user with the model
  • JP2007133414A (ja) Method and device for estimating speech discriminability, and method and device for enrollment and evaluation in speaker authentication
  • WO2019196305A1 (zh) Electronic device, identity verification method and storage medium
US12412177B2 (en) Methods and systems for training a machine learning model and authenticating a user with the model
  • WO2020140609A1 (zh) Speech recognition method, device and computer-readable storage medium
  • WO2019218515A1 (zh) Server, voiceprint-based identity verification method and storage medium
US10628567B2 (en) User authentication using prompted text
  • JP2006235623A (ja) System and method for speaker authentication using short utterance enrollment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18910903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18910903

Country of ref document: EP

Kind code of ref document: A1