
CN115329055A - Contrast learning, inquiry and man-machine interaction method, electronic device and storage medium


Info

Publication number
CN115329055A
Authority
CN
China
Prior art keywords
sample
similar
pair
sample pair
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210782581.3A
Other languages
Chinese (zh)
Inventor
袁一菲
施晨
王润泽
张增明
陈丽怡
胡仁君
姜飞俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Maojing Artificial Intelligence Technology Co ltd
Original Assignee
Zhejiang Maojing Artificial Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Maojing Artificial Intelligence Technology Co ltd filed Critical Zhejiang Maojing Artificial Intelligence Technology Co ltd
Priority to CN202210782581.3A
Publication of CN115329055A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/3331 - Query processing
    • G06F16/334 - Query execution
    • G06F16/3344 - Query execution using natural language analysis
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/12 - Use of codes for handling textual entities
    • G06F40/126 - Character encoding
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/194 - Calculation of difference between files

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of the present invention provide a contrast learning method, a query method, a man-machine conversation method, an electronic device and a storage medium. The contrast learning method includes: constructing similar sample pairs and non-similar sample pairs at least based on similar enhancement samples and labeled samples of training samples; constructing a loss function for the encoder of a task model based on the similarity of the similar sample pairs and the similarity of the non-similar sample pairs, the function value of the loss function being proportional to the similarity of the similar sample pairs and inversely proportional to the similarity of the non-similar sample pairs; and training the encoder of the task model based on the loss function. In the solution of the embodiments of the present invention, under the inventive concept of contrastive learning, the value of the constructed loss function is proportional to the similarity of similar sample pairs and inversely proportional to the similarity of non-similar sample pairs, so that such a loss function improves the confidence of the training samples rather than merely enhancing the training samples themselves, thereby improving the performance of the trained task model.


Description

Contrast learning, inquiry and man-machine interaction method, electronic device and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, and in particular to a contrast learning method, a query method, a man-machine conversation method, an electronic device and a storage medium.
Background
Contrastive Learning (CL) is a self-supervised learning method in which pairs of augmented data (augmentations) are generated from unlabeled training data, a classification task is then defined as a pretext task based on the augmented data, and an optimized deep representation (deep embedding) is learned.
The pretext task of contrastive learning treats each instance (instance) as its own class (class) and learns representations that are invariant across views of the same instance.
However, current contrastive learning schemes still have room for improvement in training task models for specific scenarios.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a contrast learning, query and man-machine interaction method, an electronic device and a storage medium to at least partially solve the above problems.
According to a first aspect of embodiments of the present invention, there is provided a contrast learning method, including: constructing a similar sample pair and a non-similar sample pair at least based on a similar enhancement sample and a labeled sample of the training sample; constructing a loss function of an encoder of a task model based on the similarity of the similar sample pair and the similarity of the non-similar sample pair, the function value of the loss function being proportional to the similarity of the similar sample pair and inversely proportional to the similarity of the non-similar sample pair; and training the encoder of the task model based on the loss function.
In another implementation manner of the present invention, constructing a similar sample pair and a non-similar sample pair based on at least a similar enhancement sample and a labeled sample of a training sample includes: constructing a first similar sample pair and a first non-similar sample pair based on the training sample and the similar enhancement sample thereof; and constructing a second similar sample pair and a second non-similar sample pair based on the similar enhancement sample and the labeled sample of the training sample.
In another implementation manner of the present invention, the constructing a loss function of an encoder of a task model based on the similarity of the similar sample pair and the similarity of the dissimilar sample pair includes: determining a first loss function based on the similarity of the first similar sample pair and the similarity of the first dissimilar sample pair, and determining a second loss function based on the similarity of the second similar sample pair and the similarity of the second dissimilar sample pair; determining a loss function of an encoder of the task model based on the first loss function and the second loss function.
In another implementation manner of the present invention, the constructing a first similar sample pair and a first non-similar sample pair based on the training sample and the similar enhanced sample thereof includes: determining a first training sample and a corresponding first similar enhancement sample as the first similar sample pair; and determining second similar enhancement samples corresponding to the first training sample and the second training sample as the first non-similar sample pair.
In another implementation manner of the present invention, the constructing a similar sample pair and a non-similar sample pair based on the training sample and the similar enhancement sample thereof further includes: determining the first training sample and the second training sample as the first non-similar sample pair.
In another implementation manner of the present invention, the constructing a second similar sample pair and a second non-similar sample pair based on the similar enhanced sample and the labeled sample of the training sample includes: determining a first fused sample of the first training sample and the first similar enhancement sample and a labeled sample of the first fused sample as the second similar sample pair; and determining the labeled sample of the first fused sample and the second fused sample as the second non-similar sample pair.
In another implementation of the invention, the method further comprises: inputting the initial samples into an encoder with a first random inactivation (dropout) probability and a second random inactivation probability to obtain the training samples and the similar enhancement samples respectively.
In another implementation of the invention, the first function value of the first loss function is proportional to the similarity of the first similar sample pair, and the first function value is inversely proportional to the similarity of the first non-similar sample pair. A second function value of the second loss function is proportional to a similarity of the second similar sample pair, the second function value being inversely proportional to a similarity of the second non-similar sample pair.
In another implementation of the invention, the method further comprises: training the task model based on a third loss function.
According to a second aspect of the embodiments of the present invention, there is provided a query method, including: acquiring simplified query data; inputting the simplified query data into a query rewrite model to obtain context query data, wherein the query rewrite model is obtained by training according to the method of the first aspect; and querying based on the context query data to obtain a query result.
According to a third aspect of the embodiments of the present invention, there is provided a man-machine interaction method, including: acquiring a conversation request; parsing based on the dialogue request to obtain simplified query data; querying based on the simplified query data by using the query method according to the second aspect to obtain a query result; and generating a dialog reply to the dialog request based on the query result.
According to a fourth aspect of embodiments of the present invention, there is provided an electronic apparatus, including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the corresponding operation of the method according to any one of the first aspect to the third aspect.
According to a fifth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the first to third aspects.
In the scheme of the embodiment of the invention, at least based on the similar enhancement samples and the labeled samples of the training samples, the similar sample pairs and the non-similar sample pairs are constructed, so that the quality of the enhancement samples formed by the similar sample pairs and the non-similar sample pairs is higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person skilled in the art can also obtain other drawings based on these drawings.
Fig. 1 is a schematic block diagram of a dialog system according to an example.
FIG. 2 is a flow diagram of steps of a comparative learning method according to one embodiment of the invention.
FIG. 3 is a schematic diagram of a comparative learning process of the embodiment of FIG. 2.
FIG. 4 is a flow chart of steps of a query method according to another embodiment of the invention.
Fig. 5 is a flowchart illustrating steps of a human-machine conversation method according to another embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device according to another embodiment of the invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
With the continuous development of computer technology and the continuous progress of artificial intelligence technology, intelligent dialog systems such as Conversational Information Retrieval (CIR) systems have emerged. A CIR system is an information retrieval system with a conversational interface that allows a user to find information in spoken or written form through multiple rounds of natural-language dialog, greatly improving the efficiency of human-computer interaction.
In a conversational information retrieval system, a rewrite model trained on the dialogue-question rewriting task helps convert simplified query data into the corresponding complete query data with no contextual omissions, so that the query data can be better processed by the information retrieval system.
Fig. 1 is a schematic block diagram of a dialog system according to an example. The dialog system of the present example includes a front-end device 110, a dialog server 120, and a database 130. The front-end device 110 may be a user device including a human-machine interaction module. The front-end device 110 generates query data, e.g., simplified query data, through the human-machine interaction module. The front-end device 110 then transmits the simplified query data to the dialog server 120, such as through a communication module (not shown).
Specifically, the front-end device 110 may be a terminal device such as an embedded device, an internet of things device, or a non-embedded device such as a desktop computer or a server. An embedded operating system, such as a real-time operating system, may be installed in the embedded device to perform communication with the dialog server 120 through a network communication model. As an internet of things device, the front-end device 110 may be implemented as a smart device such as a smart appliance including, but not limited to, a smart watch, a smart speaker, a smart air conditioner, a smart doorbell, etc., and the smart device can implement a smart conversation such as voice interaction, computer vision interaction, etc., with a user through a human-computer interaction module, and perform initial processing based on a conversation instruction of the user and send to the conversation server 120 for further processing, or directly forward to the conversation server 120 for further processing.
The dialog server 120 includes a rewrite module, a query module, and a text generation module. The rewrite module generates contextual query data from the simplified query data, and the query module queries a database, such as the conversation database 130, for information based on the contextual query data. The text generation module then generates text from the retrieved information as the query result and returns it to the human-computer interaction module of the front-end device 110.
It should be appreciated that the query module of the present example may be configured for different functions. For example, based on the contextual query data, a Structured Query Language (SQL) statement may be generated for querying (i.e., a database query scenario). The query module may also obtain a corresponding reply text or reply keyword based on the contextual query data, and the text generation module then generates text that better conforms to natural language (i.e., a client dialog scenario) based on the reply keyword or reply text. Further, the reply text or reply keywords may be based on a target language while the contextual query data is based on a source language, so that the generated text is a translation of the contextual query data (i.e., a translated query scenario). Alternatively, the reply text or reply keywords and the contextual query data can each be based on the source language, with the generated text based on the target language, which is another example of a translated query scenario.
Since the contextual query data is generated based on the simplified query data, the reliability of the generated contextual query data depends on the configuration of the rewrite module, and when the rewrite module is implemented by a rewrite model, the performance of the rewrite model is particularly important. In addition, taking simplified query data as input spares the user from supplying excessive information and improves the intelligence of the rewrite module, in which case the reasoning capability of the rewrite module is particularly important. Similarly, because simplified query data is used as input, the number of samples of contextual query data is often small, and it is quite difficult to train a rewrite model with strong reasoning and generalization ability while preserving its intelligence.
The invention adopts the inventive concept of contrastive learning to optimize the loss function as far as possible (for example, minimize the loss function), so that the trained task model has stronger performance, such as prediction accuracy and generalization capability. FIG. 2 is a flow diagram of steps of a comparative learning method according to one embodiment of the invention. The solution of the present embodiment may be applied to any suitable electronic device with data processing capability, including but not limited to: a server, a mobile terminal (such as a mobile phone, a PAD and the like), a PC and the like. For example, in a model training (training) phase, a codec model may be trained based on training samples with a computing device (e.g., a data center) configured with a CPU (an example of a processing unit) + GPU (an example of an acceleration unit) architecture. Computing devices such as data centers may be deployed in cloud servers such as a private cloud or a hybrid cloud. Accordingly, in the inference (inference) phase, the inference operation may also be performed by a computing device configured with a CPU + GPU architecture.
The comparative learning method of the embodiment includes:
s210: and constructing a similar sample pair and a non-similar sample pair at least based on the similar enhancement sample and the labeled sample of the training sample.
It should be understood that the similar enhancement samples may be obtained by performing any enhancement processing on the training samples. For example, coarse-grained enhancement may be performed on the training samples; when the training samples are text samples, a text sample may be translated into another language and then translated back (back-translation). Sample enhancement processing may be performed on each training sample, where each training sample and its corresponding similar enhancement sample may form a similar sample pair, and any other two different samples may form a non-similar sample pair.
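As a rough illustration of such coarse-grained enhancement, the following Python sketch performs back-translation; the translate helper is a hypothetical stand-in (not part of this disclosure) for any machine-translation model or service.

```python
def translate(text: str, src: str, tgt: str) -> str:
    # Hypothetical stand-in for a real machine-translation backend;
    # replace with any MT model or service before use.
    raise NotImplementedError("plug in a machine-translation backend")

def back_translate(text: str, src: str = "zh", pivot: str = "en") -> str:
    """Coarse-grained text augmentation: translate a training sample into
    a pivot language and back, yielding a paraphrase that can serve as a
    similar enhancement sample of the training sample."""
    return translate(translate(text, src=src, tgt=pivot), src=pivot, tgt=src)
```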
S220: and constructing a loss function of the encoder of the task model based on the similarity of the similar sample pairs and the similarity of the non-similar sample pairs, wherein the function value of the loss function is in direct proportion to the similarity of the similar sample pairs and in inverse proportion to the similarity of the non-similar sample pairs.
It should be understood that the task model is a model based on the encoder-decoder structure, e.g., a query rewrite model. The query rewrite model is used to generate contextual query data from the simplified query data, where the query data may be textual data or conforming data.
S230: an encoder of the task model is trained based on the loss function.
It should be understood that the loss function of the encoder corresponds to the encoder. The encoder and decoder of the task model may be trained based on a corresponding generation loss function (e.g., the third loss function described below), and the overall loss function of the task model may be composed of the generation loss function and the loss function of the encoder, e.g., by weighting the two, with the encoder's loss function serving as a supplement and adjustment to the generation loss function. When training the encoder and decoder of the task model, the parameters of the task model may be adjusted based on the total loss function so that the total loss function is optimized.
In the scheme of the embodiment of the invention, at least based on the similar enhancement samples and the labeled samples of the training samples, the similar sample pairs and the non-similar sample pairs are constructed, so that the quality of the enhancement samples formed by the similar sample pairs and the non-similar sample pairs is higher.
In other examples, the training sample and the similar enhancement sample can be obtained by inputting the initial sample into the encoder under a first random inactivation (dropout) probability and a second random inactivation probability respectively, so that the generalization capability of the task model is improved by the random inactivation.
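A minimal PyTorch sketch of this dropout-based augmentation follows; it assumes an encoder whose nn.Dropout layers can be reconfigured between two forward passes, and the probabilities 0.1 and 0.2 are illustrative values, not taken from this disclosure.

```python
import torch
import torch.nn as nn

def encode_with_dropout(encoder: nn.Module, x: torch.Tensor,
                        p1: float = 0.1, p2: float = 0.2):
    """Encode the same initial sample twice, under the first and second
    random inactivation (dropout) probabilities, to obtain the training
    sample and its similar enhancement sample respectively."""
    def set_dropout(p: float) -> None:
        for m in encoder.modules():
            if isinstance(m, nn.Dropout):
                m.p = p

    encoder.train()              # keep dropout active during both passes
    set_dropout(p1)
    train_sample = encoder(x)    # Q': the training sample
    set_dropout(p2)
    similar_sample = encoder(x)  # Q'': the similar enhancement sample
    return train_sample, similar_sample
```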
In other examples, constructing similar sample pairs and non-similar sample pairs based on at least similar enhancement samples and annotation samples of the training samples comprises: constructing a first similar sample pair and a first non-similar sample pair based on the training sample and the similar enhancement sample thereof; and constructing a second similar sample pair and a second non-similar sample pair based on the similar enhancement sample and the labeled sample of the training sample. That is, the similar sample pairs include first similar sample pairs and second similar sample pairs, and the non-similar sample pairs include first non-similar sample pairs and second non-similar sample pairs.
In other examples, constructing a loss function for an encoder of a task model based on similarities of similar sample pairs and similarities of non-similar sample pairs includes: determining a first loss function based on the similarity of the first similar sample pair and the similarity of the first dissimilar sample pair, and determining a second loss function based on the similarity of the second similar sample pair and the similarity of the second dissimilar sample pair; and determining a loss function of the encoder of the task model based on a first loss function and a second loss function, wherein the first loss function is mainly used for describing internal contrast loss, and the second loss function is mainly used for describing external contrast loss, so that the effectiveness of the loss function is improved.
In some examples, constructing the first similar sample pair and the first non-similar sample pair based on the training sample and the similar enhancement sample thereof includes: determining a first training sample and a corresponding first similar enhancement sample as a first similar sample pair; and determining second similar enhancement samples corresponding to the first training sample and the second training sample as a first non-similar sample pair, thereby further improving the quality of the similar sample pair and the non-similar sample pair and being beneficial to constructing a more effective loss function.
Further, based on the training sample and the similar enhancement sample thereof, a similar sample pair and a non-similar sample pair are constructed, and the contrast learning method further comprises the following steps: a first training sample and a second training sample are determined as a first non-similar sample pair.
Specifically, for a text sample, an initial text sample (an example of the initial sample) may be input to the encoder for the first time and the second time, respectively, to obtain a first similar sample pair formed by two similar text samples, where the initial text sample may be a paragraph, a sentence, a clause, or the like, and the internal contrast loss is a loss constructed based on the similarity of the first similar sample pair and the similarity of the first non-similar sample pair.
Alternatively, the first similar sample pair may also be composed of the initial text sample and the similar text sample obtained from its first input to the encoder.
Alternatively, the first similar sample pair may also be composed of the initial text sample and the similar text sample obtained from its second input to the encoder.
Further, the first similar sample pair may also be a weighted combination of the sample pair formed by the two similar text samples, the pair formed by the initial text sample and the similar text sample from its first input to the encoder, and the pair formed by the initial text sample and the similar text sample from its second input to the encoder.
In other examples, constructing a second similar sample pair and a second non-similar sample pair based on the similar enhanced samples and the labeled samples of the training samples comprises: determining a first fusion sample of the first training sample and the first similar enhancement sample and a labeled sample of the first fusion sample as a second similar sample pair; and determining the labeled sample of the first fused sample and the second fused sample as a second non-similar sample pair, thereby further improving the quality of the similar sample pairs and non-similar sample pairs and helping construct a more effective loss function.
In particular, for text samples, the external contrast loss is constructed based on the similarity of the second similar sample pair and the similarity of the second non-similar sample pair, i.e., the pairwise distance in the sample space between the fused sample of similar text samples and the real (ground-truth) rewrite.
Similar sample pairs and non-similar sample pairs, and the loss functions constructed based on the similar sample pairs and the non-similar sample pairs, are described below in conjunction with fig. 3.
FIG. 3 is a schematic diagram of a comparative learning process of the embodiment of FIG. 2. The task model in this example may be a query rewrite model, and is a model based on the encoder-decoder structure. The task model of this example includes a word embedding layer 310 (word embedding), an encoder 320 (encoder), and a decoder 330 (decoder). This example focuses on training the encoder 320, i.e., training the encoder based on the training samples and their enhancement samples.
Specifically, a first initial sample output by the word embedding layer 310 is input into the encoder 320. With the encoder 320 under the first random inactivation probability, a first training sample is output; under the second random inactivation probability, a first similar enhancement sample is output. Accordingly, the second initial sample is input into the encoder 320, resulting in a second training sample and a second similar enhancement sample.
The first training sample and the first similar enhancement sample form a first similar sample pair, the first training sample and the second training sample form a first non-similar sample pair, and the first training sample and the second similar enhancement sample also form a first non-similar sample pair.
For a batch (batch) with N instances (instances) and an equally sized set of enhanced instances, the enhanced data corresponding to each data sample is taken as its positive sample, while the remaining 2(N-1) data records are taken as negative samples. The contrastive loss for the i-th sample in a batch can be expressed as:

$$\ell_i = -\log \frac{\exp(\mathrm{sim}(x_i, x_{j(i)})/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(x_i, x_k)/\tau)}$$

where $x_{j(i)}$ is the positive sample of $x_i$, $\mathrm{sim}(\cdot,\cdot)$ is a similarity function and $\tau$ is a temperature parameter. N is the batch size and X is a word embedding matrix; $X_{2N}$ records the positive example pairs one by one, meaning that each odd-even row pair of $X_{2N}$ constitutes a positive example pair. The loss function over all combined samples formed by $X_{2N}$ is thus:

$$L_{cl}(X_{2N}) = \frac{1}{2N}\sum_{i=1}^{2N} \ell_i$$
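A PyTorch sketch of this batch contrastive loss is given below; cosine similarity and the temperature τ are standard choices assumed here rather than mandated by this disclosure, and rows 2k and 2k+1 of the input are taken to form the positive pairs.

```python
import torch
import torch.nn.functional as F

def batch_contrastive_loss(x2n: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """x2n: a (2N, M) matrix whose odd-even row pairs are positive example
    pairs; all other rows in the batch act as negatives. Computes
    L_cl(X_2N) = (1/2N) * sum_i l_i via the mean reduction below."""
    x = F.normalize(x2n, dim=1)
    sim = x @ x.t() / tau                  # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))      # exclude self-similarity terms
    # row i's positive partner: 0<->1, 2<->3, ... (index i XOR 1)
    pos = torch.arange(x.size(0), device=x.device) ^ 1
    return F.cross_entropy(sim, pos)       # mean of -log softmax at positives
```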
specifically, the first function value of the first loss function is proportional to the similarity of the first similar sample pair, and the first function value is inversely proportional to the similarity of the first non-similar sample pair. The second function value of the second loss function is in direct proportion to the similarity of the second similar sample pair, and the second function value is in inverse proportion to the similarity of the second non-similar sample pair, so that the construction of a more effective loss function is further improved.
In other examples, the comparative learning method further comprises: based on the third loss function, the task model is trained, thereby improving the overall training effect of the task model including the encoder and the decoder.
In one example, the loss function for contrast learning (encoder) is the sum of a first loss function and a second loss function:
$$L_C = L_{icl} + L_{ecl}$$

wherein the first loss function is:

$$L_{icl} = L_{cl}(\mathrm{Combine}[Q'; Q''])$$

The first loss function is constructed from the first similar sample pairs and the first non-similar sample pairs: the first training sample and the first similar enhancement sample form a first similar sample pair, the first training sample and the second training sample form a first non-similar sample pair, and the first training sample and the second similar enhancement sample form a first non-similar sample pair. Here Q' and Q'' are two sample matrices (e.g., query embedding matrices) obtained from the same input samples, and the first similar sample pairs and first non-similar sample pairs are constructed from Q' and Q''. The Combine function connects two N×M embedding matrices one-to-one into a single 2N×M matrix. $L_{cl}$ is the batch contrastive loss function.
The second loss function is:

$$L_{ecl} = L_{cl}\left(\mathrm{Combine}\left[\frac{Q' + Q''}{2};\ \hat{Q}\right]\right)$$

where $\hat{Q}$ is the labeled sample matrix corresponding to the sample matrices Q' and Q'', and the Combine function here first averages the values of Q' and Q'' (an example of a weighting process) and then combines the result with $\hat{Q}$. Correspondingly, a first fused sample of the first training sample and the first similar enhancement sample, together with the labeled sample of the first fused sample, is determined as a second similar sample pair; and the labeled sample of the first fused sample and the second fused sample are determined as the second non-similar sample pair. That is, the second similar sample pairs and second non-similar sample pairs are constructed from Q', Q'' and $\hat{Q}$.
When both the encoder and the decoder are trained, the total loss function is:

$$L_{all} = L_G + w \cdot L_C$$

wherein $L_G$ is the generation loss of the task model (encoder-decoder), $L_C$ is the loss function for contrast learning, and $w$ is a loss weight. When the task model is trained based on $L_{all}$, the parameters of the task model are adjusted so that $L_{all}$ is optimized.
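Reusing batch_contrastive_loss from the sketch above, one way to assemble the combined objective is shown below; it assumes the conventional setup in which $L_{all}$ is driven toward its optimum by gradient descent, and the weight w = 0.1 is an illustrative value, not one given in this disclosure.

```python
import torch

def combine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Connect two (N, M) matrices one-to-one into a (2N, M) matrix whose
    odd-even row pairs are positive example pairs."""
    return torch.stack([a, b], dim=1).reshape(-1, a.size(1))

def total_loss(q1: torch.Tensor, q2: torch.Tensor, q_label: torch.Tensor,
               l_gen: torch.Tensor, w: float = 0.1) -> torch.Tensor:
    """L_all = L_G + w * L_C, with L_C = L_icl + L_ecl."""
    l_icl = batch_contrastive_loss(combine(q1, q2))          # internal contrast loss
    fused = (q1 + q2) / 2                                    # averaged fusion of Q', Q''
    l_ecl = batch_contrastive_loss(combine(fused, q_label))  # external contrast loss
    return l_gen + w * (l_icl + l_ecl)
```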
FIG. 4 is a flow chart of steps of a query method according to another embodiment of the invention. The solution of the present embodiment may be applied to any suitable electronic device with data processing capability, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc. For example, in a model training (training) phase, a codec model may be trained based on training samples with a computing device (e.g., a data center) configured with a CPU (example of a processing unit) + GPU (example of an acceleration unit) architecture. Computing devices such as data centers may be deployed in cloud servers such as a private cloud, or a hybrid cloud. Accordingly, in the inference (inference) phase, the inference operation may also be performed by using a computing device configured with a CPU (example of processing unit) + GPU (example of acceleration unit) architecture.
The query method of the embodiment comprises the following steps:
s410: simplified query data is obtained.
S420: the simplified query data is input into a query rewrite model to obtain contextual query data.
S430: and querying based on the context query data to obtain a query result.
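Strung together, steps S410-S430 look like the following sketch; rewrite_model and retrieval_backend are hypothetical stand-ins for the trained query rewrite model and the information retrieval system, whose interfaces this disclosure does not fix.

```python
def run_query(simplified_query: str, dialog_history: list[str],
              rewrite_model, retrieval_backend) -> str:
    """Pipeline sketch of S410-S430; rewrite_model.rewrite and
    retrieval_backend.search are hypothetical interfaces."""
    # S410: the simplified query data is obtained (passed in with history).
    # S420: rewrite it into context-complete query data.
    context_query = rewrite_model.rewrite(dialog_history, simplified_query)
    # S430: query based on the context query data to obtain the result.
    return retrieval_backend.search(context_query)
```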
In the scheme of the embodiment of the invention, at least based on the similar enhancement samples and the labeled samples of the training samples, the similar sample pairs and the non-similar sample pairs are constructed, so that the quality of the enhancement samples formed by the similar sample pairs and the non-similar sample pairs is higher.
Fig. 5 is a flowchart illustrating steps of a human-machine conversation method according to another embodiment of the present invention.
The man-machine conversation method of the embodiment comprises the following steps:
s510: a dialog request is obtained.
S520: and analyzing based on the conversation request to obtain simplified query data.
S530: And querying based on the simplified query data by using the query method to obtain a query result.
S540: based on the query results, a dialog reply to the dialog request is generated.
It should be understood that the query method may be the query method of the embodiment of fig. 4.
It should also be understood that in the example of fig. 1, parsing based on the dialog request may be performed in the human-computer interaction module, as well as in the dialog server 120.
According to the man-machine conversation method, the query rewrite model obtained by training with the contrast learning method improves the accuracy of data query and thereby improves the efficiency of man-machine conversation.
Referring to fig. 6, a schematic structural diagram of an electronic device according to another embodiment of the present invention is shown, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 6, the electronic device may include: a processor (processor) 602, a communication interface (communication interface) 604, a memory (memory) 606 in which a program 610 is stored, and a communication bus 608.
The processor, the communication interface, and the memory communicate with each other via a communication bus.
A communication interface for communicating with other electronic devices or servers.
And the processor is used for executing the program, and particularly can execute the relevant steps in the method embodiment.
In particular, the program may include program code comprising computer operating instructions.
The processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The intelligent device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And the memory is used for storing programs. The memory may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The program may specifically be adapted to cause a processor to perform the following operations: constructing a similar sample pair and a non-similar sample pair based on the training sample and the similar enhancement sample thereof; constructing a loss function of an encoder of a task model based on the similarity of the similar sample pair and the similarity of the dissimilar sample pair, the function value of the loss function being proportional to the similarity of the similar sample pair and inversely proportional to the similarity of the dissimilar sample pair; an encoder of the task model is trained based on the loss function.
Alternatively, the program may be specifically adapted to cause a processor to perform the following operations: acquiring simplified query data; inputting the simplified query data into a query rewrite model to obtain context query data; and querying based on the context query data to obtain a query result.
Alternatively, the program may be specifically adapted to cause a processor to perform the following operations: acquiring a conversation request; analyzing based on the dialogue request to obtain simplified query data; querying based on the simplified data by utilizing a query method to obtain a query result; generating a dialog reply to the dialog request based on the query result.
In addition, for specific implementation of each step in the program, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing method embodiments, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described methods according to the embodiments of the present invention may be implemented in hardware or firmware, or as software or computer code that may be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network to be stored in a local recording medium, so that the methods described herein may be processed by such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods described herein. Further, when a general-purpose computer accesses code for implementing the methods illustrated herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing the methods illustrated herein.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present invention.
The above embodiments are only for illustrating the embodiments of the present invention and not for limiting the embodiments of the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so that all equivalent technical solutions also belong to the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention should be defined by the claims.

Claims (12)

1. A contrast learning method, comprising:
constructing a similar sample pair and a non-similar sample pair at least based on a similar enhancement sample and a labeled sample of the training sample;
constructing a loss function of an encoder of a task model based on the similarity of the similar sample pair and the similarity of the dissimilar sample pair, a function value of the loss function being proportional to the similarity of the similar sample pair and inversely proportional to the similarity of the dissimilar sample pair;
an encoder of the task model is trained based on the loss function.
2. The method of claim 1, wherein the constructing of the similar sample pair and the non-similar sample pair based on at least the similar enhanced sample and the labeled sample of the training samples comprises:
constructing a first similar sample pair and a first non-similar sample pair based on the training sample and the similar enhancement sample thereof;
and constructing a second similar sample pair and a second non-similar sample pair based on the similar enhancement sample and the labeled sample of the training sample.
3. The method of claim 2, wherein constructing a loss function of an encoder of a task model based on the similarities of the similar sample pairs and the similarities of the dissimilar sample pairs comprises:
determining a first loss function based on the similarity of the first similar sample pair and the similarity of the first dissimilar sample pair, and determining a second loss function based on the similarity of the second similar sample pair and the similarity of the second dissimilar sample pair;
determining a loss function of an encoder of the task model based on the first loss function and the second loss function.
4. The method of claim 2, wherein constructing a first similar sample pair and a first non-similar sample pair based on the training samples and their similar enhancement samples comprises:
determining a first training sample and a corresponding first similar enhancement sample as the first similar sample pair;
and determining second similar enhancement samples corresponding to the first training sample and the second training sample as the first non-similar sample pair.
5. The method of claim 4, wherein the constructing similar sample pairs and non-similar sample pairs based on the training samples and the similar enhancement samples further comprises:
determining the first training sample and the second training sample as the first non-similar sample pair.
6. The method of claim 2, wherein the constructing a second similar sample pair and a second non-similar sample pair based on the similar enhancement samples and the labeled samples of the training samples comprises:
determining a first fused sample of the first training sample and the first similar enhancement sample, and an annotated sample of the first fused sample as the second similar sample pair;
and determining the labeled sample of the first fused sample and the second fused sample as the second non-similar sample pair.
7. The method of claim 1, wherein the method further comprises:
and inputting the initial samples into an encoder with a first random inactivation probability and a second random inactivation probability to obtain the training samples and the similar enhancement samples respectively.
8. The method of claim 7, wherein a first function value of the first loss function is proportional to a similarity of the first similar sample pair, the first function value being inversely proportional to a similarity of the first non-similar sample pair;
wherein a second function value of the second loss function is proportional to a similarity of the second similar sample pair and the second function value is inversely proportional to a similarity of the second non-similar sample pair.
9. A method of querying, comprising:
acquiring simplified query data;
inputting the simplified query data into a query rewrite model to obtain context query data, the query rewrite model being obtained by training according to the method of any of claims 1-8;
and querying based on the context query data to obtain a query result.
10. A human-machine dialog method, comprising:
acquiring a conversation request;
analyzing based on the dialogue request to obtain simplified query data;
querying based on the simplified query data by using a query method to obtain a query result;
generating a dialog reply to the dialog request based on the query result.
11. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the memory is used for storing at least one executable instruction which causes the processor to execute the operation corresponding to the method as claimed in any one of claims 1-10.
12. A computer storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of any one of claims 1-10.
CN202210782581.3A 2022-07-05 2022-07-05 Contrast learning, inquiry and man-machine interaction method, electronic device and storage medium Pending CN115329055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210782581.3A CN115329055A (en) 2022-07-05 2022-07-05 Contrast learning, inquiry and man-machine interaction method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210782581.3A CN115329055A (en) 2022-07-05 2022-07-05 Contrast learning, inquiry and man-machine interaction method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115329055A true CN115329055A (en) 2022-11-11

Family

ID=83917242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210782581.3A Pending CN115329055A (en) 2022-07-05 2022-07-05 Contrast learning, inquiry and man-machine interaction method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115329055A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210295091A1 (en) * 2020-03-19 2021-09-23 Salesforce.Com, Inc. Unsupervised representation learning with contrastive prototypes
WO2022037256A1 (en) * 2020-08-21 2022-02-24 腾讯科技(深圳)有限公司 Text sentence processing method and device, computer device and storage medium
KR20210151017A (en) * 2020-11-24 2021-12-13 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 Method and apparatus for training search model, and method and apparatus for searching for target object
CN114528383A (en) * 2021-12-29 2022-05-24 阿里云计算有限公司 Pre-training language model processing method based on comparative learning and intelligent question-answering system
CN114565799A (en) * 2022-04-27 2022-05-31 南京邮电大学 Comparison self-supervision learning method based on multi-network framework
CN114579606A (en) * 2022-05-05 2022-06-03 阿里巴巴达摩院(杭州)科技有限公司 Pre-training model data processing method, electronic device and computer storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tianyu Gao, Xingcheng Yao, Danqi Chen: "SimCSE: Simple Contrastive Learning of Sentence Embeddings", page 6894, retrieved from the Internet <URL:arXiv> *
Zhao Yuqing; Xiang Yang: "Dialog generation with deep reinforcement learning based on hierarchical encoding" (基于分层编码的深度增强学习对话生成), Computer Applications (计算机应用), no. 10, 10 October 2017 (2017-10-10) *

Similar Documents

Publication Publication Date Title
CN113553414B (en) Intelligent dialogue method, device, electronic equipment and storage medium
US10437929B2 (en) Method and system for processing an input query using a forward and a backward neural network specific to unigrams
CN114548110A (en) Semantic understanding method and device, electronic equipment and storage medium
WO2023160472A1 (en) Model training method and related device
CN111966782B (en) Multi-round dialogue retrieval method and device, storage medium and electronic equipment
CN113705218B (en) Event element gridding extraction method based on character embedding, storage medium and electronic device
CN117312535B (en) Method, device, equipment and medium for processing problem data based on artificial intelligence
CN113782007B (en) Voice recognition method, device, voice recognition equipment and storage medium
WO2022095354A1 (en) Bert-based text classification method and apparatus, computer device, and storage medium
CN117421398A (en) Man-machine interaction method, device, equipment and storage medium
CN112507090B (en) Method, apparatus, device and storage medium for outputting information
CN109635197A (en) Searching method, device, electronic equipment and storage medium
CN114020886A (en) Voice intent recognition method, device, device and storage medium
CN117315334B (en) Image classification method, model training method, device, equipment and medium
CN117235205A (en) Named entity recognition method, named entity recognition device and computer readable storage medium
CN111126084B (en) Data processing method, device, electronic equipment and storage medium
CN114116975A (en) Multi-intention identification method and system
CN119248916A (en) A method, system, storage medium and program product for adaptive retrieval enhanced large language model construction and question answering
CN113569094A (en) Video recommendation method and device, electronic equipment and storage medium
JP2023002690A (en) Semantics recognition method, apparatus, electronic device, and storage medium
CN114842246B (en) Social media pressure type detection method and device
CN117633162A (en) Machine learning task template generation method, training method, fine adjustment method and equipment
CN113343692B (en) Search intention recognition method, model training method, device, medium and equipment
CN115186080A (en) Intelligent question-answering data processing method, system, computer equipment and medium
CN112559715B (en) Attitude recognition methods, devices, equipment and storage media

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 311121 room 801, building 2, No. 2699, yuhangtang Road, Cangqian street, Yuhang District, Hangzhou, Zhejiang Province

Applicant after: Zhejiang Aikesi Elf Artificial Intelligence Technology Co.,Ltd.

Address before: 311121 room 801, building 2, No. 2699, yuhangtang Road, Cangqian street, Yuhang District, Hangzhou, Zhejiang Province

Applicant before: Zhejiang Maojing Artificial Intelligence Technology Co.,Ltd.