
CN114186202A - An Unreliable User Tracking and Revocation Method in Privacy-Preserving Federated Learning - Google Patents


Info

Publication number
CN114186202A
CN114186202A (Application No. CN202111543609.XA)
Authority
CN
China
Prior art keywords
user
model parameters
unreliable
cloud server
trusted agent
Prior art date
Legal status
Granted
Application number
CN202111543609.XA
Other languages
Chinese (zh)
Other versions
CN114186202B (en)
Inventor
马钰婷 (Ma Yuting)
刘小微 (Liu Xiaowei)
姚远志 (Yao Yuanzhi)
俞能海 (Yu Nenghai)
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202111543609.XA
Publication of CN114186202A
Application granted
Publication of CN114186202B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
    • G06F21/16Program or content traceability, e.g. by watermarking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2141Access rights, e.g. capability lists, access control lists, access tables, access matrices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Technology Law (AREA)
  • Storage Device Security (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an unreliable user tracking and revocation method in privacy-preserving federated learning. The model parameters uploaded by users are screened through a model-parameter verification mechanism run by a trusted agent, thereby tracking unreliable users. In addition, key destruction is used to revoke the access rights of unreliable users, realizing dynamic user updates in privacy-preserving federated learning. The method can effectively improve the security and practicality of federated learning.

Description

Unreliable user tracking and revocation method in privacy-preserving federated learning
Technical Field
The invention relates to the technical field of privacy protection, and in particular to a method for tracking and revoking unreliable users in privacy-preserving federated learning.
Background
In the big-data era, deep learning is widely applied and shows excellent performance in many fields such as speech recognition, computer vision, and medical health. Federated learning, as an emerging machine-learning framework, performs parameter aggregation on the basis of distributed training of local models, forming a deep neural network model with stronger performance. Federated learning requires a large amount of data to train a deep neural network model with high accuracy. However, private data containing users' lifestyle habits, contact details, and business secrets face a potential risk of leakage during federated learning.
In a privacy-preserving federated learning framework based on homomorphic encryption, homomorphic encryption is used to encrypt the model parameters exchanged between users and the cloud server, which effectively prevents users' local private data from being stolen by the cloud server. However, the cloud server cannot directly assess the quality of the model parameters uploaded by users, and therefore low-quality model parameters uploaded by unreliable users degrade the accuracy of the global model.
Disclosure of Invention
The invention aims to provide a method for tracking and revoking unreliable users in privacy-preserving federated learning, which can effectively track unreliable users without leaking users' private information and revoke the unreliable users' access rights to realize dynamic user updates.
The purpose of the invention is realized by the following technical scheme:
A method for unreliable user tracking and revocation in privacy-preserving federated learning, comprising:
the user and the cloud server update the global model parameters under a privacy-preserving federated learning framework based on homomorphic encryption, and the cloud server transmits the updated global model parameters to a trusted agent;
the trusted agent verifies the current round's updated global model parameters using its own current-round model parameters; if verification fails, an unreliable user exists; if verification passes, global model parameter updating continues;
when an unreliable user exists, the trusted agent and the cloud server verify the local model parameters updated by the relevant users in the current round based on the users' ID values, thereby tracking the unreliable user;
and the trusted agent destroys the unreliable user's key to revoke its access rights and dynamically update the user set.
According to the technical scheme provided by the invention, the model parameters uploaded by users are screened through the trusted agent's model-parameter verification mechanism, thereby tracking unreliable users, and key destruction is used to revoke the access rights of unreliable users, realizing dynamic user updates in privacy-preserving federated learning. The method can effectively improve the security and practicality of federated learning.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the unreliable user tracking and revocation method in privacy-preserving federated learning according to an embodiment of the present invention;
Fig. 2 is a timing diagram of model parameter initialization in the privacy-preserving federated learning framework based on homomorphic encryption according to an embodiment of the present invention;
Fig. 3 is a graph of global model accuracy versus the number of training rounds when an unreliable user exists according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the principle by which the trusted agent and the cloud server track an unreliable user according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms that may be used herein are first described as follows:
the terms "comprising," "including," "containing," "having," or other similar terms of meaning should be construed as non-exclusive inclusions. For example: including a feature (e.g., material, component, ingredient, carrier, formulation, material, dimension, part, component, mechanism, device, process, procedure, method, reaction condition, processing condition, parameter, algorithm, signal, data, product, or article of manufacture), is to be construed as including not only the particular feature explicitly listed but also other features not explicitly listed as such which are known in the art.
The following describes in detail an unreliable user tracking and revocation method in privacy protection federal learning according to the present invention. Details which are not described in detail in the embodiments of the invention belong to the prior art which is known to the person skilled in the art. Those not specifically mentioned in the examples of the present invention were carried out according to the conventional conditions in the art or conditions suggested by the manufacturer.
As shown in fig. 1, the method for tracking and revoking unreliable users in privacy-preserving federated learning mainly includes the following steps:
Step 1: the user and the cloud server update the global model parameters under a privacy-preserving federated learning framework based on homomorphic encryption, and the cloud server transmits the updated global model parameters to a trusted agent.
Step 2: the trusted agent verifies the current round's updated global model parameters using its own current-round model parameters; if verification fails, an unreliable user exists; if verification passes, global model parameter updating continues.
Step 3: when an unreliable user exists, the trusted agent and the cloud server verify the local model parameters updated by the relevant users in the current round based on the users' ID values, thereby tracking the unreliable user.
Step 4: the trusted agent destroys the unreliable user's key to revoke its access rights and dynamically update the user set.
According to the scheme of the embodiment of the present invention, the model parameters uploaded by users are screened through the trusted agent's model-parameter verification mechanism, thereby tracking unreliable users, and key destruction is used to revoke the access rights of unreliable users, realizing dynamic user updates in privacy-preserving federated learning. The method can effectively improve the security and practicality of federated learning.
In order to show the technical solutions and technical effects provided by the present invention more clearly, the method for tracking and revoking unreliable users in privacy-preserving federated learning provided by the embodiments of the present invention is described in detail below with specific embodiments.
First, system parameters are initialized.
Fig. 2 shows a timing diagram of model parameter initialization in the privacy-preserving federated learning framework based on homomorphic encryption.
The main steps of system parameter initialization include:
1) The trusted agent generates the homomorphic encryption key pair, consisting of the encryption key Key_public and the decryption key Key_private, trains the global model with its local dataset, and initializes the global model parameters θ; the initialized global model parameters are denoted θ_0.
2) The initialized global model parameters θ_0 are encrypted with the encryption key Key_public:
E(θ_0) = Enc(Key_public, θ_0)
where Enc is the encryption algorithm of the homomorphic encryption scheme.
3) When the trusted agent receives a user's model-update request, it establishes an independent secure communication channel between the user and the cloud server, then sends the encryption key Key_public and the decryption key Key_private to the user, sends the encrypted initialized global model parameters E(θ_0) to the cloud server, and informs the cloud server of the user's ID value. The trusted agent's global model and each user's local model are the same deep neural network model.
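As a minimal sketch of this initialization flow: the patent does not name a concrete homomorphic scheme, so the example below assumes the additively homomorphic Paillier cryptosystem via the python-paillier (phe) library; the key length, variable names, and toy parameter vector are illustrative assumptions, not part of the patent.

    from phe import paillier

    # 1) The trusted agent generates the homomorphic key pair
    #    (Key_public, Key_private); the 2048-bit modulus is an assumed choice.
    key_public, key_private = paillier.generate_paillier_keypair(n_length=2048)

    # The agent trains on its local dataset to obtain theta_0
    # (a toy parameter vector stands in for real model weights here).
    theta0 = [0.12, -0.48, 0.07]

    # 2) E(theta_0) = Enc(Key_public, theta_0), applied element-wise.
    enc_theta0 = [key_public.encrypt(w) for w in theta0]

    # 3) On a user's model-update request, the agent would hand the key pair
    #    to the user over a secure channel, send enc_theta0 to the cloud
    #    server, and register the user's ID with the server.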
Second, the user and the cloud server update the global model parameters under the privacy-preserving federated learning framework based on homomorphic encryption.
Users {Client_i | i = 1, 2, ..., N} and the cloud server update the global model parameters {θ_k | k = 0, 1, ..., m} under the privacy-preserving federated learning framework based on homomorphic encryption, where i indexes user Client_i, k denotes the round number, and m is the maximum number of update rounds. A preferred embodiment of this stage is as follows:
1. The user obtains the encrypted global model parameters of the previous round's update from the cloud server, decrypts them to update the local model parameters, performs model training, encrypts the current round's updated model parameters, and transmits them to the cloud server. Specifically, the method comprises the following steps:
1) The user obtains the encrypted global model parameters E(θ_k) of the previous round's update and decrypts them with the decryption key Key_private provided by the trusted agent:
θ_k = Dec(Key_private, E(θ_k))
where Dec is the decryption algorithm of the homomorphic encryption scheme and θ_k denotes the global model parameters of the previous round's update.
2) The user updates the local model parameters with the previous round's global model parameters θ_k and trains the local model on its local dataset to obtain the current round's local model gradient g_i^{k+1}. The gradient is negated, multiplied by the learning rate α, encrypted with the encryption key Key_public provided by the trusted agent, and transmitted to the cloud server over the secure communication channel; the encryption process is expressed as:
E(−α·g_i^{k+1}) = Enc(Key_public, −α·g_i^{k+1})
If the current round is the first round (i.e., k+1 = 1), the previous round's global model parameters are the global model parameters θ_0 initialized by the trusted agent.
2. The cloud server aggregates the encrypted current-round model parameters transmitted by all users to obtain the encrypted global model parameters of the current round's update. The aggregation process is expressed as:
E(θ_{k+1}) = E(θ_k) + (1/N) · Σ_{i=1}^{N} E(−α·g_i^{k+1})
where the additions are performed homomorphically on ciphertexts, N denotes the number of users participating in the model update, and E(θ_{k+1}) denotes the encrypted global model parameters of the current round's update.
3. The cloud server sends an aggregation-completion message to all users and sends the aggregated encrypted global model parameters E(θ_{k+1}) to the trusted agent. After receiving the aggregation-completion message, the user downloads the updated encrypted global model parameters E(θ_{k+1}) over the secure communication channel and begins the next round of global model training and parameter updating, i.e., steps 1–3 of this stage are repeated until the global model converges (e.g., the number of update rounds reaches the maximum m) or a suspend-training message from the trusted agent is received.
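A minimal sketch of one update-and-aggregate round under the same Paillier assumption as above; local_grad_fn is a hypothetical stand-in for the user's local training step, and the element-wise ciphertext arithmetic mirrors the formulas above (the server only ever adds and rescales ciphertexts, never seeing plaintext parameters).

    import numpy as np
    from phe import paillier

    key_public, key_private = paillier.generate_paillier_keypair()
    alpha = 0.001  # learning rate, one of the values used in the Fig. 3 tests

    def user_update(enc_theta_prev, local_grad_fn):
        # theta_k = Dec(Key_private, E(theta_k)), element-wise.
        theta_k = np.array([key_private.decrypt(c) for c in enc_theta_prev])
        # Local training yields gradient g; the user uploads E(-alpha * g).
        g = local_grad_fn(theta_k)
        return [key_public.encrypt(float(v)) for v in (-alpha * g)]

    def cloud_aggregate(enc_theta_prev, enc_user_updates):
        # E(theta_{k+1}) = E(theta_k) + (1/N) * sum_i E(-alpha * g_i),
        # evaluated entirely on ciphertexts via homomorphic addition.
        n = len(enc_user_updates)
        return [c_prev + sum(u[j] for u in enc_user_updates) * (1.0 / n)
                for j, c_prev in enumerate(enc_theta_prev)]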
Third, the trusted agent verifies the current round's updated global model parameters using its own current-round model parameters, so as to judge whether an unreliable user exists.
Unreliable users Attacker = {Client_j | j ∈ {1, 2, ..., N}} upload low-quality model parameters E(−α·g̃_j^{k+1}) to the cloud server, and the cloud server cannot judge the quality of the uploaded model parameters, so the accuracy of the global model is degraded. The trusted agent therefore needs to detect low-quality model parameters.
To illustrate the effect of an unreliable user on global model accuracy, tests were performed under the Windows 10 operating system with the PyTorch deep learning framework. The dataset used for the tests was the well-known handwritten digit recognition dataset MNIST, and the learning rate for model training is denoted α. Fig. 3 plots global model accuracy versus the number of training rounds in the presence of an unreliable user, with α set to 0.001 on the left and 0.005 on the right, and the batch size set to 32, 64, 128, and 200, respectively. As can be seen from Fig. 3, as the number of training rounds increases, the unreliable user's influence on the global model accuracy grows: the accuracy tends to decrease and may even fail to converge. It is therefore necessary to track unreliable users and revoke their access rights, so as to guarantee the accuracy of the global model and improve the security and practicality of federated learning.
In the embodiment of the present invention, the trusted agent, like the other users, trains a local model and obtains updated model parameters; the trusted agent does not interact with the cloud server during its own training. After each round of training, the trusted agent tests on its local test set and records its per-round accuracy acc_agent^{k+1}.
Therefore, in the embodiment of the present invention, after each round of training the trusted agent verifies the current round's updated global model parameters using its own current-round model parameters. A preferred embodiment of this stage is as follows:
1. After each round of training, the trusted agent tests on its local test set with both its own current-round model parameters and the current round's updated global model parameters, obtaining two accuracies.
Referring to the previous description, the current round is round k+1. The trusted agent receives the encrypted updated global model parameters E(θ_{k+1}) sent by the cloud server and decrypts them to obtain the current round's updated global model parameters θ_{k+1}. Testing with θ_{k+1} on the local test set yields the accuracy acc_global^{k+1}; meanwhile, testing with the trusted agent's own current-round model parameters yields the accuracy acc_agent^{k+1}.
2. The gap between the two accuracies is compared to judge whether an unreliable user currently exists.
If the accuracy acc_global^{k+1} is significantly less than the accuracy acc_agent^{k+1} (i.e., the gap exceeds a threshold, e.g., 10%), verification fails: an unreliable user is considered to exist, and a suspend-parameter-update message is sent to the cloud server and the users, thereby detecting the low-quality model parameters. If acc_global^{k+1} is not significantly less than acc_agent^{k+1} (i.e., the gap does not exceed the threshold), verification passes: no unreliable user is considered to exist at present, and the parameter updating described in the second stage continues.
It should be noted that the present invention does not limit the specific size of the threshold; in practical applications it may be determined according to actual conditions, experience, or repeated experiments.
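A minimal sketch of this accuracy check; the function name is illustrative, and the 10% threshold is the example value from the text.

    def verify_round(acc_agent, acc_global, threshold=0.10):
        # Verification fails (an unreliable user is suspected) when the
        # aggregated global model trails the trusted agent's own model by
        # more than the threshold on the agent's local test set.
        return (acc_agent - acc_global) <= threshold

    # e.g. verify_round(0.97, 0.82) -> False: the trusted agent sends a
    # suspend-parameter-update message and unreliable-user tracking begins.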
Fourth, the trusted agent and the cloud server track the unreliable user.
If the manner introduced in the third stage determines that an unreliable user currently exists, the trusted agent and the cloud server verify the local model parameters updated by the relevant users in the current round based on the users' ID values, thereby tracking the unreliable user. The principle is as follows: the cloud server establishes an unreliable-user range from the range of all current users; the current-round model parameters of the users within that range are aggregated based on the users' ID values; the trusted agent tests on its local dataset with its own updated model parameters and with the aggregated parameters, obtaining two accuracies; from the gap between them it judges whether the unreliable user lies within the current unreliable-user range; and the cloud server then updates the range according to the trusted agent's judgment, until the unreliable-user range contains only the unreliable user.
As shown in fig. 4, the preferred embodiment at this stage is as follows:
In step S1, the cloud server establishes the unreliable-user range from the number N = 2^u of users participating in the model update (u a positive integer), using three variables, namely a start variable low, a middle variable mid, and an end variable high:
low = 1, mid = N/2, high = N
In step S2, the cloud server aggregates the current-round updated model parameters of the users whose ID values lie in the range [low, mid], expressed as:
E(θ'_{k+1}) = E(θ_k) + (1 / (mid − low + 1)) · Σ_{i=low}^{mid} E(−α·g_i^{k+1})
where E(θ_k) denotes the encrypted global model parameters of the previous round's update, E(−α·g_i^{k+1}) denotes the encrypted current-round model parameters uploaded by user i, i denotes the user's ID value, and E(θ'_{k+1}) denotes the encrypted aggregate of the current-round updates of the users in the range [low, mid].
In step S3, the trusted agent decrypts the E(θ'_{k+1}) provided by the cloud server and tests on its local test set with its own current-round model parameters and with the decrypted aggregated parameters, obtaining two accuracies acc_agent^{k+1} and acc'_{k+1}. If acc'_{k+1} is less than acc_agent^{k+1} by more than the threshold, the unreliable user is considered to lie within the range [low, mid], and the boolean variable msg = true is sent to the cloud server; otherwise, the unreliable user is not in the range [low, mid], and msg = false is sent to the cloud server. The verification principle in this step is the same as that of the third stage described above.
In step S4, the cloud server adjusts the unreliable-user range according to the boolean variable msg ∈ {true, false} sent by the trusted agent, including:
resetting the range variables: if msg = false, the cloud server sets low = mid + 1; if msg = true, the cloud server sets high = mid;
and resetting the middle variable: mid = ⌊(low + high) / 2⌋.
The steps S2, S3 and S4 are then re-executed with the updated unreliable-user range: the relevant users' current-round model parameters are re-aggregated in step S2, verified in step S3 to judge whether the unreliable user Attacker is among the aggregated users, and the range variables are updated in step S4 according to the judgment, until the variable low equals high, at which point low is the unreliable user's ID value.
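The S1–S4 loop is a binary search over user IDs, so a single unreliable user among N = 2^u users is located in u rounds of subset aggregation. A minimal sketch under that single-attacker assumption, with the hypothetical subset_is_bad standing in for steps S2–S3 (aggregate the updates of the users in [low, mid] and let the trusted agent return msg):

    def track_unreliable_user(n_users, subset_is_bad):
        # Binary search over user IDs 1..N; subset_is_bad(low, mid) returns
        # True (msg = true) when the aggregate over [low, mid] fails the
        # trusted agent's accuracy check.
        low, high = 1, n_users
        while low < high:
            mid = (low + high) // 2
            if subset_is_bad(low, mid):   # msg = true:  attacker in [low, mid]
                high = mid
            else:                         # msg = false: attacker in [mid+1, high]
                low = mid + 1
        return low  # low == high == the unreliable user's ID

    # e.g. with 8 users where user 6 is unreliable:
    # track_unreliable_user(8, lambda lo, hi: lo <= 6 <= hi)  # returns 6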
Fifth, the unreliable user's key is destroyed to revoke its access rights and dynamically update the user set.
The preferred embodiment of this stage is as follows:
1. The trusted agent regenerates the homomorphic encryption key pair (the encryption key and the decryption key) and sends it to all users other than the unreliable user Client_j, and simultaneously revokes the secure communication channel between the unreliable user and the cloud server, thereby revoking the unreliable user's access rights;
2. The trusted agent sends a resume-parameter-update message to the cloud server and to all users other than the unreliable user, ending the tracking of the unreliable user and restarting model parameter updating.
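A minimal sketch of this rekeying step under the same Paillier assumption; deliver_key is a hypothetical callback standing in for each user's secure channel.

    from phe import paillier

    def revoke_and_rekey(deliver_key, user_ids, unreliable_id):
        # Regenerate the homomorphic key pair and distribute it to every
        # user except the unreliable one, whose old keys thereby become
        # useless for all future rounds (key destruction = revocation).
        new_public, new_private = paillier.generate_paillier_keypair()
        for uid in user_ids:
            if uid != unreliable_id:
                deliver_key(uid, new_public, new_private)
        return new_public, new_private  # training resumes under the new keys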
According to the scheme of the embodiment of the present invention, unreliable users can be tracked effectively without leaking users' private information, and the access rights of unreliable users can be revoked to realize dynamic user updates. The security of the scheme is guaranteed by the homomorphic encryption technique: a user without the decryption key cannot obtain the global model parameters, so users' private information is effectively protected.
Through the above description of the embodiments, it is clear to those skilled in the art that the above embodiments can be implemented by software, or by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (e.g., a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for enabling a computer device (e.g., a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method for unreliable user tracking and revocation in privacy-preserving federated learning, comprising:
the user and the cloud server update the global model parameters under a privacy-preserving federated learning framework based on homomorphic encryption, and the cloud server transmits the updated global model parameters to a trusted agent;
the trusted agent verifies the current round's updated global model parameters using its own current-round model parameters; if verification fails, an unreliable user exists; if verification passes, global model parameter updating continues;
when an unreliable user exists, the trusted agent and the cloud server verify the local model parameters updated by the relevant users in the current round based on the users' ID values, thereby tracking the unreliable user;
and the trusted agent destroys the unreliable user's key to revoke its access rights and dynamically update the user set.
2. The method for unreliable user tracking and revocation in privacy-preserving federated learning according to claim 1, further comprising system parameter initialization before the user and the cloud server update the global model parameters under the privacy-preserving federated learning framework based on homomorphic encryption, the related steps comprising:
the trusted agent generates the homomorphic encryption key pair, consisting of the encryption key Key_public and the decryption key Key_private, trains the global model with its local dataset, and initializes the global model parameters θ, the initialized global model parameters being denoted θ_0;
the initialized global model parameters θ_0 are encrypted with the encryption key Key_public:
E(θ_0) = Enc(Key_public, θ_0)
wherein Enc is the encryption algorithm of the homomorphic encryption scheme;
when the trusted agent receives a user's model-update request, it establishes an independent secure communication channel between the user and the cloud server, then sends the encryption key Key_public and the decryption key Key_private to the user, sends the encrypted initialized global model parameters E(θ_0) to the cloud server, and informs the cloud server of the user's ID value; and the trusted agent's global model and each user's local model are the same deep neural network model.
3. The method for unreliable user tracking and revocation in privacy-preserving federated learning according to claim 1 or 2, wherein the user and the cloud server update the global model parameters under the privacy-preserving federated learning framework based on homomorphic encryption, and the cloud server transmits the updated global model parameters to the trusted agent, comprising:
the user obtains the encrypted global model parameters of the previous round's update from the cloud server, decrypts them to update the local model parameters, performs model training, encrypts the current round's updated model parameters, and transmits them to the cloud server; if the current round is the first round, the previous round's global model parameters are the global model parameters initialized by the trusted agent;
and the cloud server aggregates the encrypted current-round model parameters transmitted by all users to obtain the encrypted global model parameters of the current round's update, and transmits them to the trusted agent.
4. The method as claimed in claim 3, wherein the step of the user obtaining the encrypted global model parameters of the previous round's update from the cloud server, decrypting them, performing model training, encrypting the current round's updated model parameters, and transmitting them to the cloud server comprises:
the user obtains the encrypted global model parameters E(θ_k) of the previous round's update and decrypts them with the decryption key Key_private provided by the trusted agent:
θ_k = Dec(Key_private, E(θ_k))
wherein Dec is the decryption algorithm of the homomorphic encryption scheme, θ_k denotes the global model parameters of the previous round's update, and k = 0, 1, ... denotes the round number;
the user updates the local model parameters with the previous round's global model parameters θ_k and trains to obtain the current round's local model gradient g_i^{k+1}, which is negated, multiplied by the learning rate α, encrypted with the encryption key Key_public provided by the trusted agent, and transmitted to the cloud server; the encryption process is expressed as:
E(−α·g_i^{k+1}) = Enc(Key_public, −α·g_i^{k+1})
wherein Enc is the encryption algorithm of the homomorphic encryption scheme.
5. The method for unreliable user tracking and revocation in privacy-preserving federated learning according to claim 4, wherein the cloud server aggregates the encrypted current-round model parameters transmitted by all users to obtain the encrypted global model parameters of the current round's update:
E(θ_{k+1}) = E(θ_k) + (1/N) · Σ_{i=1}^{N} E(−α·g_i^{k+1})
wherein the additions are performed homomorphically on ciphertexts, N denotes the number of users participating in the model update, i denotes the user's ID value, and E(θ_{k+1}) denotes the encrypted global model parameters of the current round's update.
6. The method for unreliable user tracking and revocation in privacy-preserving federated learning according to claim 1, wherein the step of the trusted agent verifying the current round's updated global model parameters using its own current-round model parameters comprises:
the trusted agent tests on its local test set with its own current-round model parameters and with the current round's updated global model parameters, obtaining two accuracies; if the gap between the two accuracies exceeds a threshold, verification fails; if the gap does not exceed the threshold, verification passes.
7. The method for unreliable user tracking and revocation in privacy-preserving federated learning according to claim 1, 2, 4 or 5, wherein the trusted agent and the cloud server verifying the local model parameters updated by the relevant users in the current round based on the users' ID values comprises:
the cloud server establishes an unreliable-user range from the range of all current users; the current-round updated model parameters of the users within the unreliable-user range are aggregated based on the users' ID values; the trusted agent tests on its local dataset with its own updated model parameters and with the aggregated model parameters, obtaining two accuracies; whether the unreliable user lies within the current unreliable-user range is judged from the gap between the two accuracies; and the cloud server then updates the current unreliable-user range according to the trusted agent's judgment, until the unreliable-user range contains only the unreliable user.
8. The method as claimed in claim 7, wherein the steps of the cloud server establishing an unreliable-user range according to the range of all current users, aggregating the current-round updated model parameters of the users within the unreliable-user range based on the users' ID values, the trusted agent testing on its local dataset with its own updated model parameters and with the aggregated model parameters to obtain two accuracies, judging from the gap between the two accuracies whether the unreliable user lies within the current unreliable-user range, and the cloud server updating the current unreliable-user range according to the trusted agent's judgment until the range contains only the unreliable user, comprise:
step S1, the cloud server establishes the unreliable-user range from the number N = 2^u of users participating in the model update, u being a positive integer, using three variables, namely a start variable low, a middle variable mid, and an end variable high:
low = 1, mid = N/2, high = N;
step S2, the cloud server aggregates the current-round updated model parameters of the users whose ID values lie in the range [low, mid]:
E(θ'_{k+1}) = E(θ_k) + (1 / (mid − low + 1)) · Σ_{i=low}^{mid} E(−α·g_i^{k+1})
wherein E(θ_k) denotes the encrypted global model parameters of the previous round's update, E(−α·g_i^{k+1}) denotes the encrypted current-round model parameters uploaded by user i, i denotes the user's ID value, and the additions are performed homomorphically on ciphertexts;
step S3, the trusted agent decrypts the E(θ'_{k+1}) provided by the cloud server and tests on its local test set with its own current-round model parameters and with the decrypted aggregated parameters, obtaining two accuracies acc_agent^{k+1} and acc'_{k+1}; if acc'_{k+1} is less than acc_agent^{k+1} by more than the threshold, the unreliable user is considered to lie within the range [low, mid] and the boolean variable msg = true is sent to the cloud server; otherwise, the unreliable user is not in the range [low, mid] and msg = false is sent to the cloud server;
step S4, the cloud server adjusts the unreliable-user range according to the boolean variable msg ∈ {true, false} sent by the trusted agent, including:
resetting the range variables: if msg = false, the cloud server sets low = mid + 1; if msg = true, the cloud server sets high = mid;
and resetting the middle variable: mid = ⌊(low + high) / 2⌋;
and, with the updated unreliable-user range, re-executing steps S2, S3 and S4 until the variable low equals high.
9. The method for unreliable user tracking and revocation in privacy-preserving federated learning according to claim 1, wherein the step of destroying the unreliable user's key to revoke its access rights and dynamically update the user set comprises:
the trusted agent regenerates the homomorphic encryption key pair (the encryption key and the decryption key), sends it to all users other than the unreliable user, and simultaneously revokes the secure communication channel between the unreliable user and the cloud server, thereby revoking the unreliable user's access rights;
and the trusted agent sends a resume-parameter-update message to the cloud server and to all users other than the unreliable user, ending the tracking of the unreliable user and restarting model parameter updating.
CN202111543609.XA 2021-12-16 2021-12-16 A privacy-preserving method for tracking and revoking unreliable users in federated learning Active CN114186202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111543609.XA CN114186202B (en) 2021-12-16 2021-12-16 A privacy-preserving method for tracking and revoking unreliable users in federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111543609.XA CN114186202B (en) 2021-12-16 2021-12-16 A privacy-preserving method for tracking and revoking unreliable users in federated learning

Publications (2)

Publication Number Publication Date
CN114186202A 2022-03-15
CN114186202B CN114186202B (en) 2024-09-24

Family

ID=80544175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111543609.XA Active CN114186202B (en) 2021-12-16 2021-12-16 A privacy-preserving method for tracking and revoking unreliable users in federated learning

Country Status (1)

Country Link
CN (1) CN114186202B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190227980A1 (en) * 2018-01-22 2019-07-25 Google Llc Training User-Level Differentially Private Machine-Learned Models
CN110442457A (en) * 2019-08-12 2019-11-12 北京大学深圳研究生院 Model training method, device and server based on federation's study
CN111581648A (en) * 2020-04-06 2020-08-25 电子科技大学 A Federated Learning Approach for Privacy Preserving Among Irregular Users
CN111598143A (en) * 2020-04-27 2020-08-28 浙江工业大学 A Defense Method for Federated Learning Poisoning Attack Based on Credit Evaluation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Junxu; Meng Xiaofeng: "Survey on Privacy-Preserving Machine Learning" (机器学习的隐私保护研究综述), Journal of Computer Research and Development (计算机研究与发展), no. 02, 15 February 2020 (2020-02-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116261135A (en) * 2023-05-15 2023-06-13 中维建技术有限公司 Homomorphic data safety processing method of communication base station
CN116261135B (en) * 2023-05-15 2023-07-11 中维建技术有限公司 Homomorphic data safety processing method of communication base station

Also Published As

Publication number Publication date
CN114186202B (en) 2024-09-24

Similar Documents

Publication Publication Date Title
JP6491192B2 (en) Method and system for distinguishing humans from machines and for controlling access to network services
WO2020177392A1 (en) Federated learning-based model parameter training method, apparatus and device, and medium
US11063941B2 (en) Authentication system, authentication method, and program
KR102224998B1 (en) Computer-implemented system and method for protecting sensitive data via data re-encryption
CN111886828A (en) Consensus-based online authentication
CN108475309B (en) Systems and methods for biometric protocol standards
JP2016131335A (en) Information processing method, information processing program and information processing device
JP2023504569A (en) Privacy Preserving Biometric Authentication
CN112787809A (en) Efficient crowd sensing data stream privacy protection truth value discovery method
Torres et al. Effectiveness of fully homomorphic encryption to preserve the privacy of biometric data
JP2015192446A (en) Program, cipher processing method, and cipher processing device
CA3244884A1 (en) Systems and methods for continuous, active, and non-intrusive user authentication
CN117521853A (en) A privacy-preserving federated learning method with verifiable aggregation results and testable gradient quality
CN108462699A (en) Based on the encrypted Quick Response Code generation of sequential and verification method and system
US10681038B1 (en) Systems and methods for efficient password based public key authentication
CN114186202A (en) An Unreliable User Tracking and Revocation Method in Privacy-Preserving Federated Learning
Rajani et al. Multi-factor authentication as a service for cloud data security
Al-Husainy MAC address as a key for data encryption
CN117574434A (en) Model privacy protection method and system
CN112784249B (en) Method, system, processor and computer readable storage medium for implementing mobile terminal authentication processing under no-identification condition
Ogiela et al. AI for Security of Distributed Systems
Sun et al. Towards trusted 6G mobile edge computing: A secure batch large language models deployment framework
Matsumoto et al. Implementation and Evaluation of a Person Identification System Considering Face Image Recognition Accuracy and Privacy Protection
CN119155010B (en) A method for discovering safe truth values of vehicle crowd perception based on homomorphic encryption and differential privacy
Zhao et al. SMTFL: Secure Model Training to Untrusted Participants in Federated Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant