
CN112600618A - Attention mechanism-based visible light signal equalization system and method - Google Patents

Attention mechanism-based visible light signal equalization system and method

Info

Publication number
CN112600618A
CN112600618A
Authority
CN
China
Prior art keywords
signal
branch
cnn
optical signal
attention
Prior art date
Legal status
Granted
Application number
CN202011414459.8A
Other languages
Chinese (zh)
Other versions
CN112600618B (en)
Inventor
陈俊杰
卢星宇
肖云鹏
刘宴兵
刘媛媛
冉玉林
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202011414459.8A
Publication of CN112600618A
Application granted
Publication of CN112600618B
Active legal status
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/116Visible light communication
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/03Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Electromagnetism (AREA)
  • Optical Communication System (AREA)

Abstract

The invention relates to the technical field of visible light communication, and in particular to a visible light signal equalization system and method based on an attention mechanism, comprising: after a data receiving end receives data, decoding the received data to obtain decoded data; and inputting the decoded data into a CLSTM neural network model with trained network weight parameters to obtain and output an equalized signal. The invention uses a convolutional neural network and a long short-term memory network (LSTM) to compensate the linear and nonlinear impairments present in the received data, improving the transmission rate of the visible light communication system and the sensitivity of the receiver, and improving transmission performance.

Description

Attention mechanism-based visible light signal equalization system and method
Technical Field
The invention relates to the technical field of visible light communication, in particular to a visible light signal equalization system and method based on an attention mechanism.
Background
Visible Light Communication (VLC) is a communication method that uses light in the visible band as the information carrier and transmits the optical signal directly through the air, without a transmission medium such as an optical fiber or a wired channel. LED-driven VLC is an attractive and very promising technology thanks to its low cost, high efficiency, strong resistance to electromagnetic interference, and high security. Visible light communication modulates the optical signal mainly through the drive current of the LED, converts the received optical signal into an electrical signal at the receiving end, and thereby completes the transmission of the signal data.
Equalization techniques compensate for signal impairments, which can be classified as linear or nonlinear according to their type. At present, neural networks are widely applied to visible light signal equalization, and visible light equalization techniques based on the Artificial Neural Network (ANN), the Convolutional Neural Network (CNN), and the Recurrent Neural Network (RNN) have achieved some success. The patent "a neural network equalizer based on visible light communication (application No. 201710602325.0)" proposes a neural network equalizer to overcome the intersymbol interference caused by the limited modulation bandwidth of the LED; it uses the strong parameter-fitting ability of the neural network to classify the data directly and addresses the linear impairments in the system, but it does not consider the potential nonlinear relationship between the nonlinear impairments in the system and the learned data. The patent "nonlinear modeling method of visible light communication system based on neural network (application No. 202010044920.9)" obtains the nonlinearity and memory of the visible light communication channel through a neural network; this method simulates the channel by strong parameter fitting, ignores the correlations within the data, and memorizes the channel with an excess of network parameters, which leads to severe parameter overfitting, an overly complex network, and similar problems.
In summary, current equalization techniques face the following problems: (1) the nonlinear impairments present in the system are not compensated; (2) the neural network overfits the data, giving the network severe memorization and complexity; (3) the associations between data are ignored, so the rules implicit in the data cannot be learned effectively.
Disclosure of Invention
In order to solve the above problems, the present invention provides a system and a method for equalizing a visible light signal based on an attention mechanism.
A system for attention-based visible light signal equalization comprises a data receiving module, a data processing module, a signal equalizer, and a signal output module. The data receiving module receives and demodulates the modulated signal output by a Pulse Amplitude Modulation (PAM) system to obtain a demodulated lossy optical signal; the data processing module preprocesses the demodulated lossy optical signal to obtain a sample sequence of the lossy optical signal and inputs the divided lossy optical signal sample subsequence data into the signal equalizer; the signal equalizer equalizes the lossy optical signal into a lossless optical signal; and the signal output module outputs the equalized signal.
Further, the preprocessing in the data processing module comprises: dividing the demodulated lossy optical signal into consecutive subsequences of the same length using a sliding window with tap = n, obtaining a sample sequence of the lossy optical signal.
Furthermore, the signal equalizer comprises an attention-based convolutional neural network (CNN) module and a long short-term memory (LSTM) network unit.
Further, the attention-based CNN module includes two branches and a fusion module: the first branch is a CNN branch, which comprises a convolutional layer and a pooling layer; the second branch is an attention branch, which mainly performs feature aggregation and scale recovery, where feature aggregation extracts more comprehensive features from the cross-scale sequence through convolutional layers, the last layer uses a 1 × 1 convolution kernel to recover the feature scale to an M × N feature sequence of the same size as the CNN branch output, and a sigmoid function constrains the values to the range 0 to 1, finally yielding an M × N feature sequence carrying the attention mechanism; the fusion module fuses the feature values output by the CNN branch with the attention-bearing feature sequence output by the attention branch to obtain the fusion result.
Further, the input of the CNN branch is a subsequence divided by the data processing module, and the output of the CNN branch is an M × N feature sequence, where M is the subsequence length and N is the signal feature dimension after the CNN branch; the input of the attention branch spans from the middle of the subsequence preceding the CNN branch input to the middle of the subsequence following it, so the attention branch input is twice as long as the CNN branch input; the output of the attention branch is an M × N feature sequence carrying the attention mechanism, with the same output dimensions as the CNN branch.
Further, the LSTM unit includes an input gate, a forget gate, and an output gate.
Further, the input of the LSTM unit is the fusion result from the attention-based CNN module; the LSTM unit performs feature-extraction learning on the fusion result again, and the decision unit finally produces the classification result of the impaired optical signal, i.e., the equalized unimpaired optical signal, completing signal equalization.
A visible light signal equalization method based on an attention mechanism comprises the following steps: after receiving the PAM-modulated lossy optical signal, the data receiving module demodulates it to obtain a demodulated lossy optical signal and inputs it into the signal equalizer, which outputs the compensated lossless optical signal. The signal equalizer comprises an attention-based CLSTM neural network model, which is trained before use; the training process comprises the following steps:
S1, at the signal receiving end, receiving the modulated signal output by the Pulse Amplitude Modulation (PAM) system through the data receiving module, demodulating it to obtain the demodulated lossy optical signal, and transmitting it to the data processing module; at the signal transmitting end, collecting the transmitted lossless optical signal samples and transmitting them to the data processing module;
S2, dividing, by the data processing module, the lossy optical signal samples and lossless optical signal samples with a sliding window to obtain training sample subsets, which are input into the signal equalizer;
S3, in the signal equalizer, first initializing the parameters of the attention-based CLSTM neural network model, including the weight parameters of the convolutional layers of the attention-and-CNN parallel module, the convolutional-layer weight parameters of the Attention branch, and the weight parameters of the input gate, forget gate, and output gate of the LSTM network;
S4, inputting the training signal sample subsets divided in step S2 into the attention-based CLSTM neural network model with initialized parameters; in the attention-based CNN module, the CNN branch performs local feature extraction on the data to obtain the CNN-branch feature map, and the attention branch performs large-scale feature extraction on the data to obtain the attention-branch feature map; finally, the fusion module fuses the two feature maps to obtain the fusion features;
S5, inputting the fusion features into the long short-term memory network LSTM to equalize the linear and nonlinear relationships among the signals and obtain the expected equalized signal; computing the loss function from the equalized signal produced by the network and the original label signal, and iteratively updating the weight parameters from the loss value; when the loss reaches its minimum or the maximum empirical number of iterations is reached, the trained attention-based CLSTM neural network model is obtained.
Further, the expression of the loss function is:

L = -∑_{i=0}^{c-1} y_i · log(p_i)

where L is the loss value; p = [p_0, …, p_{c-1}] is the probability vector obtained from the softmax function, with p_i the probability that the sample belongs to class i; y = [y_0, …, y_{c-1}] is the one-hot representation of the label sample subset, with y_i = 1 when the sample belongs to the i-th class and y_i = 0 otherwise; and c is the number of label classes.
The invention has the beneficial effects that:
1. The attention-based CLSTM neural network can equalize signals directly at the receiving end without additional signal preprocessing.
2. The invention uses the attention-and-CNN parallel module to extract fine-grained signal features and screen out the important ones, and uses the long short-term memory network LSTM to extract the feature relationships among the signals. This effectively strengthens feature extraction and accelerates the fitting of the CLSTM neural network; the network is trained and optimized against the loss function, and the trained network compensates the linear and nonlinear impairments in the received data, so that the final signal equalization is more accurate, the transmission rate of the visible light communication system and the sensitivity of the receiver are improved, and the transmission performance is improved.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a structural diagram of the visible light signal equalization system of the Attention-based CLSTM neural network according to this embodiment;
Fig. 2 shows the training process of the Attention-based CLSTM neural network model according to this embodiment;
Fig. 3 is a flowchart of the training of the Attention-based CLSTM neural network according to this embodiment;
Fig. 4 is a schematic structural diagram of the Attention-based CLSTM neural network model in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, a pulse amplitude modulation (PAM) system serves as the modulation and demodulation system. The original binary bit stream is input into the PAM system and, after preprocessing and code modulation, drives the LED for intensity modulation, converting the electrical signal into an optical signal as the input of the visible light communication system. Signals suffer linear and nonlinear impairments during transmission. Linear impairments arise mainly from intersymbol interference between adjacent symbols and intersymbol interference produced by the multipath effect in optical propagation. Nonlinear impairments arise mainly from the devices of the Visible Light Communication (VLC) system and from square-law detection at the receiver. Under high-order modulation and high-rate transmission, linear and nonlinear impairments severely affect the performance of the visible light communication system, so the linearly and nonlinearly distorted optical signals appearing in the VLC system must be compensated to recover the lossless optical signal.
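For concreteness, here is a minimal numpy sketch of the PAM4 mapping described above (two bits per symbol onto four amplitude levels). The Gray-coded assignment and the level values (-3, -1, 1, 3) are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

# Hypothetical Gray-coded PAM4 mapping: 2 bits -> 1 of 4 amplitude levels.
# The specific levels (-3, -1, 1, 3) are a common textbook choice, assumed here.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_modulate(bits: np.ndarray) -> np.ndarray:
    """Map a binary bit stream (even length) to PAM4 amplitude levels."""
    pairs = bits.reshape(-1, 2)
    return np.array([GRAY_PAM4[tuple(p)] for p in pairs], dtype=float)

bits = np.random.randint(0, 2, size=1000)
symbols = pam4_modulate(bits)  # these levels drive the LED intensity modulation
```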
The structure of the visible light communication system is shown in fig. 1. Fig. 1 shows the processes of signal emission, signal modulation (including PAM modulation and LED intensity modulation), signal transmission, signal reception, signal equalization (implemented by the signal equalization system of the present embodiment), and signal output of the visible light system.
The signal equalization is realized by a visible light signal equalization system based on an attention mechanism, and the purpose is to compensate linear and nonlinear distorted light signals appearing in a VLC system so as to recover lossless light signals.
The embodiment provides a visible light signal equalization system based on an attention mechanism, which comprises: the device comprises a data receiving module, a data processing module, a signal equalizer and a signal output module.
The data receiving module receives the modulated signal output by the pulse amplitude modulation system PAM and demodulates it to obtain the demodulated lossy optical signal.
The data processing module preprocesses the demodulated lossy optical signal to obtain a sample sequence of the lossy optical signal. The preprocessing comprises data division: the demodulated lossy optical signal is divided into consecutive lossy optical signal sample subsequences of the same length using a sliding window with tap = n. The divided lossy optical signal sample subsequence data is then input into the signal equalizer.
The signal equalizer comprises an attention-based CNN module and a long short-term memory (LSTM) network unit. The attention-based CNN module comprises a CNN branch, an attention branch, and a fusion module: the CNN branch extracts features of the lossy optical signal sample subsequence, the attention branch extracts attention-weighted features of the subsequence, and the fusion module fuses the features of the two branches to finally obtain the fusion features of the lossy optical signal sample subsequence. The details are as follows:
The first branch is the CNN branch, which comprises a convolutional layer, a nonlinear layer, and a pooling layer. Its input is a subsequence divided by the data processing module, and its output is an M × N feature sequence, where M is the subsequence length and N is the signal feature dimension after the CNN branch.
The second branch is the attention branch, whose input spans from the middle of the subsequence preceding the CNN branch input to the middle of the subsequence following it, so its input is twice as long as the CNN branch input. The attention branch mainly performs feature aggregation and scale recovery: feature aggregation extracts more comprehensive features from the cross-scale sequence through convolutional layers; the last layer uses a 1 × 1 convolution kernel to recover the feature scale to an M × N feature sequence of the same size as the CNN branch output, and a sigmoid function constrains the values to the range 0 to 1, finally yielding an M × N feature sequence carrying the attention mechanism. Because the attention branch takes a different input sequence from the CNN branch, it can capture the features of a longer sequence containing the CNN branch's sequence without losing the feature information of the current sequence, which effectively avoids overfitting.
The fusion module multiplies the feature values output by the CNN branch element-wise with the attention-bearing feature sequence output by the attention branch to obtain the fusion result.
The LSTM unit in the signal equalizer takes the fusion result as input. The fused sequence passes through the input gate, forget gate, and output gate of the LSTM network to re-extract the relationships among the fused features, and the decision unit finally produces the classification result of the impaired optical signal, completing the equalization/compensation of the impaired optical signal.
The signal output module is used for outputting an equalization signal, and the equalization signal is a compensated lossless optical signal.
In the visible light signal equalization system of this embodiment, the attention-and-CNN parallel module is cascaded with a long short-term memory (LSTM) network, as shown in fig. 4. The attention-and-CNN parallel module can extract the nonlinear and linear correlation features present in the impaired continuous signal and filter out the unimportant information in the signal data. The LSTM network can remember correlations between long-range signal data, but it does not adequately extract the latent features between impaired signals. Based on the characteristics of the two networks, the attention-and-CNN parallel module first extracts features from the signal data, and the extracted features are input into the LSTM network for long-range relationship learning, so that the latent linear and nonlinear regularities in the signal data can be learned to resolve the impairments in the signal data.
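As a reading aid, the following is a minimal PyTorch sketch of the cascade of fig. 4 as described here: a CNN branch over a window of length m, an attention branch over the 2m-length window ending in a 1 × 1 convolution and a sigmoid, element-wise fusion, then an LSTM and a decision layer. All concrete sizes (m = 32, 64 channels, hidden size 64) and layer counts are illustrative assumptions; the patent fixes only the broad structure.

```python
import torch
import torch.nn as nn

class AttentionCLSTM(nn.Module):
    """Sketch of the attention-CNN parallel module cascaded with an LSTM.

    Assumed sizes: window m = 32 samples, 64 feature channels, 4 output
    classes (PAM4). The patent fixes none of these except the 1x1 kernel,
    the sigmoid, and the element-wise fusion.
    """
    def __init__(self, channels: int = 64, num_classes: int = 4):
        super().__init__()
        # CNN branch: convolution + max pooling over the m-length window.
        self.cnn_branch = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),                      # length m -> m/2
        )
        # Attention branch: aggregate the 2m-length window down to the same
        # m/2 scale, then a 1x1 convolution and a sigmoid (values in 0..1).
        self.att_branch = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, stride=2, padding=1),        # 2m -> m
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, stride=2, padding=1),  # m -> m/2
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.lstm = nn.LSTM(input_size=channels, hidden_size=64, batch_first=True)
        self.decision = nn.Linear(64, num_classes)   # softmax is applied in the loss

    def forward(self, x_short: torch.Tensor, x_long: torch.Tensor) -> torch.Tensor:
        # x_short: (batch, 1, m); x_long: (batch, 1, 2m)
        z = self.cnn_branch(x_short) * self.att_branch(x_long)  # element-wise fusion
        out, _ = self.lstm(z.transpose(1, 2))   # (batch, m/2, channels)
        return self.decision(out[:, -1, :])     # logits for the 4 PAM4 levels
```

A real implementation would also apply the decision unit's softmax, which the cross-entropy loss supplies during training (see the training sketch further below).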
As shown in fig. 2, this embodiment provides a visible light signal equalization method based on an attention mechanism, which jointly compensates the linear and nonlinear impairments of visible light using the attention mechanism, the CNN parallel module, and a long short-term memory network (LSTM): after receiving the PAM-modulated lossy optical signal, the data receiving module demodulates it to obtain a demodulated lossy optical signal and inputs it into the signal equalizer, and the signal equalizer outputs the compensated lossless optical signal.
The signal equalizer is built around an Attention-based CLSTM neural network model, which comprises the attention-based CNN module and the LSTM unit.
The CLSTM neural network model based on the Attention mechanism is trained and then used, and the training process comprises the following steps:
and S1, at a signal receiving end, receiving the modulation signal output by the pulse amplitude modulation system PAM through a data receiving module, demodulating to obtain a demodulated lossy optical signal, and sampling the lossy optical signal according to the following steps of 7: and 3, dividing the signal in a ratio of 3 to be used as a training signal sample and transmitting the training signal sample to a data processing module.
And collecting the transmitted lossless optical signal sample at the signal transmitting end, dividing the lossless optical signal sample according to the proportion of 7:3 to be used as a label signal sample, and transmitting the label signal sample to the data processing module.
And S2, the data processing module divides the training signal sample and the label sample by using a sliding window to obtain a training signal sample subset and a label sample subset, and inputs the divided data into a signal equalizer.
Setting tap as sliding window of n, tap is sliding window size, training signal sample is divided into training subsequence { x with specified length1,x2,...,xn-1,xnAnd dividing the label signal samples into subsequences with corresponding lengths, and finally obtaining a training signal sample subset and a label sample subset, wherein the subsequences in the sample subsets are in a nonlinear association relationship.
And taking the intermediate value of the label sample subset as the label value of the sample set, and sliding through a sliding window to obtain a series of training subsequences and corresponding label value sets.
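A minimal sketch of this sliding-window division, assuming unit stride and an odd tap so that each window has a well-defined middle value; both assumptions are illustrative, since the patent states neither:

```python
import numpy as np

def sliding_windows(signal: np.ndarray, tap: int) -> np.ndarray:
    """Divide a 1-D signal into consecutive length-`tap` subsequences (stride 1)."""
    n_windows = len(signal) - tap + 1
    return np.stack([signal[k:k + tap] for k in range(n_windows)])

tap = 31                                   # assumed odd window size
rx = np.random.randn(10_000)               # demodulated lossy signal (placeholder)
tx = np.random.randn(10_000)               # lossless label signal (placeholder)

X = sliding_windows(rx, tap)               # training subsequences {x_1, ..., x_tap}
y = sliding_windows(tx, tap)[:, tap // 2]  # middle value of each label window
```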
S3, build the Attention-based CLSTM neural network model and initialize its parameters, including the weight parameters of the convolutional layers of the Attention-and-CNN parallel module and the weight parameters of the input gate, forget gate, and output gate of the LSTM network.
S4, input the training signal sample subsets divided in step S2 into the attention-based CLSTM neural network model with initialized parameters; in the attention-based CNN module, the CNN branch performs local feature extraction on the data to obtain the CNN-branch feature map, and the attention branch performs large-scale feature extraction on the data to obtain the attention-branch feature map; finally, the fusion module fuses the two feature maps to obtain the fusion features.
As shown in fig. 2, the CLSTM neural network model includes two parts: the attention-based CNN module (feature extraction module) and the LSTM unit.
The first part of the CLSTM neural network model is the attention-based CNN module (feature extraction module), which comprises a parallel attention branch and CNN branch for extracting the feature maps between lossy optical signals.
The CNN branch mainly comprises two network modules. The first module is the convolutional layer, which uses several different convolution kernels with shared weight parameters to perform local feature extraction on the data; different convolution kernels learn different feature relationships in the data, and the learned features are passed on to subsequent convolutional layer modules for deeper feature extraction. The second module is the pooling layer: the data passing through the sliding window carries tap-fold redundancy, and the pooling layer filters out non-key factors in the features by taking the local maximum of the feature data within windows of different sizes. The pooling layer in the invention uses max pooling, which also reduces the number of network parameters and the network complexity.
The input-output relationship of the convolutional layer is:

Y = conv(X, W, H)

where X is the data, W is the convolution kernel, and conv is the convolution operation; applying different convolution kernels W to the data yields the output feature data Y of the convolutional layer, with H channels.
When the input data length of the convolutional layer is m, the convolution kernel size is w, the padding is p, and the stride is s, the data dimension after the convolutional layer is n × H, where n = (m + 2 × p - w)/s + 1. The data thus goes from the original m × 1 dimensions to n × H dimensions. The pooling layer then performs max pooling on the data features with a selected pooling window p, and the input-output relationship is:

[y_1, …, y_s] = MAX(y_1, …, y_n)

where s is the size n/p, n is the data dimension after convolution, and p is the pooling window size; the number of data channels remains H.
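A quick numeric check of these dimension formulas under assumed example values (m = 32, w = 3, padding 1, s = 1, pooling window 2); note the description reuses the symbol p for both padding and the pooling window:

```python
def conv_out_len(m: int, w: int, pad: int, s: int) -> int:
    # n = (m + 2*pad - w) / s + 1
    return (m + 2 * pad - w) // s + 1

m, w, pad, s = 32, 3, 1, 1     # assumed example values
n = conv_out_len(m, w, pad, s) # 32: padding of 1 preserves the length for w = 3
pool = 2                       # pooling window (the description also calls this "p")
s_out = n // pool              # 16: pooled length; channel count H is unchanged
print(n, s_out)                # -> 32 16
```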
In the attention branch, the input data spans from the middle of the previous subsequence to the middle of the next subsequence, so the attention branch input is twice as long as the CNN branch input; in this embodiment it has length 2m. After convolutional layers of different sizes, the dimensions of the input sequence are restored to those of the CNN branch output (n × H in this embodiment) by setting the convolution kernel and channel count, the values are constrained between 0 and 1 by a sigmoid function, and the branch finally outputs an n × H feature sequence carrying the attention mechanism. Because the attention branch takes a different input sequence from the CNN branch, it can capture the features of a longer sequence containing the CNN branch's sequence without losing the feature information of the current sequence, which effectively avoids overfitting.
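One way to read the 2m-length attention input ("from the middle of the previous subsequence to the middle of the next") is as a window of the raw signal centered on the CNN window. The sketch below follows that reading, which is an interpretation rather than an explicit definition in the patent:

```python
import numpy as np

def paired_windows(signal: np.ndarray, k: int, m: int):
    """CNN input x[k : k+m] and an attention input of length 2m centered on it,
    i.e. from the middle of the previous window to the middle of the next."""
    cnn_in = signal[k : k + m]
    att_in = signal[k - m // 2 : k + m + m // 2]   # length 2m; assumes k >= m//2
    return cnn_in, att_in

sig = np.random.randn(1_000)
cnn_in, att_in = paired_windows(sig, k=100, m=32)
assert len(att_in) == 2 * len(cnn_in)
```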
The attention-based CNN module further comprises a fusion module, which multiplies the output features of the CNN branch and the attention branch element-wise to obtain the fusion features. The fusion module screens the features, extracting the important ones and suppressing the interference of unimportant ones on the model, and can effectively select important fine-grained feature data. The input-output relationship of the signal feature screening in the fusion module is:

Z(i, h) = Z_Attention(i, h) ⊙ Z_CNN(i, h)

where Z(i, h) is the fusion feature, Z_Attention(i, h) is the feature signal obtained through the attention mechanism, Z_CNN(i, h) is the feature signal obtained after the CNN branch, i is a position within the obtained feature sequence with i ranging from 1 to n (the feature dimension of the branch output), and h is the index of the output feature channel.
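A toy numpy illustration of this element-wise screening: attention values near 0 suppress a CNN feature, values near 1 pass it through (the numbers and the n = 4, H = 2 shapes are made up for illustration):

```python
import numpy as np

z_cnn = np.array([[0.9, -1.2], [0.3, 0.7], [2.1, 0.1], [-0.5, 1.5]])   # (n=4, H=2)
z_att = np.array([[0.95, 0.05], [0.10, 0.90], [0.85, 0.50], [0.02, 0.99]])
z = z_att * z_cnn   # element-wise fusion: small attention weights suppress features
```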
S5, input the fusion features into the long short-term memory network (LSTM) to equalize the linear and nonlinear relationships among the signals and obtain the expected equalized signal; compute the loss function from the equalized signal produced by the network and the original label signal, and iteratively update the weight parameters from the loss value; when the loss reaches its minimum or the maximum empirical number of iterations is reached, the trained attention-based CLSTM neural network model is obtained.
The second part of the CLSTM neural network model is the long short-term memory network (LSTM). The fusion features produced by the attention-and-CNN parallel module are input into the LSTM to learn the latent linear and nonlinear relationships between long-range signal data, which effectively strengthens the further learning of the linear and nonlinear relationships in the data features and ensures the accuracy of the equalized signal. The memory cell of the LSTM network uses gates to remember sequence information among the input signals over long spans, effectively mitigating the vanishing-gradient problem: the input gate controls which information is allowed to be updated, the forget gate controls which information is kept or discarded, and the output gate determines the final output information.
The long short-term memory network (LSTM) comprises an input gate, a forget gate, an output gate, and a decision unit. The data first passes through the input gate of the LSTM, which computes the input-gate activation value i_t together with the candidate state value C̃_t of the memory cell at time t:

i_t = σ(w_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(w_c · [h_{t-1}, x_t] + b_c)

where i_t is the activation value of the input gate, C̃_t is the candidate state value of the memory cell at time t, x_t is the input at time t, h_{t-1} is the output value at time t-1, σ is the sigmoid activation function, b_i is the bias term of the input-gate activation function, w_i is the weight of the input-gate activation function, w_c is the weight of the input-gate candidate-state function, b_c is the bias of the input-gate candidate-state function, and tanh is the hyperbolic tangent activation function.
After entering the input gate, the data enters the forget gate, which selects and discards information; from the input-gate activation value i_t and the candidate state value C̃_t of the memory cell at time t, the new state value C_t is computed as follows:

f_t = σ(w_f · [h_{t-1}, x_t] + b_f)
C_t = f_t * C_{t-1} + i_t * C̃_t

where x_t is the input at time t, h_{t-1} is the output value at time t-1, σ is the sigmoid activation function, b_f is the bias term of the forget-gate activation function, w_f is the weight of the forget-gate activation function, C_t is the new state value, C_{t-1} is the state value at time t-1, and f_t is the activation value of the forget gate.
From the new state value C_t, the activation value o_t of the output gate at time t and the output value h_t are computed as follows:

o_t = σ(w_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)

where w_o is the weight of the output gate, b_o is the bias of the output gate, and h_t is the output value of the output gate.
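Putting the three gate computations together, a minimal numpy sketch of one LSTM time step implementing exactly the equations above; the weight shapes and sizes are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, w, b):
    """One LSTM step implementing the i_t, f_t, C_t, o_t, h_t equations above.
    w[g] has shape (hidden, hidden + input); b[g] has shape (hidden,)."""
    z = np.concatenate([h_prev, x_t])                 # [h_{t-1}, x_t]
    i_t = sigmoid(w["i"] @ z + b["i"])                # input gate
    C_tilde = np.tanh(w["c"] @ z + b["c"])            # candidate state
    f_t = sigmoid(w["f"] @ z + b["f"])                # forget gate
    C_t = f_t * C_prev + i_t * C_tilde                # new cell state
    o_t = sigmoid(w["o"] @ z + b["o"])                # output gate
    h_t = o_t * np.tanh(C_t)                          # output value
    return h_t, C_t

hidden, inp = 8, 4                                    # assumed sizes
rng = np.random.default_rng(0)
w = {g: rng.normal(size=(hidden, hidden + inp)) * 0.1 for g in "icfo"}
b = {g: np.zeros(hidden) for g in "icfo"}
h, C = lstm_step(rng.normal(size=inp), np.zeros(hidden), np.zeros(hidden), w, b)
```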
The output value h_t of the output gate is fed into the decision unit, which classifies the features extracted by the CNN + LSTM network, assigning the impaired optical signal to the correct category, i.e., equalizing it back to the correct unimpaired optical signal. The invention treats signal equalization as a classification problem, and the final classification result is the equalized correct optical signal.
The decision unit uses the softmax function as its activation function:

p_k = e^{h_k} / ∑_{i=1}^{n} e^{h_i}

where p_k is the output of the k-th neuron, i.e., the probability of the equalized signal level; n is the number of final output neurons, i.e., the number of decision classes, with n = 4 under PAM4 modulation; the numerator is the exponential of the input signal h_k, and the denominator is the sum of the exponential functions of all input signals.
The softmax decision unit yields the probability that each neuron belongs to a given class. Combined with the lossless optical signal samples collected at the signal transmitting end, the intermediate cross-entropy loss function is computed, and an optimizer updates the parameters of the neural network based on this loss. The parameters are updated iteratively until the fitted neural network can accurately represent the relationships among the signal impairments and equalize the signal correctly. The iteration stops when the cross-entropy loss falls below a certain value or the number of iterations exceeds the maximum empirical iteration count, yielding the trained attention-based CLSTM neural network model.
In one embodiment, the cross-entropy loss function of the CLSTM neural network model is expressed as follows:

L = -∑_{i=0}^{c-1} y_i · log(p_i)

where L is the loss value; p = [p_0, …, p_{c-1}] is the probability vector obtained from the softmax function above, with p_i the probability that the sample belongs to class i; y = [y_0, …, y_{c-1}] is the one-hot representation of the label sample subset, with y_i = 1 when the sample belongs to the i-th class and y_i = 0 otherwise; and c is the number of label classes.
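A minimal numpy sketch tying the softmax decision unit to this cross-entropy loss for a single sample with c = 4 (PAM4); the logit values are toy numbers:

```python
import numpy as np

h = np.array([1.2, 0.1, -0.4, 2.3])        # toy decision-unit inputs, c = 4 (PAM4)
p = np.exp(h) / np.exp(h).sum()            # softmax: p_k = e^{h_k} / sum_i e^{h_i}

y = np.array([0, 0, 0, 1])                 # one-hot label: sample belongs to class 3
loss = -np.sum(y * np.log(p))              # L = -sum_i y_i * log(p_i)
print(p.round(3), round(float(loss), 4))
```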
The CLSTM neural network model uses an optimizer with a learning rate of 0.001 for the parameter-update algorithm; as the optimization algorithm it fits the data quickly, reduces the loss, and accelerates training. Across the whole network, the convolutional neural network structure generally uses 3 convolutional layers, each with kernel size 3 and 64 filters, and the pooling layer uses max pooling.
Preferably, the number of hidden layers used in the long short-term memory network is 15, and the maximum empirical iteration count for training the whole network is 50.
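A compact training-loop sketch consistent with the figures quoted here (learning rate 0.001, at most 50 iterations). The choice of Adam and the early-stopping threshold are assumptions, since the patent names neither the optimizer nor a concrete stopping value; the sketch reuses the AttentionCLSTM module sketched earlier:

```python
import torch
import torch.nn as nn

model = AttentionCLSTM()                              # sketch defined earlier
opt = torch.optim.Adam(model.parameters(), lr=0.001)  # optimizer choice assumed
loss_fn = nn.CrossEntropyLoss()                       # softmax + cross-entropy combined

def train(loader, max_iters: int = 50, loss_floor: float = 1e-3):
    for epoch in range(max_iters):             # maximum empirical iteration count
        total = 0.0
        for x_short, x_long, labels in loader: # windows from the data processing module
            opt.zero_grad()
            loss = loss_fn(model(x_short, x_long), labels)
            loss.backward()
            opt.step()
            total += loss.item()
        if total / len(loader) < loss_floor:   # assumed stopping threshold
            break
```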
When introducing elements of the various embodiments of the present application, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
It should be noted that, as one of ordinary skill in the art would understand, all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The foregoing is directed to embodiments of the present invention, and it will be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (9)

1. A system for attention-based visible light signal equalization, comprising a data receiving module, a data processing module, a signal equalizer, and a signal output module, characterized in that:
the data receiving module is used for receiving the modulation signal output by the pulse amplitude modulation system PAM and demodulating the modulation signal to obtain a demodulated lossy optical signal;
the data processing module is used for preprocessing the demodulated lossy optical signal to obtain a sample sequence of the lossy optical signal, and inputting the sub-sequence data of the lossy optical signal sample after division processing into the signal equalizer;
the signal equalizer is used for equalizing the lossy optical signal into a lossless optical signal;
the signal output module is used for outputting an equalization signal.
2. The system of claim 1, wherein the preprocessing in the data processing module comprises: dividing the demodulated lossy optical signal into consecutive subsequences of the same length using a sliding window with tap = n to obtain a sample sequence of the lossy optical signal.
3. The system of claim 1, wherein the signal equalizer comprises an attention-based Convolutional Neural Network (CNN) module and a long short-term memory (LSTM) network unit.
4. The system of claim 3, wherein the attention-based CNN module comprises two branches and a fusion module:
the first branch is a CNN branch, and the CNN branch comprises a convolutional layer and a pooling layer;
the second branch is an attention branch, which mainly performs feature aggregation and scale recovery: feature aggregation extracts more comprehensive features from the cross-scale sequence through convolutional layers; the last layer uses a 1 × 1 convolution kernel to recover the feature scale to an M × N feature sequence of the same size as the CNN branch output, and a sigmoid function constrains the values to the range 0 to 1, finally yielding an M × N feature sequence carrying the attention mechanism;
the fusion module fuses the feature values output by the CNN branch with the attention-bearing feature sequence output by the attention branch to obtain a fusion result.
5. The system according to claim 4, wherein the input of the CNN branch is a subsequence divided by the data processing module, and the output of the CNN branch is an M × N feature sequence, where M is the subsequence length and N is the signal feature dimension after the CNN branch; the input of the attention branch spans from the middle of the subsequence preceding the CNN branch input to the middle of the subsequence following it, so the attention branch input is twice as long as the CNN branch input; the output of the attention branch is an M × N feature sequence carrying the attention mechanism, with the same output dimensions as the CNN branch.
6. The system of claim 3, wherein the LSTM unit comprises an input gate, a forget gate, and an output gate.
7. The system of claim 3, wherein the input of the LSTM unit is the fusion result from the attention-based CNN module; the LSTM unit performs feature-extraction learning on the fusion result again, and the decision unit finally produces the classification result of the impaired optical signal, i.e., the equalized unimpaired optical signal, completing signal equalization.
8. A visible light signal equalization method based on an attention mechanism, characterized by comprising: after the data receiving module receives the PAM-modulated lossy optical signal, demodulating the received lossy optical signal to obtain a demodulated lossy optical signal and inputting it into a signal equalizer, the signal equalizer outputting a compensated lossless optical signal;
the signal equalizer comprises an attention-based CLSTM neural network model, which is trained before use, the training process comprising the following steps:
S1, at the signal receiving end, receiving the modulated signal output by the Pulse Amplitude Modulation (PAM) system through the data receiving module, demodulating it to obtain the demodulated lossy optical signal, and transmitting it to the data processing module; at the signal transmitting end, collecting the transmitted lossless optical signal samples and transmitting them to the data processing module;
S2, dividing, by the data processing module, the lossy optical signal samples and lossless optical signal samples with a sliding window to obtain training sample subsets, which are input into the signal equalizer;
S3, in the signal equalizer, first initializing the parameters of the attention-based CLSTM neural network model, including the weight parameters of the convolutional layers of the attention-and-CNN parallel module, the convolutional-layer weight parameters of the Attention branch, and the weight parameters of the input gate, forget gate, and output gate of the LSTM network;
S4, inputting the training signal sample subsets divided in step S2 into the attention-based CLSTM neural network model with initialized parameters; in the attention-based CNN module, the CNN branch performs local feature extraction on the data to obtain the CNN-branch feature map, and the attention branch performs large-scale feature extraction on the data to obtain the attention-branch feature map; finally, the fusion module fuses the two feature maps to obtain the fusion features;
S5, inputting the fusion features into the long short-term memory network LSTM to equalize the linear and nonlinear relationships among the signals and obtain the expected equalized signal; computing the loss function from the equalized signal produced by the network and the original label signal, and iteratively updating the weight parameters from the loss value; when the loss reaches its minimum or the maximum empirical number of iterations is reached, the trained attention-based CLSTM neural network model is obtained.
9. The method of claim 8, wherein the loss function is expressed as:

L = -∑_{i=0}^{c-1} y_i · log(p_i)

where L is the loss value; p = [p_0, …, p_{c-1}] is the probability vector obtained from the softmax function above, with p_i the probability that the sample belongs to class i; y = [y_0, …, y_{c-1}] is the one-hot representation of the label sample subset, with y_i = 1 when the sample belongs to the i-th class and y_i = 0 otherwise; and c is the number of label classes.
CN202011414459.8A 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method Active CN112600618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011414459.8A CN112600618B (en) 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011414459.8A CN112600618B (en) 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method

Publications (2)

Publication Number Publication Date
CN112600618A (en) 2021-04-02
CN112600618B (en) 2023-04-07

Family

ID=75188514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011414459.8A Active CN112600618B (en) 2020-12-07 2020-12-07 Attention mechanism-based visible light signal equalization system and method

Country Status (1)

Country Link
CN (1) CN112600618B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180005082A1 (en) * 2016-04-11 2018-01-04 A2Ia S.A.S. Systems and methods for recognizing characters in digitized documents
EP3413218A1 (en) * 2017-06-08 2018-12-12 Facebook, Inc. Key-value memory networks
CN107944915A (en) * 2017-11-21 2018-04-20 北京深极智能科技有限公司 A kind of game user behavior analysis method and computer-readable recording medium
CN110472229A (en) * 2019-07-11 2019-11-19 新华三大数据技术有限公司 Sequence labelling model training method, electronic health record processing method and relevant apparatus
CN110610168A (en) * 2019-09-20 2019-12-24 合肥工业大学 A EEG Emotion Recognition Method Based on Attention Mechanism
CN110851718A (en) * 2019-11-11 2020-02-28 重庆邮电大学 Movie recommendation method based on long-time memory network and user comments

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHONGYA LI: "Convolution-Enhanced LSTM Neural Network Post-Equalizer used in Probabilistic Shaped Underwater VLC System", 2020 IEEE International Conference on Signal Processing, Communications and Computing *
李梅: "CNN-LSTM Model Based on Attention Mechanism and Its Application" (in Chinese), Computer Engineering and Applications *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259284B (en) * 2021-05-13 2022-05-24 中南大学 A channel blind equalization method and system based on bagging and long short-term memory network
CN113259284A (en) * 2021-05-13 2021-08-13 中南大学 Channel blind equalization method and system based on Bagging and long-short term memory network
CN114500197B (en) * 2022-01-24 2023-05-23 华南理工大学 Method, system, device and storage medium for equalizing after visible light communication
CN114500189A (en) * 2022-01-24 2022-05-13 华南理工大学 Direct pre-equalization method, system, device and medium for visible light communication
CN114500197A (en) * 2022-01-24 2022-05-13 华南理工大学 Method, system, device and storage medium for equalization after visible light communication
CN115085808A (en) * 2022-06-09 2022-09-20 重庆邮电大学 VLC system time-frequency combination post-equalization method based on wavelet neural network
CN115085808B (en) * 2022-06-09 2023-10-17 重庆邮电大学 A time-frequency joint post-equalization method for VLC system based on wavelet neural network
WO2024077449A1 (en) * 2022-10-10 2024-04-18 华为技术有限公司 Method for training model for positioning, positioning method, electronic device, and medium
CN115967597A (en) * 2022-11-21 2023-04-14 南京信息工程大学 Optical fiber nonlinear equalization method, system, device and storage medium
CN116094604A (en) * 2023-01-10 2023-05-09 复旦大学 Optical communication nonlinear compensation system based on photon convolution processor
CN116094604B (en) * 2023-01-10 2025-01-03 复旦大学 Optical communication nonlinear compensation system based on photon convolution processor
CN116667926A (en) * 2023-05-31 2023-08-29 华南理工大学 A Visible Light Communication Decoding Method and System Based on Deep Learning
CN116506261A (en) * 2023-06-27 2023-07-28 南昌大学 Visible light communication sensing method and system
CN116506261B (en) * 2023-06-27 2023-09-08 南昌大学 A visible light communication sensing method and system
CN118157770A (en) * 2024-03-11 2024-06-07 希烽光电科技(南京)有限公司 Coherent receiving end error correction system and method based on semantic segmentation
CN120110851A (en) * 2025-03-05 2025-06-06 重庆邮电大学 A signal equalization method based on neural network equalizer

Also Published As

Publication number Publication date
CN112600618B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN112600618A (en) Attention mechanism-based visible light signal equalization system and method
Zhang et al. Overfitting and underfitting analysis for deep learning based end-to-end communication systems
CN114881092B (en) A signal modulation recognition method based on feature fusion
CN118337576A (en) Lightweight automatic modulation identification method based on multichannel fusion
CN112381116A (en) Self-supervision image classification method based on contrast learning
CN110166391B (en) Baseband precoded MSK signal demodulation method based on deep learning under impulse noise
CN112039820B (en) A Modulation and Identification Method of Communication Signals Based on Evolutionary BP Neural Network of Quantum Swarm Mechanism
CN109586730B (en) A Polar Code BP Decoding Algorithm Based on Intelligent Post-processing
CN112115821B (en) Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
KR102073935B1 (en) Modulation recognition for radil signal
CN110233810B (en) A deep learning-based MSK signal demodulation method under mixed noise
CN112291005A (en) A receiver signal detection method based on Bi-LSTM neural network
CN112865866A (en) Visible light PAM system nonlinear compensation method based on GSN
CN113206808A (en) Channel coding blind identification method based on one-dimensional multi-input convolutional neural network
CN114070415A (en) Optical fiber nonlinear equalization method and system
CN110460359A (en) A Neural Network Based MIMO System Signal Receiving Method
Abbasi et al. Deep learning-based list sphere decoding for faster-than-Nyquist (FTN) signaling detection
Bansbach et al. Spiking neural network decision feedback equalization
Kalade et al. Using sequence to sequence learning for digital bpsk and qpsk demodulation
He et al. Design and implementation of adaptive filtering algorithm for vlc based on convolutional neural network
CN115733548A (en) Nonlinear Damage Compensation System and Method Based on Neural Network Equalizer
CN119743356B (en) A semi-supervised automatic modulation recognition method based on dual-channel signal comparison prediction
CN116055273B (en) A neural network cascade QPSK receiver and its auxiliary model training method
Tian et al. A deep convolutional learning method for blind recognition of channel codes
CN110474798B (en) A Method for Predicting Future Signals in Wireless Communication Using Echo State Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant