Disclosure of Invention
In view of the above defects or improvement requirements of the prior art, the invention provides a memristor array-based K-means classifier and a classification method thereof, and aims to solve the problem of high computational complexity in the prior art, in which neither online updating of weights nor complete expression of the Euclidean distance can be realized in hardware.
In order to achieve the above object, in a first aspect, the present invention provides a memristor array-based K-means classifier, including a first control module, a memristor array, a second control module, a data comparison module, and an output module;
the memristor array comprises a first memristor array, a second memristor array, a third memristor array and a fourth memristor array, each bit line of the first memristor array is connected with each bit line of the fourth memristor array, each bit line of the second memristor array is connected with each bit line of the third memristor array, each word line of the first memristor array is connected with each word line of the second memristor array, and each word line of the third memristor array is connected with each word line of the fourth memristor array;
the first control module is used for randomly selecting a clustering center from an input data set to be classified, storing the clustering center into the first memristor array and the second memristor array respectively after write voltage encoding, and storing the data to be classified in the data set into the third memristor array and the fourth memristor array respectively after write voltage encoding; it is also used for applying the data to be classified and the opposite numbers of the weights of the clustering center, after read voltage encoding, to the bit lines of the second memristor array and the first memristor array respectively, wherein each dimension of information of the clustering center is a weight;
the memristor array is used for performing, on the rows where the clustering center and the data to be classified are respectively located, the dot product operations between the read-voltage-encoded data to be classified and opposite numbers of the weights input by the first control module and the data stored in the array itself, accumulating the results by row, and outputting the accumulated results to the second control module;
the second control module is used for subtracting the row accumulation results, input by the memristor array, of the row where the data to be classified is located and the row where the clustering center is located, so as to obtain the Euclidean distance between the clustering center and the data to be classified, and outputting the Euclidean distance to the data comparison module;
the data comparison module is used for dividing the data to be classified into the class where the clustering center closest to the data to be classified is located, and outputting the classification result to the second control module and the output module respectively;
the second control module is also used for determining the row where the clustering center to be updated is located according to the classification result input by the data comparison module, and for applying the preset learning rate and its opposite number, after read voltage encoding, to the rows of the memristor array where the data to be classified and the clustering center to be updated are respectively located;
the memristor array is also used for realizing the dot product operation between the preset learning rate and the inverse number thereof input by the second control module and the self-stored data on the row where the to-be-classified data and the to-be-updated clustering center are respectively located, accumulating the obtained results according to columns to obtain each weight change value, and outputting the weight change value to the first control module;
the first control module is also used for respectively outputting each weight change value input by the memristor array to a memristor array bit line after being subjected to write coding;
the memristor array is also used for updating the weight of the clustering center to be updated based on each weight change value input on the bit line of the first control module;
and the output module is used for outputting the classification result of the data to be classified input by the data comparison module when the weight of the clustering center is not changed any more.
Further preferably, the memristor array is in translational symmetry with reference to a center line.
Further preferably, the memristor array size is (k+1) × 2M, where k is the number of cluster classes and M is the dimension of the sample data; the first memristor array and the second memristor array are in translational symmetry with the center line as a reference and are each formed by k rows and M columns of memristors; the third memristor array and the fourth memristor array are in translational symmetry with the center line as a reference and are each formed by 1 row and M columns of memristors.
In a second aspect, the invention provides a memristor array-based K-means classification method, which comprises the following steps:
s1, randomly selecting k data from the data set to be classified as initial clustering centers, performing write voltage encoding on them, and storing them into the first memristor array and the second memristor array respectively, wherein k is the cluster number;
s2, selecting first data in a data set to be classified as data to be classified, and storing the data to be classified into a third memristor array and a fourth memristor array after writing voltage coding;
s3, performing read voltage encoding on the data to be classified and on the opposite numbers of the weights of the first clustering center, and applying them to the bit lines of the second memristor array and the first memristor array respectively; performing, on the rows where the first clustering center and the data to be classified are located, the dot product operations between the read-voltage-encoded data input by the first control module and the data stored in the array itself; accumulating the results by row and subtracting the two row results to obtain the Euclidean distance between the first clustering center and the data to be classified;
s4, sequentially calculating Euclidean distances between the data to be classified and the rest clustering centers according to the method in the step S3;
s5, dividing the data to be classified into the class where the clustering center closest to the data to be classified is located, and determining the row where the clustering center to be updated is located according to the classification result;
s6, inputting the read-voltage-encoded preset learning rate and its opposite number to the rows where the data to be classified and the cluster center to be updated are respectively located, performing the dot product operation between the preset learning rate and its opposite number input by the second control module and the data stored in the array, accumulating the results by column to obtain the change value of each weight of the cluster center to be updated, writing the obtained change values into the memristor nodes of the cluster center to be updated, and thereby updating the weights;
s7, sequentially dividing the residual data in the data set to be classified into corresponding categories according to the method of the steps S2-S6;
s8, repeating the steps S2-S7 to iterate until the weight of each cluster center is not changed;
each bit line of the first memristor array is connected with each bit line of the fourth memristor array, each bit line of the second memristor array is connected with each bit line of the third memristor array, each word line of the first memristor array is connected with each word line of the second memristor array, each word line of the third memristor array is connected with each word line of the fourth memristor array, and each dimension of information of the clustering center is a weight.
Further preferably, after the data is written into the memristor by the writing voltage coding, the conductance value of the memristor is linearly related to the actual size of the data.
Further preferably, a first cluster center in the first memristor array is selected, a read-voltage-encoded coefficient −1 is applied to the word line of the row where the first cluster center is located, and the opposite number of each weight of the first cluster center is obtained.
Further preferably, the euclidean distance is determined by an amount of charge accumulation due to an output current on the memristor row, and the amount of accumulated charge is proportional to the euclidean distance.
Further preferably, the cluster center to be updated is the cluster center closest to the data to be classified.
Further preferably, the weight change value Δ W is represented by:
ΔW = η(Ui − Wp)
wherein η denotes the learning rate, Ui is the ith data to be classified, and Wp is the cluster center to be updated.
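For illustration only, the following sketch evaluates this update rule in software with arbitrary example values (η = 0.1 and the two vectors are not taken from the disclosure):

```python
import numpy as np

eta = 0.1                             # preset learning rate (illustrative)
U_i = np.array([0.2, 0.7, 0.5])       # ith data to be classified (illustrative)
W_p = np.array([0.1, 0.9, 0.4])       # cluster center to be updated (illustrative)

delta_W = eta * (U_i - W_p)           # weight change values
print(delta_W)                        # [ 0.01 -0.02  0.01]
print(W_p + delta_W)                  # updated cluster center: [0.11 0.88 0.41]
```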
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
1. the invention provides a memristor array-based K-means classifier, which is characterized in that a memristor array structure is utilized, a K-means clustering center with practical significance is directly mapped and stored into an array node, the structural networking of an algorithm is realized, all dimension information of the clustering center is used as the weight of the network, the practical significance of the network weight is increased, the non-normalized input data clustering is realized, and the calculation complexity caused by data normalization is reduced; by applying the conductance value gradient characteristic of the memristor to calculation of Euclidean distance and weight updating of multi-dimensional data, the problem of high calculation complexity caused by the fact that complete expression of Euclidean distance and online updating of weight cannot be achieved on hardware in the prior art is solved.
2. The invention provides a memristor array-based K-means classification method, which is characterized in that all dimension information of a clustering center of a K-means algorithm is used as a training weight, the conductance value gradient characteristic of a memristor is applied to the calculation of Euclidean distance of multi-dimensional data, the problem of complete expression of the Euclidean distance on the memristor array is solved, the learning rate is directly applied to the memristor array through voltage coding, the online updating of the clustering center on a hardware circuit is further realized, the circuit complexity caused by the calculation weight change of an external circuit is greatly reduced, and the time and energy consumption of data interaction are saved.
3. The memristor conductance gradient characteristic is applied to analog calculation of the Euclidean distance; the method can be used to calculate the Euclidean distance between input data and a clustering center to realize K-means clustering, and can also solve the similarity calculation problem of similar algorithms such as KNN (K-nearest neighbor) and RBF (radial basis function) neural networks in hardware circuits.
4. According to the K-means classifier based on the memristor array, due to the high-density structure of the nanoscale memristor array and the information storage capacity of the memristor resistor, the circuit size is small, the energy consumption is lower than that of a traditional CMOS structure, the overall performance is better than that of an existing computing framework, and the K-means classifier based on the memristor array is more suitable for an edge computing scene.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In order to achieve the above object, in a first aspect, the present invention provides a memristor array-based K-means classifier, as shown in fig. 1, including a first control module 1, a memristor array 2, a second control module 3, a data comparison module 4, and an output module 5;
the first control module 1 is bidirectionally connected with the memristor array 2, the memristor array 2 is bidirectionally connected with the second control module 3, the second control module 3 is bidirectionally connected with the data comparison module 4, and the data comparison module 4 is connected with the output module 5. As shown in fig. 2, the memristor array 2 includes a first memristor array 21, a second memristor array 22, a third memristor array 23, and a fourth memristor array 24; each bit line of the first memristor array 21 is connected with each bit line of the fourth memristor array 24, each bit line of the second memristor array 22 is connected with each bit line of the third memristor array 23, each word line of the first memristor array 21 is connected with each word line of the second memristor array 22, and each word line of the third memristor array 23 is connected with each word line of the fourth memristor array 24;
the first control module 1 is used for randomly selecting a clustering center from an input data set to be classified, respectively storing the clustering center into the first memristor array 21 and the second memristor array 22 after being subjected to writing voltage coding, and respectively storing data to be classified in the data set to be classified into the third memristor array 23 and the fourth memristor array 24 after being subjected to writing voltage coding; after reading voltage coding is carried out on the data to be classified and the opposite numbers of the weights of the clustering center, the data to be classified and the opposite numbers of the weights of the clustering center are respectively applied to bit lines of the second memristor array 22 and the first memristor array 21, wherein the information of each dimension of the clustering center is the weight;
the memristor array 2 is used for performing, on the rows where the clustering center and the data to be classified are respectively located, the dot product operations between the read-voltage-encoded data to be classified and opposite numbers of the weights input by the first control module 1 and the data stored in the array itself, accumulating the results by row, and outputting the accumulated results to the second control module 3;
the second control module 3 is used for subtracting the row accumulation results, input by the memristor array 2, of the row where the data to be classified is located and the row where the clustering center is located, so as to obtain the Euclidean distance between the clustering center and the data to be classified, and outputting the Euclidean distance to the data comparison module 4;
the data comparison module 4 is used for dividing the data to be classified into the class where the clustering center closest to the data to be classified is located, and outputting the classification result to the second control module 3 and the output module 5 respectively;
the second control module 3 is further configured to determine the row where the clustering center to be updated is located according to the classification result input by the data comparison module 4, and to apply the preset learning rate and its opposite number, after read voltage encoding, to the rows of the memristor array 2 where the data to be classified and the clustering center to be updated are respectively located;
the memristor array 2 is further used for realizing dot product operation between the preset learning rate and the inverse number thereof input by the second control module 3 and self-stored data on the row where the to-be-classified data and the to-be-updated clustering center are located, accumulating the obtained results according to columns to obtain each weight change value, and outputting the weight change value to the first control module 1;
the first control module 1 is further configured to output each weight change value input by the memristor array 2 to a bit line of the memristor array 2 after being subjected to write coding;
the memristor array 2 is further used for updating the weight of the clustering center to be updated based on each weight change value input on the bit line of the first control module 1;
the output module 5 is used for outputting the classification result of the data to be classified input by the data comparison module 4 when the weight of the clustering center is not changed any more.
Specifically, the memristor array 2 has a data storage function and a data calculation function. The data storage function converts the data voltage, obtained by encoding the input data with the write voltage, into the conductance value of a memristor node and stores it in the array; the data calculation function converts the read-voltage-encoded data voltage of the input data and the conductance of the node into a current and accumulates the resulting charge.
In this embodiment, for the data set to be classified S = {U1, U2, …, Ut}, each data to be classified Ui has M data dimensions, i.e. Ui = {xi1, xi2, …, xiM}. To divide the data in S into k classes, k cluster centers W = {W1, W2, …, Wk} are generated, and each cluster center, like the data to be classified, has M dimensions, i.e. Wj = {yj1, yj2, …, yjM}. As shown in fig. 2, the present embodiment employs a memristor array of (k+1) × 2M size, which is in translational symmetry with a center line as a reference, wherein the first memristor array and the second memristor array are in translational symmetry with the center line as a reference and are each formed by k rows and M columns of memristors, and the third memristor array and the fourth memristor array are in translational symmetry with the center line as a reference and are each formed by 1 row and M columns of memristors; here k is the number of cluster classes and M is the dimension of the data to be classified. The first memristor array and the second memristor array are used for storing the dynamically changing cluster centers W = {W1, W2, …, Wk}. In the memristor array, each node represents one dimension of one data item. The M dimensions, i.e. the M weights, of one cluster center are stored from left to right in the memristor cells of one row of the first memristor array, so the M × k memristor cells of the first memristor array can store all the weights of the k cluster centers. Similarly, the k cluster centers can be stored in the second memristor array. The third memristor array and the fourth memristor array are used for storing the ith input data to be classified Ui. The data stored in the first memristor array are identical to those in the second memristor array, and the data stored in the third memristor array are identical to those in the fourth memristor array, so that the array is in translational symmetry with the center line as a reference.
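To make this mapping concrete, the following is a minimal software sketch (not the claimed hardware) of how the (k+1) × 2M array can be laid out before the values are converted to conductances by write voltage encoding; the function name, the use of NumPy, and the assumption that the left half holds the first and fourth memristor arrays while the right half holds the second and third are illustrative only.

```python
import numpy as np

def build_array_layout(W, U, k, M):
    """Lay out a (k+1) x 2M data map mirroring the array of Fig. 2.

    Rows 0..k-1 hold the k cluster centers (first array | second array),
    row k holds the data to be classified (fourth array | third array).
    This only models the storage pattern, not the memristor devices.
    """
    layout = np.zeros((k + 1, 2 * M))
    layout[:k, :M] = W          # first memristor array: k x M weights
    layout[:k, M:] = W          # second memristor array: identical copy
    layout[k, M:] = U           # third memristor array: 1 x M data item
    layout[k, :M] = U           # fourth memristor array: identical copy
    return layout

# Example: k = 3 cluster classes, M = 4 dimensions (values are arbitrary)
k, M = 3, 4
W = np.random.rand(k, M)        # cluster-center weights y_j1..y_jM
U = np.random.rand(M)           # one data item x_i1..x_iM
print(build_array_layout(W, U, k, M).shape)   # (4, 8) == (k+1) x 2M
```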
Specifically, the first control module 1 includes a data input unit 11, a first read-write encoding unit 12, a first buffer unit 13, and a second buffer unit 14; the second control module 3 comprises a third buffer unit 31, a second read-write encoding unit 32 and a subtraction unit 33; the output module 5 comprises an output buffer unit 51 and a result output unit 52;
wherein the output end of the data input unit 11 is connected with one end of the first read-write encoding unit 12; the other end of the first read-write encoding unit 12 is bidirectionally connected with one end of the first buffer unit 13 and one end of the second buffer unit 14, respectively; the bit lines of the first memristor array 21 and the fourth memristor array 24 are bidirectionally connected with the other end of the first buffer unit 13; the bit lines of the second memristor array 22 and the third memristor array 23 are bidirectionally connected with the other end of the second buffer unit 14; the word lines of the memristor array 2 are bidirectionally connected with one end of the third buffer unit 31; the other end of the third buffer unit 31 is bidirectionally connected with the second read-write encoding unit 32, one end of the subtraction unit 33, and one end of the data comparison module 4, respectively; the other end of the data comparison module 4 is connected with the input end of the output buffer unit 51, and the output end of the output buffer unit 51 is connected with the input end of the result output unit 52;
FIG. 3 is a flow chart of the K-means clustering algorithm, which mainly includes a data input stage (S1-S2), a distance calculation stage (S3), and a weight update stage (S4-S5).
Correspondingly, the functions of each module and unit in the K-means classifier in FIG. 1 are as follows:
a data input stage: the data input unit 11 receives the data set to be classified, selects the cluster center data and the data to be classified, and outputs them to the first read-write encoding unit 12; the first read-write encoding unit 12 encodes the cluster center data and the data to be classified input by the data input unit 11 with a write voltage of fixed amplitude; the encoded cluster center data are input to the bit lines of the memristor array 2 through the first buffer unit 13 and the second buffer unit 14, so that the cluster centers are stored in the first memristor array 21 and the second memristor array 22 respectively; the write-encoded data to be classified input by the first read-write encoding unit 12 are likewise input to the bit lines of the memristor array 2, so that the data to be classified are stored in the third memristor array 23 and the fourth memristor array 24 respectively, which completes the storage of the data to be classified and of the dimension information of the cluster centers in the memristor array 2.
A distance calculation stage: with each dimension of information of a cluster center as a weight, the data to be classified and the opposite numbers of the weights are read-voltage encoded and then applied, through the second buffer unit 14 and the first buffer unit 13 respectively, to the bit lines of the second memristor array 22 and the first memristor array 21; the memristor array 2 performs, on the rows where the data to be classified and the cluster centers are located, the dot product operations between the read-voltage-encoded inputs and the data stored in the array itself, accumulates the results by row, and outputs the accumulated results through the third buffer unit 31 to the subtraction unit 33 for subtraction, obtaining the Euclidean distance between the data to be classified and each cluster center, which is output to the data comparison module 4 through the third buffer unit 31.
And a weight updating stage: the data comparison module 4 receives, from the third buffer unit 31, the Euclidean distances between the data to be classified and the cluster centers, compares them, divides the data to be classified into the class of the nearest cluster center, and outputs the classification result to the third buffer unit 31 and the output buffer unit 51, the temporary classification result being stored in the output buffer unit 51. The third buffer unit 31 determines the row where the cluster center to be updated is located according to the classification result input by the data comparison module 4, and applies the read-voltage-encoded preset learning rate and its opposite number to the rows of the memristor array 2 where the data to be classified and the cluster center to be updated are respectively located. The memristor array 2 performs, on these rows, the dot product operation between the preset learning rate and its opposite number input by the third buffer unit 31 and the data stored in the array itself, accumulates the results by column to obtain each weight change value, and outputs the weight change values to the first buffer unit 13 and the second buffer unit 14. After the first buffer unit 13 and the second buffer unit 14 pass the weight change values to the first read-write encoding unit 12 for write encoding, the encoded values are output through the first buffer unit 13 and the second buffer unit 14 to the bit lines of the memristor array 2, so that the memristor array updates the weights of the cluster center to be updated.
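As a software illustration of this stage (an idealized model in which column charge is taken to be proportional to the sum of input-voltage × stored-value products; the vectors and η = 0.1 are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def weight_update_stage(U_i, W_p, eta=0.1):
    """Apply +eta to the row of the data to be classified and -eta to the row
    of the cluster center to be updated; the column-wise accumulation then
    yields dW = eta * (U_i - W_p), which is written back into the W_p row."""
    column_charge = eta * U_i + (-eta) * W_p   # accumulation by column
    dW = column_charge                         # = eta * (U_i - W_p)
    return W_p + dW                            # updated cluster-center row

U_i = np.array([0.2, 0.7, 0.5])                # data-to-be-classified row
W_p = np.array([0.1, 0.9, 0.4])                # nearest cluster-center row
print(weight_update_stage(U_i, W_p))           # [0.11 0.88 0.41]
```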
After each data to be classified in the data set to be classified is subjected to the above process for multiple times, when the category of each data in the data set to be classified is not changed any more, the classification result is transmitted and output to the result output unit 52, so that the final classification result is output.
In a second aspect, the invention provides a memristor array-based K-means classification method. The invention simplifies the K-means algorithm into a single-layer perceptron model whose inputs are all the dimension information of the data to be classified, whose outputs are the classes of the data to be classified, and whose training weights are all the dimension information of the cluster centers. The perceptron model is implemented with the memristor array, the data in the data set S are used repeatedly to train the cluster centers online, the updating of all dimension information, namely the weights, is completed, and clustering is finally realized. Fig. 4 shows the neural network weight mapping method provided by the present invention.
Specifically, the invention provides a memristor array-based K-means classification method, which comprises the following steps:
s1, randomly selecting k data from the data set to be classified S = {U1, U2, …, Ut} as the initial cluster center weights W = {W1, W2, …, Wk}, performing write voltage encoding on them, and storing them into the first memristor array and the second memristor array respectively, wherein k is the cluster number;
Specifically, taking the K-means classifier provided in the first aspect of the present invention as an example, the M dimensional data of the jth (j = 1, 2, …, k) cluster center are sequentially encoded by the first read-write encoding unit and, after passing through the first buffer unit and the second buffer unit, are written into the memristor nodes of the jth row of the first memristor array and of the second memristor array respectively.
S2, selecting the first data U1 in the data set to be classified as the data to be classified, performing write voltage encoding on it, and storing it into the third memristor array and the fourth memristor array respectively;
Specifically, after data are written into a memristor by write voltage encoding, the conductance value of the memristor is linearly related to the actual value of the data, which can be expressed as:
Gx = Gmin + (X − Xmin) / (Xmax − Xmin) × (Gmax − Gmin)
wherein Gx represents the memristor conductance value after the data are write-voltage encoded and written into the memristor, X represents the data value, Gmax and Gmin respectively represent the maximum and minimum values of the conductance, and Xmax and Xmin represent the maximum and minimum values of the data. The actual data are thereby mapped to the device conductance. Since the encoding voltage amplitude is fixed, different numbers of voltage pulses must be applied for the data to reach a given conductance value; this is the write voltage encoding process. The write data encoding result is N = f(Gx), where the function f is the memristor pulse-conductance characteristic.
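As an illustration only, the following sketch models this write voltage encoding in software, assuming the linear data-to-conductance mapping above and an idealized linear pulse-conductance characteristic f; the concrete device parameters (Gmin, Gmax, pulse step) are placeholder assumptions, not values from the disclosure.

```python
import numpy as np

# Illustrative device parameters (assumptions, not measured values)
G_MIN, G_MAX = 1e-6, 1e-4               # conductance range in siemens
X_MIN, X_MAX = 0.0, 1.0                 # data range after preprocessing
PULSE_STEP = (G_MAX - G_MIN) / 100.0    # conductance change per write pulse

def data_to_conductance(x: float) -> float:
    """Linear mapping of a data value X to a target conductance Gx."""
    return G_MIN + (x - X_MIN) / (X_MAX - X_MIN) * (G_MAX - G_MIN)

def write_pulse_count(x: float) -> int:
    """Number N of fixed-amplitude write pulses needed to reach Gx,
    assuming an idealized linear pulse-conductance characteristic f."""
    g_target = data_to_conductance(x)
    return int(round((g_target - G_MIN) / PULSE_STEP))

print(write_pulse_count(0.37))   # 37 pulses under these assumptions
```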
S3, performing read voltage encoding on the data to be classified U1 and on the opposite numbers −W1 of the weights of the first cluster center, applying them to the bit lines of the second memristor array and the first memristor array respectively, performing, on the rows where the first cluster center W1 and the data to be classified U1 are respectively located, the dot product operations between the read-voltage-encoded data to be classified and the opposite numbers of the weights of the first cluster center input by the first control module and the data stored in the array itself, and accumulating the results by row and subtracting them to obtain the Euclidean distance between the first cluster center and the data to be classified;
Specifically, each dimension of information of a cluster center is a weight. The first cluster center W1 in the first memristor array is selected, a read-voltage-encoded coefficient −1 is applied to the word line of its row, and the opposite number of each weight of the first cluster center is calculated based on Ohm's law. Fig. 5 is a schematic diagram of the memristor array-based weight reading method provided by the present invention: the row of the first cluster center W1 is selected through the third buffer unit, a read voltage is applied to that row, and through the action of Ohm's law on the conductance values of the memristors of the W1 row, the opposite numbers of the weights of the first cluster center W1 are obtained. Specifically, the read-voltage-encoded coefficient −1 is input through the third buffer unit to the row of the first cluster center W1, and the current values of all bit lines of the first memristor array are collected by the first buffer unit; from the conductance values corresponding to the first cluster center W1 and the mapping relation between conductance and actual data, the opposite numbers of the weights, −y11, −y12, …, −y1M, are obtained and stored in the first buffer unit.
Specifically, as shown in fig. 6, the opposite numbers of the weights of the first cluster center (−y11, −y12, …, −y1M) and the data to be classified (x11, x12, …, x1M), each of M dimensions, are encoded by the read data encoding module; the encoded opposite numbers of the weights are input to the first memristor array and the fourth memristor array through the first buffer unit, and the encoded data to be classified are input to the second memristor array and the third memristor array through the second buffer unit, so that the dot product operations between the read-voltage-encoded data to be classified and the opposite numbers of the weights input by the first control module and the data stored in the array itself are carried out, and the results are accumulated by row and then subtracted to obtain the Euclidean distance between the first cluster center and the data to be classified. The essence of this process is that the read voltages are converted into currents through the action of the read-voltage-encoded data voltages on the node conductances, and these currents represent the result of a vector dot product followed by addition; the third buffer unit collects the charge of the row where the first cluster center W1 is located and of the row where the data to be classified U1 is located, and these charges represent the results of the vector dot product and accumulation. The charge of the U1 row represents U1·U1 − U1·W1, and the charge of the W1 row represents U1·W1 − W1·W1. Both are output to the subtraction unit through the third buffer unit for subtraction, i.e.
(U1·U1 − U1·W1) − (U1·W1 − W1·W1) = U1·U1 − 2U1·W1 + W1·W1 = ||U1 − W1||²,
so that the Euclidean distance between the data to be classified U1 and the cluster center W1 is obtained (as its square, which preserves the distance ordering used for classification) and stored in the third buffer unit. Further, the Euclidean distance is determined by the amount of charge accumulated by the current, and the accumulated charge amount is proportional to the Euclidean distance.
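The two-row charge scheme above can be checked with a short software sketch (an idealized model in which row charge is taken to be proportional to the sum of input-voltage × conductance products; the vectors are illustrative):

```python
import numpy as np

def crossbar_squared_distance(U1, W1):
    """Two-row charge scheme described above.

    W1 row (first | second arrays): inputs (-W1 | U1), stored data (W1 | W1).
    U1 row (fourth | third arrays): inputs (-W1 | U1), stored data (U1 | U1).
    Row charge ~ dot(inputs, stored); subtracting the two row charges gives
    ||U1 - W1||^2.
    """
    q_w1_row = np.dot(-W1, W1) + np.dot(U1, W1)   # U1.W1 - W1.W1
    q_u1_row = np.dot(-W1, U1) + np.dot(U1, U1)   # U1.U1 - U1.W1
    return q_u1_row - q_w1_row

U1 = np.array([0.2, 0.7, 0.5])
W1 = np.array([0.1, 0.9, 0.4])
print(crossbar_squared_distance(U1, W1))          # 0.06
print(np.sum((U1 - W1) ** 2))                     # 0.06 (matches)
```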
The method applies the gradual conductance change characteristic of the memristor to the calculation of the Euclidean distance; it can not only be used to calculate the Euclidean distance between input data and a cluster center to realize K-means clustering, but can also solve the similarity calculation problem of algorithms such as KNN and RBF neural networks in hardware circuits.
S4, sequentially calculating, according to the method of step S3, the Euclidean distances between the data to be classified and the remaining cluster centers W2, W3, …, Wk;
s5, the obtained data to be classified and each clustering center W1,W2,…,WkComparing Euclidean distances between the data to be classified, dividing the data to be classified into the class where the clustering center closest to the data to be classified is located, and determining the row where the clustering center to be updated is located according to the classification result, wherein the clustering center to be updated is the clustering center closest to the data to be classified.
Specifically, the charges representing the euclidean distance stored in the third cache unit are transmitted to the data comparison module, the data comparison module divides the data to be classified into the class where the cluster center closest to the data to be classified is located by comparing the magnitude of the charge amount, and feeds the classification result back to the third cache unit and outputs the classification result to the output cache unit.
S6, inputting the read-voltage-encoded preset learning rate η and its opposite number −η to the rows where the data to be classified and the cluster center to be updated are respectively located, performing the dot product operation between the preset learning rate and its opposite number input by the second control module and the data stored in the array, accumulating the results by column to obtain the change value of each weight of the cluster center to be updated, writing the obtained change values into the memristor nodes of the cluster center to be updated, and thereby updating the weights;
Specifically, as shown in fig. 7(a), the voltage pulse corresponding to the read-voltage-encoded opposite number of the learning rate, −η, is input through the third buffer unit to the row of the cluster center Wp closest to the data to be classified; at the same time, the voltage pulse corresponding to the read-voltage-encoded learning rate η is input through the third buffer unit to the row of the data to be classified U1. In this embodiment, the number of voltage pulses corresponding to the learning rate is set to 1, i.e. the minimum pulse number, and the learning rate η takes a value of 0.1. The current values on the columns of the rows where the cluster center closest to the data to be classified and the data to be classified are located are obtained based on Ohm's law, and after the current values are accumulated by column, each weight change value of the cluster center closest to the data to be classified is calculated as ΔW = η(Ui − Wp), where η denotes the learning rate, Ui is the ith data to be classified, and Wp is the cluster center closest to the data to be classified. Then, as shown in fig. 7(b), ΔW is encoded by the write encoding unit and written into the row of the cluster center Wp closest to the data to be classified in the memristor array, thereby realizing the update of the cluster center on the memristor array.
S7, sequentially dividing the residual data in the data set to be classified into corresponding categories according to the method of the steps S2-S6;
s8, repeating the steps S2-S7 to iterate until the weight of each cluster center is not changed;
the first memristor array is connected with each bit line of the fourth memristor array, the second memristor array is connected with each bit line of the third memristor array, the first memristor array is connected with each word line of the second memristor array, and the third memristor array is connected with each word line of the fourth memristor array.
The invention provides a memristor array-based K-means classifier and a classification method thereof, which take all dimension information of a clustering center of a K-means algorithm as training weight and map the training weight in a memristor array, creatively provides a memristor array-based Euclidean distance calculation method, solves the problem of complete expression of Euclidean distance on the memristor array, and can be used for realizing data clustering of a large amount of data on the basis of a hardware circuit. According to the invention, the memristor array is utilized to reduce the data complexity in the data Euclidean distance calculation process, reduce the data storage time and the operation power consumption, and can be used for an edge calculation scene in the future.
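Putting steps S1 to S8 together, the following behavioural sketch simulates the whole method in software using the idealized crossbar models from the sketches above; it does not model conductance quantization, pulse counts, or device non-idealities, and the data set values are illustrative only.

```python
import numpy as np

def crossbar_squared_distance(U, W_row):
    # Two-row charge scheme: (U.U - U.W) - (U.W - W.W) = ||U - W||^2
    return (np.dot(U, U) - np.dot(U, W_row)) - (np.dot(U, W_row) - np.dot(W_row, W_row))

def memristor_kmeans(S, k, eta=0.1, max_epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    # S1: randomly pick k samples as initial cluster centers (arrays 1/2)
    W = S[rng.choice(len(S), size=k, replace=False)].astype(float)
    labels = np.full(len(S), -1)
    for _ in range(max_epochs):                    # S8: iterate
        W_before = W.copy()
        for i, U in enumerate(S):                  # S2/S7: feed each sample
            # S3/S4: Euclidean distances to all cluster centers
            d = np.array([crossbar_squared_distance(U, W[j]) for j in range(k)])
            p = int(np.argmin(d))                  # S5: nearest cluster center
            labels[i] = p
            # S6: online update of the nearest center, dW = eta * (U - W_p)
            W[p] = W[p] + eta * (U - W[p])
        if np.allclose(W, W_before):               # S8: weights no longer change
            break
    return labels, W

# Illustrative 2-D data set with two well-separated groups
S = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
              [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]])
labels, centers = memristor_kmeans(S, k=2)
print(labels)    # e.g. [0 0 0 1 1 1] (cluster indices may be swapped)
print(centers)
```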
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.