
WO2018226014A1 - Electronic device and control method thereof - Google Patents

Electronic device and control method thereof

Info

Publication number
WO2018226014A1
WO2018226014A1 (PCT/KR2018/006400, KR2018006400W)
Authority
WO
WIPO (PCT)
Prior art keywords
filter
electronic device
pattern
main pattern
optimization data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2018/006400
Other languages
English (en)
Korean (ko)
Inventor
조한수
김경훈
박영환
김석진
김현중
권동욱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020170144071A external-priority patent/KR102548718B1/ko
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to CN201880034791.6A priority Critical patent/CN110678883B/zh
Priority to EP18812967.0A priority patent/EP3582152B1/fr
Priority to US16/617,967 priority patent/US11450086B2/en
Publication of WO2018226014A1 publication Critical patent/WO2018226014A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0495Quantised networks; Sparse networks; Compressed networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • The present invention relates to an electronic device and a control method thereof, and more particularly, to an electronic device and a control method for reducing the amount of computation required when performing learning with a neural network.
  • Machine learning, a field of artificial intelligence, refers to a technology for researching and building systems and algorithms that collect and analyze large-scale big data to predict the future and improve their own performance.
  • A neural network is an algorithm, built with the intention of mimicking the function of the human brain, in which an activation function compares the sum of a plurality of inputs multiplied by their weights against a specific boundary to determine the final output.
  • Typical examples include the convolutional neural network (CNN), which is frequently used for image recognition, and the recurrent neural network (RNN), which is widely used for speech recognition.
  • the present invention has been made to solve the above-described problem, and an object of the present invention is to provide an electronic device and a control method thereof for improving the computation speed and the memory usage in the neural network computation process.
  • According to an embodiment of the present invention for achieving the above object, an electronic device may include a memory and a processor configured to identify, for each filter of a main pattern selected from a plurality of filters generated by learning with a neural network, an operation instruction for filtering input data of the neural network, and to store the identified operation instructions in the memory.
  • the processor may perform the filtering operation by the filter of the main pattern by using the stored operation instruction.
  • Here, the filter of the main pattern may be a filter selected from the generated plurality of filters based on the statistical weight of the computation amount occupied by each pattern in which elements are arranged.
  • the processor may select the main pattern based on a frequency for each pattern in which elements are arranged in the generated plurality of filters.
  • The electronic device may further include a communication unit, the processor may transmit the operation instruction to another electronic device through the communication unit, and the other electronic device may perform the filtering operation by the filter of the main pattern using the operation instruction.
  • The processor may identify optimization data in which zero elements are removed from the filter of the selected main pattern and operation instructions corresponding to the optimization data, and store the identified optimization data and operation instructions in the memory.
  • the processor may perform the filtering operation by the filter of the main pattern by using the stored optimization data and operation instructions corresponding to the identification information included in the optimization data.
  • The processor may transmit the optimization data and the operation instruction to another electronic device through the communication unit, and the other electronic device may perform the filtering operation by the filter of the main pattern using the received optimization data and the operation instruction corresponding to the identification information included in the optimization data.
  • According to an embodiment, a control method of the electronic device includes identifying, for each filter of a main pattern selected from a plurality of filters generated by learning with the neural network, an operation instruction for filtering input data of the neural network, and storing the identified operation instruction.
  • control method may further include performing a filtering operation by a filter of the main pattern by using the stored operation command.
  • Here, the filter of the main pattern may be a filter selected from the generated plurality of filters based on the statistical weight of the computation amount occupied by each pattern in which elements are arranged.
  • the determining may include selecting the main pattern based on a frequency for each pattern in which the elements are arranged in the generated plurality of filters.
  • the control method may further include transmitting the stored operation command to another electronic device, and the other electronic device may perform a filtering operation by the filter of the main pattern using the operation command.
  • The identifying may include identifying optimization data in which zero elements are removed from the filter of the selected main pattern and an operation instruction corresponding to the optimization data, and the storing may include storing the optimization data and the operation instruction.
  • The control method may further include performing a filtering operation by the filter of the main pattern by using the stored optimization data and the operation instruction corresponding to identification information included in the optimization data.
  • The control method may further include transmitting the stored optimization data and operation instruction to another electronic device, and the other electronic device may perform the filtering operation by the filter of the main pattern using the optimization data and the operation instruction corresponding to the identification information included in the optimization data.
  • According to the various embodiments described above, the amount of computation may be greatly reduced and the computation speed improved, and memory usage may also be reduced compared to the conventional art.
  • FIGS. 1A and 1B are block diagrams schematically illustrating a configuration of an electronic device according to different embodiments of the present disclosure
  • FIG. 2 is a diagram illustrating a general CNN operation process of performing a classification operation by applying a plurality of filters to input data according to an embodiment of the present invention
  • FIGS. 3A and 3B are views for explaining filters of a main pattern according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a structure of optimization data for a filter having an A pattern, which is one of main patterns, according to an embodiment of the present invention
  • FIG. 5 is a block diagram schematically illustrating a configuration of another electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram illustrating a machine learning system for receiving a calculation command for a main filter from a server and performing a convolution operation using the same according to an embodiment of the present invention
  • FIGS. 7A and 7B are diagrams for describing a method of learning various types of input images and recognizing the input image using the learned data according to one embodiment of the present invention
  • FIG. 8 is a flowchart illustrating a method of performing learning by a neural network according to an embodiment of the present invention.
  • ordinal numbers such as “first” and “second” may be used to distinguish between components. These ordinal numbers are used to distinguish the same or similar components from each other, and the meaning of the terms should not be construed as limited by the use of these ordinal numbers. For example, the components combined with these ordinal numbers should not be construed as limiting the order of use or arrangement of the components. If necessary, the ordinal numbers may be used interchangeably.
  • Terms such as "module" and "unit" refer to a component that performs at least one function or operation, and such a component may be implemented as hardware, software, or a combination of hardware and software.
  • A plurality of "modules", "units", "parts", and the like may be integrated into at least one module or chip and implemented by at least one processor (not shown), except where each of them needs to be implemented as separate specific hardware.
  • When a part is connected to another part, this includes not only a direct connection but also an indirect connection through another medium.
  • When a part is said to include a certain component, this means that it may further include other components rather than excluding them, unless specifically stated otherwise.
  • FIGS. 1A and 1B are block diagrams schematically illustrating a configuration of an electronic device according to an embodiment of the present disclosure.
  • an electronic device 100 performing learning by a neural network includes a memory 110 and a processor 120.
  • the memory 110 is configured to store training data by a neural network.
  • the training data may include a plurality of filters generated by learning by the neural network.
  • the memory 110 may store data in which preprocessing has been performed on the generated plurality of filters, which will be described later.
  • The memory 110 may be implemented as a memory of various formats, such as a hard disk drive (HDD), a solid state drive (SSD), dynamic random access memory (DRAM), static random access memory (SRAM), ferroelectric random access memory (FRAM), or flash memory.
  • the processor 120 is a component that controls the overall operation of the electronic device 100.
  • the processor 120 performs preprocessing on the filters of the main pattern selected from the plurality of filters generated by the learning of the neural network.
  • Here, a filter is a mask having weights and is defined as a matrix of weights; it is also referred to as a window or a kernel.
  • The weights constituting the filter matrix consist of zero elements, which are zero or can be approximated to zero, and non-zero elements having a constant value between 0 and 1, and the filter may have various patterns according to its function.
  • For example, the processor 120 may apply a filter having weights to an input image and extract a feature map by taking, as each pixel value of the output image, the sum of the products of the image values and the filter weights (a convolution operation).
  • Multiple filters may be applied to the input image, and a plurality of feature maps may be extracted according to the number of applied filters.
  • Such a convolution process may be repeated across multiple layers.
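  • As an illustration of the convolution operation described above, the following is a minimal sketch (not part of the original disclosure) of applying a single filter to a single-channel image with stride 1 and no padding; the function name and shapes are assumptions.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a k x k filter over a 2-D image (stride 1, no padding) and return
    the feature map whose entries are the sums of the element-wise products."""
    k = kernel.shape[0]
    out_h = image.shape[0] - k + 1
    out_w = image.shape[1] - k + 1
    feature_map = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            region = image[y:y + k, x:x + k]
            feature_map[y, x] = np.sum(region * kernel)  # multiply-accumulate
    return feature_map

# Example: a 3x3 filter whose non-zero elements form a vertical pattern
image = np.random.rand(5, 5)
filter_a = np.array([[0.0, 0.5, 0.0],
                     [0.0, 0.5, 0.0],
                     [0.0, 0.5, 0.0]])
print(convolve2d(image, filter_a).shape)  # (3, 3)
```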
  • The filters learned differ according to the learning target of the CNN, and the patterns of the selected filters change accordingly. That is, the filters to be learned and the filters to be selected vary depending on what the CNN learns, for example cats, puppies, pigs, or cows.
  • the processor 120 may determine which type of feature the input original data has by applying a CNN by combining multiple filters that may extract different features.
  • the filter of the main pattern is a filter in which non-zero elements are arranged in the main pattern among the plurality of trained filters.
  • Specifically, it refers to a filter whose pattern appears with a frequency above a predetermined ratio among the trained filters. This will be described in detail with reference to FIGS. 3A and 3B.
  • The preprocessing performed by the processor 120 includes identifying, for each filter of the selected main pattern, an operation instruction for filtering the input data of the neural network using that filter.
  • Here, the operation instruction is an instruction that replaces the convolution operation for filtering the input data of the neural network, and may be identified by compilation.
  • the confirmed operation instruction may be stored in the memory 110.
  • the processor 120 may perform the filtering operation by the filter of the main pattern according to the stored operation instruction.
  • the processor 120 may further include an operation module (not shown) for performing a filtering operation.
  • As shown in FIG. 1B, the electronic device 100′ may further include a communication unit 130, and the processor 120 may transmit the operation instruction to another electronic device 200 through the communication unit 130.
  • the other electronic device 200 may include a calculation module, and may perform a filtering operation by a filter of a main pattern using an operation command.
  • The communication unit 130 may transmit and receive wireless signals to and from the other electronic device 200 over a mobile communication network built in accordance with standards such as Global System for Mobile Communication (GSM), Code Division Multiple Access (CDMA), CDMA2000, Enhanced Voice-Data Optimized or Enhanced Voice-Data Only (EV-DO), Wideband CDMA (WCDMA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), and Long Term Evolution-Advanced (LTE-A).
  • In addition, the communication unit 130 may support Wi-Fi, Bluetooth, and Wi-Fi Direct, and may include an Ethernet terminal for connection to a wired network.
  • the communication unit 130 may transmit a calculation command to the other electronic device 200 by wire, and in this case, may include a connector for wired connection with the other electronic device 200.
  • the processor 120 may obtain optimization data from which zero elements are removed from the filter of the selected main pattern, and check an operation instruction for performing a filtering operation according to the obtained optimization data.
  • Specifically, the processor 120 removes zero elements from the filter of the selected main pattern and generates optimization data for the filter of the main pattern, composed of the non-zero elements and identification information for identifying the pattern.
  • The optimization data has a data structure in which the identification information and the non-zero elements are arranged in a line.
  • The operation instruction is an instruction that causes a specific computing element, among a plurality of computing elements included in the operation module that filters the input data of the neural network, to perform a specific operation.
  • the plurality of computing elements may be arranged in a predetermined structure to constitute a computing module.
  • Here, the operation module may be included in the other electronic device 200 that communicates with the electronic device 100′, and the electronic device 100′ may transmit the optimization data and operation instructions stored in the memory 110 to the other electronic device 200.
  • the other electronic device 200 may perform the filtering operation of the neural network by the filter of the main pattern by using the optimization data and the operation command received from the electronic device 100 '.
  • the processor 120 may store optimization data and arithmetic instructions corresponding to the filter of the main pattern in the memory 110 for each type of the main pattern.
  • FIG. 2 illustrates a general CNN calculation process of performing a classification operation by applying a plurality of filters to input data according to an embodiment of the present invention.
  • a plurality of filters 22-1 to 22-n may be applied to the input image 21 of the CNN.
  • the filters 22-1 to 22-n are different depending on the type of CNN, but generally, 64 to 512 types of filters may be configured.
  • the input image 21 may be divided into a plurality of images before being input to the CNN.
  • The input image 21 may have a certain depth for expressing color, and may be composed of three pieces of data represented by, for example, R (red), G (green), and B (blue).
  • CNN basically has a structure in which a convolutional layer and a pooling layer are repeated.
  • In a convolutional layer, a plurality of feature maps 23-1 to 23-n for the input image 21 are generated based on the number of filters 22-1 to 22-n to be applied.
  • a plurality of feature maps are sampled in the pooling layer.
  • a plurality of feature maps 24-1 to 24-n having a reduced size may be generated.
  • Finally, the input image 21 is classified as a specific object by a fully connected layer. FIG. 2 shows an example in which the input image 21 is classified as "cat".
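  • As a rough illustration of this layer structure, the following is a sketch under the assumptions of a single feature map, 2x2 max pooling, and a toy fully connected classifier; none of these specifics come from the disclosure.

```python
import numpy as np

def max_pool2x2(feature_map: np.ndarray) -> np.ndarray:
    """Downsample a feature map by taking the maximum of each 2x2 block (pooling layer)."""
    h, w = feature_map.shape[0] // 2, feature_map.shape[1] // 2
    pooled = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            pooled[y, x] = feature_map[2 * y:2 * y + 2, 2 * x:2 * x + 2].max()
    return pooled

def classify(pooled: np.ndarray, fc_weights: np.ndarray, labels: list) -> str:
    """Toy fully connected layer: flatten, score each class, return the best label."""
    scores = fc_weights @ pooled.flatten()
    return labels[int(np.argmax(scores))]

pooled = max_pool2x2(np.random.rand(4, 4))   # 4x4 feature map -> 2x2
fc = np.random.rand(2, pooled.size)          # two classes, e.g. "cat" / "dog"
print(classify(pooled, fc, ["cat", "dog"]))
```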
  • FIGS. 3A and 3B are diagrams for describing filters of a main pattern according to an embodiment of the present invention.
  • In a trained filter, zero elements, which are zero or can be approximated to zero, and non-zero elements having a constant value between 0 and 1 are arranged to form a certain pattern.
  • Among such filters, one whose pattern is used at a particularly high frequency is referred to as a filter having a main pattern.
  • A filter having such a main pattern is widely used regardless of the neural network type, and an image can be classified with high probability using filters having such main patterns. Since the computation for filters having main patterns occupies a statistically high share of the total computation of the neural network, processing efficiency can be improved by performing preprocessing on these main filters.
  • The processor 120 may select a main pattern based on the frequency of each pattern in which elements are arranged in the plurality of filters generated by learning with the neural network. That is, a predetermined number of patterns may be selected as main patterns in descending order of appearance frequency among the patterns of the generated filters. Alternatively, a pattern appearing more than a predetermined number of times among the patterns of the generated plurality of filters may be selected as a main pattern.
  • Alternatively, the filter of the main pattern may be selected based on the statistical weight of the computation amount occupied by each pattern in which elements are arranged in the plurality of filters generated by learning with the neural network. That is, filters of a predetermined number of different main patterns may be selected in descending order of their share of the total computation required to process the input data, or patterns may be selected such that the sum of their computation amounts exceeds a certain ratio.
  • For example, in the case of VGG16, a type of neural network, the processor 120 may select the filters of 20 patterns that occupy 60% of the computation amount as the filters on which to perform preprocessing.
  • In the case of ResNet, another type of neural network, the processor 120 may select the filters of 20 patterns that occupy 40% of the computation amount as the filters on which to perform preprocessing.
  • As described above, when a filter having a main pattern is applied to the input data, the present invention aims to reduce the amount of computation required by the convolution operation of the main-pattern filter by storing in advance the operation instruction corresponding to filtering the input data with the applied filter.
  • In the above description, the positions of elements in the two-dimensional (x, y) plane of a filter form a predetermined pattern; however, the target of the preprocessing can be extended from the 'filter' unit to the 'pattern' unit.
  • In this case, the operation instruction for the area in which the main pattern exists may be stored in advance.
  • That is, performing the operation on the area in which the main pattern exists using a pre-stored operation instruction, while performing a filtering operation by conventional convolution on the remaining areas, also falls within the technical idea of the present invention.
  • FIG. 4 illustrates a structure of optimization data for a filter having an A pattern, which is one of main patterns.
  • Conventionally, the A filter applied to the input data has a data array structure in which the zero elements and non-zero elements included in the A filter are arranged in series.
  • In contrast, the optimization data of the A filter proposed in the present invention includes, in the first field of the data array, a tag indicating that the applied filter has the A pattern, and the remaining fields contain only the non-zero elements of the A filter.
  • the tag functions as identification information for identifying the A pattern.
  • That is, the optimization data has a structure in which all zero elements are removed from the existing data array and only a tag indicating the pattern type is added.
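  • A minimal sketch of this optimization-data structure might look like the following; the concrete tag values and the flat-list layout are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical tags serving as the identification information for each main pattern
PATTERN_TAGS = {"A": 0, "B": 1, "C": 2}

def to_optimization_data(filt: np.ndarray, pattern_name: str, eps: float = 1e-6) -> list:
    """Drop the zero elements and prepend the pattern tag, so that only the tag and
    the non-zero weights are stored (the structure described for FIG. 4)."""
    non_zero = filt[np.abs(filt) > eps]  # non-zero elements in row-major order
    return [PATTERN_TAGS[pattern_name]] + non_zero.tolist()

filter_a = np.array([[0.0, 0.5, 0.0],
                     [0.0, 0.7, 0.0],
                     [0.0, 0.2, 0.0]])
print(to_optimization_data(filter_a, "A"))  # [0, 0.5, 0.7, 0.2]
```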
  • FIG. 5 is a block diagram schematically illustrating a configuration of another electronic device according to an embodiment of the present disclosure.
  • the other electronic device 200 is configured to apply the training data received from the electronic device 100 'to the CNN to generate output data that is determined or predicted from the input data.
  • the electronic device 100 ' will be referred to as a first electronic device and another electronic device 200 as a second electronic device.
  • the second electronic device 200 includes a communication unit 210, a calculation module 220, a memory 230, and a processor 240.
  • the communication unit 210 is configured to communicate with the first electronic device 100 ′ that performs learning by the neural network.
  • the second electronic device 200 may receive an operation command of a filter selected by the first electronic device 100 ′ from the first electronic device 100 through the communication unit 210.
  • the second electronic device 200 may receive an operation command corresponding to the optimization data and the optimization data of the selected filter from the first electronic device 100 ′.
  • the communication unit 210 may receive the updated operation command whenever the operation command is updated according to the learning of the first electronic device 100 ′.
  • the communication unit 210 of the second electronic device 200 updates the optimization data of the main filter and the operation command corresponding to the optimization data according to the learning of the first electronic device 100 ′. It may receive updated optimization data and operation instructions.
  • The communication unit 210 may transmit and receive wireless signals to and from the first electronic device 100′ over a mobile communication network built in accordance with standards such as Global System for Mobile Communication (GSM), Code Division Multiple Access (CDMA), CDMA2000, Enhanced Voice-Data Optimized or Enhanced Voice-Data Only (EV-DO), Wideband CDMA (WCDMA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), and Long Term Evolution-Advanced (LTE-A).
  • In addition, the communication unit 210 may support Wi-Fi, Bluetooth, and Wi-Fi Direct.
  • the calculation module 220 includes a plurality of processing elements (PEs) arranged in a predetermined structure, and is configured to perform calculation by sharing data between adjacent calculation elements.
  • the PE may be arranged in a structure of various types of networks, such as a mesh topology network and a tree topology network.
  • the memory 230 is a component that stores data in which preprocessing has been performed on filters of a main pattern.
  • the memory 230 stores optimization data for the filter of the main pattern and an operation command corresponding to the optimization data, received from the first electronic device 100 ′.
  • The memory 230 may be implemented as a memory of various formats, such as a hard disk drive (HDD), a solid state drive (SSD), dynamic random access memory (DRAM), static random access memory (SRAM), ferroelectric random access memory (FRAM), or flash memory.
  • the processor 240 is a component that controls the overall operation of the second electronic device 200.
  • Specifically, the processor 240 may control the communication unit 210 to receive, from the first electronic device 100′, an operation instruction corresponding to the main pattern, that is, an operation instruction for filtering the input data of the neural network with a filter of the main pattern selected from the plurality of filters generated by learning with the neural network.
  • The processor 240 stores the received operation instruction in the memory 230.
  • When the filter of the main pattern has been preprocessed into optimization data in the first electronic device 100′ and the optimization data and the operation instruction corresponding to the optimization data are received from the first electronic device 100′, the processor 240 stores the optimization data and the corresponding operation instruction in the memory 230.
  • the processor 240 may identify the operation command corresponding to the main pattern to be applied among the operation instructions stored in the memory 230.
  • the processor 240 derives an operation value based on the input data and the preprocessed optimization data according to the confirmed operation instruction.
  • That is, based on the identification information included in the optimization data, the processor 240 may acquire the operation instruction corresponding to the identification information from the memory and control the operation module 220 to perform the filtering operation according to the acquired operation instruction.
  • the processor 240 may obtain an operation command corresponding to the identification information included in the optimization data from the memory 230 and control the operation module 220 to perform a filtering operation according to the obtained operation command.
  • The derived operation value is generally the same as the operation value obtained when the input data is filtered by a convolution operation with a filter of the corresponding preset pattern.
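  • The tag-based lookup described above could be sketched as follows on the second device; the instruction table, the A-pattern routine, and the data layout are all assumptions carried over from the earlier sketches, not the patented implementation.

```python
import numpy as np

def op_vertical(patch: np.ndarray, weights: list) -> float:
    """Assumed operation instruction for the A pattern, whose non-zero weights
    lie only in the middle column: multiply-accumulate that column only."""
    return float(np.dot(patch[:, 1], weights))

INSTRUCTION_MEMORY = {0: op_vertical}  # tag (identification information) -> stored instruction

def filter_patch(patch: np.ndarray, optimization_data: list) -> float:
    """Dispatch on the tag and run the stored instruction, skipping the
    multiplications by zero that a full convolution would perform."""
    tag, weights = optimization_data[0], optimization_data[1:]
    return INSTRUCTION_MEMORY[tag](patch, weights)

patch = np.arange(9, dtype=float).reshape(3, 3)
print(filter_patch(patch, [0, 0.5, 0.7, 0.2]))  # 4.7, same as the full convolution
```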
  • FIG. 6 illustrates a machine learning system that receives an operation command for a main filter from a first electronic device and performs a convolution operation using the same, according to an embodiment of the present disclosure.
  • In FIG. 6, the first electronic device 100′ performing the learning by the neural network is a server, and the second electronic device 200, which performs a filtering operation on the input data using the data learned by the first electronic device, is a user device.
  • a filter composed of a main pattern is called a main filter.
  • the server 100 ′ is configured to perform preprocessing on the main filter, and generates optimization data and operation instructions corresponding to the main filter.
  • the server 100 ′ includes a memory 110, a processor 120, and a communication unit 130.
  • the processor 120 learns a plurality of filters applied to the neural network, selects a main filter among the learned plurality of filters, and stores the selected filter in the memory 110.
  • the processor 120 may select at least one filter having a predetermined pattern as the main filter in order of the frequency of the pattern among the learned filters. For example, as shown in FIG. 6, the processor 120 may store the first data to the third data learned for the first to third filters, which are main filters, in the memory 110.
  • the processor 120 may generate first to third optimized data, respectively, in which the first to third filters are optimized using a data analysis software tool.
  • the first to third optimization data each include a tag indicating a pattern type of the first to third filters determined according to the arrangement pattern of the zero elements included in each of the first to third filters.
  • the first to third optimization data may be data from which zero elements included in the filter are removed.
  • The number of bits occupied by the first to third optimization data may be smaller than that of the original learned data. That is, since the first to third optimization data include only the non-zero elements and a tag indicating the pattern type of the filter, the number of bits occupied can be reduced in comparison with the original learned data, which occupies a fixed number of bits regardless of the type of filter.
  • the number of bits representing the optimization data may be differently determined depending on the value of the element.
  • the nonzero elements at positions '1', '4', and '7' in the first optimization data may be represented by 8 bits, which are the optimal number of bits for representing each element.
  • Similarly, the non-zero elements at positions '1' and '9' in the second optimization data may be represented by 16 bits, which is the optimal number of bits for representing each of those elements.
  • the processor 120 may generate first to third operation instructions, which are operation instructions for the first to third filters, based on the first to third optimization data.
  • the operation command is a command for causing the operation element determined according to the main pattern among the plurality of operation elements included in the operation module 220 included in the user device 200 to perform the operation determined according to the main pattern.
  • the first to third operation instructions may be determined according to the pattern type of the first to third filters.
  • the processor 120 may transmit the first to third optimization data and the first to third operation instructions to the user device 200 through the communication unit 130.
  • the user device 200 may receive first to third optimization data and first to third operation instructions through a communication unit.
  • the processor 240 may store the received first to third optimization data and the first to third operation instructions in the memory 230.
  • the first to third optimization data and the first to third operation instructions may be separately stored in different areas (data memory, operation instruction memory) of the memory 230.
  • The first to third optimization data stored in the memory 230 are input to the calculation module 220, and the calculation module 220 may perform operations according to the first to third operation instructions while applying the first to third optimization data to the input image, respectively.
  • That is, the processor 240 performs the filtering operation by convolution using, among the plurality of computing elements included in the calculation module 220, the computing elements determined by the first to third operation instructions for the first to third optimization data, respectively.
  • the processor 240 may receive the updated first to third optimization data from the server 100' and perform an update.
  • the processor 240 may receive the optimization data and the operation instruction for the changed main filter and perform an update.
  • FIGS. 7A and 7B are diagrams for describing a method of learning various types of input images and recognizing the input image using the learned data, according to an exemplary embodiment.
  • For example, the processor 240 of the second electronic device 200 may control the operation module 220 to perform operations according to a CNN algorithm for image recognition, and may recognize various animal images such as "cat", "squirrel", "deer", and "dog".
  • the optimization data learned about the main pattern may be stored in advance in the memory 230 of the second electronic device 200.
  • The optimization data may include, for example, a tag indicating the type of the main pattern of each filter, consisting of an A pattern, a B pattern, and a C pattern, together with the corresponding non-zero elements.
  • the memory 230 may store operation instructions corresponding to tags representing the A pattern, the B pattern, and the C pattern type, respectively.
  • When image data is input to the calculation module 220, filters of various patterns are applied to the image data. For a filter of the main pattern, the calculation module 220 performs the calculation using the learned optimization data and operation instructions.
  • Specifically, the processor 240 selects at least one filter of the A pattern from among the plurality of learned filters used for filtering the image data, and uses the preprocessed data for the A-pattern filter when applying the selected filter to the image data. That is, when the filter of the A pattern is applied to the image data, the processor 240 inputs the non-zero elements included in the optimization data corresponding to the A-pattern filter to the calculation module 220.
  • the processor 240 may extract an operation instruction corresponding to the A pattern from the memory 230 based on a tag indicating the type of the A pattern.
  • the processor 240 may input the extracted operation command to the operation module 220 to control the filtering operation by the filter including the A pattern to be performed.
  • Meanwhile, the processor 240 may rearrange the plurality of filters (learned data) to be sequentially applied to the calculation module 220 so that filters of the same main pattern are grouped together. Specifically, referring to FIG. 7B, the processor 240 selects the plurality of A-pattern filters 71 from the plurality of filters generated by the learning. Since the result of the filtering operation is not influenced by the order of the filters, owing to the characteristics of the convolution operation, the processor 240 may rearrange the plurality of filters such that the selected A-pattern filters 71 are applied to the calculation module 220 first.
  • Accordingly, the processor 240 acquires the operation instruction corresponding to the A pattern from the memory 230 only once and collectively processes the filtering operations of the plurality of A-pattern filters 71 using the acquired instruction. In the same manner, the processor 240 groups and rearranges the B-pattern filters 72 and the C-pattern filters 73 among the plurality of filters to be sequentially applied to the filtering of the input data, acquires the operation instructions corresponding to the B pattern and the C pattern only once each, and collectively processes the filtering operations of the B-pattern filters 72 and the C-pattern filters 73.
  • In this way, the processor 240 may reduce the load and the slowdown in operation speed caused by repeatedly loading operation instructions from the memory 230.
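  • The grouping described above can be illustrated with the following sketch (names are assumptions; `instruction_memory` plays the role of the stored operation instructions from the previous sketch): filters are grouped by their pattern tag so that each operation instruction is fetched once per group rather than once per filter.

```python
from collections import defaultdict

def rearrange_and_filter(patch, optimization_data_list, instruction_memory):
    """Group the per-filter optimization data by pattern tag and process each group
    with a single instruction fetch. Because convolution results do not depend on
    the order in which filters are applied, regrouping them is safe."""
    groups = defaultdict(list)
    for data in optimization_data_list:
        groups[data[0]].append(data)      # data[0] is the pattern tag

    results = []
    for tag, group in groups.items():
        op = instruction_memory[tag]      # one fetch of the operation instruction per pattern
        for data in group:
            results.append(op(patch, data[1:]))
    return results
```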
  • FIG. 8 is a flowchart illustrating a control method of a first electronic device according to an embodiment of the present disclosure.
  • an operation instruction for filtering input data of a neural network is checked for each filter of a main pattern selected from a plurality of filters generated by learning by the neural network (S1110).
  • Here, the filter of the main pattern may be a filter selected from the generated plurality of filters based on the statistical weight of the computation amount occupied by each pattern in which elements are arranged.
  • Alternatively, the main pattern may be selected based on the frequency of each pattern in which elements are arranged in the generated plurality of filters.
  • the plurality of filters generated by the learning may be different depending on the learning target of the neural network.
  • an operation command for filtering the input data of the neural network is checked for each filter of the selected main pattern (S820).
  • the optimization data in which zero elements are removed from the filter of the selected main pattern and the operation instruction corresponding to the optimization data may be checked.
  • the filtering operation by the filter of the main pattern may be performed according to the stored operation command.
  • the stored operation command may be transmitted to the second electronic device.
  • the second electronic device may filter input data of the neural network by using an operation command.
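  • Tying the steps of FIG. 8 together, a server-side flow might be sketched as below; it reuses the hypothetical `select_main_patterns` helper from the earlier sketch and stands in for, rather than reproduces, the claimed method (the "compiled" instruction here is just a position list closed over by a Python function).

```python
def compile_instruction(pattern):
    """Placeholder for identifying the operation instruction of a pattern:
    record which positions to multiply-accumulate and return a callable."""
    positions = list(pattern)
    def instruction(patch, weights):
        return sum(patch[y][x] * w for (y, x), w in zip(positions, weights))
    return instruction

def control_method(filters, target_share=0.6):
    """Select filters of main patterns, identify an operation instruction for each
    selected pattern, and store it keyed by its tag (identification information)."""
    main_patterns = select_main_patterns(filters, target_share)   # selection step
    instruction_memory = {}
    for tag, pattern in enumerate(main_patterns):                 # one tag per main pattern
        instruction_memory[tag] = compile_instruction(pattern)    # identify the instruction
    return instruction_memory  # stored; may then be transmitted to the second device
```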
  • Meanwhile, the neural network learning method described above may be implemented as a program and stored in various recording media. That is, a computer program that is processed by various processors and capable of executing the various control methods described above may be stored in a recording medium and used.
  • For example, a non-transitory computer-readable medium may be provided in which a program is stored for performing the steps of identifying, for each filter of a main pattern selected from a plurality of filters generated by learning with the neural network, an operation instruction for filtering input data of the neural network, and storing the identified operation instruction.
  • A non-transitory readable medium is a medium that stores data semi-permanently, rather than for a short time as a register, cache, or memory does, and that can be read by a device.
  • The various applications or programs described above may be stored and provided in a non-transitory readable medium such as a CD, a DVD, a hard disk, a Blu-ray disc, a USB, a memory card, a ROM, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an electronic device. The electronic device comprises: a memory; and a processor which identifies an operation instruction for filtering input data of a neural network for each filter of a main pattern selected from a plurality of filters generated according to learning by the neural network, and stores the identified operation instruction in the memory.
PCT/KR2018/006400 2017-06-07 2018-06-05 Electronic device and control method thereof Ceased WO2018226014A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880034791.6A CN110678883B (zh) 2017-06-07 2018-06-05 Electronic device and method for controlling the electronic device
EP18812967.0A EP3582152B1 (fr) 2017-06-07 2018-06-05 Electronic device and control method thereof
US16/617,967 US11450086B2 (en) 2017-06-07 2018-06-05 Electronic device and method for controlling same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762516389P 2017-06-07 2017-06-07
US62/516,389 2017-06-07
KR10-2017-0144071 2017-10-31
KR1020170144071A KR102548718B1 (ko) 2017-06-07 2017-10-31 Electronic device and control method thereof

Publications (1)

Publication Number Publication Date
WO2018226014A1 true WO2018226014A1 (fr) 2018-12-13

Family

ID=64566640

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/006400 Ceased WO2018226014A1 (fr) 2017-06-07 2018-06-05 Dispositif électronique et son procédé de commande

Country Status (1)

Country Link
WO (1) WO2018226014A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7359576B1 (en) * 2004-02-27 2008-04-15 Adobe Systems Incorporated Using difference kernels for image filtering
US20100166315A1 (en) * 2008-12-26 2010-07-01 Paquier Williams J F Method and apparatus for creating a pattern recognizer
KR20160034814A (ko) * 2014-09-22 2016-03-30 삼성전자주식회사 뉴럴 네트워크를 수반한 클라이언트 장치 및 그것을 포함하는 시스템
KR20160142791A (ko) * 2015-06-03 2016-12-13 삼성전자주식회사 뉴럴 네트워크 실시 방법 및 장치
KR20170005562A (ko) * 2015-07-06 2017-01-16 삼성전자주식회사 데이터를 병렬적으로 처리하는 장치 및 방법

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3582152A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881705A (zh) * 2019-09-29 2020-11-03 深圳数字生命研究院 数据处理、训练、识别方法、装置和存储介质
CN111881705B (zh) * 2019-09-29 2023-12-12 深圳数字生命研究院 数据处理、训练、识别方法、装置和存储介质
CN110687796A (zh) * 2019-11-05 2020-01-14 青岛大学 基于神经网络的自适应有限时间命令滤波反步控制方法
CN110687796B (zh) * 2019-11-05 2022-03-18 青岛大学 基于神经网络的自适应有限时间命令滤波反步控制方法


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18812967

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018812967

Country of ref document: EP

Effective date: 20190909

NENP Non-entry into the national phase

Ref country code: DE