
WO2021057811A1 - Network node processing method, device, storage medium, and electronic apparatus - Google Patents


Info

Publication number
WO2021057811A1
WO2021057811A1 (PCT Application No. PCT/CN2020/117227)
Authority
WO
WIPO (PCT)
Prior art keywords
network node
processing unit
calculation processing
target network
linked list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/117227
Other languages
French (fr)
Chinese (zh)
Inventor
谭志鹏
刘耀勇
蒋燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of WO2021057811A1 (English)
Anticipated expiration: legal status Critical
Current legal status: Ceased

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/04: Inference or reasoning models
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of computer technology, and in particular, to a method, device, storage medium, and electronic equipment for processing network nodes.
  • When inference calculation with a neural network model is triggered, which calculation processing unit (such as the CPU, GPU, or DSP of the mobile phone) each network node runs on is decided dynamically according to the current idle state of each unit.
  • At that point, the network structure of the neural network model needs to be modified, that is, network nodes need to be deleted from or added to the container that stores them.
  • The embodiments of the present application provide a network node processing method, device, storage medium, and electronic equipment that only need to modify the connection relationship between the previous network node and the next network node of a given network node, without frequent shift operations. This saves network structure adjustment time and can thus improve the efficiency of inference calculation using a neural network model.
  • An embodiment of the present application provides a network node processing method. The method includes: triggering inference calculation using a neural network model that includes at least one network node, and determining a second calculation processing unit corresponding to a target network node among the at least one network node, where the at least one network node is pre-stored in a first calculation processing unit in the form of a first linked list and the target network node is to run on the second calculation processing unit; when the first calculation processing unit is different from the second calculation processing unit, deleting the target network node from the first linked list, modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocating the target network node to the second calculation processing unit; acquiring the next network node of the target network node, determining it as the new target network node, and repeating the determination step; and, when it is determined that there is no next network node, generating the inference calculation result.
  • An embodiment of the present application provides a network node processing device, the device including:
  • a node determination module, configured to trigger inference calculation using a neural network model including at least one network node and determine a second calculation processing unit corresponding to a target network node among the at least one network node, where the at least one network node is pre-stored in a first calculation processing unit in the form of a first linked list, and the target network node is to run on the second calculation processing unit;
  • a node allocation module, configured to, when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit;
  • a node cycle module, configured to acquire the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node;
  • a result generation module, configured to generate the inference calculation result when it is determined that there is no next network node.
  • An embodiment of the present application provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above method steps.
  • An embodiment of the present application provides an electronic device, which may include a processor and a memory, where the memory stores a computer program adapted to be loaded by the processor to execute the above method steps.
  • FIG. 1 is a schematic flowchart of a method for processing a network node according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of an example of a linked list structure provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an example of deleting a network node in a first linked list according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of an example of a linked list structure after deleting a network node in a first linked list according to an embodiment of the present application
  • FIG. 5 is a schematic diagram of an example of adding a network node to a second linked list according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of an example of the structure of a second linked list after adding a network node according to an embodiment of the present application
  • FIG. 7 is a schematic flowchart of a method for processing a network node according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a network node processing device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a network node processing device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the network node processing method provided by the embodiments of the present application will be introduced in detail below in conjunction with FIG. 1 to FIG. 7.
  • the method can be realized by relying on a computer program, and can be run on a network node processing device based on the von Neumann system.
  • the computer program can be integrated in the application or run as an independent tool application.
  • The network node processing device in the embodiments of the present application may be a user terminal, including but not limited to: a smartphone, a personal computer, a tablet computer, a handheld device, a vehicle-mounted device, a wearable device, a computing device, or other processing devices connected to a wireless modem.
  • FIG. 1 is a schematic flowchart of a network node processing method according to an embodiment of this application. As shown in FIG. 1, the method of the embodiment of the present application may include the following steps:
  • S101: Trigger inference calculation using a neural network model including at least one network node, and determine a second calculation processing unit corresponding to a target network node among the at least one network node, where the at least one network node is pre-stored in a first calculation processing unit in the form of a first linked list, and the target network node is to run on the second calculation processing unit.
  • Neural network is a complex network system formed by a large number of simple processing units (neurons) widely connected to each other. Neural networks have large-scale parallelism, distributed storage and processing, self-organization, self-adaptation, and self-learning capabilities. They are particularly suitable for processing inaccurate and fuzzy information processing problems that need to consider many factors and conditions at the same time.
  • the neural network model is described based on the mathematical model of neurons.
  • each neuron is a network node, also called an operator.
  • When the neural network model needs to be used for inference calculation, which processing unit each network node should be allocated to run on is decided dynamically according to the load rate F of each calculation processing unit (CPU, GPU, DSP, etc.) and the execution time t of the network node on each unit; the network structure of the neural network model therefore needs to be modified.
  • Among the at least one network node, any network node may be the target network node, and all network nodes can be allocated in the same manner.
  • Specific modifications can include: partitioning the network and distributing its nodes across the calculation processing units; deleting a certain network node before it enters a certain processing unit, according to that unit's characteristics; and merging certain network nodes, likewise according to the unit's characteristics.
  • Before the neural network model is triggered to perform inference calculation, each network node of the neural network model is pre-stored in the first calculation processing unit, in the form of a first linked list.
  • the first calculation processing unit may be any processing unit of the user terminal, which is pre-configured by the system, such as a CPU.
  • A linked list is a storage structure that is non-contiguous and non-sequential on the physical storage unit; the logical order of its data elements is realized through the order of the pointer links in the list.
  • The linked list consists of a series of nodes (each element in the list is called a network node), and nodes can be generated dynamically at runtime.
  • FIG. 2 shows a schematic diagram of a linked list structure that includes multiple network nodes.
  • Each network node consists of two parts: a data field, data, which stores the data element, and a pointer field, next, which stores the address of the next node. Since elements need not be stored in order, insertion into a linked list can reach O(1) complexity. A minimal sketch of this node layout is shown below.
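The following C sketch illustrates the node layout just described: a data field and a next pointer. It is an illustration only; the patent does not specify the payload type of an operator, so a generic pointer is assumed here.

    /* Sketch of a linked-list network node: a data field holding the
     * element (the operator) and a next pointer holding the address of
     * the following node. The void* payload is an assumption. */
    struct node {
        void *data;          /* data field: the operator's data element */
        struct node *next;   /* pointer field: address of the next node */
    };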
  • The second calculation processing unit may be the same as or different from the first calculation processing unit.
  • There may be one or more second calculation processing units.
  • For example, if the user terminal includes three calculation processing units (a CPU, a GPU, and a DSP) and the CPU is the first calculation processing unit, then the second calculation processing unit may be the CPU, the GPU, or the DSP.
  • For the multiple network nodes included in the neural network model, the second calculation processing unit to which every node belongs may be determined first and the nodes then allocated; alternatively, the second calculation processing unit of one node may be determined and that node allocated, with the next node allocated in the same manner once the previous allocation is complete.
  • The second calculation processing unit to which each network node belongs is the carrier on which that node is about to run. In the embodiments of the present application, sequential allocation of the network nodes is taken as an example for description.
  • When the first calculation processing unit is different from the second calculation processing unit, the network node needs to be deleted from the first calculation processing unit and added to the second calculation processing unit.
  • If the target network node is q and it is determined that q needs to be allocated to a second calculation processing unit different from the first calculation processing unit, q must be deleted from the first linked list.
  • For example, as shown in FIG. 3, the previous node of q is p and the next node is s. By setting p->next = q->next and freeing q, p is made to point to s, so q is deleted while the other network nodes remain unchanged.
  • The structure of the first linked list after the deletion is shown in FIG. 4. A sketch of this deletion is shown below.
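As a hedged illustration of the deletion just described (the document gives only the pseudocode p->next = q->next; free q), a C sketch might look like this, reusing the struct node defined earlier:

    #include <stdlib.h>

    /* Delete the node after p (the target node q in FIG. 3). Only p's
     * next pointer changes: p is made to point to s, and q is freed.
     * No other node is touched and nothing is shifted. */
    void delete_after(struct node *p) {
        struct node *q = p->next;   /* the target network node */
        if (q == NULL) return;      /* nothing to delete */
        p->next = q->next;          /* p now points to s */
        free(q);                    /* corresponds to "free q" above */
    }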
  • On the second calculation processing unit, network nodes are likewise stored in the form of a linked list (a second linked list).
  • When q is the first network node allocated to the second calculation processing unit, q can be placed directly as the first node of the second linked list.
  • When q needs to be inserted between two network nodes of the second linked list, for example between node o and node r as shown in FIG. 5, setting q->next = o->next and then o->next = q inserts q while the other network nodes remain unchanged; the resulting second linked list is shown in FIG. 6. A sketch of this insertion follows.
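A matching C sketch of the insertion, again an illustration using the struct node defined earlier:

    /* Insert node q after node o in the second linked list, i.e.
     * between o and its successor r as in FIG. 5. Only two pointers
     * change, so the insertion is O(1). */
    void insert_after(struct node *o, struct node *q) {
        q->next = o->next;  /* q now points to r */
        o->next = q;        /* o now points to q */
    }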
  • For example, as shown in FIG. 3, if the target network node is q, the next network node of the target network node is s.
  • After s is determined as the target network node, it is processed in the same way as q, and the remaining network nodes are processed likewise.
  • When it is determined that there is no next network node, all network nodes stored in the first linked list have been allocated; the first calculation processing unit and the second calculation processing unit can then run their network nodes simultaneously to obtain the inference calculation result. Alternatively, each network node is run immediately after it is allocated, with its output serving as the input of the next network node, until the last network node on each processing unit finishes; the results are then combined to generate the final inference calculation result.
  • For example, suppose the neural network model includes 30 network nodes, numbered 1 to 30 in order, with nodes 1 to 10 allocated to the CPU, 11 to 20 to the GPU, and 21 to 30 to the DSP. Node 1 can be run as soon as its allocation is complete; node 2 is then allocated, taking node 1's output as its input, and so on. The first inference calculation result is output after node 10 finishes; nodes 11 to 20 and 21 to 30 are run in the same way, outputting the second result after node 20 finishes and the third after node 30 finishes. The final inference calculation result is then obtained from the first, second, and third results, at which point the run of the neural network model ends.
  • Alternatively, with the same 30 nodes allocated as above, all nodes are allocated first, the nodes on the different calculation processing units are then run simultaneously, and the outputs on each calculation processing unit are finally combined to generate the final inference calculation result. A sketch of running one unit's list in order appears below.
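A hedged sketch of the pipelined variant, running one unit's list in order and feeding each output forward. The tensor type and the run_op() kernel are hypothetical stand-ins, not names from the patent:

    /* Run every node in one unit's linked list in order; each node's
     * output becomes the next node's input. tensor_t and run_op() are
     * assumed for illustration only. */
    typedef struct tensor tensor_t;
    tensor_t *run_op(void *op, tensor_t *in);  /* hypothetical kernel */

    tensor_t *run_list(struct node *head, tensor_t *input) {
        tensor_t *x = input;
        for (struct node *n = head; n != NULL; n = n->next)
            x = run_op(n->data, x);   /* chain outputs through the list */
        return x;                     /* this unit's partial result */
    }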
  • In the embodiment of the present application, inference calculation is triggered using a neural network model that includes at least one network node, and the second calculation processing unit corresponding to the target network node among the at least one network node is determined, where the at least one network node is pre-stored in the first calculation processing unit in the form of a first linked list and the target network node is to run on the second calculation processing unit.
  • When the two units differ, the target network node is deleted from the first linked list, the connection relationship between its previous and next network nodes in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same way.
  • Each network node is expressed in the form of a linked list, that is, the network nodes of the neural network model are connected as a linked list. Whenever the network structure is adjusted and network nodes are frequently inserted and deleted, even if the number of network nodes is large, only the connection relationship between the previous and next network nodes of the affected node needs to be modified; no frequent shift operations are required. This saves network structure adjustment time and can improve the efficiency of inference calculation using the neural network model. A sketch of the whole allocation loop follows.
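Putting S101 to S104 together, a minimal sketch of the allocation loop might look as follows. choose_unit(), append_node(), FIRST_UNIT, and the unit numbering are hypothetical helpers standing in for the selection and insertion steps described in this document, not APIs from the patent:

    int  choose_unit(struct node *n);                      /* hypothetical */
    void append_node(struct node **head, struct node *n);  /* hypothetical */
    #define FIRST_UNIT 0                                   /* assumption: CPU */

    /* Walk the first linked list once; move each node whose chosen unit
     * differs from the first unit into that unit's list, rewriting only
     * the affected next pointers (S101-S103). When no next node exists,
     * the loop ends and the lists can be run (S104). */
    void allocate_all(struct node **first_head, struct node *unit_heads[]) {
        struct node **link = first_head;   /* link that may be rewritten */
        struct node *cur = *first_head;
        while (cur != NULL) {
            struct node *next = cur->next;
            int unit = choose_unit(cur);             /* S101: pick second unit */
            if (unit != FIRST_UNIT) {                /* S102: move the node */
                *link = next;                        /* unlink from first list */
                append_node(&unit_heads[unit], cur); /* add to second list */
            } else {
                link = &cur->next;                   /* keep node in place */
            }
            cur = next;                              /* S103: advance target */
        }
    }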
  • FIG. 7 is a schematic flowchart of a method for processing a network node according to an embodiment of this application.
  • In this embodiment, the network node processing method is described taking a smartphone as an example.
  • The network node processing method may include the following steps:
  • S201: Trigger inference calculation using a neural network model including at least one network node, where the at least one network node is pre-stored in the first calculation processing unit in the form of a first linked list.
  • The load rate F is, by analogy with a transformer, the ratio of actual load to capacity; it reflects the unit's load-bearing capacity and whether it is operating in the optimal range of roughly 75% to 80%.
  • The current load rate is the load rate of each calculation processing unit at the moment inference calculation with the neural network model including at least one network node is triggered.
  • For example, suppose the calculation processing units of a smartphone include a CPU, a GPU, and a DSP, the execution times of the target network node on these three units are t1, t2, and t3, respectively, and their current load rates are F1, F2, and F3, respectively.
  • S204: Determine the minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit, where the second calculation processing unit includes a second linked list.
  • For example, if E1 is the smallest execution expectation, it is determined to correspond to the CPU; the CPU is taken as the second calculation processing unit, and the target network node is allocated to the CPU. A hedged sketch of this selection is given below.
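This excerpt does not give the formula that turns a load rate F and an execution time t into an execution expectation E, so the sketch below assumes E = F * t purely for illustration; only the "pick the minimum" step of S204 is taken from the text:

    #include <float.h>

    /* Pick the calculation processing unit with the smallest execution
     * expectation (S204). The formula E[i] = F[i] * t[i] is an
     * assumption; the minimum selection is from the text. */
    int pick_second_unit(const double F[], const double t[], int n_units) {
        int best = 0;
        double best_e = DBL_MAX;
        for (int i = 0; i < n_units; i++) {
            double e = F[i] * t[i];   /* assumed expectation formula */
            if (e < best_e) { best_e = e; best = i; }
        }
        return best;                  /* e.g. 0 = CPU, 1 = GPU, 2 = DSP */
    }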
  • If the first calculation processing unit is also the CPU, the target network node still runs on the first calculation processing unit.
  • In some cases, the target network node may also need to be replaced with another, equivalent network node.
  • The network node that the target network node points to is its next network node, and the network node that points to the target network node is its previous network node.
  • For example, if q is the target network node, then s is the next network node and p is the previous network node.
  • Auxiliary nodes can be added to or deleted from the second linked list, or combined with existing nodes to generate a new network node, which is then run.
  • For example, with 30 network nodes numbered 1 to 30 and allocated as before (1 to 10 to the CPU, 11 to 20 to the GPU, and 21 to 30 to the DSP), after node 1 has been allocated, an auxiliary node may need to be added to node 1 according to the characteristics of the CPU and combined with it into a new network node to run, after which node 2 is allocated.
  • When it is determined that there is no next network node, all network nodes stored in the first linked list have been allocated. Auxiliary nodes may then need to be added to or deleted from the first linked list and the second linked list according to the characteristics of the calculation processing units, or several network nodes merged into a new network node, in order to run the neural network model. After the auxiliary nodes have been added or deleted, the first calculation processing unit and the second calculation processing unit can run their network nodes and obtain the inference calculation result.
  • For example, with 30 network nodes numbered 1 to 30, nodes 1 to 10 allocated to the CPU form the first linked list, nodes 11 to 20 allocated to the GPU form the second linked list, and nodes 21 to 30 allocated to the DSP form the third linked list. After all network nodes are allocated, auxiliary nodes are added to or deleted from the three linked lists, or several nodes are merged; the network nodes on the three linked lists are then executed separately to generate the final inference calculation result. A sketch of such a merge is shown below.
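As a hedged sketch of merging two adjacent nodes of a linked list into a new network node, the combine() operator-fusion helper being hypothetical:

    #include <stdlib.h>

    void *combine(void *a, void *b);  /* hypothetical operator fusion */

    /* Merge a node with its successor: fuse the two operators into one
     * payload and splice the successor out of the list. */
    struct node *merge_with_next(struct node *a) {
        struct node *b = a->next;
        if (b == NULL) return a;              /* nothing to merge with */
        a->data = combine(a->data, b->data);  /* hypothetical fusion */
        a->next = b->next;                    /* splice b out */
        free(b);
        return a;                             /* a is now the merged node */
    }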
  • In the embodiment of the present application, inference calculation is triggered using a neural network model that includes at least one network node, and the second calculation processing unit corresponding to the target network node among the at least one network node is determined, where the at least one network node is pre-stored in the first calculation processing unit in the form of a first linked list and the target network node is to run on the second calculation processing unit.
  • When the two units differ, the target network node is deleted from the first linked list, the connection relationship between its previous and next network nodes in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same way.
  • When it is determined that there is no next network node, the inference calculation result is generated.
  • Expressing each network node in the form of a linked list, that is, connecting the network nodes of the neural network model as a linked list, supports dynamic device switching and optimizes the storage of the network structure. When network nodes are frequently inserted and deleted during network structure adjustments, even if the number of network nodes is large, only the connection relationship between the previous and next network nodes of the affected node needs to be modified; no frequent shift operations are required. This saves network structure adjustment time, so the efficiency of inference calculation using the neural network model can be improved.
  • FIG. 8 shows a schematic structural diagram of a network node processing apparatus provided by an exemplary embodiment of the present application.
  • the network node processing device can be implemented as all or a part of the user terminal through software, hardware or a combination of the two.
  • The device 1 includes a node determination module 10, a node allocation module 20, a node cycle module 30, and a result generation module 40.
  • The node determination module 10 is configured to trigger inference calculation using a neural network model including at least one network node and determine the second calculation processing unit corresponding to the target network node among the at least one network node, where the at least one network node is pre-stored in the first calculation processing unit in the form of a first linked list, and the target network node runs on the second calculation processing unit.
  • The node allocation module 20 is configured to, when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit.
  • The node cycle module 30 is configured to acquire the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node.
  • The result generation module 40 is configured to generate the inference calculation result when it is determined that there is no next network node.
  • Optionally, the node determination module 10 is specifically configured to: obtain the current load rate of each calculation processing unit and the execution time of the target network node on each calculation processing unit, and determine, based on the current load rate and the execution time, the second calculation processing unit corresponding to the target network node among the calculation processing units.
  • Optionally, the node determination module 10 is further specifically configured to: determine the minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit.
  • Optionally, the node allocation module 20 is specifically configured to: add the target network node to the second linked list of the second calculation processing unit, and modify the connection relationship between the previous network node and the next network node of the target network node in the second linked list.
  • Optionally, the device further includes an auxiliary node increase/decrease module 50, configured to add auxiliary nodes to or delete auxiliary nodes from the first linked list and the second linked list, or merge several network nodes into a new network node.
  • Optionally, the device further includes a node execution module 60, configured to execute the target network node on the second calculation processing unit.
  • It should be noted that when the network node processing device provided in the above embodiment executes the network node processing method, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • The network node processing device provided in the above embodiment and the network node processing method embodiments belong to the same concept; the implementation process is detailed in the method embodiments and is not repeated here.
  • In the embodiment of the present application, inference calculation is triggered using a neural network model that includes at least one network node, and the second calculation processing unit corresponding to the target network node among the at least one network node is determined, where the at least one network node is pre-stored in the first calculation processing unit in the form of a first linked list and the target network node is to run on the second calculation processing unit.
  • When the two units differ, the target network node is deleted from the first linked list, the connection relationship between its previous and next network nodes in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same way.
  • When it is determined that there is no next network node, the inference calculation result is generated.
  • Expressing each network node in the form of a linked list, that is, connecting the network nodes of the neural network model as a linked list, supports dynamic device switching and optimizes the storage of the network structure. When network nodes are frequently inserted and deleted during network structure adjustments, even if the number of network nodes is large, only the connection relationship between the previous and next network nodes of the affected node needs to be modified; no frequent shift operations are required. This saves network structure adjustment time, so the efficiency of inference calculation using the neural network model can be improved.
  • An embodiment of the present application also provides a computer storage medium. The computer storage medium may store a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the method steps of the embodiments shown in FIG. 1 to FIG. 7.
  • For the specific execution process, refer to the specific description of the embodiments shown in FIG. 1 to FIG. 7, which is not repeated here.
  • the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • The user interface 1003 may include a display (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the processor 1001 may include one or more processing cores.
  • The processor 1001 uses various interfaces and lines to connect the various parts of the entire electronic device 1000, and executes the various functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and by calling data stored in the memory 1005.
  • Optionally, the processor 1001 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA).
  • the processor 1001 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • The CPU mainly handles the operating system, the user interface, and application programs; the GPU renders and draws the content to be displayed on the screen; and the modem handles wireless communication. It is understandable that the modem may also not be integrated into the processor 1001 and may instead be implemented by a separate chip.
  • The memory 1005 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 1005 includes a non-transitory computer-readable storage medium.
  • the memory 1005 may be used to store instructions, programs, codes, code sets or instruction sets.
  • The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the foregoing method embodiments, and the like; the data storage area may store the data and the like involved in the foregoing method embodiments.
  • Optionally, the memory 1005 may also be at least one storage device located remotely from the foregoing processor 1001.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a network node processing application program.
  • In the electronic device 1000 shown in FIG. 10, the user interface 1003 is mainly used to provide an input interface for the user and obtain data input by the user, and the processor 1001 can be used to call the network node processing application stored in the memory 1005 and specifically perform the following operations:
  • trigger inference calculation using a neural network model including at least one network node, and determine the second calculation processing unit corresponding to the target network node among the at least one network node; when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit; acquire the next network node of the target network node, determine it as the target network node, and execute the determination step again; and, when it is determined that there is no next network node, generate the inference calculation result.
  • In an embodiment, when determining the second calculation processing unit corresponding to the target network node among the at least one network node, the processor 1001 specifically executes the following operations: obtain the current load rate of each calculation processing unit and the execution time of the target network node on each calculation processing unit, and determine, based on the current load rate and the execution time, the second calculation processing unit corresponding to the target network node among the calculation processing units.
  • In an embodiment, when determining, based on the current load rate and the execution time, the second calculation processing unit corresponding to the target network node among the calculation processing units, the processor 1001 specifically executes the following operations: determine the minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit.
  • In an embodiment, when deleting the target network node from the first linked list and modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list, the processor 1001 makes the previous network node point to the next network node and releases the target network node.
  • In an embodiment, when allocating the target network node to the second calculation processing unit, the processor 1001 specifically executes the following operations: add the target network node to the second linked list of the second calculation processing unit, and modify the connection relationship between the previous network node and the next network node of the target network node in the second linked list.
  • In an embodiment, when adding the target network node to the second linked list and modifying that connection relationship, the processor 1001 makes the target network node point to its next network node and the previous network node point to the target network node.
  • In an embodiment, before generating the inference calculation result, the processor 1001 further performs the following operations: add auxiliary nodes to or delete auxiliary nodes from the first linked list and the second linked list according to the characteristics of the calculation processing units, or merge several network nodes into a new network node.
  • In an embodiment, after allocating the target network node to the second calculation processing unit, the processor 1001 further executes the following operation: execute the target network node on the second calculation processing unit.
  • In the embodiment of the present application, inference calculation is triggered using a neural network model that includes at least one network node, and the second calculation processing unit corresponding to the target network node among the at least one network node is determined, where the at least one network node is pre-stored in the first calculation processing unit in the form of a first linked list and the target network node is to run on the second calculation processing unit.
  • When the two units differ, the target network node is deleted from the first linked list, the connection relationship between its previous and next network nodes in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same way.
  • When it is determined that there is no next network node, the inference calculation result is generated.
  • Expressing each network node in the form of a linked list, that is, connecting the network nodes of the neural network model as a linked list, supports dynamic device switching and optimizes the storage of the network structure. When network nodes are frequently inserted and deleted during network structure adjustments, even if the number of network nodes is large, only the connection relationship between the previous and next network nodes of the affected node needs to be modified; no frequent shift operations are required. This saves network structure adjustment time, so the efficiency of inference calculation using the neural network model can be improved.
  • A person of ordinary skill in the art can understand that all or part of the processes of the above method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Computer And Data Communications (AREA)

Abstract

Embodiments of the present application disclose a network node processing method, a device, a storage medium, and an electronic apparatus. The method comprises: triggering inference computation using a neural network model, and determining a second computation processing unit corresponding to a target network node among at least one network node; if a first computation processing unit is different from the second computation processing unit, deleting the target network node from a first linked list, modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocating the target network node to the second computation processing unit; acquiring the next network node of the target network node, setting the next network node as the target network node, and performing the step of determining the second computation processing unit corresponding to the target network node among the at least one network node; and, if it is determined that there is no next network node, generating an inference computation result. The embodiments of the present application improve the efficiency of inference computation using a neural network model.

Description

Network node processing method, device, storage medium and electronic equipment

Technical field

This application relates to the field of computer technology, and in particular to a network node processing method, device, storage medium, and electronic equipment.

Background

There are many neural network models suitable for mobile terminals on today's smartphone platforms, and an appropriate data structure is usually needed to represent the network nodes of a neural network. For example, Xiaomi's MACE, Tencent's NCNN, and Alibaba's MNN all use containers to represent the network nodes of a neural network.

When inference calculation using a neural network model is triggered, which calculation processing unit (such as the CPU, GPU, or DSP of the mobile phone) each network node runs on must be decided dynamically according to the current idle state of each unit. At that point, the network structure of the neural network model needs to be modified, that is, network nodes need to be deleted from or added to the container.

Summary of the invention

The embodiments of the present application provide a network node processing method, device, storage medium, and electronic equipment that only need to modify the connection relationship between the previous network node and the next network node of a given network node, without frequent shift operations. This saves network structure adjustment time and can thus improve the efficiency of inference calculation using a neural network model. The technical solution is as follows:

In a first aspect, an embodiment of the present application provides a network node processing method, the method including:

triggering inference calculation using a neural network model including at least one network node, and determining a second calculation processing unit corresponding to a target network node among the at least one network node, where the at least one network node is pre-stored in a first calculation processing unit in the form of a first linked list, and the target network node is to run on the second calculation processing unit;

when the first calculation processing unit is different from the second calculation processing unit, deleting the target network node from the first linked list, modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocating the target network node to the second calculation processing unit;

acquiring the next network node of the target network node, determining the next network node as the target network node, and executing the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node; and

when it is determined that there is no next network node, generating an inference calculation result.

In a second aspect, an embodiment of the present application provides a network node processing device, the device including:

a node determination module, configured to trigger inference calculation using a neural network model including at least one network node, and determine a second calculation processing unit corresponding to a target network node among the at least one network node, where the at least one network node is pre-stored in a first calculation processing unit in the form of a first linked list, and the target network node is to run on the second calculation processing unit;

a node allocation module, configured to, when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit;

a node cycle module, configured to acquire the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node; and

a result generation module, configured to generate an inference calculation result when it is determined that there is no next network node.

In a third aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above method steps.

In a fourth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, where the memory stores a computer program adapted to be loaded by the processor to execute the above method steps.

Description of the drawings

In order to describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.

FIG. 1 is a schematic flowchart of a network node processing method according to an embodiment of the present application;

FIG. 2 is a schematic diagram of an example of a linked list structure according to an embodiment of the present application;

FIG. 3 is a schematic diagram of an example of deleting a network node from a first linked list according to an embodiment of the present application;

FIG. 4 is a schematic diagram of an example of the structure of a first linked list after a network node is deleted according to an embodiment of the present application;

FIG. 5 is a schematic diagram of an example of adding a network node to a second linked list according to an embodiment of the present application;

FIG. 6 is a schematic diagram of an example of the structure of a second linked list after a network node is added according to an embodiment of the present application;

FIG. 7 is a schematic flowchart of a network node processing method according to an embodiment of the present application;

FIG. 8 is a schematic structural diagram of a network node processing device according to an embodiment of the present application;

FIG. 9 is a schematic structural diagram of a network node processing device according to an embodiment of the present application;

FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed description

In order to make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.

When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings indicate the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.

In the description of this application, it should be understood that the terms "first", "second", and so on are used for descriptive purposes only and cannot be understood as indicating or implying relative importance. For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific circumstances. In addition, in the description of this application, unless otherwise specified, "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.

In the process of deleting or adding network nodes as described above, every time a network node is deleted from or added to the container, all network nodes behind it in the container need to be shifted. Especially when the number of network nodes is large, each network structure adjustment takes a long time, which reduces the efficiency of inference calculation using the neural network model.

In order to solve the above technical problem, the network node processing method provided by the embodiments of the present application is described in detail below with reference to FIG. 1 to FIG. 7. The method can be implemented by a computer program and can run on a network node processing device based on the von Neumann architecture. The computer program can be integrated into an application or run as an independent tool application. The network node processing device in the embodiments of the present application may be a user terminal, including but not limited to: a smartphone, a personal computer, a tablet computer, a handheld device, a vehicle-mounted device, a wearable device, a computing device, or other processing devices connected to a wireless modem.

请参见图1,为本申请实施例提供的一种网络节点处理方法的流程示意图。如图1所示,本申请实施例的所述方法可以包括以下步骤:Refer to FIG. 1, which is a schematic flowchart of a method for processing a network node according to an embodiment of this application. As shown in Figure 1, the method of the embodiment of the present application may include the following steps:

S101,触发采用包括至少一个网络节点的神经网络模型进行推理计算,确定所述至少一个网络节点中目标网络节点对应的第二计算处理单元,所述至少一个网络节点采用第一链表的形式预存在第一计算处理单元,所述目标网络节点运行在所述第二计算处理单元;S101. Trigger using a neural network model including at least one network node to perform inference calculations, and determine a second calculation processing unit corresponding to a target network node in the at least one network node, and the at least one network node pre-exists in the form of a first linked list A first calculation processing unit, where the target network node runs on the second calculation processing unit;

神经网络是由大量的、简单的处理单元(神经元)广泛地互相连接而形成的复杂网络系统。神经网络具有大规模并行、分布式存储和处理、自组织、 自适应和自学能力,特别适合处理需要同时考虑许多因素和条件的、不精确和模糊的信息处理问题。神经网络模型是以神经元的数学模型为基础来描述的。Neural network is a complex network system formed by a large number of simple processing units (neurons) widely connected to each other. Neural networks have large-scale parallelism, distributed storage and processing, self-organization, self-adaptation, and self-learning capabilities. They are particularly suitable for processing inaccurate and fuzzy information processing problems that need to consider many factors and conditions at the same time. The neural network model is described based on the mathematical model of neurons.

其中,每一个神经元为一个网络节点,也叫一个算子。Among them, each neuron is a network node, also called an operator.

当需要采用神经网络模型进行推理计算时,根据各计算处理单元(CPU,GPU,DSP等)的负载率F以及该网络节点在各计算处理单元的执行时间t等信息,动态地决定每个网络节点需要分配到哪个处理单元上运行,因此,需要对神经网络模型的网络结构进行修改。在至少一个网络节点中,任一网络节点为目标网络节点,所有网络节点均可采用相同的方式分配。When the neural network model needs to be used for inference calculations, each network is dynamically determined according to the load rate F of each calculation processing unit (CPU, GPU, DSP, etc.) and the execution time t of the network node in each calculation processing unit. Which processing unit the node needs to be allocated to run, therefore, the network structure of the neural network model needs to be modified. In at least one network node, any network node is the target network node, and all network nodes can be allocated in the same manner.

具体修改可以包括:把网络进行切分,将各网络节点分散到各计算处理单元;根据计算处理单元的特点,需要在进入某个处理单元之前删除某个网络节点;根据计算处理单元的特点,把某几个网络节点进行合并等。Specific modifications can include: dividing the network and distributing network nodes to various computing processing units; according to the characteristics of the computing processing unit, a certain network node needs to be deleted before entering a certain processing unit; according to the characteristics of the computing processing unit, Combine certain network nodes, etc.

需要说明的是,在本申请实施例中,在触发神经网络模型进行推理计算前,神经网络模型的各个网络节点预存在第一计算处理单元,且以第一链表的形式预存。It should be noted that, in the embodiment of the present application, before triggering the neural network model to perform inference calculation, each network node of the neural network model is pre-stored in the first calculation processing unit, and is pre-stored in the form of a first linked list.

第一计算处理单元可以为用户终端的任一处理单元,为系统预先配置的,如CPU。The first calculation processing unit may be any processing unit of the user terminal, which is pre-configured by the system, such as a CPU.

链表是一种物理存储单元上非连续、非顺序的存储结构,数据元素的逻辑顺序是通过链表中的指针链接次序实现的。链表由一系列结点(链表中每一个元素称为网络结点)组成,网络结点可以在运行时动态生成。A linked list is a non-contiguous, non-sequential storage structure on a physical storage unit, and the logical order of data elements is achieved through the link order of pointers in the linked list. The linked list is composed of a series of nodes (each element in the linked list is called a network node), and the network node can be dynamically generated at runtime.

如图2所示为一种链表结构示意图,其中包括多个网络节点,每个网络结点包括两个部分:一个是存储数据元素的数据域data,另一个是存储下一个结点地址的指针域next。由于不必按顺序存储,链表在插入的时候可以达到O(1)的复杂度。Figure 2 shows a schematic diagram of a linked list structure, which includes multiple network nodes. Each network node includes two parts: one is the data field data storing data elements, and the other is the pointer that stores the address of the next node. Domain next. Since it is not necessary to store in order, the linked list can reach O(1) complexity when inserting.

当前,需要确定第一计算处理单元上各网络节点具体分配到哪个第二计算处理单元上运行。第二计算处理单元与第一计算处理单元可以相同,也可以不同,第二计算处理单元可以包括至少一个,例如,若用户终端包括CPU、GPU以及DSP三个计算处理单元,CPU为第一计算处理单元,第二计算处理单元可以为CPU或GPU或DSP。Currently, it is necessary to determine to which second calculation processing unit each network node on the first calculation processing unit is specifically allocated to run. The second calculation processing unit and the first calculation processing unit may be the same or different. The second calculation processing unit may include at least one. For example, if the user terminal includes three calculation processing units: CPU, GPU, and DSP, the CPU is the first calculation processing unit. The processing unit, the second calculation processing unit may be a CPU, a GPU, or a DSP.

可以理解的是,对于神经网络模型包含的多个网络节点,可以先确定每个网络节点所属的第二计算处理单元,然后再进行节点分配,也可以先确定一个网络节点所述的第二计算处理单元,再进行分配,并在该网络节点分配完成后再按照相同的方式分配下一个网络节点。其中,每个网络节点所属的第二计算处理单元即为每个网络节点即将运行的载体(第二计算处理单元)。在本申请实施例中,以每个网络节点依次分配为例进行说明。It is understandable that for multiple network nodes included in the neural network model, the second calculation processing unit to which each network node belongs can be determined first, and then node allocation can be performed, or the second calculation described by one network node can be determined first The processing unit then allocates and allocates the next network node in the same manner after the allocation of the network node is completed. Among them, the second calculation processing unit to which each network node belongs is the carrier (second calculation processing unit) that each network node is about to run. In the embodiment of the present application, description is made by taking the sequential allocation of each network node as an example.

S102: when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit.

When the first calculation processing unit is the same as the second calculation processing unit, the node remains on the original calculation processing unit and does not need to be moved to another calculation processing unit. When the first calculation processing unit is different from the second calculation processing unit, the network node needs to be deleted from the first calculation processing unit and added to the second calculation processing unit.

If the target network node is q and it is determined that q needs to be allocated to a second calculation processing unit different from the first calculation processing unit, q needs to be deleted from the first linked list. For example, as shown in Figure 3, the previous node of q is p and the next node is s. By setting p->next = q->next and then freeing q, p is made to point to s and q is thereby deleted, while the other network nodes remain unchanged. The first linked list structure after deletion is shown in Figure 4.
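A minimal C sketch of this deletion, assuming the node structure sketched above and that p is known to be the node immediately before q:

#include <stdlib.h>

/* Unlink and release the node after p (the target node q). */
void delete_after(struct node *p) {
    struct node *q = p->next;  /* the target network node    */
    p->next = q->next;         /* p now points directly to s */
    free(q);                   /* release the removed node   */
}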

Of course, the deleted q also needs to be inserted on the second calculation processing unit, where the network nodes are likewise stored in the form of a linked list (a second linked list). When q is the first network node allocated to the second calculation processing unit, q can be placed directly as the first node of the second linked list. When q needs to be inserted between two network nodes of the second linked list, as shown in Figure 5, for example between node o and node r, setting q->next = o->next (so that q points to r) and then o->next = q inserts q while the other network nodes remain unchanged. The second linked list structure after insertion is shown in Figure 6.
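The corresponding insertion in C, again as a sketch; note that q must be linked to its successor before o is redirected, otherwise the reference to r would be overwritten:

/* Insert q between o and o's current successor r (o->next == r). */
void insert_after(struct node *o, struct node *q) {
    q->next = o->next;  /* q now points to r */
    o->next = q;        /* o now points to q */
}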

S103: obtain the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node.

For example, as shown in Figure 3, if the target network node is q, the next network node of the target network node is s. After s is determined as the target network node, it is processed in the same way as q, and the remaining network nodes are processed in this manner.
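Putting S101 to S103 together, the traversal of the first linked list could look like the following C sketch; determine_unit() and move_to_unit() are hypothetical helpers standing in for the allocation decision and the delete/insert operations described above, and handling of the list head is elided for brevity:

/* Hypothetical helpers (not part of the original disclosure). */
int  determine_unit(const struct node *n);
void move_to_unit(struct node *prev, struct node *n, int unit);

/* Walk the first linked list and reassign each node as needed. */
void allocate_nodes(struct node *head, int first_unit) {
    struct node *prev = NULL;
    struct node *cur  = head;
    while (cur != NULL) {                   /* S103: node by node           */
        struct node *next = cur->next;      /* remember the successor first */
        int unit = determine_unit(cur);     /* S101: pick the second unit   */
        if (unit != first_unit)
            move_to_unit(prev, cur, unit);  /* S102: unlink and re-insert   */
        else
            prev = cur;                     /* node stays on the first unit */
        cur = next;
    }
}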

S104: when it is determined that there is no next network node, generate the inference calculation result.

When it is determined that there is no next network node, all the network nodes stored in the first linked list have been allocated. Therefore, the first calculation processing unit and the second calculation processing unit can run their network nodes at the same time to obtain the inference calculation result. Alternatively, each network node may be run immediately after its allocation is completed, with its output used as the input of the next network node, until the last network node on each processing unit has finished running, at which point the individual results are combined to generate the final inference calculation result.

For example, the neural network model includes 30 network nodes, numbered 1 to 30 in order. If nodes 1 to 10 are allocated to the CPU, nodes 11 to 20 to the GPU, and nodes 21 to 30 to the DSP, node 1 can be run as soon as its allocation is completed, node 2 is then allocated, the output of node 1 is used as the input of node 2, and so on; a first inference calculation result is output after node 10 finishes running. Nodes 11 to 20 and nodes 21 to 30 are run in the same manner, yielding a second inference calculation result after node 20 finishes and a third inference calculation result after node 30 finishes. The final inference calculation result is then obtained from the first, second, and third inference calculation results, at which point the neural network model has finished running.

As another example, the neural network model includes 30 network nodes, numbered 1 to 30 in order. If nodes 1 to 10 are allocated to the CPU, nodes 11 to 20 to the GPU, and nodes 21 to 30 to the DSP, then after all the nodes have been allocated, the nodes on the different calculation processing units are run at the same time, and finally the results on the individual calculation processing units are combined to generate the final inference calculation result.

In the embodiments of the present application, inference calculation is triggered using a neural network model including at least one network node, and a second calculation processing unit corresponding to a target network node among the at least one network node is determined, the at least one network node being pre-stored in a first calculation processing unit in the form of a first linked list and the target network node running on the second calculation processing unit. When the first calculation processing unit is different from the second calculation processing unit, the target network node is deleted from the first linked list, the connection relationship between the previous network node and the next network node of the target network node in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same manner, and when all the network nodes have been processed, the inference calculation result is generated. Representing the network nodes in the form of a linked list, that is, chaining the network nodes of the neural network model together with linked lists, means that each time the network structure is adjusted and network nodes are frequently inserted and deleted, only the connection relationship between the previous and next network nodes of the node concerned needs to be modified, even when the number of network nodes is large, without frequent shift operations. This saves network structure adjustment time and can therefore improve the efficiency of inference calculation with the neural network model.

Refer to Figure 7, which is a schematic flowchart of a network node processing method provided by an embodiment of this application. In this embodiment, the network node processing method applied to a smartphone is taken as an example. The network node processing method may include the following steps:

S201: trigger inference calculation using a neural network model including at least one network node, the at least one network node being pre-stored in the first calculation processing unit in the form of a first linked list.

For details, refer to S101, which will not be repeated here.

S202: obtain the current load rate of each calculation processing unit and the execution time of the target network node on each calculation processing unit.

The load rate F of a calculation processing unit refers to the ratio of the load the unit is actually carrying to its capacity, and reflects how heavily the unit is currently loaded. The current load rate is the load rate of each calculation processing unit at the moment when inference calculation with the neural network model including at least one network node is triggered.

Assume that the calculation processing units of the smartphone include a CPU, a GPU, and a DSP, that the execution times of the target network node on these three calculation processing units are t1, t2, and t3, respectively, and that the current load rates of the units are F1, F2, and F3, respectively.

S203: based on the current load rates and the execution times, calculate the execution expectation of the target network node on each calculation processing unit.

The execution expectation is E(device) = epsilon * F / t, where epsilon is a correction coefficient, a constant that can be adjusted according to experimental conditions.

According to the above formula, the execution expectations of the target network node on the CPU, the GPU, and the DSP are E1(device) = epsilon * F1/t1, E2(device) = epsilon * F2/t2, and E3(device) = epsilon * F3/t3, respectively.

S204: determine the minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit, the second calculation processing unit including a second linked list.

Specifically, find the smallest value among E1, E2, and E3. If E1 is the smallest, the device corresponds to the CPU, the CPU is taken as the second calculation processing unit, and the target network node is allocated to the CPU. Of course, if the first calculation processing unit is also the CPU, this indicates that the target network node still needs to run on the first calculation processing unit.
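A minimal C sketch of S202 to S204, assuming the load rates F[] and the per-unit execution times t[] have already been measured and that EPSILON stands for the correction coefficient (its value here is an assumption):

#define EPSILON 1.0  /* correction coefficient; the value is an assumption */

/* Return the index (e.g. 0 = CPU, 1 = GPU, 2 = DSP) of the unit with
 * the smallest execution expectation E = EPSILON * F / t. */
int pick_unit(const double F[3], const double t[3]) {
    int best = 0;
    double best_e = EPSILON * F[0] / t[0];
    for (int i = 1; i < 3; i++) {
        double e = EPSILON * F[i] / t[i];
        if (e < best_e) { best_e = e; best = i; }
    }
    return best;
}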

Optionally, if the second calculation processing unit does not support the target network node, the target network node needs to be replaced with an equivalent network node.

S205: when the first calculation processing unit is different from the second calculation processing unit, control the previous network node of the target network node in the first linked list to point to the next network node of the target network node, and release the target network node.

Following the direction of the linked list, the network node that the target network node points to is the next network node, and the network node that points to the target network node is the previous network node. As shown in Figure 3, q is the target network node, s is the next network node, and p is the previous network node.

Setting p->next = q->next and then freeing q makes p point to s, thereby releasing q.

S206: control the target network node in the second linked list to point to the next network node of the target network node, and control the previous network node of the target network node to point to the target network node.

As shown in Figure 5, if q needs to be inserted between node o and node r on the second linked list, set q->next = o->next (so that q points to r) and then o->next = q.

S207: execute the target network node on the second calculation processing unit.

After q has been inserted into the second linked list, q can be executed to generate the corresponding running result.

Optionally, before the target network node is executed, auxiliary nodes may be added to and/or deleted from the second linked list according to the characteristics of the second calculation processing unit and combined to generate a new network node, which is then run.

For example, the neural network model includes 30 network nodes, numbered 1 to 30 in order, with nodes 1 to 10 allocated to the CPU, nodes 11 to 20 to the GPU, and nodes 21 to 30 to the DSP. After the allocation of node 1 is completed, an auxiliary node may, if needed, be added to node 1 according to the characteristics of the CPU and combined with it into a new network node, which is run before node 2 is allocated.

S208: obtain the next network node of the target network node, determine the next network node as the target network node, and execute the step of obtaining the current load rate of each calculation processing unit and the execution time of the target network node on each calculation processing unit.

For details, refer to S103, which will not be repeated here.

S209: when it is determined that there is no next network node, add auxiliary nodes to and/or delete auxiliary nodes from the first linked list, and/or add auxiliary nodes to and/or delete auxiliary nodes from the second linked list, and generate the inference calculation result.

When it is determined that there is no next network node, all the network nodes stored in the first linked list have been allocated. Auxiliary nodes then need to be added to or deleted from the first linked list and the second linked list according to the characteristics of the calculation processing units, or certain network nodes need to be merged to form new network nodes, so as to run the neural network model. After the addition or deletion of auxiliary nodes is completed, the first calculation processing unit and the second calculation processing unit can run their network nodes and obtain the inference calculation result. A sketch of adding an auxiliary node is given after the following example.

For example, the neural network model includes 30 network nodes, numbered 1 to 30 in order, with nodes 1 to 10 allocated to the CPU, nodes 11 to 20 to the GPU, and nodes 21 to 30 to the DSP. Nodes 1 to 10 form the first linked list, nodes 11 to 20 form the second linked list, and nodes 21 to 30 form the third linked list. After all the network nodes have been allocated, auxiliary nodes are added to or deleted from the three linked lists, or several nodes are merged, and the network nodes on the three linked lists are then executed to generate the final inference calculation result.
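As an illustration only, prepending an auxiliary node to a per-unit linked list could be sketched in C as follows; make_aux_node() is a hypothetical constructor for the unit-specific helper node:

/* Hypothetical constructor for an auxiliary node. */
struct node *make_aux_node(void);

/* Prepend an auxiliary node to a unit's list; returns the new head. */
struct node *add_aux_head(struct node *head) {
    struct node *aux = make_aux_node();  /* allocate the auxiliary node */
    aux->next = head;                    /* auxiliary node now leads    */
    return aux;
}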

In the embodiments of the present application, inference calculation is triggered using a neural network model including at least one network node, and a second calculation processing unit corresponding to a target network node among the at least one network node is determined, the at least one network node being pre-stored in a first calculation processing unit in the form of a first linked list and the target network node running on the second calculation processing unit. When the first calculation processing unit is different from the second calculation processing unit, the target network node is deleted from the first linked list, the connection relationship between the previous network node and the next network node of the target network node in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same manner, and when all the network nodes have been processed, the inference calculation result is generated. Representing the network nodes in the form of a linked list, that is, chaining the network nodes of the neural network model together with linked lists, supports dynamic device switching and a correspondingly optimized storage of the network structure. Each time the network structure is adjusted and network nodes are frequently inserted and deleted, only the connection relationship between the previous and next network nodes of the node concerned needs to be modified, even when the number of network nodes is large, without frequent shift operations. This saves network structure adjustment time and can therefore improve the efficiency of inference calculation with the neural network model.

The following are device embodiments of this application, which can be used to execute the method embodiments of this application. For details not disclosed in the device embodiments of this application, refer to the method embodiments of this application.

Refer to Figure 8, which shows a schematic structural diagram of a network node processing device provided by an exemplary embodiment of this application. The network node processing device can be implemented as all or part of a user terminal through software, hardware, or a combination of the two. The device 1 includes a node determination module 10, a node allocation module 20, a node loop module 30, and a result generation module 40.

The node determination module 10 is configured to trigger inference calculation using a neural network model including at least one network node and determine the second calculation processing unit corresponding to the target network node among the at least one network node, the at least one network node being pre-stored in the first calculation processing unit in the form of a first linked list and the target network node running on the second calculation processing unit.

The node allocation module 20 is configured to, when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit.

The node loop module 30 is configured to obtain the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node.

The result generation module 40 is configured to generate the inference calculation result when it is determined that there is no next network node.

Optionally, the node determination module 10 is specifically configured to:

obtain the current load rate of each calculation processing unit and the execution time of the target network node on each calculation processing unit;

based on the current load rate and the execution time, determine, among the calculation processing units, the second calculation processing unit corresponding to the target network node.

Optionally, the node determination module 10 is specifically configured to:

based on the current load rate and the execution time, calculate the execution expectation of the target node on each calculation processing unit;

determine the minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit.

Optionally, the node allocation module 20 is specifically configured to:

control the previous network node of the target network node in the first linked list to point to the next network node of the target network node, and release the target network node.

Optionally, the node allocation module 20 is specifically configured to:

add the target network node to the second linked list of the second calculation processing unit, and modify the connection relationship between the previous network node and the next network node of the target network node in the second linked list.

Optionally, the node allocation module 20 is specifically configured to:

control the target network node in the second linked list to point to the next network node of the target network node, and control the previous network node of the target network node to point to the target network node.

Optionally, as shown in Figure 9, the device further includes an auxiliary node increase/decrease module 50, configured to:

add auxiliary nodes to and/or delete auxiliary nodes from the first linked list; and/or,

add auxiliary nodes to and/or delete auxiliary nodes from the second linked list.

Optionally, as shown in Figure 9, the device further includes a node execution module 60, configured to:

execute the target network node on the second calculation processing unit.

It should be noted that when the network node processing device provided in the above embodiments performs the network node processing method, the division into the above functional modules is merely taken as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the network node processing device provided in the above embodiments belongs to the same concept as the network node processing method embodiments; its implementation process is detailed in the method embodiments and will not be repeated here.

The serial numbers of the above embodiments of this application are for description only and do not represent the relative merits of the embodiments.

In the embodiments of the present application, inference calculation is triggered using a neural network model including at least one network node, and a second calculation processing unit corresponding to a target network node among the at least one network node is determined, the at least one network node being pre-stored in a first calculation processing unit in the form of a first linked list and the target network node running on the second calculation processing unit. When the first calculation processing unit is different from the second calculation processing unit, the target network node is deleted from the first linked list, the connection relationship between the previous network node and the next network node of the target network node in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same manner, and when all the network nodes have been processed, the inference calculation result is generated. Representing the network nodes in the form of a linked list, that is, chaining the network nodes of the neural network model together with linked lists, supports dynamic device switching and a correspondingly optimized storage of the network structure. Each time the network structure is adjusted and network nodes are frequently inserted and deleted, only the connection relationship between the previous and next network nodes of the node concerned needs to be modified, even when the number of network nodes is large, without frequent shift operations. This saves network structure adjustment time and can therefore improve the efficiency of inference calculation with the neural network model.

An embodiment of this application also provides a computer storage medium. The computer storage medium may store a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the method steps of the embodiments shown in Figures 1 to 7 above. For the specific execution process, refer to the specific description of the embodiments shown in Figures 1 to 7, which will not be repeated here.

Refer to Figure 10, which provides a schematic structural diagram of an electronic device according to an embodiment of this application. As shown in Figure 10, the electronic device 1000 may include at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.

The communication bus 1002 is used to implement connection and communication between these components.

The user interface 1003 may include a display (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.

The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).

The processor 1001 may include one or more processing cores. The processor 1001 connects the various parts of the entire electronic device 1000 using various interfaces and lines, and executes the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 1001 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communication. It can be understood that the above modem may also not be integrated into the processor 1001 and may instead be implemented by a separate chip.

The memory 1005 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playback function, an image playback function, and the like), instructions for implementing the above method embodiments, and so on; the data storage area may store the data involved in the above method embodiments, and so on. Optionally, the memory 1005 may also be at least one storage device located away from the aforementioned processor 1001. As shown in Figure 10, as a computer storage medium, the memory 1005 may include an operating system, a network communication module, a user interface module, and a network node processing application program.

In the electronic device 1000 shown in Figure 10, the user interface 1003 is mainly used to provide an input interface for the user and obtain the data input by the user, while the processor 1001 can be used to call the network node processing application program stored in the memory 1005 and specifically perform the following operations:

trigger inference calculation using a neural network model including at least one network node, and determine the second calculation processing unit corresponding to the target network node among the at least one network node, the at least one network node being pre-stored in the first calculation processing unit in the form of a first linked list and the target network node running on the second calculation processing unit;

when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit;

obtain the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node;

when it is determined that there is no next network node, generate the inference calculation result.

In one embodiment, when determining the second calculation processing unit corresponding to the target network node among the at least one network node, the processor 1001 specifically performs the following operations:

obtain the current load rate of each calculation processing unit and the execution time of the target network node on each calculation processing unit;

based on the current load rate and the execution time, determine, among the calculation processing units, the second calculation processing unit corresponding to the target network node.

In one embodiment, when determining, among the calculation processing units, the second calculation processing unit corresponding to the target network node based on the current load rate and the execution time, the processor 1001 specifically performs the following operations:

based on the current load rate and the execution time, calculate the execution expectation of the target node on each calculation processing unit;

determine the minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit.

In one embodiment, when deleting the target network node from the first linked list and modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list, the processor 1001 specifically performs the following operations:

control the previous network node of the target network node in the first linked list to point to the next network node of the target network node, and release the target network node.

In one embodiment, when allocating the target network node to the second calculation processing unit, the processor 1001 specifically performs the following operations:

add the target network node to the second linked list of the second calculation processing unit, and modify the connection relationship between the previous network node and the next network node of the target network node in the second linked list.

In one embodiment, when adding the target network node to the second linked list of the second calculation processing unit and modifying the connection relationship between the previous network node and the next network node of the target network node in the second linked list, the processor 1001 specifically performs the following operations:

control the target network node in the second linked list to point to the next network node of the target network node, and control the previous network node of the target network node to point to the target network node.

In one embodiment, before generating the inference calculation result, the processor 1001 further performs the following operations:

add auxiliary nodes to and/or delete auxiliary nodes from the first linked list; and/or,

add auxiliary nodes to and/or delete auxiliary nodes from the second linked list.

In one embodiment, after allocating the target network node to the second calculation processing unit, the processor 1001 further performs the following operation:

execute the target network node on the second calculation processing unit.

In the embodiments of the present application, inference calculation is triggered using a neural network model including at least one network node, and a second calculation processing unit corresponding to a target network node among the at least one network node is determined, the at least one network node being pre-stored in a first calculation processing unit in the form of a first linked list and the target network node running on the second calculation processing unit. When the first calculation processing unit is different from the second calculation processing unit, the target network node is deleted from the first linked list, the connection relationship between the previous network node and the next network node of the target network node in the first linked list is modified, and the target network node is allocated to the second calculation processing unit; the other network nodes are processed in the same manner, and when all the network nodes have been processed, the inference calculation result is generated. Representing the network nodes in the form of a linked list, that is, chaining the network nodes of the neural network model together with linked lists, supports dynamic device switching and a correspondingly optimized storage of the network structure. Each time the network structure is adjusted and network nodes are frequently inserted and deleted, only the connection relationship between the previous and next network nodes of the node concerned needs to be modified, even when the number of network nodes is large, without frequent shift operations. This saves network structure adjustment time and can therefore improve the efficiency of inference calculation with the neural network model.

A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.

The above disclosure is merely the preferred embodiments of this application and certainly cannot be used to limit the scope of rights of this application; therefore, equivalent changes made in accordance with the claims of this application still fall within the scope covered by this application.

Claims (20)

1. A network node processing method, characterized in that the method comprises:

triggering inference calculation using a neural network model comprising at least one network node, and determining a second calculation processing unit corresponding to a target network node among the at least one network node, the at least one network node being pre-stored in a first calculation processing unit in the form of a first linked list, and the target network node running on the second calculation processing unit;

when the first calculation processing unit is different from the second calculation processing unit, deleting the target network node from the first linked list, modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocating the target network node to the second calculation processing unit;

obtaining the next network node of the target network node, determining the next network node as the target network node, and executing the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node; and

when it is determined that there is no next network node, generating an inference calculation result.

2. The method according to claim 1, characterized in that the determining a second calculation processing unit corresponding to a target network node among the at least one network node comprises:

obtaining a current load rate of each calculation processing unit and an execution time of the target network node on each calculation processing unit; and

determining, among the calculation processing units, the second calculation processing unit corresponding to the target network node based on the current load rate and the execution time.

3. The method according to claim 2, characterized in that the determining, among the calculation processing units, the second calculation processing unit corresponding to the target network node based on the current load rate and the execution time comprises:

calculating an execution expectation of the target node on each calculation processing unit based on the current load rate and the execution time; and

determining a minimum execution expectation among the execution expectations of the calculation processing units, and determining the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit.
4. The method according to claim 1, characterized in that the deleting the target network node from the first linked list and modifying the connection relationship between the previous network node and the next network node of the target network node in the first linked list comprises:

controlling the previous network node of the target network node in the first linked list to point to the next network node of the target network node, and releasing the target network node.

5. The method according to claim 1, characterized in that the allocating the target network node to the second calculation processing unit comprises:

adding the target network node to a second linked list of the second calculation processing unit, and modifying the connection relationship between the previous network node and the next network node of the target network node in the second linked list.

6. The method according to claim 5, characterized in that the adding the target network node to the second linked list of the second calculation processing unit and modifying the connection relationship between the previous network node and the next network node of the target network node in the second linked list comprises:

controlling the target network node in the second linked list to point to the next network node of the target network node, and controlling the previous network node of the target network node to point to the target network node.

7. The method according to claim 6, characterized in that, before the generating an inference calculation result, the method further comprises:

adding auxiliary nodes to and/or deleting auxiliary nodes from the first linked list; and/or,

adding auxiliary nodes to and/or deleting auxiliary nodes from the second linked list.

8. The method according to claim 1, characterized in that, after the allocating the target network node to the second calculation processing unit, the method further comprises:

executing the target network node on the second calculation processing unit.

9. The method according to claim 1, characterized in that the method further comprises:

if the second calculation processing unit does not support the target network node, obtaining an equivalent network node of the target network node, and replacing the target network node with the equivalent network node.
10. A network node processing device, characterized in that the device comprises:

a node determination module, configured to trigger inference calculation using a neural network model comprising at least one network node and determine a second calculation processing unit corresponding to a target network node among the at least one network node, the at least one network node being pre-stored in a first calculation processing unit in the form of a first linked list, and the target network node running on the second calculation processing unit;

a node allocation module, configured to, when the first calculation processing unit is different from the second calculation processing unit, delete the target network node from the first linked list, modify the connection relationship between the previous network node and the next network node of the target network node in the first linked list, and allocate the target network node to the second calculation processing unit;

a node loop module, configured to obtain the next network node of the target network node, determine the next network node as the target network node, and execute the step of determining the second calculation processing unit corresponding to the target network node among the at least one network node; and

a result generation module, configured to generate an inference calculation result when it is determined that there is no next network node.

11. The device according to claim 10, characterized in that the node determination module is specifically configured to:

obtain a current load rate of each calculation processing unit and an execution time of the target network node on each calculation processing unit; and

determine, among the calculation processing units, the second calculation processing unit corresponding to the target network node based on the current load rate and the execution time.

12. The device according to claim 11, characterized in that the node determination module is specifically configured to:

calculate an execution expectation of the target node on each calculation processing unit based on the current load rate and the execution time; and

determine a minimum execution expectation among the execution expectations of the calculation processing units, and determine the calculation processing unit indicated by the minimum execution expectation as the second calculation processing unit.

13. The device according to claim 10, characterized in that the node allocation module is specifically configured to:

control the previous network node of the target network node in the first linked list to point to the next network node of the target network node, and release the target network node.
14. The device according to claim 10, characterized in that the node allocation module is specifically configured to:

add the target network node to a second linked list of the second calculation processing unit, and modify the connection relationship between the previous network node and the next network node of the target network node in the second linked list.

15. The device according to claim 14, characterized in that the node allocation module is specifically configured to:

control the target network node in the second linked list to point to the next network node of the target network node, and control the previous network node of the target network node to point to the target network node.

16. The device according to claim 15, characterized in that the device further comprises an auxiliary node increase/decrease module, configured to:

add auxiliary nodes to and/or delete auxiliary nodes from the first linked list; and/or,

add auxiliary nodes to and/or delete auxiliary nodes from the second linked list.

17. The device according to claim 10, characterized in that the device further comprises a node execution module, configured to:

execute the target network node on the second calculation processing unit.

18. The device according to claim 10, characterized in that the device further comprises a node replacement module, configured to:

if the second calculation processing unit does not support the target network node, obtain an equivalent network node of the target network node, and replace the target network node with the equivalent network node.

19. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the method steps according to any one of claims 1 to 9.

20. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores a computer program, and the computer program is adapted to be loaded by the processor to execute the method steps according to any one of claims 1 to 9.
PCT/CN2020/117227 2019-09-24 2020-09-23 Network node processing method, device, storage medium, and electronic apparatus Ceased WO2021057811A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910903008.1A CN110689114B (en) 2019-09-24 2019-09-24 Network node processing method, device, storage medium and electronic equipment
CN201910903008.1 2019-09-24

Publications (1)

Publication Number Publication Date
WO2021057811A1 true WO2021057811A1 (en) 2021-04-01

Family

ID=69109976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117227 Ceased WO2021057811A1 (en) 2019-09-24 2020-09-23 Network node processing method, device, storage medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN110689114B (en)
WO (1) WO2021057811A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115906989A (en) * 2021-09-30 2023-04-04 鸿海精密工业股份有限公司 Image detection method, electronic device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689114B (en) * 2019-09-24 2023-07-18 Oppo广东移动通信有限公司 Network node processing method, device, storage medium and electronic equipment
CN112862100B (en) * 2021-01-29 2022-02-08 网易有道信息技术(北京)有限公司 Method and apparatus for optimizing neural network model inference
CN113630472A (en) * 2021-09-13 2021-11-09 东软集团股份有限公司 Method, device, electronic device and medium for avoiding channel waste between network nodes

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180307950A1 (en) * 2017-04-24 2018-10-25 Intel Corporation Compute optimizations for neural networks
WO2018217828A1 (en) * 2017-05-23 2018-11-29 Intel Corporation Methods and apparatus for discriminative semantic transfer and physics-inspired optimization of features in deep learning
CN109598407A (en) * 2018-10-26 2019-04-09 阿里巴巴集团控股有限公司 A kind of execution method and device of operation flow
CN109947565A (en) * 2019-03-08 2019-06-28 北京百度网讯科技有限公司 Method and apparatus for allocating computing tasks
CN109981457A (en) * 2017-12-27 2019-07-05 华为技术有限公司 Message processing method, network node and system
CN110008028A (en) * 2019-04-10 2019-07-12 北京旷视科技有限公司 Computational resource allocation method, apparatus, computer equipment and storage medium
CN110689114A (en) * 2019-09-24 2020-01-14 Oppo广东移动通信有限公司 Network node processing method, device, storage medium and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10585703B2 (en) * 2017-06-03 2020-03-10 Apple Inc. Dynamic operation allocation for neural networks
CN109361596B (en) * 2018-10-26 2021-07-06 新华三技术有限公司合肥分公司 Route calculation method and device and electronic equipment


Also Published As

Publication number Publication date
CN110689114B (en) 2023-07-18
CN110689114A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
WO2021057811A1 (en) Network node processing method, device, storage medium, and electronic apparatus
US11087203B2 (en) Method and apparatus for processing data sequence
CN108287708B (en) A data processing method, device, server and computer-readable storage medium
CN112015521A (en) Configuration method and device of inference service, electronic equipment and storage medium
WO2023098241A1 (en) Request processing method and apparatus
CN115016735B (en) Control method, device and medium of distributed cache system
CN110471701A (en) Method, apparatus, storage medium and the electronic equipment of image rendering
CN118445351B (en) Data display method, device, electronic device, storage medium and program product
CN112631682A (en) Applet processing method, device, equipment and storage medium
CN108664249B (en) Method, device, electronic device and computer-readable storage medium for improving string storage efficiency
CN111459634B (en) Task scheduling method, device, terminal and storage medium
CN112905270B (en) Workflow implementation method, device, platform, electronic device and storage medium
CN110471700A (en) Graphic processing method, device, storage medium and electronic equipment
CN111767149B (en) Scheduling method, device, equipment and storage equipment
CN111290850B (en) Data storage method, device and equipment
CN110960858A (en) Game resource processing method, device, equipment and storage medium
CN113901033A (en) Data migration method, apparatus, device and medium
CN113568796B (en) Module testing method and device
CN114257701A (en) Access configuration method, device and storage medium of video processing algorithm
CN108345470B (en) Data processing and storing method and device and electronic equipment
CN119809910B (en) Video memory control method, device, equipment and storage medium for model training
US12317309B2 (en) Method and apparatus for maximizing a number of connections that can be executed from a mobile application
CN114911514B (en) Algorithm resource configuration method, device, electronic device and storage medium
CN116233051B (en) A page sharing method, device, equipment and storage medium for mini program
CN112540835B (en) A hybrid machine learning model operating method, device and related equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20869304

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20869304

Country of ref document: EP

Kind code of ref document: A1