
WO2020093205A1 - Deep learning computation method and related device - Google Patents

Deep learning computation method and related device

Info

Publication number
WO2020093205A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
scene
model data
deep learning
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/113991
Other languages
French (fr)
Chinese (zh)
Inventor
CHEN Yan (陈岩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Heytap Technology Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Heytap Technology Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Heytap Technology Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201880097754.XA (granted as CN112714917B)
Priority to PCT/CN2018/113991 (published as WO2020093205A1)
Publication of WO2020093205A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Definitions

  • This application relates to the field of electronic technology, and in particular to a deep learning calculation method and related devices.
  • Electronic devices include smart phones, tablet computers, and the like.
  • The intelligentization of electronic devices depends on the deep learning calculations they perform.
  • To perform a deep learning calculation, the electronic device needs to read the model and then calculate based on the model that has been read.
  • When the model being read is large, the electronic device is prone to freezing.
  • Embodiments of the present application provide a deep learning calculation method and related devices, which are used to solve this freezing phenomenon.
  • In a first aspect, an embodiment of the present application provides a deep learning calculation method, which is applied to an electronic device and includes:
  • determining a target model required for this deep learning calculation, where the target model includes multiple layers of model data, and the multi-layer model data is arranged in the target model according to a set order;
  • sequentially reading the multi-layer model data on a first thread according to the set order; and, when a layer of model data has been read in, performing calculation on a second thread based on that layer of model data, repeating this step until all calculations are completed.
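The first-aspect method can be sketched as a producer-consumer pipeline. This is a minimal illustrative sketch, not the patented implementation: `read_layer` and `compute_layer` are hypothetical stand-ins for deserializing one layer's weights from storage and running one layer's computation.

```python
import queue
import threading

def read_layer(model, index):
    # Hypothetical reader: in practice this would deserialize one
    # layer's weights from storage; here it just returns the layer.
    return model[index]

def compute_layer(layer_data, activations):
    # Hypothetical per-layer computation (placeholder arithmetic).
    return [a + layer_data for a in activations]

def run_pipelined(model, inputs):
    """Read layer i+1 on the first thread while the second thread is
    still computing with layer i, so reading never blocks computing."""
    q = queue.Queue(maxsize=1)   # hand over one layer at a time
    done = object()              # sentinel marking end of the model

    def reader():                # first thread: sequential reads
        for i in range(len(model)):
            q.put(read_layer(model, i))
        q.put(done)

    results = list(inputs)

    def computer():              # second thread: compute on arrival
        nonlocal results
        while (layer := q.get()) is not done:
            results = compute_layer(layer, results)

    t_read = threading.Thread(target=reader)
    t_compute = threading.Thread(target=computer)
    t_read.start(); t_compute.start()
    t_read.join(); t_compute.join()
    return results

print(run_pipelined([1, 2, 3], [0]))  # → [6]
```

The bounded queue is the design point: with `maxsize=1`, the first thread reads at most one layer ahead of the second thread's calculation, so no more than one layer of model data waits in memory at a time.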
  • In a second aspect, an embodiment of the present application provides a deep learning computing device, which is applied to an electronic device and includes:
  • a model determining unit configured to determine a target model required for this deep learning calculation, where the target model includes multiple layers of model data arranged in the target model according to a set order;
  • a data reading unit configured to sequentially read the multi-layer model data on a first thread according to the set order; and
  • a calculation unit configured to, when a layer of model data has been read in, perform calculation on a second thread based on that layer, repeating this step until all calculations are completed.
  • In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor;
  • the above programs include instructions for performing the steps in the method described in the first aspect of the embodiments of the present application.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps described in the method of the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps described in the method of the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • It can be seen that in the embodiments of the present application, the model includes multiple layers of model data; a layer of model data is read on the first thread, and after that layer has been read, calculation is performed on the second thread based on the data read in, until all calculations are completed.
  • Compared with reading the entire model data at one time and then calculating, the present application performs model reading and calculation layer by layer, which avoids the freezing caused by reading the whole model at once.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a deep learning calculation method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of another deep learning calculation method disclosed in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
  • FIG. 5 is a block diagram of functional units of a deep learning computing device disclosed in an embodiment of the present application.
  • Electronic devices may include various handheld devices with wireless communication capabilities, in-vehicle devices, wearable devices, computing devices or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes a processor, a memory, a signal processor, a transceiver, a display screen, a speaker, a microphone, a random access memory (Random Access Memory, RAM), a camera, a sensor, and so on.
  • the memory, signal processor, display screen, speaker, microphone, RAM, camera and sensor are connected to the processor, and the transceiver is connected to the signal processor.
  • The display screen may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) panel, etc.
  • the camera may be an ordinary camera or an infrared camera, which is not limited herein.
  • the camera may be a front camera or a rear camera, which is not limited herein.
  • the sensor includes at least one of the following: a light sensor, a gyroscope, an infrared proximity sensor, a fingerprint sensor, a pressure sensor, and so on.
  • The light sensor, also known as the ambient light sensor, is used to detect ambient light brightness.
  • the light sensor may include a photosensitive element and an analog-to-digital converter.
  • the photosensitive element is used to convert the collected optical signal into an electrical signal
  • the analog-to-digital converter is used to convert the aforementioned electrical signal into a digital signal.
  • the light sensor may further include a signal amplifier, and the signal amplifier may amplify the electrical signal converted by the photosensitive element and output it to the analog-to-digital converter.
  • the photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photovoltaic cell.
  • The processor is the control center of the electronic device. It uses various interfaces and lines to connect the parts of the entire device and, by running or executing the software programs and/or modules stored in the memory and calling the data stored in the memory, performs the device's various functions and processes its data, thereby monitoring the electronic device as a whole.
  • the processor may integrate an application processor and a modem processor, wherein the application processor mainly handles an operating system, a user interface, an application program, etc., and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor.
  • the memory is used to store software programs and / or modules, and the processor executes various functional applications and data processing of the electronic device by running the software programs and / or modules stored in the memory.
  • the memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, software programs required for at least one function, and the like; the storage data area may store data created according to the use of electronic devices and the like.
  • The memory may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • FIG. 2 is a schematic flowchart of a deep learning calculation method provided by an embodiment of the present application, which is applied to an electronic device. The method includes:
  • Step 201: Determine a target model required for this deep learning calculation.
  • the target model includes multi-layer model data, and the multi-layer model data is arranged in the target model according to a set order.
  • Here, a model refers to the problem-solving steps for a class of problems, that is, an algorithm for that class of problems.
  • The model data included in the model is the data of that algorithm, and a layer of model data is the data of the part of the algorithm used to solve one sub-problem.
  • the target model may be stored in the electronic device or in a storage device associated with the electronic device, which is not limited herein.
  • Step 202: Read in the multi-layer model data sequentially on the first thread according to the set order.
  • Step 203: When a layer of model data has been read in, perform calculation on the second thread based on that layer of model data, and repeat this step until all calculations are completed.
  • For example, suppose the target model includes three layers of model data: the first-layer model data, the second-layer model data, and the third-layer model data, arranged in that order.
  • The first-layer model data is read on the first thread; when it has been read in, the second-layer model data is read on the first thread while calculation is performed on the second thread based on the first-layer model data.
  • When the second-layer model data has been read in, the third-layer model data is read on the first thread while calculation is performed on the second thread based on the second-layer model data; and when the third-layer model data has been read in, calculation is performed on the second thread based on the third-layer model data.
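The three-layer schedule described above can be written out step by step. This is a hypothetical trace generator, not code from the application; it only makes the overlap between reading and computing visible.

```python
def pipeline_schedule(num_layers):
    """Return, step by step, what each thread does: at each step the
    first thread reads the next layer while the second thread computes
    on the layer that was just read in."""
    steps = []
    for i in range(1, num_layers + 1):
        step = [f"thread1: read layer {i}"]
        if i > 1:
            step.append(f"thread2: compute layer {i - 1}")
        steps.append(step)
    steps.append([f"thread2: compute layer {num_layers}"])
    return steps

for step in pipeline_schedule(3):
    print(" | ".join(step))
```

For three layers this prints four steps; in steps 2 and 3 both threads are busy at once, which is where the latency saving over a one-shot read comes from.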
  • Some existing deep learning computing frameworks, such as Xiaomi's MACE and Tencent's NCNN, first read all model data synchronously and only perform calculation after the reading is complete. This has a drawback:
  • when the model data volume is very large (some model data files may reach hundreds of MB), reading all the model data at one time may take several hundred milliseconds,
  • during which the application will visibly stutter.
  • In the embodiments of the present application, by contrast, a layer of model data is read on the first thread, and after that layer has been read, calculation is performed on the second thread based on the data read in, until all calculations are completed; this avoids the freezing caused by reading the whole model at once.
  • In a possible example, determining the target model required for this deep learning calculation includes: determining a first scene of a first application running in the foreground, and determining the target model required for this deep learning calculation based on the first scene of the first application.
  • The models used in different scenes differ; for example, in a photo-preview scene the model is a preview model, in a photographing scene the model is a photographing model, and so on.
  • In a possible example, when a first condition is satisfied, the resources occupied by the first thread and the second thread are resources in a reserved resource pool. The first condition includes at least one of the following: the first application is a setting application; the use priority of the first application is greater than or equal to a set priority; and the number of times the first application has been used in a first set period is greater than or equal to a set number of times.
  • The setting application refers to an application previously set by the user, for example, one or more game applications, video applications, shopping applications, instant messaging applications, news applications, office applications, etc.
  • the first set time period is, for example, 1 hour, 5 hours, 1 day, 1 week, 1 month, or other values.
  • the set number of times is, for example, 10 times, 15 times, 21 times, 30 times, or other values.
  • The use priority of an application may be set by the user in advance or determined by the electronic device, which is not limited herein.
  • The specific method for the electronic device to determine the use priority of the first application includes: the electronic device determines the target number of uses of the first application in a second set period, and determines the use priority corresponding to that count according to a mapping relationship between use counts and use priorities; or the electronic device determines the target number of downloads of the first application, and determines the use priority corresponding to that count according to a mapping relationship between download counts and use priorities.
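As an illustration of the mapping step, the count-to-priority lookup might be sketched as follows. The breakpoints and priority levels are invented example values, not values from the application; the same lookup would serve the download-count mapping.

```python
import bisect

# Hypothetical mapping from use count in the second set period to a
# use priority: a count at or above a breakpoint maps to that level.
COUNT_BREAKPOINTS = [0, 10, 20, 50]   # invented example thresholds
PRIORITY_LEVELS = [1, 2, 3, 4]        # corresponding use priorities

def use_priority_from_count(target_use_count):
    # bisect_right finds the band the count falls into.
    i = bisect.bisect_right(COUNT_BREAKPOINTS, target_use_count) - 1
    return PRIORITY_LEVELS[i]

print(use_priority_from_count(25))  # → 3 (falls in the 20-49 band)
```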
  • the second setting period may be the same as the first setting period, or may be different from the first setting period, which is not limited herein.
  • Since the resources in the reserved resource pool are dedicated to deep learning calculation, other applications are prevented from competing with the first application for resources, further avoiding freezing.
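The "first condition" gating use of the reserved resource pool can be sketched as a simple predicate. The field names and threshold values below are hypothetical; the application does not specify a data layout.

```python
def uses_reserved_pool(app, setting_apps, set_priority, set_count):
    """Return True when the 'first condition' holds, i.e. the first
    and second threads should draw on the reserved resource pool."""
    return (
        app["name"] in setting_apps              # a setting application
        or app["use_priority"] >= set_priority   # priority criterion
        or app["uses_in_period"] >= set_count    # usage-count criterion
    )

app = {"name": "camera", "use_priority": 3, "uses_in_period": 25}
print(uses_reserved_pool(app, {"game"}, 5, 21))  # → True (25 >= 21)
```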
  • In a possible example, before sequentially reading in the multi-layer model data on the first thread according to the set order, the method further includes:
  • determining that the size of the target model is greater than or equal to a first threshold, and/or that the current memory occupancy rate of the electronic device is greater than or equal to a second threshold.
  • the first threshold is, for example, 50MB, 60MB, 100MB, 150MB or other values.
  • the second threshold is, for example, 60%, 70%, 75%, 78%, 80%, or other values.
  • In a possible example, the method further includes: when the size of the target model is less than the first threshold, and/or the current memory occupancy rate of the electronic device is less than the second threshold, reading in the multi-layer model data on a third thread, and after the multi-layer model data has been read in, performing calculation on the third thread based on the multi-layer model data.
  • The resources occupied by the third thread may or may not be resources in the reserved resource pool, which is not limited herein.
  • In this way, when the model is large or memory is tight, model reading and calculation are performed layer by layer;
  • otherwise, the model can be read in directly and then calculated. This realizes dynamic adjustment of the deep learning calculation method and improves the intelligence of the electronic device.
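The dynamic adjustment just described reduces to a two-way dispatch. The threshold values below are examples drawn from the ranges mentioned in the description (e.g., 50 MB and 70%), not normative values.

```python
FIRST_THRESHOLD_MB = 50    # example model-size threshold
SECOND_THRESHOLD = 0.70    # example memory-occupancy threshold

def choose_strategy(model_size_mb, memory_occupancy):
    """Pick 'layered' (read on the first thread, compute on the
    second, layer by layer) when the model is large or memory is
    tight; otherwise 'one_shot' (read the whole model, then compute,
    both on a third thread)."""
    if (model_size_mb >= FIRST_THRESHOLD_MB
            or memory_occupancy >= SECOND_THRESHOLD):
        return "layered"
    return "one_shot"

print(choose_strategy(120, 0.40))  # → layered
print(choose_strategy(10, 0.30))   # → one_shot
```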
  • In a possible example, the method further includes: if it is detected that the scene running in the foreground switches from the first scene of the first application to a second scene of a second application, pausing the deep learning calculation of the first scene of the first application; if the foreground scene does not switch back to the first scene within a set duration, releasing the first thread and the second thread; and if it does switch back within the set duration, continuing the deep learning calculation of the first scene.
  • The set duration is, for example, 6 s, 10 s, 30 s, 1 min, or another value.
  • The set duration corresponding to different scenes may differ; for example, the photo-preview scene corresponds to duration 1, the photographing scene to duration 2, and the game-battle scene to duration 3, with durations 1, 2, and 3 different from each other.
  • The set durations corresponding to different scenes may also be the same.
  • In a possible example, before pausing the deep learning calculation of the first scene of the first application, the method further includes:
  • determining that the importance level of the second scene of the second application is less than the importance level of the first scene of the first application, and/or that the second scene of the second application is a set scene.
  • The set scene includes, for example, a system desktop scene, a system settings scene, an SMS viewing scene, and so on.
  • The importance level of a scene may be customized by the user in advance, or may be determined by the electronic device (for example, the electronic device determines the target number of occurrences of the scene in a third set period, and determines the importance level corresponding to that count according to a mapping relationship between occurrence counts and importance levels), which is not limited herein.
  • When the importance level of the second scene of the second application is less than that of the first scene of the first application, and/or the second scene of the second application is a set scene,
  • the first scene of the first application is more important than the second scene of the second application,
  • so priority is given to ensuring the running experience of the first scene of the first application, improving the experience of important scenes.
  • In a possible example, the method further includes:
  • if the importance level of the second scene of the second application is greater than the importance level of the first scene of the first application, and/or the second scene of the second application is not a set scene, releasing the first thread and the second thread, and performing the deep learning calculation of the second scene of the second application.
  • The deep learning calculation of the second scene of the second application is similar to the deep learning calculation of the first scene of the first application, and will not be repeated here.
  • When it is determined that the importance level of the second scene of the second application is greater than that of the first scene of the first application, and/or the second scene of the second application is not a set scene,
  • the resources used for the deep learning calculation of the first scene of the first application are released directly, giving the second scene of the second application sufficient resources for its deep learning calculation and avoiding the freezing caused by resource contention.
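The scene-switch behavior of the preceding examples (pause, timed release, resume, or immediate release for a more important scene) can be sketched as a small state machine. The class below is a hypothetical illustration; the description's "and/or" conditions are rendered here as a simple "or".

```python
import threading

class SceneSwitchHandler:
    """Pause the first scene's calculation on a foreground switch,
    release its threads if the scene does not return within the set
    duration, or release immediately for a more important scene."""

    def __init__(self, set_duration_s):
        self.set_duration_s = set_duration_s
        self.state = "running"
        self._timer = None

    def on_switch_away(self, new_level, current_level, new_is_set_scene):
        # Pause when the departing scene outranks the new one, or the
        # new scene is a set scene; otherwise yield the threads now.
        if new_level < current_level or new_is_set_scene:
            self.state = "paused"
            self._timer = threading.Timer(self.set_duration_s, self._release)
            self._timer.start()
        else:
            self.state = "released"

    def _release(self):
        if self.state == "paused":     # first scene never came back
            self.state = "released"

    def on_switch_back(self):
        if self.state == "paused":
            self._timer.cancel()
            self.state = "running"     # resume the calculation

h = SceneSwitchHandler(set_duration_s=30)
h.on_switch_away(new_level=1, current_level=5, new_is_set_scene=True)
print(h.state)  # → paused
h.on_switch_back()
print(h.state)  # → running
```

Using a cancellable timer rather than a blocking wait keeps the handler responsive: a switch back before the set duration simply cancels the pending release.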
  • FIG. 3 is a schematic flowchart of a deep learning calculation method provided by an embodiment of the present application. The method is applied to the above electronic device. The method includes:
  • Step 301: Determine the first scene of the first application running in the foreground.
  • Step 302: Determine a target model required for this deep learning calculation based on the first scene of the first application, where the target model includes multi-layer model data, and the multi-layer model data is arranged in the target model according to a set order.
  • Step 303: Determine whether the size of the target model is greater than or equal to the first threshold, and/or whether the current memory occupancy rate of the electronic device is greater than or equal to the second threshold.
  • If yes, go to step 304; if no, go to step 306.
  • Step 304: Read in the multi-layer model data sequentially on the first thread according to the set order.
  • Step 305: When a layer of model data has been read in, perform calculation on the second thread based on that layer of model data, and repeat this step until all calculations are completed.
  • Step 306: Read in the multi-layer model data on the third thread, and after the multi-layer model data has been read in, perform calculation on the third thread based on the multi-layer model data.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
  • determining a target model required for this deep learning calculation, where the target model includes multiple layers of model data, and the multi-layer model data is arranged in the target model according to a set order;
  • sequentially reading the multi-layer model data on a first thread according to the set order; and, when a layer of model data has been read in, performing calculation on a second thread based on that layer, repeating this step until all calculations are completed.
  • It can be seen that in the embodiments of the present application, the model includes multiple layers of model data; a layer of model data is read on the first thread, and after that layer has been read, calculation is performed on the second thread based on the data read in, until all calculations are completed.
  • Compared with reading the entire model data at one time and then calculating, the present application performs model reading and calculation layer by layer, which avoids the freezing caused by reading the whole model at once.
  • the above program includes instructions specifically for performing the following steps:
  • the first scene of the first application running in the foreground is determined, and the target model required for this deep learning calculation is determined based on the first scene of the first application.
  • when the first application is a setting application, and/or the use priority of the first application is greater than or equal to the set priority, the resources occupied by the first thread and the second thread are resources in the reserved resource pool.
  • In a possible example, before the multi-layer model data is sequentially read in on the first thread according to the set order, the above program includes instructions further used to perform the following steps:
  • the size of the target model is greater than or equal to the first threshold, and / or the current memory occupancy rate of the electronic device is greater than or equal to the second threshold.
  • the above program includes instructions that are also used to perform the following steps:
  • the foregoing program includes instructions that are further used to perform the following steps:
  • the above program includes instructions that are further used to perform the following steps:
  • the importance level of the second scene of the second application is less than the importance level of the first scene of the first application, and / or the second scene of the second application is a setting scene.
  • the electronic device includes a hardware structure and / or a software module corresponding to each function.
  • The present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods for each specific application to implement the described functions, but such implementations should not be considered beyond the scope of this application.
  • the embodiments of the present application may divide the functional unit of the electronic device according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the integrated unit may be implemented in the form of hardware or a software functional unit. It should be noted that the division of the units in the embodiments of the present application is schematic, and is only a division of logical functions. In actual implementation, there may be another division manner.
  • FIG. 5 shows a deep learning computing device provided by an embodiment of the present application, which is applied to an electronic device.
  • the deep learning computing device includes:
  • the model determination unit 501 is used to determine a target model required for this deep learning calculation.
  • the target model includes multi-layer model data, and the multi-layer model data is arranged in the target model according to a set order;
  • the data reading unit 502 is configured to sequentially read in the multi-layer model data on the first thread according to the set order; and
  • the calculation unit 503 is configured to, when a layer of model data has been read in, perform calculation on the second thread based on that layer, and repeat this step until all calculations are completed.
  • It can be seen that in the embodiments of the present application, the model includes multiple layers of model data; a layer of model data is read on the first thread, and after that layer has been read, calculation is performed on the second thread based on the data read in, until all calculations are completed.
  • Compared with reading the entire model data at one time and then calculating, the present application performs model reading and calculation layer by layer, which avoids the freezing caused by reading the whole model at once.
  • the model determining unit 501 is specifically used to:
  • the first scene of the first application running in the foreground is determined, and the target model required for this deep learning calculation is determined based on the first scene of the first application.
  • when the first application is a setting application, and/or the use priority of the first application is greater than or equal to the set priority, the resources occupied by the first thread and the second thread are resources in the reserved resource pool.
  • the device further includes:
  • the first determining unit 504 is configured to determine, before the data reading unit 502 sequentially reads in the multi-layer model data on the first thread according to the set order, that the size of the target model is greater than or equal to the first threshold, and/or that the current memory occupancy rate of the electronic device is greater than or equal to the second threshold.
  • the data reading unit 502 is further configured to read in the multi-layer model data on the third thread when the size of the target model is less than the first threshold, and/or the current memory occupancy rate of the electronic device is less than the second threshold;
  • the calculation unit 503 is further configured to perform calculation based on the multi-layer model data on the third thread after the multi-layer model data is read in.
  • the device further includes:
  • the pause unit 505 is configured to pause the deep learning calculation of the first scene of the first application if it is detected that the scene running in the foreground switches from the first scene of the first application to the second scene of the second application;
  • the release unit 506 is configured to release the first thread and the second thread if the foreground scene is not detected to switch back to the first scene of the first application within a set duration;
  • the execution unit 507 is configured to continue the deep learning calculation of the first scene of the first application if the foreground scene is detected to switch back to the first scene of the first application within the set duration.
  • the device further includes:
  • the second determining unit 508 is configured to determine, before the pause unit 505 pauses the deep learning calculation of the first scene of the first application, that the importance level of the second scene of the second application is less than the importance level of the first scene of the first application, and/or that the second scene of the second application is a set scene.
  • The model determination unit 501, the data reading unit 502, the calculation unit 503, the first determination unit 504, the pause unit 505, the release unit 506, the execution unit 507, and the second determination unit 508 may be implemented by a processor.
  • An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform part or all of the steps of any method described in the foregoing method embodiments;
  • the aforementioned computer includes an electronic device.
  • An embodiment of the present application also provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform part or all of the steps of any method described in the foregoing method embodiments.
  • The computer program product may be a software installation package, and the computer includes an electronic device.
  • the disclosed device may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • The division of the above units is only a division of logical functions;
  • in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory.
  • Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods in the various embodiments of the present application.
  • The aforementioned memory includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disk.
  • The program may be stored in a computer-readable memory, and the memory may include a flash disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Stored Programmes (AREA)

Abstract

Disclosed in the present application are a deep learning computation method and a related device, applied to an electronic device. The method comprises: determining a target model required for the current deep learning computation, the target model comprising multiple layers of model data arranged in the target model in a set order; sequentially reading in the multiple layers of model data on a first thread according to the set order; and, when a layer of model data has been read in, performing computation on a second thread on the basis of that layer of model data, repeating this step until all computation is complete. Employing the embodiments of the present application resolves the stuttering phenomenon.

Description

Deep Learning Calculation Method and Related Device

Technical Field

This application relates to the field of electronic technology, and in particular to a deep learning calculation method and related device.

Background

With the continuous development of electronic technology, electronic devices (such as smartphones and tablet computers) are becoming increasingly intelligent. This intelligence depends on the deep learning calculations performed by the electronic device. To perform a deep learning calculation, the electronic device must read in a model and then calculate based on the model it has read. At present, when the model to be read is large, the electronic device is prone to stuttering.

Summary

Embodiments of the present application provide a deep learning calculation method and related device, which are used to resolve the stuttering phenomenon.

In a first aspect, an embodiment of the present application provides a deep learning calculation method, applied to an electronic device, including:

determining a target model required for the current deep learning calculation, where the target model includes multiple layers of model data arranged in the target model according to a set order;

reading in the multiple layers of model data sequentially on a first thread according to the set order; and

when a layer of model data has been read in, performing a calculation on a second thread based on that layer of model data, and repeating this step until all calculations are completed.

In a second aspect, an embodiment of the present application provides a deep learning calculation apparatus, applied to an electronic device, including:

a model determining unit, configured to determine a target model required for the current deep learning calculation, where the target model includes multiple layers of model data arranged in the target model according to a set order;

a data reading unit, configured to read in the multiple layers of model data sequentially on a first thread according to the set order; and

a calculation unit, configured to, when a layer of model data has been read in, perform a calculation on a second thread based on that layer of model data, repeating this step until all calculations are completed.

In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the method described in the first aspect of the embodiments of the present application.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps described in the method of the first aspect of the embodiments of the present application.

In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the method of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.

It can be seen that, in the embodiments of the present application, the model includes multiple layers of model data: the model data is read in layer by layer on a first thread, and after each layer has been read in, a calculation is performed on a second thread based on that layer, until all calculations are completed. Compared with the prior art, in which the entire model is read in at once before any calculation begins, the present application interleaves model reading and calculation layer by layer, avoiding the stuttering caused by reading in the whole model at once.

Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of a deep learning calculation method according to an embodiment of the present application;

FIG. 3 is a schematic flowchart of another deep learning calculation method disclosed in an embodiment of the present application;

FIG. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;

FIG. 5 is a block diagram of the functional units of a deep learning calculation apparatus disclosed in an embodiment of the present application.

Detailed Description

In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.

Detailed descriptions are given below.

The terms "first", "second", "third", and "fourth" in the specification, claims, and drawings of the present application are used to distinguish different objects, not to describe a specific order. In addition, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.

Reference herein to an "embodiment" means that a specific feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, both explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.

Electronic devices may include various handheld devices with wireless communication capabilities, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.

As shown in FIG. 1, FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor, a memory, a signal processor, a transceiver, a display screen, a speaker, a microphone, a random access memory (RAM), a camera, sensors, and so on. The memory, signal processor, display screen, speaker, microphone, RAM, camera, and sensors are connected to the processor, and the transceiver is connected to the signal processor.

The display screen may be a liquid crystal display (LCD), an organic or inorganic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) panel, or the like.

The camera may be an ordinary camera or an infrared camera, which is not limited herein. The camera may be a front camera or a rear camera, which is not limited herein.

The sensors include at least one of the following: a light sensor, a gyroscope, an infrared proximity sensor, a fingerprint sensor, a pressure sensor, and so on. The light sensor, also known as an ambient light sensor, is used to detect ambient light brightness. The light sensor may include a photosensitive element and an analog-to-digital converter, where the photosensitive element converts the collected optical signal into an electrical signal, and the analog-to-digital converter converts that electrical signal into a digital signal. Optionally, the light sensor may further include a signal amplifier, which amplifies the electrical signal converted by the photosensitive element before outputting it to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photovoltaic cell.

The processor is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the electronic device as a whole.

The processor may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor.

The memory is used to store software programs and/or modules, and the processor executes the various functional applications and data processing of the electronic device by running the software programs and/or modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, software programs required for at least one function, and the like, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.

Please refer to FIG. 2, which is a schematic flowchart of a deep learning calculation method according to an embodiment of the present application, applied to an electronic device. The method includes the following steps.

Step 201: Determine a target model required for the current deep learning calculation, where the target model includes multiple layers of model data arranged in the target model according to a set order.

Here, a model refers to the solution steps for a class of problems, that is, an algorithm for a class of problems. The model data included in a model is the data of the algorithm for that class of problems, and a layer of model data is the data of the algorithm for solving one problem.

The target model may be stored in the electronic device, or in a storage device associated with the electronic device, which is not limited herein.

Step 202: Read in the multiple layers of model data sequentially on a first thread according to the set order.

Step 203: When a layer of model data has been read in, perform a calculation on a second thread based on that layer of model data, and repeat this step until all calculations are completed.

For example, suppose the target model includes three layers of model data (first-layer, second-layer, and third-layer model data), arranged in the order first layer, then second layer, then third layer. The first-layer model data is first read in on the first thread. When the first-layer model data has been read in, the second-layer model data is read in on the first thread while a calculation based on the first-layer model data is performed on the second thread. When the second-layer model data has been read in, the third-layer model data is read in on the first thread while a calculation based on the second-layer model data is performed on the second thread. When the third-layer model data has been read in, a calculation based on the third-layer model data is performed on the second thread.

Some existing deep learning calculation frameworks, such as Xiaomi's MACE and Tencent's NCNN, first read in all of the model data synchronously and only begin calculating after the reading is complete. This approach has a drawback: when the amount of model data is large (for example, some model data files may be several hundred MB), reading in all of the model data at once may take several hundred milliseconds, so in some scenarios, such as an application that depends on the deep learning calculation framework, noticeable stuttering occurs while the model data is being read in.

In the embodiments of the present application, by contrast, the model data is read in layer by layer on the first thread, and after each layer has been read in, a calculation is performed on the second thread based on that layer, until all calculations are completed, avoiding the stuttering caused by reading in the whole model at once.
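The layer-by-layer pipeline described above can be sketched as a simple producer-consumer pair. This is an illustrative sketch only, not the patent's implementation: `load_layer` and `compute_layer` are hypothetical placeholders for reading one layer of model data and computing on it, and a one-slot queue stands in for the hand-off between the first (reading) thread and the second (computing) thread.

```python
import queue
import threading

def run_pipelined(model_layers, load_layer, compute_layer):
    """Read model layers on a loader ("first") thread while the caller
    (the "second" thread) computes on each layer as soon as it is ready."""
    loaded = queue.Queue(maxsize=1)   # hand over one layer at a time
    sentinel = object()

    def loader():
        for layer in model_layers:    # follows the model's set order
            loaded.put(load_layer(layer))
        loaded.put(sentinel)          # signal: all layers have been read in

    reader = threading.Thread(target=loader, name="first-thread")
    reader.start()

    results = []
    while True:
        data = loaded.get()           # wait for the next finished layer
        if data is sentinel:
            break
        results.append(compute_layer(data))
    reader.join()
    return results
```

With this structure, the reading of layer n+1 overlaps the computation on layer n, so the application never blocks for the whole model at once.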

In an implementation of the present application, determining the target model required for the current deep learning calculation includes: determining a first scene of a first application running in the foreground, and determining the target model required for the current deep learning calculation based on the first scene of the first application.

Different scenes use different models. For example, in a photo preview scene the model is a preview model, in a photographing scene the model is a photographing model, and so on.

In an implementation of the present application, when the first application satisfies a first condition, the resources occupied by the first thread and the second thread are resources from a reserved resource pool. The first condition includes at least one of the following: the first application is a designated application; the usage priority of the first application is greater than or equal to a set priority; or the number of times the first application has been used within a first set period is greater than or equal to a set number of times.

Here, a designated application refers to an application set by the user in advance, for example, one or more game applications, one or more video applications, one or more shopping applications, one or more instant messaging applications, one or more news applications, one or more office applications, and so on.

The first set period is, for example, 1 hour, 5 hours, 1 day, 1 week, 1 month, or another value. The set number of times is, for example, 10, 15, 21, 30, or another value.

The usage priority of an application may be set by the user in advance or determined by the electronic device, which is not limited herein. The electronic device may determine the usage priority of the first application as follows: the electronic device determines a target number of times the first application has been used within a second set period, and determines the usage priority corresponding to that target number according to a mapping between use counts and usage priorities; or the electronic device determines a target number of times the first application has been downloaded, and determines the usage priority corresponding to that target number according to a mapping between download counts and usage priorities. The second set period may be the same as or different from the first set period, which is not limited herein.

It can be seen that, when the first application is a relatively important application, resources from the reserved resource pool are used for the deep learning calculation, which prevents other applications from competing with the first application for resources and further avoids stuttering.
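As a minimal sketch of the first condition described above, where satisfying any one of the three clauses is enough for the threads to draw on the reserved resource pool. The app record with `name`, `priority`, and `recent_uses` fields is a hypothetical stand-in; none of these names appear in the source.

```python
def use_reserved_pool(app, designated_apps, set_priority, set_uses):
    """First condition: the app is a designated app, its usage priority is
    at least the set priority, or its use count within the first set period
    is at least the set number of times (any one clause suffices)."""
    return (app["name"] in designated_apps
            or app["priority"] >= set_priority
            or app["recent_uses"] >= set_uses)
```

A designated camera app would qualify even with a low priority and few recent uses, since the first clause alone is enough.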

In an implementation of the present application, before the multiple layers of model data are read in sequentially on the first thread according to the set order, the method further includes:

determining that the size of the target model is greater than or equal to a first threshold, and/or that the current memory occupancy rate of the electronic device is greater than or equal to a second threshold.

The first threshold is, for example, 50 MB, 60 MB, 100 MB, 150 MB, or another value. The second threshold is, for example, 60%, 70%, 75%, 78%, 80%, or another value.

In an implementation of the present application, the method further includes:

when the size of the target model is less than the first threshold, and/or the current memory occupancy rate of the electronic device is less than the second threshold, reading in the multiple layers of model data on a third thread, and after the multiple layers of model data have been read in, performing the calculation on the third thread based on the multiple layers of model data.

The resources occupied by the third thread may or may not be resources from the reserved resource pool, which is not limited herein.

It can be seen that model reading and calculation are interleaved layer by layer only when the target model is large and/or the current memory occupancy rate of the electronic device is high. When the target model is small and/or the current memory occupancy rate is low, the model can simply be read in first and then calculated. This dynamically adjusts the way the deep learning calculation is performed and improves the intelligence of the electronic device.
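The threshold-based choice between the pipelined path (steps 202 and 203) and the single-thread path can be sketched as follows. The 100 MB and 75% values are merely examples drawn from the ranges mentioned above, and the function and constant names are hypothetical.

```python
FIRST_THRESHOLD_MB = 100    # example first threshold (model size)
SECOND_THRESHOLD = 0.75     # example second threshold (memory occupancy)

def choose_strategy(model_size_mb, memory_occupancy):
    """Return 'pipelined' (layer-by-layer read and compute on two threads)
    when the model is large or memory pressure is high; otherwise 'eager'
    (read the whole model, then compute, on a single third thread)."""
    if (model_size_mb >= FIRST_THRESHOLD_MB
            or memory_occupancy >= SECOND_THRESHOLD):
        return "pipelined"
    return "eager"
```

Either condition alone is enough to select the pipelined path, matching the "and/or" wording of the determination step.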

In an implementation of the present application, during the deep learning calculation for the first scene of the first application, the method further includes:

if it is detected that the scene running in the foreground has switched from the first scene of the first application to a second scene of a second application, pausing the deep learning calculation for the first scene of the first application and performing a deep learning calculation for the second scene of the second application;

if it is not detected within a set duration that the scene running in the foreground has switched back to the first scene of the first application, releasing the first thread and the second thread;

if it is detected within the set duration that the scene running in the foreground has switched back to the first scene of the first application, continuing the deep learning calculation for the first scene of the first application.

The set duration is, for example, 6 s, 10 s, 30 s, 1 min, or another value. Different scenes may correspond to different set durations; for example, the photo preview scene corresponds to duration 1, the photographing scene corresponds to duration 2, and the in-game battle scene corresponds to duration 3, where durations 1, 2, and 3 differ from one another. Of course, the set durations corresponding to different scenes may also be the same.

It can be seen that, during the deep learning calculation, if the scene changes, the current deep learning calculation is first paused. If the scene has not switched back within the set duration, the first thread and the second thread are released directly, ending the current deep learning calculation and preventing occupied resources from affecting the running speed of other applications. If the scene is switched back within the set duration, the deep learning calculation continues, which avoids restarting the deep learning calculation from the beginning and improves the intelligence of the electronic device.
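The pause-then-release policy above can be sketched with an event flag and a timer modeling the set duration. This is an illustrative sketch, not the patent's implementation: the hook names `on_scene_left` and `on_scene_returned` are hypothetical.

```python
import threading

class SceneComputation:
    """Pause on a scene switch; release the threads if the scene does not
    return within the set duration, and resume if it does."""

    def __init__(self, set_duration_s):
        self.set_duration_s = set_duration_s
        self.running = threading.Event()  # computing threads wait on this
        self.running.set()
        self.released = False
        self._timer = None

    def on_scene_left(self):
        self.running.clear()              # pause the calculation
        self._timer = threading.Timer(self.set_duration_s, self._release)
        self._timer.start()

    def on_scene_returned(self):
        if self._timer is not None:
            self._timer.cancel()
        if not self.released:
            self.running.set()            # continue the calculation

    def _release(self):
        self.released = True              # free the first and second threads
```

In a real system, the first and second threads would check `running` between layers and exit once `released` is set.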

在本申请的一实现方式中,所述暂停所述第一应用的第一场景的深度学习计算之前,所述方法还包括:In an implementation manner of the present application, before the suspending the deep learning calculation of the first scene of the first application, the method further includes:

确定所述第二应用的第二场景的重要等级小于所述第一应用的第一场景的重要等级,和/或所述第二应用的第二场景为设定场景。It is determined that the importance level of the second scene of the second application is less than the importance level of the first scene of the first application, and / or the second scene of the second application is a setting scene.

其中,设定场景例如有系统桌面场景、系统设置场景、短信查看场景等等。Among them, the setting scene includes, for example, a system desktop scene, a system setting scene, an SMS viewing scene, and so on.

其中,场景的重要等级可以是用户事先自定义的,也可以是电子设备确定的(如电子设备确定在第三设定时段内场景的目标出现次数,以及根据出现次数与重要等级的映射关系确定所述目标出现次数对应的重要等级),在此不作限定。Among them, the importance level of the scene can be customized by the user in advance, or can be determined by the electronic device (for example, the electronic device determines the target number of occurrences of the scene in the third set period, and is determined according to the mapping relationship between the number of occurrences and the important level The importance level corresponding to the number of occurrences of the target) is not limited herein.

It can be seen that when the importance level of the second scene of the second application is determined to be lower than that of the first scene of the first application, and/or the second scene of the second application is a set scene, the first scene of the first application is treated as more important than the second scene of the second application, and the running experience of the first scene of the first application is guaranteed first, improving the experience of important scenes.

In an implementation of the present application, the method further includes:

when the importance level of the second scene of the second application is greater than that of the first scene of the first application, and/or the second scene of the second application is not a set scene, releasing the first thread and the second thread, and performing the deep learning computation of the second scene of the second application.

It should be noted that the deep learning computation of the second scene of the second application is performed in the same way as that of the first scene of the first application, and is not described again here.

It can be seen that when the importance level of the second scene of the second application is determined to be greater than that of the first scene of the first application, and/or the second scene of the second application is not a set scene, the deep learning computation of the first scene of the first application is released directly, so that the second scene of the second application has sufficient resources for its own deep learning computation, avoiding the stuttering that occurs when too many resources are occupied.
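One possible reading of the scene-switch policy above, expressed as a pure decision function. Note that the "and/or" conditions in the text admit several interpretations; this sketch treats a lower importance level or a set scene as grounds to pause, and anything else as grounds to release, which is an assumption rather than the patent's exact wording:

```python
def on_foreground_switch(first_level: int, second_level: int,
                         second_is_set_scene: bool) -> str:
    """Decide the fate of the running computation when the foreground switches
    from the first application's scene to the second application's scene."""
    if second_level < first_level or second_is_set_scene:
        return "pause"    # first scene stays more important: keep both threads, wait
    return "release"      # second scene outranks it: free both threads immediately

print(on_foreground_switch(first_level=2, second_level=1,
                           second_is_set_scene=False))  # → pause
```

After a "pause", the text above adds a timeout: if the foreground does not switch back to the first scene within the set duration, the threads are released anyway; otherwise the computation resumes.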

Consistent with the embodiment shown in FIG. 2, please refer to FIG. 3, which is a schematic flowchart of a deep learning computation method provided by an embodiment of the present application. The method is applied to the above electronic device and includes:

Step 301: Determine the first scene of the first application running in the foreground.

Step 302: Determine, based on the first scene of the first application, the target model required for this deep learning computation, where the target model includes multiple layers of model data arranged in the target model in a set order.

Step 303: Determine whether the size of the target model is greater than or equal to a first threshold, and/or whether the current memory occupancy rate of the electronic device is greater than or equal to a second threshold.

If yes, go to step 304.

If no, go to step 306.

Step 304: Read in the multiple layers of model data in sequence on a first thread according to the set order.

Step 305: Each time a layer of model data finishes being read in, perform computation on a second thread based on that layer; repeat this step until all computations are completed.

Step 306: Read in the multiple layers of model data on a third thread, and after all of the multiple layers of model data have been read in, perform computation on the third thread based on the multiple layers of model data.
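A minimal sketch of steps 303 to 306 follows. It is illustrative only: the patent does not specify an implementation, and the threshold values, function names, and stand-in layer-compute function are assumptions. When the model is large or memory is tight, one thread streams the model in layer by layer while the consumer computes on each layer as soon as it arrives; otherwise the whole model is read first and then computed:

```python
import queue
import threading

SIZE_THRESHOLD_MB = 100   # "first threshold" (illustrative value)
MEMORY_THRESHOLD = 0.8    # "second threshold" (illustrative value)

def run_inference(layers, model_size_mb, memory_usage, compute_layer):
    # Step 303: large model or high memory occupancy -> layer-wise
    # pipeline (steps 304-305); otherwise read everything, then compute (306).
    if model_size_mb >= SIZE_THRESHOLD_MB or memory_usage >= MEMORY_THRESHOLD:
        return _pipelined(layers, compute_layer)
    loaded = list(layers)                    # "third thread": read all layers at once
    return [compute_layer(layer) for layer in loaded]

def _pipelined(layers, compute_layer):
    q = queue.Queue(maxsize=1)               # buffer at most one layer at a time
    results = []

    def reader():                            # "first thread": read in the set order
        for layer in layers:
            q.put(layer)
        q.put(None)                          # end-of-model marker

    t = threading.Thread(target=reader)
    t.start()
    while True:                              # "second thread" (here: the caller)
        layer = q.get()
        if layer is None:
            break
        results.append(compute_layer(layer)) # compute as soon as a layer is read in
    t.join()
    return results

print(run_inference([1, 2, 3], model_size_mb=500, memory_usage=0.5,
                    compute_layer=lambda x: x * 2))  # → [2, 4, 6]
```

The `maxsize=1` queue is the design point: reading never runs more than one layer ahead of computation, so peak memory stays close to one layer's worth of model data instead of the whole model.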

It should be noted that for the specific implementation of this embodiment, reference may be made to the specific implementation described in the method embodiments above; details are not repeated here.

Consistent with the embodiments shown in FIG. 2 and FIG. 3, please refer to FIG. 4, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in the figure, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:

determining the target model required for this deep learning computation, where the target model includes multiple layers of model data arranged in the target model in a set order;

reading in the multiple layers of model data in sequence on a first thread according to the set order;

each time a layer of model data finishes being read in, performing computation on a second thread based on that layer, repeating this step until all computations are completed.

It can be seen that in the embodiments of the present application, the model includes multiple layers of model data: the model data is read in layer by layer on the first thread, and after each layer is read in, computation is performed on the second thread based on the layer just read, until all computations are completed. Compared with the prior art, in which the entire model is read in at once before computing, the present application performs model reading and computation layer by layer, avoiding the stuttering caused by reading in the whole model at once.

In an implementation of the present application, in determining the target model required for this deep learning computation, the above programs include instructions specifically for performing the following step:

determining the first scene of the first application running in the foreground, and determining, based on the first scene of the first application, the target model required for this deep learning computation.

In an implementation of the present application, when the first application is a set application and/or the use priority of the first application is greater than or equal to a set priority, the resources occupied by the first thread and the second thread are resources in a reserved resource pool.

In an implementation of the present application, before the multiple layers of model data are read in in sequence on the first thread according to the set order, the above programs further include instructions for performing the following step:

determining that the size of the target model is greater than or equal to a first threshold, and/or that the current memory occupancy rate of the electronic device is greater than or equal to a second threshold.

In an implementation of the present application, the above programs further include instructions for performing the following steps:

when the size of the target model is less than the first threshold and/or the current memory occupancy rate of the electronic device is less than the second threshold, reading in the multiple layers of model data on a third thread, and after all of the multiple layers of model data have been read in, performing computation on the third thread based on the multiple layers of model data.

In an implementation of the present application, during the deep learning computation of the first scene of the first application, the above programs further include instructions for performing the following steps:

if it is detected that the scene running in the foreground switches from the first scene of the first application to a second scene of a second application, suspending the deep learning computation of the first scene of the first application;

if it is not detected within a set duration that the scene running in the foreground switches back to the first scene of the first application, releasing the first thread and the second thread;

if it is detected within the set duration that the scene running in the foreground switches back to the first scene of the first application, continuing the deep learning computation of the first scene of the first application.

In an implementation of the present application, before the deep learning computation of the first scene of the first application is suspended, the above programs further include instructions for performing the following step:

determining that the importance level of the second scene of the second application is less than the importance level of the first scene of the first application, and/or that the second scene of the second application is a set scene.

It should be noted that for the specific implementation of this embodiment, reference may be made to the specific implementation described in the method embodiments above; details are not repeated here.

The above embodiments mainly introduce the solutions of the embodiments of the present application from the perspective of the execution process on the method side. It can be understood that, to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that, in combination with the exemplary units and algorithm steps described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.

The embodiments of the present application may divide the electronic device into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is merely a division of logical functions; in actual implementation, there may be other division manners.

The following is an apparatus embodiment of the present application, which is used to execute the method implemented by the method embodiments of the present application. Please refer to FIG. 5, which shows a deep learning computation apparatus provided by an embodiment of the present application, applied to an electronic device. The deep learning computation apparatus includes:

a model determination unit 501, configured to determine the target model required for this deep learning computation, where the target model includes multiple layers of model data arranged in the target model in a set order;

a data read-in unit 502, configured to read in the multiple layers of model data in sequence on a first thread according to the set order;

a computation unit 503, configured to, each time a layer of model data finishes being read in, perform computation on a second thread based on that layer, repeating this step until all computations are completed.

It can be seen that in the embodiments of the present application, the model includes multiple layers of model data: the model data is read in layer by layer on the first thread, and after each layer is read in, computation is performed on the second thread based on the layer just read, until all computations are completed. Compared with the prior art, in which the entire model is read in at once before computing, the present application performs model reading and computation layer by layer, avoiding the stuttering caused by reading in the whole model at once.

In an implementation of the present application, in determining the target model required for this deep learning computation, the model determination unit 501 is specifically configured to:

determine the first scene of the first application running in the foreground, and determine, based on the first scene of the first application, the target model required for this deep learning computation.

In an implementation of the present application, when the first application is a set application and/or the use priority of the first application is greater than or equal to a set priority, the resources occupied by the first thread and the second thread are resources in a reserved resource pool.

In an implementation of the present application, the apparatus further includes:

a first determination unit 504, configured to determine, before the data read-in unit 502 reads in the multiple layers of model data in sequence on the first thread according to the set order, that the size of the target model is greater than or equal to a first threshold, and/or that the current memory occupancy rate of the electronic device is greater than or equal to a second threshold.

In an implementation of the present application, the data read-in unit 502 is further configured to read in the multiple layers of model data on a third thread when the size of the target model is less than the first threshold and/or the current memory occupancy rate of the electronic device is less than the second threshold;

the computation unit 503 is further configured to perform computation on the third thread based on the multiple layers of model data after all of the multiple layers of model data have been read in.

In an implementation of the present application, the apparatus further includes:

a suspension unit 505, configured to suspend the deep learning computation of the first scene of the first application if it is detected that the scene running in the foreground switches from the first scene of the first application to a second scene of a second application;

a release unit 506, configured to release the first thread and the second thread if it is not detected within a set duration that the scene running in the foreground switches back to the first scene of the first application;

an execution unit 507, configured to continue the deep learning computation of the first scene of the first application if it is detected within the set duration that the scene running in the foreground switches back to the first scene of the first application.

In an implementation of the present application, the apparatus further includes:

a second determination unit 508, configured to determine, before the suspension unit 505 suspends the deep learning computation of the first scene of the first application, that the importance level of the second scene of the second application is less than the importance level of the first scene of the first application, and/or that the second scene of the second application is a set scene.

It should be noted that the model determination unit 501, the data read-in unit 502, the computation unit 503, the first determination unit 504, the suspension unit 505, the release unit 506, the execution unit 507, and the second determination unit 508 may be implemented by a processor.

An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any method described in the above method embodiments; the computer includes an electronic device.

An embodiment of the present application further provides a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the computer includes an electronic device.

It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application, certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.

In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely schematic: the division of the above units is merely a division of logical functions, and there may be other division manners in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.

The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the above methods in the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable memory, and the memory may include: a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

The embodiments of the present application have been described in detail above, and specific examples are used herein to explain the principles and implementations of the present application. The descriptions of the above embodiments are only intended to help understand the method and core idea of the present application. Meanwhile, a person of ordinary skill in the art may, according to the ideas of the present application, make changes to the specific implementations and application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A deep learning computation method, applied to an electronic device, comprising: determining the target model required for this deep learning computation, the target model comprising multiple layers of model data arranged in the target model in a set order; reading in the multiple layers of model data in sequence on a first thread according to the set order; and each time a layer of model data finishes being read in, performing computation on a second thread based on that layer, repeating this step until all computations are completed. 2. The method according to claim 1, wherein the determining the target model required for this deep learning computation comprises: determining a first scene of a first application running in the foreground, and determining, based on the first scene of the first application, the target model required for this deep learning computation. 3. The method according to claim 2, wherein when the first application is a set application and/or a use priority of the first application is greater than or equal to a set priority, resources occupied by the first thread and the second thread are resources in a reserved resource pool.
4. The method according to any one of claims 1 to 3, wherein before the reading in the multiple layers of model data in sequence on the first thread according to the set order, the method further comprises: determining that a size of the target model is greater than or equal to a first threshold, and/or that a current memory occupancy rate of the electronic device is greater than or equal to a second threshold. 5. The method according to claim 4, further comprising: when the size of the target model is less than the first threshold and/or the current memory occupancy rate of the electronic device is less than the second threshold, reading in the multiple layers of model data on a third thread, and after all of the multiple layers of model data have been read in, performing computation on the third thread based on the multiple layers of model data.
6. The method according to any one of claims 2 to 5, wherein during the deep learning computation of the first scene of the first application, the method further comprises: if it is detected that the scene running in the foreground switches from the first scene of the first application to a second scene of a second application, suspending the deep learning computation of the first scene of the first application; if it is not detected within a set duration that the scene running in the foreground switches back to the first scene of the first application, releasing the first thread and the second thread; and if it is detected within the set duration that the scene running in the foreground switches back to the first scene of the first application, continuing the deep learning computation of the first scene of the first application. 7. The method according to claim 6, wherein before the suspending the deep learning computation of the first scene of the first application, the method further comprises: determining that an importance level of the second scene of the second application is less than an importance level of the first scene of the first application, and/or that the second scene of the second application is a set scene.
8. A deep learning computation apparatus, applied to an electronic device, comprising: a model determination unit, configured to determine the target model required for this deep learning computation, the target model comprising multiple layers of model data arranged in the target model in a set order; a data read-in unit, configured to read in the multiple layers of model data in sequence on a first thread according to the set order; and a computation unit, configured to, each time a layer of model data finishes being read in, perform computation on a second thread based on that layer, repeating this step until all computations are completed. 9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method according to any one of claims 1 to 7. 10. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to any one of claims 1 to 7.
PCT/CN2018/113991 2018-11-05 2018-11-05 Deep learning computation method and related device Ceased WO2020093205A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880097754.XA CN112714917B (en) 2018-11-05 2018-11-05 Deep learning calculation method and related equipment
PCT/CN2018/113991 WO2020093205A1 (en) 2018-11-05 2018-11-05 Deep learning computation method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/113991 WO2020093205A1 (en) 2018-11-05 2018-11-05 Deep learning computation method and related device

Publications (1)

Publication Number Publication Date
WO2020093205A1 true WO2020093205A1 (en) 2020-05-14

Family

ID=70612359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113991 Ceased WO2020093205A1 (en) 2018-11-05 2018-11-05 Deep learning computation method and related device

Country Status (2)

Country Link
CN (1) CN112714917B (en)
WO (1) WO2020093205A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510749B2 (en) * 2010-05-27 2013-08-13 International Business Machines Corporation Framework for scheduling multicore processors
CN106484324A (en) * 2016-09-13 2017-03-08 郑州云海信息技术有限公司 Method, system and RAID that a kind of RAID rebuilds
CN107563512A (en) * 2017-08-24 2018-01-09 腾讯科技(上海)有限公司 A kind of data processing method, device and storage medium
CN108520300A (en) * 2018-04-09 2018-09-11 郑州云海信息技术有限公司 A method and device for implementing a deep learning network

Also Published As

Publication number Publication date
CN112714917A (en) 2021-04-27
CN112714917B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
US11997382B2 (en) Method for providing different indicator for image based on shooting mode and electronic device thereof
CN111782102B (en) Window display method and related device
US10536637B2 (en) Method for controlling camera system, electronic device, and storage medium
KR102524498B1 (en) The Electronic Device including the Dual Camera and Method for controlling the Dual Camera
CN105981365B (en) Electronic device photographing method and electronic device thereof
EP3287866A1 (en) Electronic device and method of providing image acquired by image sensor to application
CN106210330B (en) Image processing method and terminal
US10609276B2 (en) Electronic device and method for controlling operation of camera-related application based on memory status of the electronic device thereof
CN107409180A (en) Electronic device with camera module and image processing method for electronic device
KR102664060B1 (en) Method for controlling a plurality of cameras and Electronic device
EP3352449B1 (en) Electronic device and photographing method
CN106534669A (en) Shooting composition method and mobile terminal
KR102358002B1 (en) Contents transmission controlling method and electronic device supporting the same
KR20170098119A (en) Electronic device and method for controlling brightness of display thereof
CN108353152A (en) Image processing device and method of operation thereof
CN104133711A (en) Camera safe switching method based on Android system
KR20170046915A (en) Apparatus and method for controlling camera thereof
KR20160099435A (en) Method for controlling camera system, electronic apparatus and storage medium
KR20170098093A (en) Electronic apparatus and operating method thereof
EP3054709B1 (en) Electronic apparatus and short-range communication method thereof
US20170140733A1 (en) Electronic device and method for displaying content thereof
CN104994284B (en) Control method for a wide-angle camera and electronic terminal
KR102255361B1 (en) Method and electronic device for processing intent
KR20160103444A (en) Method for image processing and electronic device supporting thereof
WO2020093205A1 (en) Deep learning computation method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18939630

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18939630

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.09.2021)