
WO2024041481A1 - Method, apparatus, and system for executing instruction, and server - Google Patents

Method, apparatus, and system for executing instruction, and server

Info

Publication number
WO2024041481A1
WO2024041481A1 (PCT/CN2023/114014)
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
queue
target
address
mapping relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/114014
Other languages
French (fr)
Chinese (zh)
Inventor
戴书舟
廖志佳
余峰
鄢林
程欣
刘强军
王俊
郭成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Publication of WO2024041481A1 publication Critical patent/WO2024041481A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0662 Virtualisation aspects
    • G06F 3/0664 Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/54 Indexing scheme relating to G06F9/54
    • G06F 2209/548 Queue

Definitions

  • the present disclosure belongs to the field of storage technology, and specifically relates to a method, device, server and system for executing instructions.
  • NVME: Non Volatile Memory Host Controller Interface
  • Each NVME device is bound to the corresponding NVME controller, and each NVME device and its NVME controller perform storage interactions through multiple queues. Multiple NVME devices will correspond to a large number of NVME queues.
  • SQ: Submission Queue
  • CQ: Completion Queue
  • An SQ and its corresponding CQ can be called a QP (Queue Pair).
  • embodiments of the present disclosure provide a method for executing instructions, and the method is applied to a virtual device.
  • the method includes: obtaining an instruction acquisition request in a target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction; based on the instruction acquisition request, determining an instruction acquisition address corresponding to the target queue according to a preset mapping relationship, where the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address; obtaining the target instruction according to the instruction acquisition address; and performing the operation corresponding to the target instruction according to the target instruction.
  • embodiments of the present disclosure provide a method for executing instructions, and the method is applied to a server.
  • the method includes: sending an instruction acquisition request to a target queue of the virtual device, where the instruction acquisition request is used to indicate acquisition of the target instruction, so that the virtual device acquires the target instruction based on a preset mapping relationship, and the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address.
  • embodiments of the present disclosure provide a device for executing instructions.
  • the device includes: a request acquisition unit, configured to acquire an instruction acquisition request in a target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction; an address determination unit, configured to determine, based on the instruction acquisition request and according to a preset mapping relationship, the instruction acquisition address corresponding to the target queue, where the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address; an instruction acquisition unit, configured to acquire the target instruction according to the instruction acquisition address; and an instruction execution unit, configured to execute the operation corresponding to the target instruction according to the target instruction.
  • embodiments of the present disclosure provide a server.
  • the server includes: a request sending unit, configured to send an instruction acquisition request to a target queue of the virtual device, where the instruction acquisition request is used to indicate acquisition of the target instruction, so that the virtual device acquires the target instruction based on a preset mapping relationship.
  • the mapping relationship indicates the corresponding relationship between the queue information where the target queue is located and the instruction acquisition address.
  • embodiments of the present disclosure provide a system for executing instructions.
  • the system includes the above-mentioned server and at least one virtual device, wherein the at least one virtual device includes the above-mentioned device for executing instructions.
  • an embodiment of the present disclosure provides an electronic device.
  • the electronic device includes a processor and a memory.
  • the memory stores programs or instructions that can be run on the processor.
  • when the programs or instructions are executed by the processor, the steps of the method described in the first aspect or the second aspect are performed.
  • embodiments of the present disclosure provide a readable storage medium. Programs or instructions are stored on the readable storage medium, and when the programs or instructions are run by a processor, the steps of implementing the method described in the first aspect or the second aspect are executed.
  • Figure 1 is a schematic flowchart of a method for executing instructions according to some embodiments
  • Figure 2 is yet another flowchart of a method for executing instructions according to some embodiments
  • Figure 3 is another schematic flowchart of a method for executing instructions according to some embodiments.
  • Figure 4 is another schematic flowchart of a method for executing instructions according to some embodiments.
  • Figure 5 is a structural block diagram of an execution instruction system according to some embodiments.
  • Figure 6 is a structural block diagram of a device for executing instructions according to some embodiments.
  • Figure 7 is a structural block diagram of a server according to some embodiments.
  • Figure 8 is a working principle diagram of an execution instruction system according to some embodiments.
  • Figure 9 is a working principle diagram of a user storage control system according to some embodiments.
  • Figure 10 is a schematic structural diagram of an electronic device according to some embodiments.
  • NVME devices have multi-queue characteristics, that is, multiple channels can be constructed between the host and the NVME device for command transmission and data interaction, and the carriers corresponding to these channels are queues.
  • the existence of multiple NVME device queues allows the host to use multiple cores or threads to submit commands and process command completion results in parallel.
  • NVME devices may have different queue requirements based on their own services, and with the development of cloud storage services, users have an increasing demand for the number of NVME devices. Each NVME device requires certain queue resources to achieve its business and functions. How to effectively manage multiple NVME devices under limited hardware conditions to meet various storage business requirements is an urgent problem that needs to be solved.
  • embodiments of the present disclosure provide an instruction execution solution that, by setting a mapping relationship between virtual device queues and queue storage resources, realizes dynamic management of virtual device queue storage resources, so that multiple virtual devices can be effectively managed under limited storage resources and various storage business needs can be better met.
  • the instruction execution scheme of the embodiments of the present disclosure can be applied to virtual devices in the storage field.
  • the virtual device is described in detail by taking an NVME device as an example.
  • Figure 1 is a schematic flowchart of a method for executing instructions according to some embodiments. As shown in Figure 1, the method may include the following S110 to S140.
  • the instruction acquisition request is used to indicate acquisition of the target instruction.
  • the queue implements instruction transmission between the NVME device and the server (corresponding to the above-mentioned host), and the queue may be called a QP queue.
  • the instruction here can be a storage service instruction, and the instruction acquisition request is used to instruct to obtain the target instruction from the server.
  • At least one queue is constructed according to business requirements. Generally speaking, multiple queues need to be built to achieve various storage business requirements. Subsequently, the storage space (ie, hardware resource, or storage resource) can be divided into the same number of sub-storage spaces as the constructed queues, and each sub-storage space is allocated to the corresponding queue respectively.
  • the storage space here can be the storage space corresponding to the server.
  • the storage space can be the storage space of the server or the plug-in storage space corresponding to the server.
  • This storage space can store target instructions and can also store other information.
  • the sub-storage space here is at least used to store the instruction acquisition address and the instruction result storage address.
  • the instruction acquisition address refers to the address where the instruction is stored, which can be called the SQ base address.
  • the instruction result storage address refers to the address where the instruction result is stored, which can be called the CQ base address.
  • For example, if 128 queues are constructed according to business requirements, the storage space needs to be divided into 128 sub-storage spaces, and each sub-storage space is allocated to one queue. In this way, each queue can transmit data to and from the server through its sub-storage space.
  • For convenience of description, the storage space used for queues is collectively called storage resources, the sub-storage spaces are collectively called sub-resources, and the servers are collectively called hosts.
  • the storage resources can be divided into multiple sub-resources in an equal manner, or the sub-resources can be divided according to actual conditions, and the present disclosure does not place a limit on this.
  • the embodiments of the present disclosure are described in detail by taking equal division of the storage resources as an example.
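  • To illustrate the equal-division approach described above, the following C sketch (types, names and sizes are assumptions for illustration, not taken from the disclosure) splits one contiguous storage resource into as many equally sized sub-resources as there are queues and assigns each queue its own slice; sizing the pool from the required queue count (for example, 1024 IO QPs) follows the same arithmetic.

      #include <stddef.h>
      #include <stdint.h>

      /* Hypothetical descriptor for one queue's sub-resource (one slice of the pool). */
      struct sub_resource {
          uint8_t *base;   /* start of this queue's slice   */
          size_t   size;   /* bytes available to this queue */
      };

      /* Split pool_size bytes starting at pool into queue_count equal sub-resources,
       * one per queue.  Returns 0 on success, -1 if the pool is too small. */
      static int divide_storage_equally(uint8_t *pool, size_t pool_size,
                                        struct sub_resource *subs, size_t queue_count)
      {
          if (queue_count == 0 || pool_size / queue_count == 0)
              return -1;

          size_t slice = pool_size / queue_count;   /* equal share for every queue */
          for (size_t q = 0; q < queue_count; q++) {
              subs[q].base = pool + q * slice;
              subs[q].size = slice;
          }
          return 0;
      }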
  • the mapping relationship can be set in the following manner: after the sub-resource to which a queue belongs is allocated to each queue, the correspondence between the queue information and the information of the sub-resource to which the queue belongs can be obtained, and the mapping relationship is set according to the obtained correspondence.
  • queue information and sub-resources can be numbered respectively, and the corresponding relationship can be represented by numbering.
  • queue 3 corresponds to resource number 32, which means that among multiple queues of multiple virtual devices, resource number 32 is the exclusive resource of queue 3.
  • the sub-resource corresponding to queue 3 can implement instruction transfer between queue 3 and the host.
  • the mapping relationship may also indicate the corresponding relationship between queue information (for example, queue number), virtual device to which the queue belongs, and sub-resource information (for example, resource number) to which the queue belongs.
  • queue information, virtual devices, and resources can be numbered respectively, and the corresponding relationships are expressed through numbering.
  • queue 3 of virtual device 5 corresponds to resource number 32, indicating that among the multiple virtual devices, resource number 32 is the exclusive storage resource of queue 3 of virtual device 5.
  • the storage resource corresponding to number 32 can be used to implement instruction transfer between queue 3 of virtual device 5 and the host.
  • the execution result of the target instruction can be sent to the storage space corresponding to the instruction result storage address in the sub-storage space.
  • the instruction result storage address in the sub-storage space indicates the detailed storage location used to store the instruction result.
  • the embodiment of the present disclosure determines the instruction acquisition address corresponding to the target queue according to the mapping relationship, and accordingly obtains the target instruction according to the instruction acquisition address and executes the target instruction.
  • queues and their associated resources can be effectively managed, virtual devices can be effectively managed under limited storage resources, storage service instructions can be better executed, and various storage service needs can be met.
  • In response to a queue being deleted, the queue's sub-resources are released. In this way, the released storage resources can still be used for new queues built subsequently, further achieving effective management of limited storage resources and improving resource utilization.
  • embodiments of the present disclosure also provide a method for executing instructions, which method is applied to a server (corresponding to the above-mentioned host).
  • Figure 2 is a flow chart of a method of executing instructions applied to a server. As shown in Figure 2, the method includes S210.
  • S210 Send an instruction acquisition request to the target queue of the virtual device.
  • the instruction acquisition request is used to instruct the acquisition of the target instruction, so that the virtual device obtains the target instruction based on a preset mapping relationship.
  • the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address.
  • the above mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address.
  • the method also includes: receiving the execution result of the target instruction from the above-mentioned target queue; and based on the mapping relationship, sending the execution result of the target instruction to the storage space corresponding to the instruction result storage address. That is, the instruction result storage address in the sub-storage space indicates the detailed storage location used to store the instruction result.
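  • To make the server-side method above more concrete, the sketch below shows one plausible host-side sequence (the structure, field names and the assumption of a memory-mapped doorbell register are illustrative, not taken from the disclosure): the host writes a command into the SQ in its memory and then rings the SQ doorbell of the virtual device, which corresponds to the instruction acquisition request.

      #include <stdint.h>
      #include <string.h>

      #define SQ_ENTRY_SIZE 64u                  /* standard NVME submission entry size */

      /* Hypothetical host-side view of one submission queue. */
      struct host_sq {
          uint8_t           *base;               /* SQ base address in host memory   */
          uint32_t           depth;              /* number of entries in the ring    */
          uint32_t           tail;               /* next free slot                   */
          volatile uint32_t *doorbell;           /* mapped SQ tail doorbell register */
      };

      /* Place a 64-byte command in the SQ and notify the (virtual) device. */
      static void host_submit_command(struct host_sq *sq, const void *cmd)
      {
          memcpy(sq->base + (size_t)sq->tail * SQ_ENTRY_SIZE, cmd, SQ_ENTRY_SIZE);
          sq->tail = (sq->tail + 1u) % sq->depth;    /* advance the ring tail */
          /* on real hardware a write barrier would be issued before the doorbell write */
          *sq->doorbell = sq->tail;                  /* doorbell write = instruction acquisition request */
      }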
  • the instruction execution method includes the following processes.
  • Through the IO QP (input/output QP, that is, the queue in S110, which can also be called a QP queue or QP), data transmission between the NVME device and the host can be achieved.
  • a certain total storage resource is evenly divided into several parts. Since each pair of IO QP queues requires a certain amount of basic storage resources, the scale of the total storage resources can be calculated based on the number of IO QP queues.
  • the number of sub-resources that need to be divided is related to the capabilities required by the current business. For example, some services require more NVME devices and require a total of 1024 pairs of IO QPs to complete; while some services require fewer NVME devices and only require 128 QPs to fully meet business needs.
  • In this case, a certain total storage resource can be divided into a total of 1024 sub-resources, and each sub-resource is allocated to one IO QP as the storage resource required by that IO QP.
  • IO QP storage resources are used to store: SQ base address information (corresponding to the above instruction acquisition address), SQ queue depth information, SQ doorbell information (SQ Doorbell), CQ base address information (corresponding to the above instruction result storage address), CQ queue depth information, CQ doorbell information, CQ interrupt vector information, and so on.
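  • The per-queue information listed above can be pictured as one bookkeeping record per IO QP; the struct below is only an illustrative layout (field names and widths are assumptions), matching the items stored in each sub-resource.

      #include <stdint.h>

      /* One IO QP storage resource as described above; field widths are illustrative. */
      struct qp_resource {
          uint64_t sq_base;        /* SQ base address (the instruction acquisition address)    */
          uint16_t sq_depth;       /* SQ queue depth                                           */
          uint16_t sq_doorbell;    /* latest SQ doorbell value                                 */
          uint64_t cq_base;        /* CQ base address (the instruction result storage address) */
          uint16_t cq_depth;       /* CQ queue depth                                           */
          uint16_t cq_doorbell;    /* latest CQ doorbell value                                 */
          uint16_t cq_int_vector;  /* CQ interrupt vector                                      */
      };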
  • S302 Establish a numbering mechanism for the resources to which each IO QP belongs (which can also be called QP storage resources). For example, the 1024 resources belonging to the IO QPs are numbered from 0 to 1023, and each number corresponds to one IO QP resource.
  • When an NVME device has multiple IO QP queues, the NVME device corresponds to each of its IO QPs and their associated resource numbers.
  • S303 Assign a resource number to each IO QP according to the order in which each IO QP is created, and save the IO QP information data when each IO QP is created (for example, the device information to which the QP belongs, etc.).
  • S304 Number each NVME device; the number corresponds to the device ID (identification). For each IO QP of an NVME device, there is a corresponding QP ID in the NVME protocol. The device ID combined with the in-device QP ID (device ID + QP ID within the device) serves as the unique identifier of that IO QP, which is then bound to an allocatable storage resource number, thereby enabling the different IO QPs of each NVME device to share the overall storage resources.
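  • A minimal sketch of the binding in S304, assuming 128 devices with up to 8 IO QPs each and 1024 resource numbers (array and function names are illustrative): the forward table answers which resource number serves device d, QP q, and the reverse (anti-)table answers the opposite question. With such tables, the earlier example of QP 3 of device 5 being mapped to resource number 32 simply means res_of_qp[5][3] == 32, the actual number handed out depending on creation order.

      #include <stdint.h>

      #define NUM_DEVICES   128
      #define QPS_PER_DEV   8
      #define NUM_RESOURCES (NUM_DEVICES * QPS_PER_DEV)   /* 1024 */

      /* Forward map: (device ID, in-device QP ID) -> resource number.    */
      static uint16_t res_of_qp[NUM_DEVICES][QPS_PER_DEV];
      /* Reverse (anti-)map: resource number -> packed device ID + QP ID. */
      static uint16_t qp_of_res[NUM_RESOURCES];

      static uint16_t next_free_resource;   /* simple in-creation-order allocator */

      /* Bind the QP identified by device ID + QP ID to the next free resource number. */
      static int bind_qp_to_resource(uint16_t dev_id, uint16_t qp_id)
      {
          if (dev_id >= NUM_DEVICES || qp_id >= QPS_PER_DEV ||
              next_free_resource >= NUM_RESOURCES)
              return -1;

          uint16_t res_no = next_free_resource++;
          res_of_qp[dev_id][qp_id] = res_no;
          qp_of_res[res_no] = (uint16_t)(dev_id * QPS_PER_DEV + qp_id);
          return res_no;
      }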
  • This disclosed embodiment provides a resource number allocation management mechanism to divide and reasonably number the existing limited storage resources. Each individual queue corresponds to one numbered storage resource, and based on this resource the queue completes certain services and functions of the corresponding NVME device. An independent NVME device can create multiple queues, and accordingly multiple storage resources with different numbers are allocated to that NVME device.
  • S305 Conduct service and functional management of all internal storage resource numbers and the corresponding resources, and use the mutual mapping between storage resource numbers and device ID + QP ID to control the storage commands and data interaction between the host and each NVME device.
  • In this way, a queue can be bound and mapped to a storage resource number, so that when the NVME device and the host interact, the actual device and the detailed parameters of the actual queue can be found accurately and quickly, and the storage interaction of instructions and data with the host can be completed.
  • When a queue in an NVME device is deleted, its number and resources are recycled. After recycling, the number and resources can be allocated to subsequently created queues, achieving the purpose of resource reuse.
  • the embodiments of the present disclosure provide a dynamic sharing process of queue storage resources under multiple NVME devices by establishing a mechanism for resource number allocation management, resource recycling and allocation, and a device ID, device queue ID, and resource number binding strategy.
  • IO QP in virtual devices can be effectively managed under limited storage resources, so that storage services and functions can be flexibly implemented.
  • the instruction execution method includes the following processes.
  • S402 Establish a numbering mechanism based on unit IO QP storage resources (that is, the resource to which the IO QP belongs).
  • For example, the resources belonging to the 1024 IO QPs are numbered from 0 to 1023, and each number corresponds to one IO QP storage resource.
  • S403 The host creates 8 IO QPs for each NVME device based on the IO capabilities presented by the 128 virtual NVME devices, for a total of 1024 QPs.
  • the numbers are QP0 to QP1023 in the order of creation.
  • the number of IO QPs for each NVME device is set equally. In actual operation, different numbers of IO QPs can also be created for different NVME devices based on the business capabilities and needs of each NVME device to achieve different business needs. This disclosure does not limit this.
  • S404 Number 128 NVME devices.
  • the device IDs are 0 to 127.
  • the QP ID is set to 0 to 7.
  • In this way, each IO QP can correspond to a resource number; for example, QP 3 of device 5 can be mapped to resource number 32.
  • A mapping table of the correspondences between resource numbers, device IDs and QP IDs may be constructed.
  • S405 Control the storage command and data interaction between the host and 128 NVME devices based on the mutual mapping between resource numbers 0 to 1023 and device ID + QP ID.
  • the detailed implementation is: search the mapping table according to the device ID (0 to 127) + QP ID (0 to 7) to which the current instruction belongs, find the corresponding resource number, and obtain the storage resource information corresponding to that resource number.
  • the storage resource information includes: SQ base address information, SQ queue depth information, SQ doorbell information, CQ base address information, CQ queue depth information, CQ doorbell information, CQ interrupt vector information, etc.
  • For example, when IO QP 3 of device 5 (among the 128 NVME devices) receives the SQ Doorbell sent from the host, its corresponding resource number can be found from the mapping table based on the device ID and queue ID "5+3"; the number is 32. The SQ base address information corresponding to resource number 32 is then read out, and based on that SQ base address information, the SQ Entry command of QP 3 of device 5 is read back from the host to the user side, so that the subsequent SQ Entry command parsing, execution, stored data transfer and other operations can be completed.
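  • On the virtual-device side, the doorbell-driven lookup just described might look roughly like the following sketch (the DMA helper, table names and the bookkeeping structure repeated from the earlier sketches are assumptions, not APIs defined by the disclosure).

      #include <stdint.h>

      #define SQ_ENTRY_SIZE 64u

      /* Same per-QP record as in the earlier sketch, repeated so this example stands alone;
       * here sq_doorbell tracks the last SQ tail value that has already been consumed. */
      struct qp_resource {
          uint64_t sq_base;
          uint16_t sq_depth;
          uint16_t sq_doorbell;
          uint64_t cq_base;
          uint16_t cq_depth;
          uint16_t cq_doorbell;
          uint16_t cq_int_vector;
      };

      /* Assumed firmware facilities (illustrative names):
       *   res_of_qp            forward map (device ID, QP ID) -> resource number
       *   qp_resources         per-resource records, indexed by resource number
       *   dma_read_from_host   copies bytes from host memory into a local buffer */
      extern uint16_t res_of_qp[128][8];
      extern struct qp_resource qp_resources[1024];
      int dma_read_from_host(uint64_t host_addr, void *dst, uint32_t len);

      /* Handle an SQ doorbell for (dev_id, qp_id): look up the resource number,
       * read the newly posted SQ entries back from the host, then parse them. */
      static int handle_sq_doorbell(uint16_t dev_id, uint16_t qp_id, uint16_t new_tail)
      {
          uint16_t res_no = res_of_qp[dev_id][qp_id];      /* e.g. device 5, QP 3 -> 32 */
          struct qp_resource *r = &qp_resources[res_no];

          uint8_t entry[SQ_ENTRY_SIZE];
          while (r->sq_doorbell != new_tail) {             /* one iteration per pending entry */
              uint64_t addr = r->sq_base + (uint64_t)r->sq_doorbell * SQ_ENTRY_SIZE;
              if (dma_read_from_host(addr, entry, SQ_ENTRY_SIZE) != 0)
                  return -1;
              /* parse_and_execute(entry); -- command parsing and data transfer would follow */
              r->sq_doorbell = (uint16_t)((r->sq_doorbell + 1u) % r->sq_depth);
          }
          return 0;
      }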
  • the disclosed embodiment realizes the dynamic sharing process of queue storage resources under multiple NVME devices by establishing a resource number allocation management and recycling allocation mechanism, as well as a device ID, queue ID and resource number binding strategy.
  • In this way, the embodiments of the present disclosure can implement more NVME virtual devices with limited storage resources, and the flexible allocation of queue resources in each NVME device also enables the NVME storage system to support more storage services. This improves the flexibility of the storage system, allowing it to flexibly support NVME devices with different IO capabilities and to flexibly manage the creation, operation and destruction of all devices.
  • embodiments of the present disclosure also provide an execution instruction system.
  • the system includes: a server and at least one virtual device (corresponding to the above-mentioned NVME device), and the at least one virtual device includes an instruction execution device.
  • the instruction execution device may be installed in the virtual device or outside the virtual device.
  • Figure 5 is a structural block diagram of the system. As shown in Figure 5, the system 1 includes: an instruction execution device 10, a server 20 and a virtual device 30. In this figure, the instruction execution device is provided outside the virtual device.
  • each virtual device corresponds to at least one queue.
  • Figure 6 is a structural block diagram of the instruction execution device 10 according to some embodiments. As shown in Figure 6, the instruction execution device 10 includes: a request acquisition unit 101, an address determination unit 102, an instruction acquisition unit 103 and an execution unit 104.
  • the request acquisition unit 101 is configured to acquire an instruction acquisition request in the target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction.
  • the address determination unit 102 is configured to determine the instruction acquisition address corresponding to the target queue based on the instruction acquisition request and according to a preset mapping relationship, where the mapping relationship indicates the relationship between the queue information where the target queue is located and the instruction acquisition address. Correspondence.
  • the instruction acquisition unit 103 is configured to acquire the target instruction according to the instruction acquisition address.
  • the execution unit 104 is configured to execute operations corresponding to the target instructions according to the target instructions.
  • the address determination unit 102 determines the instruction acquisition address corresponding to the target queue according to the mapping relationship, the instruction acquisition unit 103 acquires the target instruction according to the instruction acquisition address, and then the execution unit 104 executes the target instruction.
  • the above device further includes: a queue building unit, a storage space dividing unit and an allocation unit.
  • A queue building unit is used to build at least one queue according to business requirements.
  • a storage space dividing unit is configured to divide the storage space into the same number of sub-storage spaces as the at least one queue.
  • An allocation unit is used to allocate each sub-storage space to a corresponding queue, wherein the sub-storage space is at least used to store an instruction acquisition address and an instruction result storage address.
  • the above device further includes: a mapping relationship setting unit, which includes: a correspondence relationship acquisition module and a mapping relationship setting module.
  • the correspondence acquisition module is used to obtain the correspondence between the queue information and its sub-storage space information.
  • a mapping relationship setting module is used to set the mapping relationship according to the obtained corresponding relationship.
  • the above device further includes: an execution result sending unit.
  • the execution result sending unit is used to send the execution result of the above-mentioned target instruction to a location corresponding to the instruction result storage address in the sub-storage space.
  • the above device further includes: a storage space releasing unit.
  • the storage space release unit is used to release the sub-storage space of the queue in response to the queue being deleted.
  • the instruction execution device 10 in the embodiment of the present disclosure may be a device with an operating system.
  • the operating system can be an Android operating system, an iOS operating system, or another possible operating system.
  • the embodiments of this disclosure are not specifically limited.
  • the instruction execution device 10 provided by the embodiment of the present disclosure can implement each process implemented in the method embodiments of FIG. 1, FIG. 3, and FIG. 4, and achieve the same technical effect. To avoid duplication, the details will not be described here.
  • FIG. 7 is a structural block diagram of the above-mentioned server 20.
  • the server 20 includes: a request sending unit 201, used to send an instruction acquisition request to a target queue of a virtual device.
  • the instruction acquisition request is used to indicate acquisition of a target instruction, so that the virtual device acquires the target instruction based on a preset mapping relationship, and the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address.
  • the above mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address.
  • the server further includes: an execution result receiving unit and an execution result sending unit.
  • An execution result receiving unit is configured to receive the execution result of the target instruction from the target queue.
  • An execution result sending unit is configured to send the execution result of the target instruction to a storage space corresponding to the instruction result storage address based on the mapping relationship.
  • In actual operation, in a multi-NVME-device scenario, the controller storage of each NVME device is independent. That is, each NVME device needs to occupy independent and complete storage resources to realize the storage process interaction between it and the host.
  • the detailed process includes: NVME device initialization, NVME ADMIN process and NVME IO process, etc.
  • the working principle of the instruction execution system of the embodiment of the present disclosure will be described in detail below with reference to FIG. 8 and FIG. 9 .
  • Figure 8 is a schematic diagram of the operation of an instruction execution system according to some embodiments.
  • the example system includes: a host and a user storage control system.
  • the host has the function of the above-mentioned server 20, and the user storage control system has the function of the above-mentioned execution instruction device 10.
  • the user storage control system includes: a command response and processing module and a queue resource dynamic management module.
  • the working principle of this example system includes: the NVME ADMIN process from steps 1 to 3 and the NVME IO process from steps 4 to 6. Each step is described below.
  • Step 1 The host sends the message of the ADMIN (management) queue, that is, the ADMIN doorbell (Doorbell), through the PCIE+DMA (PCIE plus Direct Memory Access) module, and the message enters the command response and processing module within the user storage control system.
  • Step 2 The command response and processing module receives the ADMIN doorbell, which can be understood as an ADMIN command acquisition request.
  • the command response and processing module sends the ADMIN command acquisition request of the corresponding queue to the PCIE+DMA module, and the PCIE+DMA module retrieves the ADMIN command data from the host.
  • Step 3 The retrieved ADMIN command enters the command response and processing module, which parses the command and notifies the queue resource dynamic management module to perform dynamic management such as numbering, allocation, and recycling of QP hardware resources.
  • Step 4 The host sends the message of the IO QP queue, that is, the IO doorbell (Doorbell), and enters the command response and processing module in the user storage control system through the PCIE+DMA module.
  • Step 5 The command response and processing module sends the IO command acquisition request of the corresponding queue to the PCIE+DMA module, and the PCIE+DMA module retrieves the IO command data from the host.
  • Step 6 The retrieved IO command enters the command response and processing module.
  • the command response and processing module interacts with the queue resource dynamic management module to realize the mapping between the QP resource number and device ID + device QP ID, thereby realizing the interaction of storage IO commands between the user storage control system and the host.
  • the ADMIN command is mainly used to present the capabilities of the NVME device to the host and to create the NVME device queues, while the IO command is mainly used to implement functions such as storage data reading and writing between the host and the NVME device.
  • Figure 9 is a schematic diagram of the working principle of the user storage control system shown in Figure 8.
  • the command response and processing module in Figure 8 corresponds to: the queue command processing unit and the queue message response unit.
  • the queue resource dynamic management module in Figure 8 corresponds to: the queue number mapping management component, the queue number allocation management component and the queue resource recovery control unit. How each part works is described below.
  • Step 1 Through the PCIE+DMA component (i.e., the above-mentioned PCIE+DMA module), the host initializes each NVME controller in the user storage control system (each controller having the functions of the command response and processing module and the queue resource dynamic management module of Figure 8 above) and establishes multiple QPs (i.e., IO QPs), including storage submission queues (SQ) and storage completion queues (CQ). After the QPs are created, the Doorbell corresponding to each QP is sent to the corresponding NVME controller in the user storage control system and enters the queue message response unit of that NVME controller.
  • Step 2 When the queue message response unit receives a new queue message, it can initiate command acquisition from the host (HOST, not shown in the figure) through the PCIE+DMA component, move the command from the HOST back to the queue command processing unit in the controller through the PCIE+DMA component, and notify the queue number allocation management component at the same time.
  • Step 3 Use the queue number allocation management component to assign a corresponding QP number to each created QP.
  • For example, creating 1024 QPs corresponds to numbers 0 to 1023, and each number corresponds to a unique QP, that is, a unique SQ and CQ. When a QP is created, a mapping table and an anti-mapping table between the QP resource number and the device ID + in-device QP ID are also created in the queue number mapping management component.
  • a corresponding queue execution flag register can be designed for each QP.
  • the corresponding flag bit can be set to 1.
  • Step 4 Based on the queue execution flag register of the QP and the mapping table and anti-mapping table between QP resource numbers and device ID + in-device QP ID, the queue command processing unit caches and looks up the QP information resources needed for specific commands. For example, the base address of each SQ is cached; when an IO command (SQ entry) needs to be obtained from the host, the SQ base address can be retrieved from the SQ information cache in the corresponding QP resource cache, so that the corresponding command can be fetched from the corresponding address in the host. Likewise, the base address of each CQ can be cached; when the completion status of a command (CQ entry) needs to be sent to the host, the CQ base address can be retrieved from the CQ information cache in the corresponding QP resource cache, and the CQ entry is sent to the corresponding address in the host.
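  • The CQ half of Step 4 can be sketched the same way: when a command completes, the cached CQ base address of the QP is used to place the completion entry at the corresponding host address and to raise the QP's interrupt vector (helper names follow no particular API and the exact notification mechanism is an assumption; the 16-byte completion size follows the NVME convention).

      #include <stdint.h>

      #define CQ_ENTRY_SIZE 16u                  /* standard NVME completion entry size */

      /* CQ-side subset of the per-QP resource record (illustrative). */
      struct qp_cq_info {
          uint64_t cq_base;                      /* CQ base address in host memory */
          uint16_t cq_depth;
          uint16_t cq_tail;                      /* next CQ slot to fill           */
          uint16_t cq_int_vector;                /* interrupt vector to signal     */
      };

      /* Assumed helpers: DMA write toward host memory and interrupt notification. */
      int  dma_write_to_host(uint64_t host_addr, const void *src, uint32_t len);
      void raise_interrupt(uint16_t vector);

      /* Post one completion entry (CQ entry) for the QP and notify the host. */
      static int post_cq_entry(struct qp_cq_info *cq, const void *cqe)
      {
          uint64_t addr = cq->cq_base + (uint64_t)cq->cq_tail * CQ_ENTRY_SIZE;
          if (dma_write_to_host(addr, cqe, CQ_ENTRY_SIZE) != 0)
              return -1;

          cq->cq_tail = (uint16_t)((cq->cq_tail + 1u) % cq->cq_depth);
          raise_interrupt(cq->cq_int_vector);    /* tell the host a result is ready */
          return 0;
      }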
  • Step 5 When the queue command processing unit parses an ADMIN command to delete a QP, it notifies the queue resource recovery control unit to clear all storage resources of that QP and recycle this resource for subsequent reuse.
  • the disclosed embodiment adopts a QP storage resource recycling mechanism, which is implemented by a queue resource recycling control unit.
  • This unit recycles numbers and stores the recycled numbers in a cache. For example, when all 1024 numbers have been allocated by the queue number allocation management component, if a new QP is created, a number is taken out of the number recycling cache, and that number and the corresponding resources are allocated to the newly created QP.
  • the newly created QP can perform normal storage services and functions.
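  • A minimal sketch of the recycling mechanism in Step 5 and the allocation order described above (a plain LIFO cache of freed numbers; names and structure are assumptions): a deleted QP's number goes into the recycle cache, a new QP takes a fresh number while any remain, and once all 1024 numbers have been handed out at least once, new QPs draw from the recycle cache.

      #include <stdint.h>

      #define NUM_RESOURCES 1024
      #define NO_RESOURCE   (-1)

      /* Number recycling cache: a simple LIFO stack of freed resource numbers. */
      static uint16_t recycled[NUM_RESOURCES];
      static uint16_t recycled_count;
      static uint16_t next_fresh_number;          /* numbers never handed out yet */

      /* Called when an ADMIN delete-QP command is parsed: give the number back. */
      static void release_resource_number(uint16_t res_no)
      {
          if (recycled_count < NUM_RESOURCES)
              recycled[recycled_count++] = res_no;   /* the QP's cached state is cleared elsewhere */
      }

      /* Called when a new QP is created. */
      static int allocate_resource_number(void)
      {
          if (next_fresh_number < NUM_RESOURCES)
              return next_fresh_number++;            /* hand out never-used numbers first           */
          if (recycled_count > 0)
              return recycled[--recycled_count];     /* all numbers used once: reuse a freed number */
          return NO_RESOURCE;                        /* every number is bound to a live QP          */
      }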
  • embodiments of the present disclosure also provide an electronic device 1000.
  • the electronic device 1000 includes a processor 1010 and a memory 1020.
  • the memory 1020 stores programs or instructions that can be run on the processor 1010. For example, when the electronic device 1000 is a terminal and the programs or instructions are executed by the processor 1010, each process of the above instruction execution method embodiments is realized and the same technical effect can be achieved. To avoid repetition, they will not be repeated here.
  • An embodiment of the present disclosure also provides a readable storage medium.
  • the readable storage medium stores programs or instructions.
  • When the programs or instructions are executed by the processor, each process of the above-mentioned instruction execution method embodiments is implemented, and the same technical effect can be achieved. To avoid repetition, details will not be described here.
  • the processor is the processor in the electronic device described in the above embodiment.
  • the readable storage medium includes computer-readable storage media, such as read-only memory (ROM), random access memory (RAM), magnetic disks or optical disks.
  • the readable storage media includes non-transitory computer-readable storage media.
  • An embodiment of the present disclosure also provides a chip, which includes a processor and a communication interface.
  • the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement each process of the above instruction execution method embodiment, and can achieve the same technical effect. To avoid duplication, the details will not be described here.
  • embodiments of the present disclosure also provide a computer program product.
  • the computer program product includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor.
  • When the programs or instructions are executed by the processor, each process of the above instruction execution method embodiments is implemented, and the same technical effect can be achieved. To avoid repetition, details will not be described here.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • the part of the technical solution of the present disclosure that in essence contributes to the existing technology can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium (such as a ROM, a RAM, a magnetic disk or an optical disk) and includes several instructions to cause a terminal (which can be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method, apparatus, and system for executing an instruction. The method comprises: acquiring an instruction acquisition request in a target queue (S110), the instruction acquisition request being used for instructing to acquire a target instruction; on the basis of the instruction acquisition request, determining an instruction acquisition address corresponding to the target queue according to a preset mapping relationship (S120), the mapping relationship indicating a correspondence between queue information of the target queue and the instruction acquisition address; acquiring the target instruction according to the instruction acquisition address (S130); and executing an operation corresponding to the target instruction according to the target instruction (S140).

Description

Method, apparatus, server and system for executing instructions

This disclosure claims priority to the Chinese patent application No. 202211037817.7, filed on August 26, 2022, the entire content of which is incorporated into this application by reference.

Technical Field

The present disclosure belongs to the field of storage technology, and specifically relates to a method, apparatus, server and system for executing instructions.

Background

At present, with the development of virtualization technology, multiple virtualized storage devices can be constructed and presented between the host and the user through PCIE (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard). These virtualized storage devices release the resources of the host CPU in an offloaded manner. Virtual devices in the storage field are currently dominated by NVME (Non Volatile Memory Host Controller Interface) devices, and each NVME device needs to occupy corresponding hardware resources (i.e., storage resources).

Each NVME device is bound to a corresponding NVME controller, and each NVME device and its NVME controller perform storage interactions through multiple queues, so multiple NVME devices correspond to a large number of NVME queues. In the standard NVME protocol, the SQ (Submission Queue) and CQ (Completion Queue) are the carriers of NVME storage commands, used to carry the specific commands exchanged between the host and the NVME device and their completion status. An SQ and its corresponding CQ can be called a QP (Queue Pair).

Summary

In a first aspect, embodiments of the present disclosure provide a method for executing instructions, and the method is applied to a virtual device. The method includes: obtaining an instruction acquisition request in a target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction; based on the instruction acquisition request, determining an instruction acquisition address corresponding to the target queue according to a preset mapping relationship, where the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address; obtaining the target instruction according to the instruction acquisition address; and performing an operation corresponding to the target instruction according to the target instruction.

In a second aspect, embodiments of the present disclosure provide a method for executing instructions, and the method is applied to a server. The method includes: sending an instruction acquisition request to a target queue of a virtual device, where the instruction acquisition request is used to indicate acquisition of a target instruction, so that the virtual device acquires the target instruction based on a preset mapping relationship, and the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address.

In a third aspect, embodiments of the present disclosure provide a device for executing instructions. The device includes: a request acquisition unit, used to acquire an instruction acquisition request in a target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction; an address determination unit, used to determine, based on the instruction acquisition request and according to a preset mapping relationship, the instruction acquisition address corresponding to the target queue, where the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address; an instruction acquisition unit, used to acquire the target instruction according to the instruction acquisition address; and an instruction execution unit, used to perform an operation corresponding to the target instruction according to the target instruction.

In a fourth aspect, embodiments of the present disclosure provide a server. The server includes: a request sending unit, used to send an instruction acquisition request to a target queue of a virtual device, where the instruction acquisition request is used to indicate acquisition of a target instruction, so that the virtual device acquires the target instruction based on a preset mapping relationship, and the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address.

In a fifth aspect, embodiments of the present disclosure provide a system for executing instructions. The system includes the above-mentioned server and at least one virtual device, wherein the at least one virtual device includes the above-mentioned device for executing instructions.

In a sixth aspect, embodiments of the present disclosure provide an electronic device. The electronic device includes a processor and a memory, the memory stores programs or instructions that can be run on the processor, and when the programs or instructions are executed by the processor, the steps of the method described in the first aspect or the second aspect are performed.

In a seventh aspect, embodiments of the present disclosure provide a readable storage medium. Programs or instructions are stored on the readable storage medium, and when the programs or instructions are run by a processor, the steps of the method described in the first aspect or the second aspect are performed.

Brief Description of the Drawings

Figure 1 is a schematic flowchart of a method for executing instructions according to some embodiments;

Figure 2 is another schematic flowchart of a method for executing instructions according to some embodiments;

Figure 3 is another schematic flowchart of a method for executing instructions according to some embodiments;

Figure 4 is another schematic flowchart of a method for executing instructions according to some embodiments;

Figure 5 is a structural block diagram of a system for executing instructions according to some embodiments;

Figure 6 is a structural block diagram of a device for executing instructions according to some embodiments;

Figure 7 is a structural block diagram of a server according to some embodiments;

Figure 8 is a working principle diagram of a system for executing instructions according to some embodiments;

Figure 9 is a working principle diagram of a user storage control system according to some embodiments;

Figure 10 is a schematic structural diagram of an electronic device according to some embodiments.

具体实施方式Detailed ways

为使本领域的技术人员更好地理解本公开实施例的技术方案,下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员获得的所有其他实施例,都属于本公开保护的范围。In order to enable those skilled in the art to better understand the technical solutions of the embodiments of the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly described below in conjunction with the drawings in the embodiments of the present disclosure. Obviously, the described implementations The examples are part of the embodiments of the present disclosure, but not all of them. Based on the embodiments in this disclosure, all other embodiments obtained by those of ordinary skill in the art fall within the scope of protection of this disclosure.

需要说明的是,本公开中,“例如”等词用于表示作例子、例证或说明。本公开中被描述为“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“例如”等词旨在以详细方式呈现相关概念。It should be noted that in this disclosure, words such as "such as" are used to represent examples, illustrations or explanations. Any embodiment or design described in this disclosure as "such as" is not intended to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "such as" is intended to present the relevant concept in a detailed manner.

在相关技术中,NVME设备具有多队列特性,即在主机与NVME设备之间能够构造多个通道进行命令的传输及数据的交互,这些通道对应的载体就是队列。多个NVME设备的队列的存在使得主机可以利用多个核心或者线程并行地进行命令的提交和命令完成结果的处理。In related technologies, NVME devices have multi-queue characteristics, that is, multiple channels can be constructed between the host and the NVME device for command transmission and data interaction, and the carriers corresponding to these channels are queues. The existence of multiple NVME device queues allows the host to use multiple cores or threads to submit commands and process command completion results in parallel.

由于不同NVME设备根据自身业务对队列的需求可能不一样,此外随着云存储业务的发展,导致了用户对NVME设备数量的需求越来越大,每个NVME设备都需要一定的队列资源来实现其业务及功能。如何在有限硬件条件下对多个NVME设备进行有效管理,来完成多种存储业务需求,是目前亟待解决的问题。Since different NVME devices may have different queue requirements based on their own services, and with the development of cloud storage services, users have an increasing demand for the number of NVME devices. Each NVME device requires certain queue resources to achieve its business and functions. How to effectively manage multiple NVME devices under limited hardware conditions to meet various storage business requirements is an urgent problem that needs to be solved.

鉴于相关技术在有限硬件资源(即,存储资源)下无法对多个NVME设备有效管理,从而无法较好地完成多种存储业务需求,本公开实施例提供一种指令执行方案,该方案通过设置虚拟设备队列与队列存储资源的映射关系,实现了对虚拟设备队列存储资源的动态管理,从而可以在有限存储资源下对多个虚拟设备进行有效管理,可以较好地实现多种存储业务需求。In view of the fact that the related technology cannot effectively manage multiple NVME devices under limited hardware resources (ie, storage resources), and thus cannot better complete various storage business requirements, embodiments of the present disclosure provide an instruction execution solution, which is configured by The mapping relationship between virtual device queues and queue storage resources realizes dynamic management of virtual device queue storage resources, so that multiple virtual devices can be effectively managed under limited storage resources, and various storage business needs can be better realized.

It should be noted that the instruction execution solution of the embodiments of the present disclosure can be applied to virtual devices in the storage field. In the embodiments of the present disclosure, the virtual device is described in detail by taking an NVME device as an example.

下面结合附图,通过示例及其应用场景对本公开实施例提供的指令执行方案进行详细地说明。The instruction execution scheme provided by the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings through examples and application scenarios.

图1为根据一些实施例的执行指令方法的一种流程示意图。如图1所示,该方法可以包括以下S110至S140。Figure 1 is a schematic flowchart of a method for executing instructions according to some embodiments. As shown in Figure 1, the method may include the following S110 to S140.

S110,获取目标队列中的指令获取请求,该指令获取请求用于指示获取目标指令。S110. Obtain the instruction acquisition request in the target queue. The instruction acquisition request is used to indicate acquisition of the target instruction.

In the embodiments of the present disclosure, a queue implements instruction transmission between the NVME device and the server (corresponding to the above-mentioned host), and such a queue may be called a QP queue. The instruction here may be a storage service instruction, and the instruction acquisition request is used to indicate that the target instruction is to be obtained from the server.

在执行S110之前,根据业务需求构建至少一个队列。一般而言,需要构建多个队列,用于实现多种存储业务需求。随后,可以将存储空间(即,硬件资源,或称为存储资源)划分为与构建的队列相同数量的子存储空间,并将各子存储空间分别分配给相应的队列。Before executing S110, at least one queue is constructed according to business requirements. Generally speaking, multiple queues need to be built to achieve various storage business requirements. Subsequently, the storage space (ie, hardware resource, or storage resource) can be divided into the same number of sub-storage spaces as the constructed queues, and each sub-storage space is allocated to the corresponding queue respectively.
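As a concrete illustration of this partitioning step, the following C sketch divides a contiguous storage region into as many equally sized sub-spaces as there are queues (the equal-split case that the description below takes as its example) and records the offset owned by each queue. It is a minimal sketch only: the queue count, region size, and all identifiers are illustrative assumptions and are not taken from the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES     128u        /* assumed number of queues built for the business */
#define TOTAL_RES_SIZE (1u << 20)  /* assumed total size of the storage resource, in bytes */

/* Offset, inside the total storage resource, of the sub-storage space owned by each queue. */
static uint32_t sub_space_offset[NUM_QUEUES];

/* Split the total storage resource evenly and hand one sub-space to every queue. */
static void assign_sub_spaces(void)
{
    uint32_t sub_size = TOTAL_RES_SIZE / NUM_QUEUES;   /* equal-split strategy */
    for (uint32_t q = 0; q < NUM_QUEUES; q++)
        sub_space_offset[q] = q * sub_size;
}

int main(void)
{
    assign_sub_spaces();
    printf("queue 3 owns the sub-space at offset 0x%x\n", (unsigned)sub_space_offset[3]);
    return 0;
}
```

In a real controller, each sub-space would further be laid out into the per-queue fields discussed below, such as the instruction acquisition address and the instruction result storage address.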

这里的存储空间可以为与服务器相应的存储空间。在实际操作中,存储空间可以是服务器的存储空间,也可以是与服务器相应的外挂存储空间。该存储空间可以存储目标指令,还可以存储其他信息。The storage space here can be the storage space corresponding to the server. In actual operation, the storage space can be the storage space of the server or the plug-in storage space corresponding to the server. This storage space can store target instructions and can also store other information.

这里的子存储空间至少用于存储指令获取地址、指令结果存储地址。指令获取地址即是指存放指令的地址,可以称为SQ基地址。指令结果存储地址即是指指令结果存放的地址,可以称为CQ基地址。The sub-storage space here is at least used to store the instruction acquisition address and the instruction result storage address. The instruction acquisition address refers to the address where the instruction is stored, which can be called the SQ base address. The instruction result storage address refers to the address where the instruction result is stored, which can be called the CQ base address.

例如,根据业务需求构建了128个队列,则需要将存储空间划分为128份子存储空间,并将每份子存储空间分别分配给各个队列。如此,通过子存储空间,队列就可以与服务器进行数据传输。For example, if 128 queues are built according to business requirements, the storage space needs to be divided into 128 sub-storage spaces, and each sub-storage space is allocated to each queue. In this way, the queue can transmit data to the server through the sub-storage space.

为了描述方便,以下将用于队列的存储空间统一称为存储资源,子存储空间统一称为子资源,服务器统一称为主机。For the convenience of description, the storage space used for queues is collectively called storage resources, the sub-storage spaces are collectively called sub-resources, and the servers are collectively called hosts.

在本公开实施例中,可以将存储资源以均分方式划分为多份子资源,也可以依据实际情况来划分子资源,本公开对此不做限制。为了描述方便,本公开实施例以资源均分方式为例来详细描述。In the embodiment of the present disclosure, the storage resources can be divided into multiple sub-resources in an equal manner, or the sub-resources can be divided according to actual conditions, and the present disclosure does not place a limit on this. For convenience of description, the embodiments of the present disclosure are described in detail by taking the resource equalization method as an example.

S120,基于上述的指令获取请求,根据预先设置的映射关系,确定与目标队列对应的指令获取地址,所述映射关系指示目标队列所在的队列信息与指令获取地址的对应关系。S120. Based on the above instruction acquisition request, determine the instruction acquisition address corresponding to the target queue according to the preset mapping relationship, where the mapping relationship indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address.

In some embodiments, the mapping relationship may be set in the following manner: after the sub-resource belonging to each queue has been allocated as described above, the correspondence between the queue information and the information of the sub-resource to which the queue belongs is obtained, and the mapping relationship is set according to the obtained correspondence.

In one embodiment, the queue information and the sub-resources may each be numbered, and the correspondence is expressed through the numbers. For example, queue 3 corresponds to resource number 32, which means that, among the multiple queues of multiple virtual devices, resource number 32 is the exclusive resource of queue 3, and the sub-resource corresponding to number 32 can implement instruction transfer between queue 3 and the host.

在一些实施例中,映射关系还可以指示队列信息(例如,队列编号)、队列所属虚拟设备、队列所属子资源信息(例如,资源编号)的对应关系。In some embodiments, the mapping relationship may also indicate the corresponding relationship between queue information (for example, queue number), virtual device to which the queue belongs, and sub-resource information (for example, resource number) to which the queue belongs.

In one embodiment, the queue information, the virtual devices, and the resources may each be numbered, and the correspondences are expressed through the numbers. For example, queue 3 of virtual device 5 corresponds to resource number 32, which means that, among the multiple virtual devices, resource number 32 is the exclusive storage resource of queue 3 of virtual device 5, and the storage resource corresponding to number 32 can be used to implement instruction transfer between queue 3 of virtual device 5 and the host.

S130,根据上述指令获取地址获取目标指令。S130, obtain the target instruction according to the above instruction acquisition address.

S140,根据获取的目标指令执行与目标指令相应的操作。S140, perform operations corresponding to the target instruction according to the obtained target instruction.

在目标指令执行操作完成后,可以将目标指令的执行结果发送至与子存储空间中的指令结果存储地址相应的存储空间。After the target instruction execution operation is completed, the execution result of the target instruction can be sent to the storage space corresponding to the instruction result storage address in the sub-storage space.

也就是说,子存储空间中的指令结果存储地址指示用于存储指令结果的详细存储位置。That is, the instruction result storage address in the sub-storage space indicates the detailed storage location used to store the instruction result.
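To make the completion path concrete, the following C sketch writes one execution result into host memory at the instruction result storage address (the CQ base address) recorded for the queue, advancing a tail index around the ring. The entry layout, field names, and the plain memcpy standing in for the DMA write are illustrative assumptions, not the actual data path of the disclosure.

```c
#include <stdint.h>
#include <string.h>

/* Assumed layout of one completion entry; a real CQ entry follows the NVME specification. */
struct cq_entry {
    uint32_t result;
    uint16_t sq_id;
    uint16_t status;
};

/* Assumed view of one queue's sub-storage space: only the fields needed here. */
struct queue_sub_space {
    uint64_t cq_base;    /* instruction result storage address (CQ base address) */
    uint16_t cq_tail;    /* next free CQ slot */
    uint16_t cq_depth;   /* CQ queue depth */
};

/* Write one execution result at cq_base + tail * entry_size and advance the tail.
 * host_mem stands in for the DMA window through which the server's memory is reached. */
void post_result(uint8_t *host_mem, struct queue_sub_space *q, const struct cq_entry *e)
{
    uint64_t addr = q->cq_base + (uint64_t)q->cq_tail * sizeof(*e);
    memcpy(host_mem + addr, e, sizeof(*e));
    q->cq_tail = (uint16_t)((q->cq_tail + 1u) % q->cq_depth);   /* wrap around the ring */
}
```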

本公开实施例在获取目标队列的指令获取请求后,根据映射关系,确定与目标队列对应的指令获取地址,并据此根据指令获取地址获取目标指令,执行该目标指令。如此,通过预先设置的映射关系,可以有效管理队列及其所属资源,实现在有限存储资源下对虚拟设备的有效管理,可以较好地执行存储业务指令,实现多种存储业务的需求。After obtaining the instruction acquisition request of the target queue, the embodiment of the present disclosure determines the instruction acquisition address corresponding to the target queue according to the mapping relationship, and accordingly obtains the target instruction according to the instruction acquisition address and executes the target instruction. In this way, through the preset mapping relationship, queues and their associated resources can be effectively managed, virtual devices can be effectively managed under limited storage resources, storage service instructions can be better executed, and various storage service needs can be met.

在一些实施例中,当队列被删除时,释放该队列的子资源。这样,释放的存储资源仍可用于后续构建的新队列,进一步实现了对有限存储资源地有效管理,提高了资源利用率。In some embodiments, when a queue is deleted, the queue's sub-resources are released. In this way, the released storage resources can still be used for new queues built subsequently, further achieving effective management of limited storage resources and improving resource utilization.

基于相似的发明构思,本公开实施例还提供一种执行指令的方法,该方法应用于服务器(对应于上述主机)。Based on a similar inventive concept, embodiments of the present disclosure also provide a method for executing instructions, which method is applied to a server (corresponding to the above-mentioned host).

图2是应用于服务器的执行指令方法的流程图。如图2所示,该方法包括S210。Figure 2 is a flow chart of a method of executing instructions applied to a server. As shown in Figure 2, the method includes S210.

S210: Send an instruction acquisition request to a target queue of the virtual device, where the instruction acquisition request is used to indicate that a target instruction is to be obtained, so that the virtual device obtains the target instruction based on a preset mapping relationship, the mapping relationship indicating the correspondence between the queue information of the target queue and the instruction acquisition address.

在一个实施例中,上述映射关系还指示目标队列所在的队列信息与指令结果存储地址的对应关系。该方法还包括:接收来自上述目标队列的目标指令的执行结果;基于映射关系,将目标指令的执行结果发送至与指令结果存储地址相应的存储空间。也就是说,子存储空间中的指令结果存储地址指示用于存储指令结果的详细存储位置。In one embodiment, the above mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address. The method also includes: receiving the execution result of the target instruction from the above-mentioned target queue; and based on the mapping relationship, sending the execution result of the target instruction to the storage space corresponding to the instruction result storage address. That is, the instruction result storage address in the sub-storage space indicates the detailed storage location used to store the instruction result.

以下以主机、多个NVME设备构建的系统为应用场景,结合图3所示的流程描述本公开实施例的执行指令方法。The following uses a system built with a host and multiple NVME devices as an application scenario, and describes the instruction execution method of the embodiment of the present disclosure in conjunction with the process shown in Figure 3 .

参见图3所示,执行指令方法包括以下流程。As shown in Figure 3, the instruction execution method includes the following processes.

S301: According to different business requirements and the capabilities presented by different NVME devices, a certain number of IO QPs (input/output QPs, i.e., the queues in S110) are created for each NVME device. Through the IO QPs (which may be called QP queues, or simply QPs), data transmission between the NVME devices and the host can be achieved.

基于构建的IO QP队列数量,将一定的总存储资源平均分成若干份。由于每对IO QP队列需要的基本存储资源是一定数量的,因此总存储资源的规模可以根据IO QP队列数量来计算,需要分成多少份子资源与当前业务所需要的能力相关。例如,有些业务需要的NVME设备比较多,则需要共1024对IO QP来完成;而有些业务需要的NVME设备比较少,只需要128个QP就可以完全满足业务需求。Based on the number of built IO QP queues, a certain total storage resource is evenly divided into several parts. Since each pair of IO QP queues requires a certain amount of basic storage resources, the scale of the total storage resources can be calculated based on the number of IO QP queues. The number of sub-resources that need to be divided is related to the capabilities required by the current business. For example, some services require more NVME devices and require a total of 1024 pairs of IO QPs to complete; while some services require fewer NVME devices and only require 128 QPs to fully meet business needs.

又例如,共构建了1024对IO QP,则可以将一定的总存储资源分成共1024份子资源。将每份子资源分别分配给一个IO QP,作为IO QP所需要的存储资源。For another example, if a total of 1024 pairs of IO QPs are constructed, a certain total storage resource can be divided into a total of 1024 sub-resources. Allocate each sub-resource to an IO QP as the storage resource required by the IO QP.

The IO QP storage resource is used to store: the base address information of the SQ (corresponding to the above-mentioned instruction acquisition address), the queue depth information of the SQ, the doorbell information of the SQ (SQ Doorbell), the base address information of the CQ (corresponding to the above-mentioned instruction result storage address), the queue depth information of the CQ, the doorbell information of the CQ, the interrupt vector information of the CQ, and so on. The detailed functions of this information can be found in the descriptions of the related technology, and the present disclosure does not limit this.
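One possible per-QP record of this information is sketched below in C. The field names and widths are illustrative assumptions; an actual implementation would follow the NVME specification and the hardware register layout.

```c
#include <stdint.h>

/* One share of the QP storage resource, i.e. what a single resource number refers to. */
struct io_qp_resource {
    /* submission queue (SQ) side */
    uint64_t sq_base_addr;   /* SQ base address (the instruction acquisition address) */
    uint16_t sq_depth;       /* SQ queue depth */
    uint16_t sq_doorbell;    /* latest SQ doorbell (tail) value */

    /* completion queue (CQ) side */
    uint64_t cq_base_addr;   /* CQ base address (the instruction result storage address) */
    uint16_t cq_depth;       /* CQ queue depth */
    uint16_t cq_doorbell;    /* latest CQ doorbell (head) value */
    uint16_t cq_int_vector;  /* CQ interrupt vector */
};
```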

S302: Establish a numbering mechanism for the resources to which the IO QPs belong (which may also be called QP storage resources). For example, the 1024 IO QP resources are managed by number, numbered 0 to 1023, and each number corresponds to one IO QP resource.

通过上述编号机制,当一个NVME设备具有多个IO QP队列时,该NVME设备就可以与各IO QP及其所属资源编号相对应。Through the above numbering mechanism, when an NVME device has multiple IO QP queues, the NVME device can correspond to each IO QP and its associated resource number.

S303:根据各IO QP创建的先后顺序给每个IO QP分配资源编号,并保存每个IO QP创建时的IO QP信息数据(例如,QP所属设备信息等)。S303: Assign a resource number to each IO QP according to the order in which each IO QP is created, and save the IO QP information data when each IO QP is created (for example, the device information to which the QP belongs, etc.).

S304: Number each NVME device, the number corresponding to a device ID (identifier). For each IO QP of an NVME device, there is a corresponding QP ID in the NVME protocol, so the device ID combined with the in-device QP ID (device ID + in-device QP ID) serves as its unique identifier, which is then bound to an allocatable storage resource number, thereby enabling the different IO QPs of the various NVME devices to share the overall storage resources.
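The following C sketch illustrates this binding: the device ID combined with the in-device QP ID forms the unique key, and a forward table plus a reverse table tie that key to an allocatable resource number. The table sizes, the 3-bit packing of the QP ID, and the function names are illustrative assumptions based on the 128-device / 8-QP example used later in this description.

```c
#include <stdint.h>

#define NUM_DEVICES    128u   /* assumed number of virtual NVME devices */
#define QPS_PER_DEVICE 8u     /* assumed number of IO QPs per device   */
#define NUM_RESOURCES  1024u  /* assumed number of QP storage resources */

/* Forward table: (device ID, in-device QP ID) -> storage resource number. */
static uint16_t res_of_qp[NUM_DEVICES][QPS_PER_DEVICE];
/* Reverse table: storage resource number -> packed (device ID, QP ID) key. */
static uint16_t key_of_res[NUM_RESOURCES];

/* The unique identifier is the device ID combined with the in-device QP ID;
 * with 8 QPs per device, 3 bits are enough for the QP ID. */
static inline uint16_t make_key(uint8_t dev_id, uint8_t qp_id)
{
    return (uint16_t)(((uint16_t)dev_id << 3) | qp_id);
}

/* Bind one IO QP to an allocatable resource number, in both directions. */
void bind_qp(uint8_t dev_id, uint8_t qp_id, uint16_t res_no)
{
    res_of_qp[dev_id][qp_id] = res_no;
    key_of_res[res_no]       = make_key(dev_id, qp_id);
}
```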

The embodiments of the present disclosure provide a resource number allocation management mechanism that divides and reasonably numbers the existing limited storage resources. Each individual queue corresponds to one numbered storage resource, and based on this resource the queue carries out certain services and functions of the corresponding NVME device. For an independent NVME device that creates multiple queues, multiple storage resources with different numbers are correspondingly allocated to that NVME device.

S305: Perform service and function management on all internal storage resource numbers and the corresponding resources, and use the mutual mapping between the storage resource numbers and the device ID plus QP ID (device ID + QP ID) to control the interaction of storage commands and data between the host and each NVME device.

Through the strategy of binding the device ID and queue ID to the storage resource number, a queue can be bound to and mapped onto a storage resource number, so that when an NVME device interacts with the host, the actual device and the detailed parameters of the actual queue can be found accurately and quickly, and the storage interaction of instructions and data with the host can be completed.

S306:对于所有NVME设备的任何一个IO QP,IO QP被删除时,则将IO QP对应的存储资源编号进行回收,可以将编号存放到硬件缓存中。S306: For any IO QP of all NVME devices, when the IO QP is deleted, the storage resource number corresponding to the IO QP will be recycled, and the number can be stored in the hardware cache.

During implementation, when all 1024 storage resource numbers in the above example have been allocated and some IO QPs need to be deleted, some resource numbers and storage resources are recycled, after which the resource numbers can be reallocated. When the resource numbers have not all been allocated but some IO QPs have already been deleted, the recycled resource numbers can first be saved and all data information of the storage resources corresponding to those numbers cleared.

S307:对于后续新创建的IO QP,可以优先分配未被分配的资源编号和存储资源。当上述示例中的1024个资源编号全部被分配完时,可以将缓存中存有的已回收的资源编号按照被回收的先后顺序再重新分配给新创建的IO QP。S307: For subsequent newly created IO QPs, unallocated resource numbers and storage resources can be allocated first. When all 1024 resource numbers in the above example have been allocated, the recycled resource numbers stored in the cache can be reallocated to the newly created IO QP in the order in which they are recycled.
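A minimal C sketch of such an allocator is shown below: fresh numbers are handed out first, and once all of them have been used, recycled numbers are reissued in the order in which they were returned (first in, first out). The FIFO depth, function names, and the software ring standing in for the hardware cache are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_RESOURCES 1024u   /* assumed number of QP storage resource numbers */

/* Software ring standing in for the hardware FIFO cache of recycled numbers. */
static uint16_t recycled[NUM_RESOURCES];
static uint32_t fifo_head, fifo_tail, fifo_count;
static uint32_t next_fresh;   /* next never-assigned resource number */

/* Allocate a resource number for a newly created IO QP: prefer a fresh number,
 * otherwise reuse the oldest recycled one (first in, first out). */
bool alloc_res_no(uint16_t *out)
{
    if (next_fresh < NUM_RESOURCES) {
        *out = (uint16_t)next_fresh++;
        return true;
    }
    if (fifo_count > 0) {
        *out = recycled[fifo_head];
        fifo_head = (fifo_head + 1u) % NUM_RESOURCES;
        fifo_count--;
        return true;
    }
    return false;   /* every resource number is currently in use */
}

/* Recycle a number when its IO QP is deleted; the associated sub-resource
 * would also be cleared at this point. */
void free_res_no(uint16_t res_no)
{
    recycled[fifo_tail] = res_no;
    fifo_tail = (fifo_tail + 1u) % NUM_RESOURCES;
    fifo_count++;
}
```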

通过上述的资源循环回收分配机制,对NVME设备中被删除的队列的编号以及其资源进行回收,回收后可以将其编号及资源分配给后续新创建的队列,达到循环利用资源的目的。Through the above-mentioned resource recycling and allocation mechanism, the number of the deleted queue and its resources in the NVME device are recycled. After recycling, its number and resources can be allocated to subsequent newly created queues to achieve the purpose of recycling resources.

As can be seen from the above description, the embodiments of the present disclosure provide a dynamic sharing process for queue storage resources under multiple NVME devices. By establishing mechanisms for resource number allocation management and resource recycling and reallocation, together with the binding strategy of device ID, device queue ID, and resource number, the IO QPs of the virtual devices can be effectively managed under limited storage resources, so that storage services and functions can be implemented flexibly.

Based on the process shown in Figure 3 above, a detailed example process for an application scenario is given below in conjunction with Figure 4. The application scenario is a storage command and data interaction scenario in which 128 virtual NVME devices and 1024 IO QPs are implemented under a multi-NVME-device requirement.

参见图4,该执行指令方法包括如下流程。Referring to Figure 4, the instruction execution method includes the following processes.

S401:基于每个IO QP需要的资源量,将能满足128个虚拟NVME设备、1024个IO QP场景规模的存储资源总量,平均分成1024份。S401: Based on the amount of resources required by each IO QP, the total storage resources that can meet the scenario scale of 128 virtual NVME devices and 1024 IO QP will be divided into 1024 shares on average.

S402: Establish a numbering mechanism based on the unit IO QP storage resource (i.e., the resource to which an IO QP belongs). The 1024 IO QP resources are managed by number, numbered 0 to 1023, and each number corresponds to one IO QP storage resource.

S403:主机根据128个虚拟NVME设备呈现的IO能力,对每个NVME设备创建8个IO QP,共1024个QP,编号按照创建顺序编为QP0到QP1023。S403: The host creates 8 IO QPs for each NVME device based on the IO capabilities presented by the 128 virtual NVME devices, for a total of 1024 QPs. The numbers are QP0 to QP1023 in the order of creation.

在本示例中,为了描述方便,对每个NVME设备的IO QP数量进行了平均设置。在实际操作中,也可以基于每个NVME设备的业务能力和需求,为不同NVME设备创建不同数量的IO QP,以实现不同的业务需求。本公开对此不作限制。In this example, for the convenience of description, the number of IO QPs for each NVME device is set equally. In actual operation, different numbers of IO QPs can also be created for different NVME devices based on the business capabilities and needs of each NVME device to achieve different business needs. This disclosure does not limit this.

S404: Number the 128 NVME devices with device IDs 0 to 127. For each IO QP of an NVME device, there is a corresponding QP ID in the NVME protocol; with the QP IDs set to 0 to 7, each IO QP can correspond to a resource number. For example, QP3 of device 5 can be mapped to resource number 32.

在一些实施例中,可以构建资源编号、设备ID与QP ID之间对应关系的映射表。In some embodiments, a mapping table of correspondences between resource numbers, device IDs and QP IDs may be constructed.

S405: Control the interaction of storage commands and data between the host and the 128 NVME devices based on the mutual mapping between resource numbers 0 to 1023 and device ID + QP ID. In detail: look up the mapping table according to the device ID (0 to 127) + QP ID (0 to 7) to which the current instruction belongs, find the corresponding resource number, and obtain the storage resource information of that QP under the resource number. The storage resource information includes: the base address information of the SQ, the queue depth information of the SQ, the doorbell information of the SQ, the base address information of the CQ, the queue depth information of the CQ, the doorbell information of the CQ, the interrupt vector information of the CQ, and so on.

For example, when IO QP 3 of device 5 among the 128 NVME devices receives the SQ Doorbell sent by the host, its corresponding resource number, 32, can be found from the mapping table according to the device ID and queue ID "5+3". The SQ base address information corresponding to resource number 32 is then read out, and based on this SQ base address information the SQ Entry command of QP 3 of device 5 is read back from the host to the user side, after which subsequent operations such as SQ Entry command parsing and execution and storage data transfer are completed.
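The following C sketch walks through this doorbell-handling path under the same illustrative numbering: the (device ID, QP ID) pair is mapped to its resource number, the cached SQ base address is read, and the host address of the submission queue entry to fetch is computed. The 64-byte entry size follows the NVME submission queue entry format; all table and function names are assumptions, and the final DMA fetch is only represented by a printed address.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DEVICES    128u
#define QPS_PER_DEVICE 8u
#define NUM_RESOURCES  1024u
#define SQ_ENTRY_SIZE  64u    /* size of one NVME submission queue entry, in bytes */

/* Forward mapping table, assumed to have been filled in when the IO QPs were created. */
static uint16_t res_of_qp[NUM_DEVICES][QPS_PER_DEVICE];
/* Cached SQ base address of each numbered resource, part of the resource record. */
static uint64_t sq_base_of_res[NUM_RESOURCES];

/* Handle an SQ doorbell: map (device ID, QP ID) to the resource number, read the
 * cached SQ base address, and return the host address of the entry to fetch. */
static uint64_t sq_entry_addr(uint8_t dev_id, uint8_t qp_id, uint16_t sq_head)
{
    uint16_t res_no = res_of_qp[dev_id][qp_id];
    return sq_base_of_res[res_no] + (uint64_t)sq_head * SQ_ENTRY_SIZE;
}

int main(void)
{
    res_of_qp[5][3]    = 32;          /* QP 3 of device 5 is bound to resource number 32 */
    sq_base_of_res[32] = 0x100000;    /* illustrative SQ base address in host memory */

    /* The DMA engine would fetch the SQ entry from this address. */
    printf("fetch the SQ entry of device 5, QP 3 from 0x%llx\n",
           (unsigned long long)sq_entry_addr(5, 3, 0));
    return 0;
}
```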

S406: For any IO QP among the 1024 pairs of IO QPs corresponding to the 128 NVME devices, when the IO QP is deleted, the corresponding resource number is recycled, and the number can be stored in a hardware cache. For example, if IO QP6 of device 3 is to be deleted and its corresponding resource number is 10, the number 10 is stored in the FIFO (First In, First Out) cache.

S407:当1024个资源编号全部被分配完成时,若要新创建IO QP,则可以将FIFO缓存中最先放进去的编号10读出进行重新分配。S407: When all 1024 resource numbers are allocated, if you want to create a new IO QP, you can read out the number 10 that was first put in the FIFO cache and reallocate it.

The embodiments of the present disclosure establish a resource number allocation management and recycling-and-reallocation mechanism, together with a binding strategy of device ID, queue ID, and resource number, and thereby realize a dynamic sharing process for queue storage resources under multiple NVME devices. At the same time, the embodiments of the present disclosure can implement more NVME virtual devices with limited storage resources, and the flexible allocation of queue resources within each NVME device also enables the NVME storage system to support more storage services, which considerably improves the flexibility of the storage system: NVME devices with different IO capabilities can be supported flexibly, and the creation, operation, and removal of all devices can be managed flexibly.

基于相似的发明构思,本公开实施例还提供一种执行指令系统。该系统包括:服务器和至少一个虚拟设备(对应于上述的NVME设备),该至少一个虚拟设备包括执行指令装置。在实际操作中,执行指令装置可以设置于虚拟设备中,也可以设置于虚拟设备之外。 Based on similar inventive concepts, embodiments of the present disclosure also provide an execution instruction system. The system includes: a server and at least one virtual device (corresponding to the above-mentioned NVME device), and the at least one virtual device includes an instruction execution device. In actual operation, the instruction execution device may be installed in the virtual device or outside the virtual device.

Figure 5 is a structural block diagram of the system. As shown in Figure 5, the system 1 includes: an instruction execution device 10, a server 20, and a virtual device 30 (one virtual device is shown in Figure 5). In this figure, the instruction execution device is provided outside the virtual device.

在一些实施例中,为了实现多种存储业务,虚拟设备可以是多个,各虚拟设备分别对应于至少一个队列。In some embodiments, in order to implement multiple storage services, there may be multiple virtual devices, and each virtual device corresponds to at least one queue.

图6为根据一些实施例的执行指令装置10的结构框图,如图6所示,该执行指令装置10包括:请求获取单元101、地址确定单元102、指令获取单元103和执行单元104。Figure 6 is a structural block diagram of the instruction execution device 10 according to some embodiments. As shown in Figure 6, the instruction execution device 10 includes: a request acquisition unit 101, an address determination unit 102, an instruction acquisition unit 103 and an execution unit 104.

请求获取单元101,用于获取目标队列中的指令获取请求,所述指令获取请求用于指示获取目标指令。The request acquisition unit 101 is configured to acquire an instruction acquisition request in the target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction.

The address determination unit 102 is configured to determine, based on the instruction acquisition request and according to a preset mapping relationship, the instruction acquisition address corresponding to the target queue, the mapping relationship indicating the correspondence between the queue information of the target queue and the instruction acquisition address.

指令获取单元103,用于根据所述指令获取地址获取所述目标指令。The instruction acquisition unit 103 is configured to acquire the target instruction according to the instruction acquisition address.

执行单元104,用于根据所述目标指令执行与所述目标指令相应的操作。The execution unit 104 is configured to execute operations corresponding to the target instructions according to the target instructions.

In the embodiments of the present disclosure, after the request acquisition unit 101 acquires the instruction acquisition request of the target queue, the address determination unit 102 determines the instruction acquisition address corresponding to the target queue according to the mapping relationship, the instruction acquisition unit 103 acquires the target instruction according to the instruction acquisition address, and the execution unit 104 then executes the target instruction. In this way, through the preset mapping relationship, the queues and the resources they own can be effectively managed, virtual devices can be effectively managed under limited storage resources, storage service instructions can be executed well, and various storage service requirements can be met.

在一些实施例中,上述装置还包括:队列构建单元、存储空间划分单元和分配单元。In some embodiments, the above device further includes: a queue building unit, a storage space dividing unit and an allocation unit.

队列构建单元,用于根据业务需求构建至少一个队列。Queue building unit, used to build at least one queue according to business requirements.

存储空间划分单元,用于将存储空间划分为与所述至少一个队列相同数量的子存储空间。A storage space dividing unit is configured to divide the storage space into the same number of sub-storage spaces as the at least one queue.

分配单元,用于将各子存储空间分别分配给相应的队列,其中,所述子存储空间至少用于存储指令获取地址、指令结果存储地址。An allocation unit is used to allocate each sub-storage space to a corresponding queue, wherein the sub-storage space is at least used to store an instruction acquisition address and an instruction result storage address.

在一些实施例中,上述装置还包括:映射关系设置单元,该映射关系设置单元包括:对应关系获取模块和映射关系设置模块。In some embodiments, the above device further includes: a mapping relationship setting unit, which includes: a correspondence relationship acquisition module and a mapping relationship setting module.

对应关系获取模块,用于获取队列信息及其子存储空间信息的对应关系。The correspondence acquisition module is used to obtain the correspondence between the queue information and its sub-storage space information.

映射关系设置模块,用于根据获取的对应关系设置所述映射关系。A mapping relationship setting module is used to set the mapping relationship according to the obtained corresponding relationship.

在一个实施例中,上述装置还包括:执行结果发送单元。该执行结果发送单元用于将上述目标指令的执行结果发送至与子存储空间中的指令结果存储地址相应的位置。In one embodiment, the above device further includes: an execution result sending unit. The execution result sending unit is used to send the execution result of the above-mentioned target instruction to a location corresponding to the instruction result storage address in the sub-storage space.

在一些实施例中,上述装置还包括:存储空间释放单元。该存储空间释放单元用于响应于队列被删除,释放该队列的子存储空间。In some embodiments, the above device further includes: a storage space releasing unit. The storage space release unit is used to release the sub-storage space of the queue in response to the queue being deleted.

本公开实施例中的执行指令装置10可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统。本公开实施例不作具体限定。The instruction execution device 10 in the embodiment of the present disclosure may be a device with an operating system. The operating system can be an Android operating system, an ios operating system, or other possible operating systems. The embodiments of this disclosure are not specifically limited.

本公开实施例提供的指令执行装置10能够实现图1、图3和图4的方法实施例中实现的各个过程,并达到相同的技术效果,为避免重复,这里不再赘述。The instruction execution device 10 provided by the embodiment of the present disclosure can implement each process implemented in the method embodiments of FIG. 1, FIG. 3, and FIG. 4, and achieve the same technical effect. To avoid duplication, the details will not be described here.

Figure 7 is a structural block diagram of the above-mentioned server 20. As shown in Figure 7, the server 20 includes: a request sending unit 201, configured to send an instruction acquisition request to a target queue of the virtual device, where the instruction acquisition request is used to indicate that a target instruction is to be obtained, so that the virtual device obtains the target instruction based on a preset mapping relationship, the mapping relationship indicating the correspondence between the queue information of the target queue and the instruction acquisition address.

在一个实施例中,上述映射关系还指示所述目标队列所在的队列信息与指令结果存储地址的对应关系,所述服务器还包括:执行结果接收单元和执行结果发送单元。In one embodiment, the above mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address. The server further includes: an execution result receiving unit and an execution result sending unit.

执行结果接收单元,用于接收来自所述目标队列的所述目标指令的执行结果。An execution result receiving unit is configured to receive the execution result of the target instruction from the target queue.

执行结果发送单元,用于基于所述映射关系,将所述目标指令的执行结果发送至与所述指令结果存储地址相应的存储空间。An execution result sending unit is configured to send the execution result of the target instruction to a storage space corresponding to the instruction result storage address based on the mapping relationship.

在实际操作中,多NVME设备场景下每个NVME设备的控制器是独立存储的。即,每个NVME设备需占用独立完整的存储资源,来实现其与主机之间的存储流程交互。详细流程包括:NVME设备初始化、NVME ADMIN流程以及NVME IO流程等。以下结合图8和图9来详细描述本公开实施例的指令执行系统的工作原理。In actual operation, in a multi-NVME device scenario, the controller of each NVME device is stored independently. That is, each NVME device needs to occupy independent and complete storage resources to realize the storage process interaction between it and the host. The detailed process includes: NVME device initialization, NVME ADMIN process and NVME IO process, etc. The working principle of the instruction execution system of the embodiment of the present disclosure will be described in detail below with reference to FIG. 8 and FIG. 9 .

图8为根据一些实施例的指令执行系统的工作原理图。如图8所示,该示例系统包括:主机和用户存储控制系统。该主机具有上述服务器20的功能,用户存储控制系统具有上述执行指令装置10的功能。用户存储控制系统包括:命令响应及处理模块和队列资源动态管理模块。Figure 8 is a schematic diagram of the operation of an instruction execution system according to some embodiments. As shown in Figure 8, the example system includes: a host and a user storage control system. The host has the function of the above-mentioned server 20, and the user storage control system has the function of the above-mentioned execution instruction device 10. The user storage control system includes: command response and processing module and queue resource dynamic management module.

参见图8,该示例系统的工作原理包括:步骤1至3的NVME ADMIN流程和步骤4至6的NVME IO流程,以下描述各步骤。Referring to Figure 8, the working principle of this example system includes: the NVME ADMIN process from steps 1 to 3 and the NVME IO process from steps 4 to 6. Each step is described below.

Step 1: The host sends a message of the ADMIN (administration) queue, i.e., an ADMIN doorbell (Doorbell), which passes through the PCIE+DMA (Direct Memory Access) module and enters the command response and processing module within the user storage control system.

步骤2:命令响应及处理模块接收ADMIN门铃,该ADMIN门铃可以理解为ADMIN命令获取请求。命令响应及处理模块将对应队列的ADMIN命令获取请求发送给PCIE+DMA模块,由PCIE+DMA模块从主机取回ADMIN命令数据。Step 2: The command response and processing module receives the ADMIN doorbell, which can be understood as an ADMIN command acquisition request. The command response and processing module sends the ADMIN command acquisition request of the corresponding queue to the PCIE+DMA module, and the PCIE+DMA module retrieves the ADMIN command data from the host.

步骤3:取回的ADMIN命令进入命令响应及处理模块,由命令响应及处理模块进行解析,并通知队列资源动态管理模块进行QP硬件资源的编号、分配、回收等动态管理。Step 3: The retrieved ADMIN command enters the command response and processing module, which is parsed by the command response and processing module, and notifies the queue resource dynamic management module to perform dynamic management such as numbering, allocation, and recycling of QP hardware resources.

步骤4:主机发送IO QP队列的消息,即IO门铃(Doorbell),通过PCIE+DMA模块,进入用户存储控制系统内的命令响应及处理模块。Step 4: The host sends the message of the IO QP queue, that is, the IO doorbell (Doorbell), and enters the command response and processing module in the user storage control system through the PCIE+DMA module.

步骤5:命令响应及处理模块将对应队列的IO命令获取请求发送给PCIE+DMA模块,由PCIE+DMA模块从主机取回IO命令数据。Step 5: The command response and processing module sends the IO command acquisition request of the corresponding queue to the PCIE+DMA module, and the PCIE+DMA module retrieves the IO command data from the host.

Step 6: The retrieved IO command enters the command response and processing module. The command response and processing module interacts with the queue resource dynamic management module to realize the mapping between the QP resource number and the device ID + device QP ID, thereby realizing the storage IO command interaction between the user storage control system and the host.

In the above steps, in the NVME protocol, ADMIN commands are mainly used for the NVME device to present its capabilities to the host and for the creation of NVME device queues, among other things, while IO commands are mainly used to implement functions such as reading and writing storage data between the host and the NVME device.

Figure 9 is a schematic diagram of the working principle of the user storage control system shown in Figure 8. Referring to Figure 9, the command response and processing module in Figure 8 corresponds to: a queue command processing unit and a queue message response unit. The queue resource dynamic management module in Figure 8 corresponds to: a queue number mapping management component, a queue number allocation management component, and a queue resource recovery control unit. The working principle of these parts is described below.

Step 1: Through the PCIE+DMA component (i.e., the above-mentioned PCIE+DMA module), the host initializes each NVME controller in the user storage control system (each controller having the functions of the command response and processing module and the queue resource dynamic management module in Figure 8 above) and establishes multiple QPs (i.e., IO QPs), including storage submission queues (SQ) and storage completion queues (CQ). After the QPs are created, the Doorbell corresponding to each QP is sent to the corresponding NVME controller in the user storage control system and enters the queue message response unit of that NVME controller.

Step 2: When the queue message response unit receives a new queue message, it initiates command acquisition from the host (HOST, not shown in the figure) through the PCIE+DMA component, moves the command from the HOST back to the queue command processing unit in the controller through the PCIE+DMA component, and at the same time notifies the queue number allocation management component.

Step 3: The queue number allocation management component assigns a corresponding QP number to each created QP. For example, creating 1024 QPs corresponds to numbers 0 to 1023, and each number corresponds to a unique QP, i.e., a unique SQ and CQ. In addition, when a QP is created, a mapping table and a reverse mapping table between the QP resource number and the device ID + device QP ID are created in the queue number mapping management component at the same time.

During implementation, a corresponding queue execution flag bit register can be designed for each QP; when the QP is in the command execution state, the corresponding flag bit can be set to 1.
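A compact way to hold such per-QP execution flags is a bit array, as in the following C sketch; the register width and helper names are illustrative assumptions rather than the actual hardware register design.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_QPS 1024u   /* assumed number of IO QPs */

/* One execution flag bit per QP, packed into 32-bit words. */
static uint32_t qp_exec_flags[NUM_QPS / 32u];

/* Set or clear the flag bit of one QP; set to 1 while a command is being executed. */
void set_qp_executing(uint16_t qp_no, bool executing)
{
    uint32_t word = qp_no / 32u;
    uint32_t bit  = qp_no % 32u;

    if (executing)
        qp_exec_flags[word] |= (1u << bit);
    else
        qp_exec_flags[word] &= ~(1u << bit);
}

/* Query whether a QP is currently in the command execution state. */
bool qp_is_executing(uint16_t qp_no)
{
    return (qp_exec_flags[qp_no / 32u] >> (qp_no % 32u)) & 1u;
}
```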

Step 4: The queue command processing unit caches and looks up the QP information resources for detailed commands according to the queue execution flag bit registers of the QPs and the mapping table and reverse mapping table between the QP resource numbers and the device ID + device QP ID. For example, the base address of each SQ is cached; when an IO command (SQ entry) needs to be fetched from the host, the base address of that SQ can be taken from the SQ information cache in the corresponding QP resource cache, so that the corresponding command can be fetched from the corresponding address in the host. Likewise, the base address of each CQ can be cached; when the completion of a command (CQ entry) needs to be sent to the host, the base address of that CQ is taken from the CQ information cache in the corresponding QP resource cache, and the CQ entry can then be sent to the corresponding address in the host.

Step 5: When the queue command processing unit parses an ADMIN command for deleting a QP, the queue command processing unit notifies the queue resource recovery control unit to clear and zero out all storage resources of that QP and to recycle this resource for subsequent reuse.

The embodiments of the present disclosure adopt a mechanism of recycling and reusing QP storage resources, implemented by the queue resource recovery control unit, which performs number recycling into a cache, i.e., stores the recycled number in the cache. For example, when all 1024 numbers have been allocated by the queue number allocation management component and a new QP is then created, a number is taken out of the number recycling cache, and that number and the corresponding resource are allocated to the newly created QP, so that the newly created QP can perform normal storage services and functions.

In some embodiments, as shown in Figure 10, the embodiments of the present disclosure further provide an electronic device 1000. The electronic device 1000 includes a processor 1010 and a memory 1020, and the memory 1020 stores a program or instructions that can be run on the processor 1010. For example, when the electronic device 1000 is a terminal, the program or instructions, when executed by the processor 1010, implement each process of the above instruction execution method embodiments and can achieve the same technical effect. To avoid repetition, details are not described here again.

本公开实施例还提供一种可读存储介质。所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述执行指令方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。An embodiment of the present disclosure also provides a readable storage medium. The readable storage medium stores programs or instructions. When the program or instructions are executed by the processor, each process of the above-mentioned instruction execution method embodiment is implemented, and the same technical effect can be achieved. To avoid repetition, details will not be described here.

The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc. The readable storage medium includes a non-transitory computer-readable storage medium.

本公开实施例还提供了一种芯片,所述芯片包括处理器和通信接口。所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述指令执行方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。An embodiment of the present disclosure also provides a chip, which includes a processor and a communication interface. The communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement each process of the above instruction execution method embodiment, and can achieve the same technical effect. To avoid duplication, the details will not be described here.

Further, the embodiments of the present disclosure also provide a computer program product. The computer program product includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor. When the program or instructions are executed by the processor, each process of the above instruction execution method embodiments is implemented and the same technical effect can be achieved. To avoid repetition, details are not described here again.

It should be noted that, in this document, the terms "comprising", "including", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present disclosure is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in the reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Moreover, features described with reference to certain examples may be combined in other examples.

Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part that contributes to the existing technology, can be embodied in the form of a computer software product. The computer software product is stored in a storage medium (such as a ROM, RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present disclosure.

The embodiments of the present disclosure have been described above in conjunction with the accompanying drawings, but the present disclosure is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present disclosure, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present disclosure and the scope protected by the claims, all of which fall within the protection of the present disclosure.

Claims (15)

一种执行指令的方法,其中,所述方法应用于虚拟设备,并包括:A method of executing instructions, wherein the method is applied to a virtual device and includes: 获取目标队列中的指令获取请求,所述指令获取请求用于指示获取目标指令;Obtain the instruction acquisition request in the target queue, where the instruction acquisition request is used to indicate acquisition of the target instruction; 基于所述指令获取请求,根据预先设置的映射关系,确定与所述目标队列对应的指令获取地址,所述映射关系指示所述目标队列所在的队列信息与指令获取地址的对应关系;Based on the instruction acquisition request, determine the instruction acquisition address corresponding to the target queue according to a preset mapping relationship, the mapping relationship indicating the correspondence between the queue information where the target queue is located and the instruction acquisition address; 根据所述指令获取地址获取所述目标指令;Obtain the target instruction according to the instruction acquisition address; 根据所述目标指令执行与所述目标指令相应的操作。Execute operations corresponding to the target instructions according to the target instructions. 根据权利要求1所述的方法,其中,在获取目标队列中的指令获取请求之前,所述方法还包括:The method according to claim 1, wherein before acquiring the instruction acquisition request in the target queue, the method further includes: 根据业务需求构建一个或多个队列;Build one or more queues based on business needs; 将存储空间划分为与所述一个或多个队列相同数量的子存储空间;Divide the storage space into the same number of sub-storage spaces as the one or more queues; 将各子存储空间分别分配给相应的队列,其中,所述子存储空间至少用于存储指令获取地址、指令结果存储地址。Each sub-storage space is allocated to a corresponding queue, wherein the sub-storage space is at least used to store an instruction acquisition address and an instruction result storage address. 根据权利要求2所述的方法,其中,通过如下方式设置所述映射关系:The method according to claim 2, wherein the mapping relationship is set in the following manner: 获取队列信息及其子存储空间信息的对应关系;Obtain the correspondence between queue information and its sub-storage space information; 根据获取的对应关系设置所述映射关系。Set the mapping relationship according to the obtained correspondence relationship. 根据权利要求2所述的方法,其中,将各子存储空间分别分配给相应的队列之后,所述方法还包括:The method according to claim 2, wherein after allocating each sub-storage space to a corresponding queue, the method further includes: 响应于队列被删除,释放该队列的子存储空间。In response to the queue being deleted, the queue's substorage space is released. 根据权利要求2所述的方法,其中,根据所述目标指令执行与所述目标指令相应的操作之后,所述方法还包括:The method according to claim 2, wherein after performing an operation corresponding to the target instruction according to the target instruction, the method further includes: 将所述目标指令的执行结果发送至与所述指令结果存储地址相应的存储空间。The execution result of the target instruction is sent to the storage space corresponding to the storage address of the instruction result. 一种执行指令的方法,其中,所述方法应用于服务器,并包括:A method of executing instructions, wherein the method is applied to a server and includes: 将指令获取请求发送至虚拟设备的目标队列,所述指令获取请求用于指示获取目标指令,以便于虚拟设备基于预先设置的映射关系获取所述目标指令,所述映射关系指示所述目标队列所在的队列信息与指令获取地址的对应关系。Send an instruction acquisition request to the target queue of the virtual device. The instruction acquisition request is used to instruct the acquisition of the target instruction, so that the virtual device obtains the target instruction based on a preset mapping relationship. The mapping relationship indicates where the target queue is located. The correspondence between the queue information and the instruction acquisition address. 
根据权利要求6所述的方法,其中,所述映射关系还指示所述目标队列所在的队列信息与指令结果存储地址的对应关系,所述方法还包括:The method according to claim 6, wherein the mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address, and the method further includes: 接收来自所述目标队列的所述目标指令的执行结果;receiving an execution result of the target instruction from the target queue; 基于所述映射关系,将所述目标指令的执行结果发送至与所述指令结果存储地址相应的存储空间。Based on the mapping relationship, the execution result of the target instruction is sent to the storage space corresponding to the instruction result storage address. 一种执行指令的装置,包括:A device for executing instructions, including: 请求获取单元,用于获取目标队列中的指令获取请求,所述指令获取请求用于指示获取目标指令;A request acquisition unit, used to acquire an instruction acquisition request in the target queue, where the instruction acquisition request is used to indicate acquisition of the target instruction; 地址确定单元,用于基于所述指令获取请求,根据预先设置的映射关系,确定与所述目标队列对应的指令获取地址,所述映射关系指示所述目标队列所在的队列信息与指令获取地址的对应关系;An address determination unit configured to determine an instruction acquisition address corresponding to the target queue based on the instruction acquisition request and according to a preset mapping relationship, the mapping relationship indicating the relationship between the queue information where the target queue is located and the instruction acquisition address. Correspondence; 指令获取单元,用于根据所述指令获取地址获取所述目标指令;An instruction acquisition unit, configured to acquire the target instruction according to the instruction acquisition address; 指令执行单元,用于根据所述目标指令执行与所述目标指令相应的操作。An instruction execution unit is configured to execute operations corresponding to the target instruction according to the target instruction. 根据权利要求8所述的装置,其中,所述装置还包括:The device of claim 8, further comprising: 队列构建单元,用于根据业务需求构建至少一个队列; A queue building unit used to build at least one queue according to business requirements; 存储空间划分单元,用于将存储空间划分为与所述至少一个队列相同数量的子存储空间;A storage space dividing unit, configured to divide the storage space into the same number of sub-storage spaces as the at least one queue; 分配单元,用于将各子存储空间分别分配给相应的队列,其中,所述子存储空间至少用于存储指令获取地址、指令结果存储地址。An allocation unit is used to allocate each sub-storage space to a corresponding queue, wherein the sub-storage space is at least used to store an instruction acquisition address and an instruction result storage address. 根据权利要求9所述的装置,其中,所述装置还包括:映射关系设置单元,The device according to claim 9, wherein the device further includes: a mapping relationship setting unit, 所述映射关系设置单元包括:The mapping relationship setting unit includes: 对应关系获取模块,用于获取队列信息及其子存储空间信息的对应关系;The correspondence acquisition module is used to obtain the correspondence between queue information and its sub-storage space information; 映射关系设置模块,用于根据获取的对应关系设置所述映射关系。A mapping relationship setting module is used to set the mapping relationship according to the obtained corresponding relationship. 一种服务器,包括:A server that includes: 请求发送单元,用于将指令获取请求发送至虚拟设备的目标队列,所述指令获取请求用于指示获取目标指令,以便于虚拟设备基于预先设置的映射关系获取所述目标指令,所述映射关系指示所述目标队列所在的队列信息与指令获取地址的对应关系。A request sending unit, configured to send an instruction acquisition request to the target queue of the virtual device. The instruction acquisition request is used to instruct the acquisition of the target instruction, so that the virtual device obtains the target instruction based on a preset mapping relationship. The mapping relationship Indicates the correspondence between the queue information where the target queue is located and the instruction acquisition address. 
根据权利要求11所述的服务器,其中,所述映射关系还指示所述目标队列所在的队列信息与指令结果存储地址的对应关系,所述服务器还包括:The server according to claim 11, wherein the mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address, and the server further includes: 执行结果接收单元,用于接收来自所述目标队列的所述目标指令的执行结果;An execution result receiving unit, configured to receive the execution result of the target instruction from the target queue; 执行结果发送单元,用于基于所述映射关系,将所述目标指令的执行结果发送至与所述指令结果存储地址相应的存储空间。An execution result sending unit is configured to send the execution result of the target instruction to a storage space corresponding to the instruction result storage address based on the mapping relationship. 一种执行指令的系统,其中,所述系统包括:如权利要求11或12所述的服务器,以及至少一个虚拟设备,其中,所述至少一个虚拟设备包括如权利要求8-10中任一项所述的执行指令的装置。A system for executing instructions, wherein the system includes: the server according to claim 11 or 12, and at least one virtual device, wherein the at least one virtual device includes any one of claims 8-10 The device for executing instructions. 一种电子设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器运行时执行如权利要求1-5中任一项所述的方法,或6-7中任一项所述方法。An electronic device, including a processor and a memory, the memory stores programs or instructions that can be run on the processor, and the programs or instructions are executed when the processor is run according to any one of claims 1-5 The method described in item 6-7, or the method described in any one of 6-7. 一种可读存储介质,其中,所述可读存储介质上存储程序或指令,所述程序或指令被处理器运行时执行如权利要求1-5中任一项所述的方法,或6-7中任一项所述方法。 A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and when the program or instructions are run by a processor, the method according to any one of claims 1-5 is executed, or 6- The method described in any one of 7.
PCT/CN2023/114014 2022-08-26 2023-08-21 Method, apparatus, and system for executing instruction, and server Ceased WO2024041481A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211037817.7 2022-08-26
CN202211037817.7A CN117666925A (en) 2022-08-26 2022-08-26 Method, device, server and system for executing instruction

Publications (1)

Publication Number Publication Date
WO2024041481A1 true WO2024041481A1 (en) 2024-02-29

Family

ID=90012484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114014 Ceased WO2024041481A1 (en) 2022-08-26 2023-08-21 Method, apparatus, and system for executing instruction, and server

Country Status (2)

Country Link
CN (1) CN117666925A (en)
WO (1) WO2024041481A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119829142B (en) * 2024-12-27 2025-11-07 海光信息技术股份有限公司 Method and device for flushing a pipeline, processor and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292007A1 (en) * 2015-03-31 2016-10-06 Kabushiki Kaisha Toshiba Apparatus and method of managing shared resources in achieving io virtualization in a storage device
CN108628775A (en) * 2017-03-22 2018-10-09 华为技术有限公司 A kind of method and apparatus of resource management
US20190073160A1 (en) * 2016-05-26 2019-03-07 Hitachi, Ltd. Computer system and data control method
CN110275774A (en) * 2018-03-13 2019-09-24 三星电子株式会社 The mechanism of physical storage device resource is dynamically distributed in virtualized environment
CN111880750A (en) * 2020-08-13 2020-11-03 腾讯科技(深圳)有限公司 Disk read/write resource allocation method, device, device and storage medium
CN114281252A (en) * 2021-12-10 2022-04-05 阿里巴巴(中国)有限公司 Virtualization method and device for NVMe (network video recorder) device of nonvolatile high-speed transmission bus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292007A1 (en) * 2015-03-31 2016-10-06 Kabushiki Kaisha Toshiba Apparatus and method of managing shared resources in achieving io virtualization in a storage device
US20190073160A1 (en) * 2016-05-26 2019-03-07 Hitachi, Ltd. Computer system and data control method
CN108628775A (en) * 2017-03-22 2018-10-09 华为技术有限公司 A kind of method and apparatus of resource management
CN110275774A (en) * 2018-03-13 2019-09-24 三星电子株式会社 The mechanism of physical storage device resource is dynamically distributed in virtualized environment
CN111880750A (en) * 2020-08-13 2020-11-03 腾讯科技(深圳)有限公司 Disk read/write resource allocation method, device, device and storage medium
CN114281252A (en) * 2021-12-10 2022-04-05 阿里巴巴(中国)有限公司 Virtualization method and device for NVMe (network video recorder) device of nonvolatile high-speed transmission bus

Also Published As

Publication number Publication date
CN117666925A (en) 2024-03-08

Similar Documents

Publication Publication Date Title
US10534552B2 (en) SR-IOV-supported storage resource access method and storage controller and storage device
JP5510556B2 (en) Method and system for managing virtual machine storage space and physical hosts
CN107690622B (en) Method, device and system for implementing hardware accelerated processing
US11379265B2 (en) Resource management method, host, and endpoint based on performance specification
US20230342087A1 (en) Data Access Method and Related Device
CN102594660B (en) A kind of virtual interface exchange method, Apparatus and system
US12321635B2 (en) Method for accessing solid state disk and storage device
CN107894913A (en) Computer system and storage access device
EP3693853B1 (en) Method and device for scheduling acceleration resources, and acceleration system
CN108243118A (en) Method of forwarding packets and physical host
CN104239122B (en) A kind of virtual machine migration method and device
EP3506575B1 (en) Method and device for data transmission
US20180246772A1 (en) Method and apparatus for allocating a virtual resource in network functions virtualization network
CN114816741A (en) GPU resource management method, device, system and readable storage medium
CN104915302B (en) Data transmission processing method and data link
CN110019475B (en) Data persistence processing method, device and system
CN107003904A (en) A memory management method, device and system
WO2024041481A1 (en) Method, apparatus, and system for executing instruction, and server
CN116383127B (en) Inter-node communication method, inter-node communication device, electronic equipment and storage medium
CN104571934B (en) A kind of method, apparatus and system of internal storage access
CN114911411A (en) Data storage method and device and network equipment
CN111858035A (en) A kind of FPGA device allocation method, device, device and storage medium
CN104461705A (en) Service access method, storage controllers and cluster storage system
CN109167740B (en) A method and device for data transmission
CN110990122B (en) A virtual machine migration method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23856571

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202517016421

Country of ref document: IN

WWP Wipo information: published in national office

Ref document number: 202517016421

Country of ref document: IN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23856571

Country of ref document: EP

Kind code of ref document: A1