
WO2024041481A1 - Method, apparatus and system for executing an instruction, and server - Google Patents

Method, apparatus and system for executing an instruction, and server

Info

Publication number
WO2024041481A1
WO2024041481A1 (PCT/CN2023/114014)
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
queue
target
address
mapping relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/114014
Other languages
English (en)
Chinese (zh)
Inventor
戴书舟
廖志佳
余峰
鄢林
程欣
刘强军
王俊
郭成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Publication of WO2024041481A1 publication Critical patent/WO2024041481A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662: Virtualisation aspects
    • G06F3/0664: Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671: In-line storage system
    • G06F3/0673: Single storage device
    • G06F3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G06F9/546: Message passing systems or structures, e.g. queues
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/54: Indexing scheme relating to G06F9/54
    • G06F2209/548: Queue

Definitions

  • the present disclosure belongs to the field of storage technology, and specifically relates to a method, device, server and system for executing instructions.
  • NVME: Non-Volatile Memory Host Controller Interface.
  • Each NVME device is bound to the corresponding NVME controller, and each NVME device and its NVME controller perform storage interactions through multiple queues. Multiple NVME devices will correspond to a large number of NVME queues.
  • SQ: Submission Queue.
  • CQ: Completion Queue.
  • SQ and its corresponding CQ can be called a QP (Queue Pair).
  • Embodiments of the present disclosure provide a method for executing instructions, and the method is applied to a virtual device.
  • The method includes: obtaining an instruction acquisition request in a target queue, where the instruction acquisition request indicates that a target instruction is to be acquired; determining, based on the instruction acquisition request and according to a preset mapping relationship, the instruction acquisition address corresponding to the target queue, where the mapping relationship indicates the correspondence between the queue information of the target queue and the instruction acquisition address; acquiring the target instruction according to the instruction acquisition address; and performing the operation corresponding to the target instruction according to the target instruction.
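  • As a rough illustration of the control flow of these four steps, the following C sketch uses hypothetical names (fetch_addr_of_queue, instruction_memory, handle_request) that are not part of the disclosure; it models the mapping as a simple per-queue address table and is not an actual device implementation.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_QUEUES 8
#define INS_SIZE   64

/* The preset mapping relationship: queue number -> instruction acquisition address. */
static uint64_t fetch_addr_of_queue[NUM_QUEUES];
/* Stand-in for the memory that holds the target instructions. */
static uint8_t  instruction_memory[NUM_QUEUES * INS_SIZE];

static void execute(const uint8_t *instruction)          /* S140: perform the operation */
{
    printf("executing opcode 0x%02x\n", (unsigned)instruction[0]);
}

/* S110-S130: on an acquisition request for target_queue, look up the
 * instruction acquisition address from the mapping and fetch the instruction. */
static void handle_request(int target_queue)
{
    uint64_t addr = fetch_addr_of_queue[target_queue];    /* S120: mapping lookup */
    uint8_t  instruction[INS_SIZE];
    memcpy(instruction, &instruction_memory[addr], INS_SIZE);  /* S130: fetch */
    execute(instruction);                                       /* S140 */
}

int main(void)
{
    fetch_addr_of_queue[3] = 2 * INS_SIZE;   /* queue 3's instructions live here */
    instruction_memory[2 * INS_SIZE] = 0x01; /* e.g. a read command opcode       */
    handle_request(3);
    return 0;
}
```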
  • Embodiments of the present disclosure also provide a method for executing instructions, and the method is applied to a server.
  • The method includes: sending an instruction acquisition request to a target queue of the virtual device, where the instruction acquisition request indicates that the target instruction is to be acquired, so that the virtual device acquires the target instruction based on a preset mapping relationship, and the mapping relationship indicates the correspondence between the queue information of the target queue and the instruction acquisition address.
  • Embodiments of the present disclosure provide a device for executing instructions.
  • The device includes: a request acquisition unit, configured to acquire an instruction acquisition request in a target queue, the instruction acquisition request indicating that a target instruction is to be acquired; an address determination unit, configured to determine, based on the instruction acquisition request and according to a preset mapping relationship, the instruction acquisition address corresponding to the target queue, where the mapping relationship indicates the correspondence between the queue information of the target queue and the instruction acquisition address; an instruction acquisition unit, configured to acquire the target instruction according to the instruction acquisition address; and an instruction execution unit, configured to execute the operation corresponding to the target instruction according to the target instruction.
  • Embodiments of the present disclosure provide a server.
  • The server includes a request sending unit, configured to send an instruction acquisition request to a target queue of the virtual device, where the instruction acquisition request indicates that the target instruction is to be acquired, so that the virtual device acquires the target instruction based on a preset mapping relationship.
  • The mapping relationship indicates the correspondence between the queue information of the target queue and the instruction acquisition address.
  • Embodiments of the present disclosure provide a system for executing instructions.
  • The system includes the above-mentioned server and at least one virtual device, where the at least one virtual device includes the above-mentioned device for executing instructions.
  • An embodiment of the present disclosure provides an electronic device.
  • The electronic device includes a processor and a memory.
  • The memory stores programs or instructions that can be run on the processor.
  • When the programs or instructions are executed by the processor, the steps of the method described in the first aspect or the second aspect are implemented.
  • Embodiments of the present disclosure provide a readable storage medium. Programs or instructions are stored on the readable storage medium, and when the programs or instructions are run by a processor, the steps of the method described in the first aspect or the second aspect are implemented.
  • Figure 1 is a schematic flowchart of a method for executing instructions according to some embodiments
  • Figure 2 is yet another flowchart of a method for executing instructions according to some embodiments
  • Figure 3 is another schematic flowchart of a method for executing instructions according to some embodiments.
  • Figure 4 is another schematic flowchart of a method for executing instructions according to some embodiments.
  • Figure 5 is a structural block diagram of an execution instruction system according to some embodiments.
  • Figure 6 is a structural block diagram of a device for executing instructions according to some embodiments.
  • Figure 7 is a structural block diagram of a server according to some embodiments.
  • Figure 8 is a working principle diagram of an execution instruction system according to some embodiments.
  • Figure 9 is a working principle diagram of a user storage control system according to some embodiments.
  • Figure 10 is a schematic structural diagram of an electronic device according to some embodiments.
  • NVME devices have multi-queue characteristics, that is, multiple channels can be constructed between the host and the NVME device for command transmission and data interaction, and the carriers corresponding to these channels are queues.
  • the existence of multiple NVME device queues allows the host to use multiple cores or threads to submit commands and process command completion results in parallel.
  • NVME devices may have different queue requirements based on their own services, and with the development of cloud storage services, users have an increasing demand for the number of NVME devices. Each NVME device requires certain queue resources to achieve its business and functions. How to effectively manage multiple NVME devices under limited hardware conditions to meet various storage business requirements is an urgent problem that needs to be solved.
  • Embodiments of the present disclosure provide an instruction execution solution that, by configuring a mapping relationship between virtual device queues and queue storage resources, realizes dynamic management of virtual device queue storage resources, so that multiple virtual devices can be effectively managed under limited storage resources and various storage business needs can be better met.
  • the instruction execution scheme of the embodiments of the present disclosure can be applied to virtual devices in the storage field.
  • the virtual device is described in detail by taking an NVME device as an example.
  • Figure 1 is a schematic flowchart of a method for executing instructions according to some embodiments. As shown in Figure 1, the method may include the following S110 to S140.
  • the instruction acquisition request is used to indicate acquisition of the target instruction.
  • the queue implements instruction transmission between the NVME device and the server (corresponding to the above-mentioned host), and the queue may be called a QP queue.
  • the instruction here can be a storage service instruction, and the instruction acquisition request is used to instruct to obtain the target instruction from the server.
  • At least one queue is constructed according to business requirements. Generally speaking, multiple queues need to be built to achieve various storage business requirements. Subsequently, the storage space (ie, hardware resource, or storage resource) can be divided into the same number of sub-storage spaces as the constructed queues, and each sub-storage space is allocated to the corresponding queue respectively.
  • the storage space here can be the storage space corresponding to the server.
  • the storage space can be the storage space of the server or the plug-in storage space corresponding to the server.
  • This storage space can store target instructions and can also store other information.
  • the sub-storage space here is at least used to store the instruction acquisition address and the instruction result storage address.
  • the instruction acquisition address refers to the address where the instruction is stored, which can be called the SQ base address.
  • the instruction result storage address refers to the address where the instruction result is stored, which can be called the CQ base address.
  • For example, if 128 queues are constructed, the storage space needs to be divided into 128 sub-storage spaces, and each sub-storage space is allocated to a corresponding queue. In this way, each queue can transmit data to the server through its sub-storage space.
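  • As a minimal sketch of this equal division, the C fragment below models each sub-storage space as a small record holding the two addresses mentioned above; the record layout and field names are illustrative assumptions, not something specified by the disclosure.

```c
#include <stdio.h>
#include <stdlib.h>

#define NUM_QUEUES 128          /* one sub-storage space per constructed queue */

/* Assumed layout of one sub-storage space: it holds at least the instruction
 * acquisition address (SQ base) and the instruction result storage address
 * (CQ base). */
typedef struct {
    unsigned long long sq_base; /* instruction acquisition address    */
    unsigned long long cq_base; /* instruction result storage address */
} sub_space_t;

int main(void)
{
    /* Divide the queue storage space equally: one record per queue,
     * so sub-storage space q belongs to queue q. */
    sub_space_t *storage = calloc(NUM_QUEUES, sizeof(sub_space_t));
    if (storage == NULL)
        return 1;

    storage[3].sq_base = 0x1000; /* programmed when queue 3 is created */
    storage[3].cq_base = 0x2000;

    printf("%d sub-storage spaces of %zu bytes each\n",
           NUM_QUEUES, sizeof(sub_space_t));
    free(storage);
    return 0;
}
```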
  • In the following description, the storage space used for queues is collectively called storage resources, the sub-storage spaces are collectively called sub-resources, and the server is collectively called the host.
  • the storage resources can be divided into multiple sub-resources in an equal manner, or the sub-resources can be divided according to actual conditions, and the present disclosure does not place a limit on this.
  • the embodiments of the present disclosure are described in detail by taking the resource equalization method as an example.
  • The mapping relationship can be set in the following manner: after each queue is allocated the sub-resource to which it belongs, the correspondence between the queue information and the information of the sub-resource to which the queue belongs is obtained, and the mapping relationship is set according to the obtained correspondence.
  • Queues and sub-resources can be numbered respectively, and the correspondence can be represented by these numbers.
  • For example, queue 3 corresponds to resource number 32, which means that, among the multiple queues of the multiple virtual devices, resource number 32 is the exclusive resource of queue 3.
  • The sub-resource corresponding to queue 3 is then used to implement instruction transfer between queue 3 and the host.
  • the mapping relationship may also indicate the corresponding relationship between queue information (for example, queue number), virtual device to which the queue belongs, and sub-resource information (for example, resource number) to which the queue belongs.
  • queue information, virtual devices, and resources can be numbered respectively, and the corresponding relationships are expressed through numbering.
  • For example, queue 3 of virtual device 5 corresponds to resource number 32, indicating that, among the multiple virtual devices, resource number 32 is the exclusive storage resource of queue 3 of virtual device 5.
  • The storage resource corresponding to number 32 can then be used to implement instruction transfer between queue 3 of virtual device 5 and the host, as sketched below.
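  • The following C fragment is one possible in-memory representation of such a mapping, assuming (purely for illustration) the 128-device, 8-queue-per-device figures used later in this description; a table indexed by device and queue number returns the resource number, with -1 meaning that no resource is bound.

```c
#include <stdio.h>

#define NUM_DEVICES 128   /* example figures from the embodiments below */
#define QPS_PER_DEV 8
#define RES_UNBOUND (-1)

/* mapping[device][queue] -> resource number; RES_UNBOUND means not yet bound. */
static int mapping[NUM_DEVICES][QPS_PER_DEV];

int main(void)
{
    for (int d = 0; d < NUM_DEVICES; d++)
        for (int q = 0; q < QPS_PER_DEV; q++)
            mapping[d][q] = RES_UNBOUND;

    mapping[5][3] = 32;   /* queue 3 of virtual device 5 owns resource number 32 */
    printf("device 5, queue 3 -> resource %d\n", mapping[5][3]);
    return 0;
}
```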
  • the execution result of the target instruction can be sent to the storage space corresponding to the instruction result storage address in the sub-storage space.
  • the instruction result storage address in the sub-storage space indicates the detailed storage location used to store the instruction result.
  • the embodiment of the present disclosure determines the instruction acquisition address corresponding to the target queue according to the mapping relationship, and accordingly obtains the target instruction according to the instruction acquisition address and executes the target instruction.
  • queues and their associated resources can be effectively managed, virtual devices can be effectively managed under limited storage resources, storage service instructions can be better executed, and various storage service needs can be met.
  • When a queue is deleted, the queue's sub-resource is released. In this way, the released storage resource can be used for new queues built subsequently, further achieving effective management of limited storage resources and improving resource utilization.
  • embodiments of the present disclosure also provide a method for executing instructions, which method is applied to a server (corresponding to the above-mentioned host).
  • Figure 2 is a flow chart of a method of executing instructions applied to a server. As shown in Figure 2, the method includes S210.
  • S210 Send an instruction acquisition request to the target queue of the virtual device.
  • the instruction acquisition request is used to instruct the acquisition of the target instruction, so that the virtual device obtains the target instruction based on a preset mapping relationship.
  • The mapping relationship indicates the correspondence between the queue information of the target queue and the instruction acquisition address.
  • the above mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address.
  • the method also includes: receiving the execution result of the target instruction from the above-mentioned target queue; and based on the mapping relationship, sending the execution result of the target instruction to the storage space corresponding to the instruction result storage address. That is, the instruction result storage address in the sub-storage space indicates the detailed storage location used to store the instruction result.
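  • A minimal host-side sketch of this flow is given below; the doorbell write, the mapping record and the simulated host memory are hypothetical stand-ins introduced only for illustration and are not defined by the disclosure.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical per-queue mapping entry held on the host side. */
typedef struct {
    uint64_t instr_fetch_addr;   /* instruction acquisition address     */
    uint64_t result_store_addr;  /* instruction result storage address  */
} host_mapping_t;

static uint8_t        host_mem[4096];                /* simulated host memory */
static host_mapping_t queue3_map = { 0x0000, 0x0800 };

/* Send an instruction acquisition request: in NVME terms this corresponds to
 * a doorbell write telling the device that new SQ entries are available. */
static void send_acquisition_request(unsigned target_queue, unsigned new_sq_tail)
{
    printf("queue %u: acquisition request, SQ tail = %u\n", target_queue, new_sq_tail);
}

/* File the execution result received from the target queue at the storage
 * space given by the instruction result storage address in the mapping. */
static void store_execution_result(const void *result, size_t len)
{
    memcpy(&host_mem[queue3_map.result_store_addr], result, len);
}

int main(void)
{
    send_acquisition_request(3, 1);
    uint32_t status = 0;                             /* e.g. command completed OK */
    store_execution_result(&status, sizeof(status));
    return 0;
}
```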
  • the instruction execution method includes the following processes.
  • IO QP: input/output QP, that is, the queue described in S110. An IO QP can also be called a QP queue or simply a QP.
  • Through the IO QP, data transmission between the NVME device and the host can be achieved.
  • S301 A certain total storage resource is evenly divided into several parts. Since each pair of IO QP queues requires a certain amount of basic storage resources, the scale of the total storage resource can be calculated from the number of IO QP queues.
  • the number of sub-resources that need to be divided is related to the capabilities required by the current business. For example, some services require more NVME devices and require a total of 1024 pairs of IO QPs to complete; while some services require fewer NVME devices and only require 128 QPs to fully meet business needs.
  • a certain total storage resource can be divided into a total of 1024 sub-resources. Allocate each sub-resource to an IO QP as the storage resource required by the IO QP.
  • The IO QP storage resource is used to store: SQ base address information (corresponding to the above instruction acquisition address), SQ queue depth information, SQ doorbell information (SQ Doorbell), CQ base address information (corresponding to the above instruction result storage address), CQ queue depth information, CQ doorbell information, CQ interrupt vector information, and so on.
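  • Collected into a C structure, such a per-QP storage resource record might look as follows; the field widths are assumptions made for the sketch, since the disclosure does not specify sizes.

```c
#include <stdint.h>

/* One IO QP storage resource, holding the fields enumerated above.
 * Field widths are illustrative assumptions. */
typedef struct {
    uint64_t sq_base;        /* SQ base address (instruction acquisition address)   */
    uint16_t sq_depth;       /* SQ queue depth                                      */
    uint32_t sq_doorbell;    /* SQ doorbell (latest submission tail from the host)  */
    uint64_t cq_base;        /* CQ base address (instruction result storage address)*/
    uint16_t cq_depth;       /* CQ queue depth                                      */
    uint32_t cq_doorbell;    /* CQ doorbell (latest completion head from the host)  */
    uint16_t cq_irq_vector;  /* CQ interrupt vector                                 */
} qp_resource_t;

/* With 1024 IO QPs, the shared pool is simply an array of these records,
 * indexed by the resource number 0 to 1023. */
static qp_resource_t qp_resources[1024];
```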
  • S302 Establish a numbering mechanism for the resources to which the IO QPs belong (these can also be called QP storage resources). For example, the 1024 IO QP resources are numbered from 0 to 1023, and each number corresponds to one IO QP resource.
  • When an NVME device has multiple IO QP queues, the NVME device corresponds to each of its IO QPs and to their associated resource numbers.
  • S303 Assign a resource number to each IO QP according to the order in which each IO QP is created, and save the IO QP information data when each IO QP is created (for example, the device information to which the QP belongs, etc.).
  • S304 Number each NVME device; the number corresponds to the device ID (identification). For each IO QP of an NVME device there is a corresponding QP ID in the NVME protocol, and the device ID combined with the in-device QP ID (device ID + QP ID) is used as the queue's unique identifier, which is then bound to an allocatable storage resource number, thereby enabling the different IO QPs of each NVME device to share the overall storage resources. A sketch of this binding is given below.
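  • The C fragment below is one way such a binding could be kept, assuming for illustration 128 devices with 8 QPs each; the packed key and the two lookup arrays (forward and reverse) are hypothetical names, not part of the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DEVICES   128
#define QPS_PER_DEV   8
#define NUM_RESOURCES (NUM_DEVICES * QPS_PER_DEV)   /* 1024 */
#define UNBOUND       0xFFFF

/* Unique identifier of a queue: device ID combined with its in-device QP ID. */
static inline uint16_t qp_key(uint16_t device_id, uint16_t qp_id)
{
    return (uint16_t)(device_id * QPS_PER_DEV + qp_id);
}

/* Forward map: key -> resource number.  Reverse map: resource number -> key. */
static uint16_t res_of_key[NUM_RESOURCES];
static uint16_t key_of_res[NUM_RESOURCES];

static void bind(uint16_t device_id, uint16_t qp_id, uint16_t res_no)
{
    uint16_t k = qp_key(device_id, qp_id);
    res_of_key[k] = res_no;
    key_of_res[res_no] = k;
}

int main(void)
{
    for (int i = 0; i < NUM_RESOURCES; i++)
        res_of_key[i] = key_of_res[i] = UNBOUND;

    bind(5, 3, 32);                     /* QP 3 of device 5 -> resource 32 */
    printf("device 5 / QP 3 -> resource %u\n", (unsigned)res_of_key[qp_key(5, 3)]);
    return 0;
}
```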
  • This disclosed embodiment provides a resource number allocation management mechanism that divides and reasonably numbers the existing limited storage resources. Each individual queue corresponds to a numbered storage resource, and based on this resource the queue carries out certain services and functions of the corresponding NVME device. An independent NVME device can create multiple queues, and accordingly multiple storage resources with different numbers are allocated to that NVME device.
  • S305 Perform service and function management of all internal storage resource numbers and their corresponding resources, and use the mutual mapping between storage resource numbers and device ID + QP ID to control the storage commands and data interaction between the host and each NVME device.
  • In this way, the queue is bound and mapped to a storage resource number, so that when the NVME device and the host interact, the actual device and the detailed parameters of the actual queue can be found accurately and quickly, and the storage interaction of instructions and data with the host can be completed.
  • When a queue of an NVME device is deleted, the number of the deleted queue and its resources are recycled. After recycling, the number and resources can be allocated to subsequently created queues, achieving the purpose of resource reuse; a sketch of the release path is given below.
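  • The following C fragment sketches the release side of this recycling under the assumption of a simple LIFO recycle cache; the record layout and names are illustrative only (the complementary allocation side appears further below in the discussion of the queue number allocation component).

```c
#include <string.h>

#define NUM_RESOURCES 1024

/* Hypothetical per-resource record; wiped when its queue is deleted. */
typedef struct { unsigned long long sq_base, cq_base; } qp_resource_t;

static qp_resource_t resources[NUM_RESOURCES];

/* Recycle cache for released resource numbers (a simple LIFO stack). */
static int recycled[NUM_RESOURCES];
static int recycled_top;

/* Called when a queue is deleted: clear its storage resource and make the
 * number available again for queues created later. */
void release_qp_resource(int res_no)
{
    memset(&resources[res_no], 0, sizeof(resources[res_no]));
    recycled[recycled_top++] = res_no;
}
```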
  • the embodiments of the present disclosure provide a dynamic sharing process of queue storage resources under multiple NVME devices by establishing a mechanism for resource number allocation management, resource recycling and allocation, and a device ID, device queue ID, and resource number binding strategy.
  • IO QP in virtual devices can be effectively managed under limited storage resources, so that storage services and functions can be flexibly implemented.
  • the instruction execution method includes the following processes.
  • S402 Establish a numbering mechanism based on per-IO-QP storage resources (that is, the resource to which each IO QP belongs).
  • For example, the resources belonging to the 1024 IO QPs are numbered from 0 to 1023, and each number corresponds to one IO QP storage resource.
  • S403 The host creates 8 IO QPs for each NVME device based on the IO capabilities presented by the 128 virtual NVME devices, for a total of 1024 QPs.
  • the numbers are QP0 to QP1023 in the order of creation.
  • the number of IO QPs for each NVME device is set equally. In actual operation, different numbers of IO QPs can also be created for different NVME devices based on the business capabilities and needs of each NVME device to achieve different business needs. This disclosure does not limit this.
  • S404 Number 128 NVME devices.
  • the device IDs are 0 to 127.
  • The QP IDs within each device are set to 0 to 7.
  • In this way, each IO QP can correspond to a resource number.
  • For example, QP 3 of device 5 can be mapped to resource number 32.
  • A mapping table of the correspondences between resource numbers, device IDs and QP IDs may be constructed.
  • S405 Control the storage command and data interaction between the host and 128 NVME devices based on the mutual mapping between resource numbers 0 to 1023 and device ID + QP ID.
  • The detailed implementation is: search the mapping table according to the device ID (0 to 127) + QP ID (0 to 7) to which the current instruction belongs, find the corresponding resource number, and obtain the storage resource information associated with that resource number.
  • the storage resource information includes: SQ base address information, SQ queue depth information, SQ doorbell information, CQ base address information, CQ queue depth information, CQ doorbell information, CQ interrupt vector information, etc.
  • For example, when IO QP 3 of device 5 (among the 128 NVME devices) receives the SQ Doorbell sent from the host, its corresponding resource number can be found from the mapping table based on the device ID and queue ID "5 + 3"; the number is 32. The SQ base address information corresponding to resource number 32 is then read out, and based on this SQ base address the SQ Entry command of QP 3 of device 5 is read back from the host to the user side, so that the subsequent SQ Entry command parsing, execution, storage data transfer and other operations can be completed. A sketch of this lookup path follows.
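  • The C sketch below walks through exactly this doorbell-to-fetch path; the function and variable names, the 64-byte SQ entry size and the simulated host memory are assumptions made for the example, and the DMA read is replaced by a plain memcpy.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_DEVICES 128
#define QPS_PER_DEV 8
#define SQE_SIZE    64                      /* assumed NVME submission entry size */

typedef struct { uint64_t sq_base; uint16_t sq_depth; } qp_resource_t;

static int           map_to_res[NUM_DEVICES][QPS_PER_DEV]; /* (device, qp) -> resource no. */
static qp_resource_t resources[NUM_DEVICES * QPS_PER_DEV];
static uint8_t       host_mem[1 << 16];                    /* stand-in for host memory     */

/* Handle an SQ doorbell for (device_id, qp_id): find the resource number,
 * read the SQ base address, and fetch one SQ entry from the host. */
static int on_sq_doorbell(int device_id, int qp_id, uint32_t sq_index, uint8_t *sqe_out)
{
    int res_no = map_to_res[device_id][qp_id];
    if (res_no < 0)
        return -1;

    uint64_t addr = resources[res_no].sq_base + (uint64_t)sq_index * SQE_SIZE;
    memcpy(sqe_out, &host_mem[addr], SQE_SIZE);             /* DMA read in real hardware */
    return res_no;
}

int main(void)
{
    memset(map_to_res, -1, sizeof(map_to_res));
    map_to_res[5][3] = 32;                                  /* the example from the text */
    resources[32].sq_base  = 0x1000;
    resources[32].sq_depth = 16;

    uint8_t sqe[SQE_SIZE];
    printf("doorbell for device 5 / QP 3 -> resource %d\n",
           on_sq_doorbell(5, 3, 0, sqe));
    return 0;
}
```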
  • the disclosed embodiment realizes the dynamic sharing process of queue storage resources under multiple NVME devices by establishing a resource number allocation management and recycling allocation mechanism, as well as a device ID, queue ID and resource number binding strategy.
  • The embodiments of the present disclosure can implement more NVME virtual devices with limited storage resources, and the flexible allocation of queue resources in each NVME device also enables the NVME storage system to support more storage services. This improves the flexibility of the storage system, allows NVME devices with different IO capabilities to be supported flexibly, and allows the creation, operation and destruction of all devices to be managed flexibly.
  • embodiments of the present disclosure also provide an execution instruction system.
  • the system includes: a server and at least one virtual device (corresponding to the above-mentioned NVME device), and the at least one virtual device includes an instruction execution device.
  • the instruction execution device may be installed in the virtual device or outside the virtual device.
  • Figure 5 is a structural block diagram of the system. As shown in Figure 5, the system 1 includes: an instruction execution device 10, a server 20 and a virtual device 30. In this figure, the instruction execution device is provided outside the virtual device.
  • each virtual device corresponds to at least one queue.
  • Figure 6 is a structural block diagram of the instruction execution device 10 according to some embodiments. As shown in Figure 6, the instruction execution device 10 includes: a request acquisition unit 101, an address determination unit 102, an instruction acquisition unit 103 and an execution unit 104.
  • the request acquisition unit 101 is configured to acquire an instruction acquisition request in the target queue, where the instruction acquisition request is used to indicate acquisition of a target instruction.
  • the address determination unit 102 is configured to determine the instruction acquisition address corresponding to the target queue based on the instruction acquisition request and according to a preset mapping relationship, where the mapping relationship indicates the relationship between the queue information where the target queue is located and the instruction acquisition address. Correspondence.
  • the instruction acquisition unit 103 is configured to acquire the target instruction according to the instruction acquisition address.
  • the execution unit 104 is configured to execute operations corresponding to the target instructions according to the target instructions.
  • In this way, the address determination unit 102 determines the instruction acquisition address corresponding to the target queue according to the mapping relationship, the instruction acquisition unit 103 acquires the target instruction according to the instruction acquisition address, and the execution unit 104 then executes the target instruction.
  • the above device further includes: a queue building unit, a storage space dividing unit and an allocation unit.
  • Queue building unit used to build at least one queue according to business requirements.
  • a storage space dividing unit is configured to divide the storage space into the same number of sub-storage spaces as the at least one queue.
  • An allocation unit is used to allocate each sub-storage space to a corresponding queue, wherein the sub-storage space is at least used to store an instruction acquisition address and an instruction result storage address.
  • the above device further includes: a mapping relationship setting unit, which includes: a correspondence relationship acquisition module and a mapping relationship setting module.
  • the correspondence acquisition module is used to obtain the correspondence between the queue information and its sub-storage space information.
  • a mapping relationship setting module is used to set the mapping relationship according to the obtained corresponding relationship.
  • the above device further includes: an execution result sending unit.
  • the execution result sending unit is used to send the execution result of the above-mentioned target instruction to a location corresponding to the instruction result storage address in the sub-storage space.
  • the above device further includes: a storage space releasing unit.
  • the storage space release unit is used to release the sub-storage space of the queue in response to the queue being deleted.
  • the instruction execution device 10 in the embodiment of the present disclosure may be a device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this disclosure.
  • the instruction execution device 10 provided by the embodiment of the present disclosure can implement each process implemented in the method embodiments of FIG. 1, FIG. 3, and FIG. 4, and achieve the same technical effect. To avoid duplication, the details will not be described here.
  • FIG. 7 is a structural block diagram of the above-mentioned server 20.
  • The server 20 includes a request sending unit 201, configured to send an instruction acquisition request to a target queue of the virtual device.
  • The instruction acquisition request is used to indicate that a target instruction is to be acquired, so that the virtual device acquires the target instruction based on a preset mapping relationship, and the mapping relationship indicates the correspondence between the queue information of the target queue and the instruction acquisition address.
  • the above mapping relationship also indicates the corresponding relationship between the queue information where the target queue is located and the instruction result storage address.
  • the server further includes: an execution result receiving unit and an execution result sending unit.
  • An execution result receiving unit is configured to receive the execution result of the target instruction from the target queue.
  • An execution result sending unit is configured to send the execution result of the target instruction to a storage space corresponding to the instruction result storage address based on the mapping relationship.
  • In actual operation, in a multi-NVME-device scenario, the controller of each NVME device is stored independently. That is, each NVME device needs to occupy independent and complete storage resources to realize the storage process interaction between it and the host.
  • the detailed process includes: NVME device initialization, NVME ADMIN process and NVME IO process, etc.
  • the working principle of the instruction execution system of the embodiment of the present disclosure will be described in detail below with reference to FIG. 8 and FIG. 9 .
  • Figure 8 is a schematic diagram of the operation of an instruction execution system according to some embodiments.
  • the example system includes: a host and a user storage control system.
  • the host has the function of the above-mentioned server 20, and the user storage control system has the function of the above-mentioned execution instruction device 10.
  • the user storage control system includes: command response and processing module and queue resource dynamic management module.
  • the working principle of this example system includes: the NVME ADMIN process from steps 1 to 3 and the NVME IO process from steps 4 to 6. Each step is described below.
  • Step 1 The host sends the message of the ADMIN (management) queue, that is, the ADMIN Doorbell, through the PCIE+DMA (PCIE plus Direct Memory Access) module, and the message enters the command response and processing module within the user storage control system.
  • Step 2 The command response and processing module receives the ADMIN doorbell, which can be understood as an ADMIN command acquisition request.
  • the command response and processing module sends the ADMIN command acquisition request of the corresponding queue to the PCIE+DMA module, and the PCIE+DMA module retrieves the ADMIN command data from the host.
  • Step 3 The retrieved ADMIN command enters the command response and processing module, where it is parsed; the command response and processing module then notifies the queue resource dynamic management module to perform dynamic management of QP hardware resources, such as numbering, allocation and recycling.
  • Step 4 The host sends the message of the IO QP queue, that is, the IO Doorbell, which enters the command response and processing module in the user storage control system through the PCIE+DMA module.
  • Step 5 The command response and processing module sends the IO command acquisition request of the corresponding queue to the PCIE+DMA module, and the PCIE+DMA module retrieves the IO command data from the host.
  • Step 6 The retrieved IO command enters the command response and processing module.
  • The command response and processing module interacts with the queue resource dynamic management module to realize the mapping between the QP resource number and device ID + device QP ID, thereby realizing the storage IO command interaction between the user storage control system and the host.
  • The ADMIN command is mainly used to present the capabilities of the NVME device to the host and to create the NVME device queues, while the IO command is mainly used to implement the storage data read and write functions between the host and the NVME device, and so on.
  • Figure 9 is a schematic diagram of the working principle of the user storage control system shown in Figure 8.
  • the command response and processing module in Figure 8 corresponds to: the queue command processing unit and the queue message response unit.
  • The queue resource dynamic management module in Figure 8 corresponds to: the queue number mapping management component, the queue number allocation management component and the queue resource recycling control unit. How each of these parts works is described below.
  • Step 1 Through the PCIE+DMA component (i.e., the above-mentioned PCIE+DMA module), the host initializes each NVME controller in the user storage control system (each controller having the functions of the command response and processing module and the queue resource dynamic management module of Figure 8 above) and establishes multiple QPs (i.e., IO QPs), each including a storage submission queue (SQ) and a storage completion queue (CQ). After the QPs are created, the Doorbell corresponding to each QP is sent to the corresponding NVME controller in the user storage control system and enters the queue message response unit of that NVME controller.
  • Step 2 When the queue message response unit receives a new queue message, it can initiate command acquisition to the host (HOST, not shown in the figure) through the PCIE+DMA component, move the command back from the HOST through the PCIE+DMA component to the queue command processing unit in the controller, and notify the queue number allocation management component at the same time.
  • Step 3 Use the queue number allocation management component to assign a corresponding QP number to each created QP.
  • For example, creating 1024 QPs corresponds to numbers 0 to 1023, and each number corresponds to a unique QP, that is, a unique SQ and CQ. When a QP is created, a mapping table and a reverse mapping table between the QP resource number and device ID + device QP ID are also created in the queue number mapping management component.
  • In addition, a corresponding queue execution flag register can be designed for each QP, and the corresponding flag bit can be set to 1.
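  • One simple way to realize such a flag register is a per-QP bitmap, as in the hedged C fragment below; the packing into 64-bit words and the helper names are assumptions of the sketch, not something given by the disclosure.

```c
#include <stdint.h>

#define NUM_QPS 1024

/* Queue execution flag "register": one bit per QP, packed into 64-bit words. */
static uint64_t qp_exec_flags[NUM_QPS / 64];

static inline void set_qp_flag(unsigned qp)   { qp_exec_flags[qp / 64] |=  (1ULL << (qp % 64)); }
static inline void clear_qp_flag(unsigned qp) { qp_exec_flags[qp / 64] &= ~(1ULL << (qp % 64)); }
static inline int  test_qp_flag(unsigned qp)  { return (int)((qp_exec_flags[qp / 64] >> (qp % 64)) & 1); }
```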
  • Step 4 The queue command processing unit caches and looks up the QP information resources for specific commands based on the queue execution flag register of the QP and on the mapping table and reverse mapping table between the QP resource number and device ID + device QP ID. For example, the base address of each SQ is cached; when an IO command (SQ entry) needs to be obtained from the host, the SQ base address can be retrieved from the SQ information cache in the corresponding QP resource cache, so that the corresponding command can be fetched from the corresponding address on the host. Similarly, the base address of each CQ can be cached; when the completion status of a command (CQ entry) needs to be sent to the host, the CQ base address can be retrieved from the CQ information cache in the corresponding QP resource cache, and the CQ entry is sent to the corresponding address on the host.
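  • The completion-side half of this step might look like the following C fragment, which assumes a per-QP cache record and a 16-byte CQ entry and replaces the DMA write to the host with a memcpy into a simulated buffer; all names are illustrative.

```c
#include <stdint.h>
#include <string.h>

#define CQE_SIZE 16                         /* assumed CQ entry size            */

/* Per-QP cache of queue bases, filled from the QP storage resource record. */
typedef struct {
    uint64_t sq_base;                       /* used when fetching SQ entries    */
    uint64_t cq_base;                       /* used when posting CQ entries     */
    uint16_t cq_depth;
    uint16_t cq_tail;                       /* next CQ slot to write            */
} qp_cache_t;

static uint8_t host_mem[1 << 16];           /* stand-in for host memory         */

/* Post one completion (CQ entry): look up the cached CQ base address of this
 * QP and write the entry to the corresponding host address. */
void post_completion(qp_cache_t *qp, const uint8_t cqe[CQE_SIZE])
{
    if (qp->cq_depth == 0)
        return;                             /* queue not initialised            */
    uint64_t addr = qp->cq_base + (uint64_t)qp->cq_tail * CQE_SIZE;
    memcpy(&host_mem[addr], cqe, CQE_SIZE); /* DMA write in real hardware       */
    qp->cq_tail = (uint16_t)((qp->cq_tail + 1) % qp->cq_depth);
}
```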
  • Step 5 When the queue command processing unit parses an ADMIN command that deletes a QP, it notifies the queue resource recycling control unit to clear all storage resources of that QP and to recycle this resource for subsequent reuse.
  • the disclosed embodiment adopts a QP storage resource recycling mechanism, which is implemented by a queue resource recycling control unit.
  • This unit recycles resource numbers and stores the recycled numbers in a cache. For example, when all 1024 numbers have been allocated by the queue number allocation management component and a new QP is created, a number is taken out of the number recycling cache, and the number and its corresponding resources are allocated to the newly created QP.
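  • The allocation side of this mechanism, complementary to the release sketch given earlier, could be as simple as the following C fragment; the counter of fresh numbers and the recycle stack are assumptions of the sketch.

```c
#define NUM_RESOURCES 1024

static int next_fresh;                     /* next never-used number, 0..1023    */
static int recycled[NUM_RESOURCES];        /* numbers returned after QP deletion */
static int recycled_top;

/* Allocate a resource number for a newly created QP: hand out the fresh
 * numbers 0 to 1023 first; once they are exhausted, take a number from the
 * recycling cache; return -1 if nothing is available. */
int alloc_qp_number(void)
{
    if (next_fresh < NUM_RESOURCES)
        return next_fresh++;
    if (recycled_top > 0)
        return recycled[--recycled_top];
    return -1;
}
```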
  • the newly created QP can perform normal storage services and functions.
  • embodiments of the present disclosure also provide an electronic device 1000.
  • the electronic device 1000 includes a processor 1010 and a memory 1020.
  • The memory 1020 stores programs or instructions that can be run on the processor 1010. For example, when the electronic device 1000 is a terminal and the programs or instructions are executed by the processor 1010, each process of the above instruction execution method embodiments is realized and the same technical effect can be achieved. To avoid repetition, the details are not repeated here.
  • An embodiment of the present disclosure also provides a readable storage medium.
  • the readable storage medium stores programs or instructions.
  • When the programs or instructions are executed by the processor, each process of the above-mentioned instruction execution method embodiments is implemented, and the same technical effect can be achieved. To avoid repetition, the details are not described here.
  • the processor is the processor in the electronic device described in the above embodiment.
  • The readable storage medium includes computer-readable storage media, such as computer read-only memory (ROM), random access memory (RAM), magnetic disks or optical disks.
  • the readable storage media includes non-transitory computer-readable storage media.
  • An embodiment of the present disclosure also provides a chip, which includes a processor and a communication interface.
  • the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement each process of the above instruction execution method embodiment, and can achieve the same technical effect. To avoid duplication, the details will not be described here.
  • Embodiments of the present disclosure also provide a computer program product.
  • The computer program product includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor.
  • When the program or instructions are executed by the processor, each process of the above instruction execution method embodiments is implemented, and the same technical effect can be achieved. To avoid repetition, the details are not described here.
  • The methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • The technical solution of the present disclosure, in essence or in the part that contributes to the prior art, can be embodied in the form of a computer software product.
  • The computer software product is stored in a storage medium (such as a ROM, a RAM, a magnetic disk or an optical disk) and includes several instructions to cause a terminal (which can be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Method, apparatus and system for executing an instruction, and server. The method comprises: acquiring an instruction acquisition request in a target queue (S110), the instruction acquisition request being used to indicate acquisition of a target instruction; on the basis of the instruction acquisition request, determining an instruction acquisition address corresponding to the target queue according to a preset mapping relationship (S120), the mapping relationship indicating a correspondence between queue information of the target queue and the instruction acquisition address; acquiring the target instruction according to the instruction acquisition address (S130); and executing an operation corresponding to the target instruction according to the target instruction (S140).
PCT/CN2023/114014 2022-08-26 2023-08-21 Procédé, appareil et système d'exécution d'instruction, et serveur Ceased WO2024041481A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211037817.7A CN117666925A (zh) 2022-08-26 2022-08-26 执行指令的方法、装置、服务器及系统
CN202211037817.7 2022-08-26

Publications (1)

Publication Number Publication Date
WO2024041481A1 true WO2024041481A1 (fr) 2024-02-29

Family

ID=90012484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114014 Ceased WO2024041481A1 (fr) 2022-08-26 2023-08-21 Procédé, appareil et système d'exécution d'instruction, et serveur

Country Status (2)

Country Link
CN (1) CN117666925A (fr)
WO (1) WO2024041481A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119829142B (zh) * 2024-12-27 2025-11-07 海光信息技术股份有限公司 清空流水线的方法、装置、处理器和电子设备


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292007A1 (en) * 2015-03-31 2016-10-06 Kabushiki Kaisha Toshiba Apparatus and method of managing shared resources in achieving io virtualization in a storage device
US20190073160A1 (en) * 2016-05-26 2019-03-07 Hitachi, Ltd. Computer system and data control method
CN108628775A (zh) * 2017-03-22 2018-10-09 华为技术有限公司 一种资源管理的方法和装置
CN110275774A (zh) * 2018-03-13 2019-09-24 三星电子株式会社 在虚拟化环境中动态分配物理存储设备资源的机制
CN111880750A (zh) * 2020-08-13 2020-11-03 腾讯科技(深圳)有限公司 磁盘读写资源的分配方法、装置、设备及存储介质
CN114281252A (zh) * 2021-12-10 2022-04-05 阿里巴巴(中国)有限公司 非易失性高速传输总线NVMe设备的虚拟化方法及设备

Also Published As

Publication number Publication date
CN117666925A (zh) 2024-03-08

Similar Documents

Publication Publication Date Title
US10534552B2 (en) SR-IOV-supported storage resource access method and storage controller and storage device
JP5510556B2 (ja) 仮想マシンのストレージスペースおよび物理ホストを管理するための方法およびシステム
CN107690622B (zh) 实现硬件加速处理的方法、设备和系统
US11379265B2 (en) Resource management method, host, and endpoint based on performance specification
CN111490949B (zh) 用于转发数据包的方法、网卡、主机设备和计算机系统
US20230342087A1 (en) Data Access Method and Related Device
CN102594660B (zh) 一种虚拟接口交换方法、装置及系统
US12321635B2 (en) Method for accessing solid state disk and storage device
EP3693853B1 (fr) Procédé et dispositif de planification de ressources d'accélération, et système d'accélération
CN104239122B (zh) 一种虚拟机迁移方法和装置
EP3506575B1 (fr) Dispositif et procédé de transmission de données
US20180246772A1 (en) Method and apparatus for allocating a virtual resource in network functions virtualization network
CN114816741A (zh) Gpu资源管理方法、装置、系统与可读存储介质
CN110019475B (zh) 数据持久化处理方法、装置及系统
CN104915302B (zh) 数据传输处理方法和数据传输器
CN107003904A (zh) 一种内存管理方法、设备和系统
WO2024041481A1 (fr) Procédé, appareil et système d'exécution d'instruction, et serveur
CN116383127B (zh) 节点间通信方法、装置、电子设备及存储介质
CN104571934B (zh) 一种内存访问的方法、设备和系统
CN114911411A (zh) 一种数据存储方法、装置及网络设备
CN111858035A (zh) 一种fpga设备分配方法、装置、设备及存储介质
CN109947676A (zh) 数据访问方法及装置
CN104461705A (zh) 一种业务访问的方法及存储控制器、集群存储系统
CN104123173A (zh) 一种实现虚拟机间通信的方法及装置
CN109167740B (zh) 一种数据传输的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23856571

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202517016421

Country of ref document: IN

WWP Wipo information: published in national office

Ref document number: 202517016421

Country of ref document: IN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23856571

Country of ref document: EP

Kind code of ref document: A1