
WO2025202802A1 - Virtual machine memory data migration method, device, computer program product, and storage medium - Google Patents

Virtual machine memory data migration method, device, computer program product, and storage medium

Info

Publication number
WO2025202802A1
WO2025202802A1 (PCT/IB2025/052367)
Authority
WO
WIPO (PCT)
Prior art keywords
page
memory
target virtual
virtual memory
page fault
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2025/052367
Other languages
French (fr)
Chinese (zh)
Inventor
陈梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloud Intelligence Assets Holding Singapore Private Ltd
Original Assignee
Cloud Intelligence Assets Holding Singapore Private Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloud Intelligence Assets Holding Singapore Private Ltd filed Critical Cloud Intelligence Assets Holding Singapore Private Ltd
Publication of WO2025202802A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Definitions

  • Cross-Reference
  • This disclosure claims priority to a Chinese patent application filed with the China Patent Office on March 27, 2024, with application number 202410362467.4, entitled “A Virtual Machine Memory Data Migration Method, Device, Computer Program Product, and Storage Medium,” the entire contents of which are incorporated herein by reference.
  • Technical Field: This disclosure relates to the field of cloud computing technology, and in particular to a virtual machine memory data migration method, device, computer program product, and storage medium.
  • Various aspects of the present disclosure provide a virtual machine memory data migration method, apparatus, computer program product, and storage medium for reducing the amount of physical memory occupied by a source host due to memory data migration.
  • Embodiments of the present disclosure provide a virtual machine memory data migration method, applicable to a virtual machine manager on a source host.
  • The method comprises: when performing a first migration of a target virtual memory page, detecting the page fault state of the target virtual memory page; in response to the target virtual memory page being in a page fault state, obtaining the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; and migrating the obtained memory data to a destination host.
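The three steps in the bullet above can be sketched in Python. Everything here is illustrative: `StubKernelInterface`, `first_migration`, and all of their members are hypothetical stand-ins for the disclosure's virtual machine manager and kernel interface, not real APIs.

```python
from dataclasses import dataclass

@dataclass
class PageStatus:
    faulted: bool

class StubKernelInterface:
    """Illustrative stand-in for the source host's kernel interface."""
    def __init__(self, resident, swapped):
        self.resident = resident   # page id -> data held in mapped RAM
        self.swapped = swapped     # page id -> data swapped out to swap space
        self.recoveries = 0        # count of page-fault recovery operations

    def query_page_fault_status(self, page_id):
        # A page is "faulted" when no physical memory page is mapped to it.
        return PageStatus(faulted=page_id not in self.resident)

    def read_without_fault(self, page_id):
        # Read swapped-out data directly, or return empty data for a page
        # that was never allocated -- no physical page is allocated here,
        # so self.recoveries stays at zero on this path.
        return self.swapped.get(page_id, b"")

    def read_mapped(self, page_id):
        return self.resident[page_id]

def first_migration(page_id, kernel, destination):
    status = kernel.query_page_fault_status(page_id)  # detect fault state
    if status.faulted:
        data = kernel.read_without_fault(page_id)     # no recovery triggered
    else:
        data = kernel.read_mapped(page_id)            # normal mapped read
    destination[page_id] = data                       # "migrate" the data
```

A run over one resident page, one swapped page, and one never-allocated page migrates all three while the recovery counter stays at zero, which is the point of the scheme.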
  • The present disclosure also provides a method for migrating virtual machine memory data, applicable to a kernel interface in a source host that supports memory data migration.
  • The method comprises: receiving a page fault status query instruction initiated by a virtual machine manager in the source host when performing a first migration of a target virtual memory page; generating page fault status description information for the target virtual memory page based on the query instruction, wherein the description information indicates the page fault state of the target virtual memory page; and providing the description information to the virtual machine manager, so that upon detecting that the target virtual memory page is in a page fault state, the virtual machine manager obtains the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation and migrates the obtained memory data to the destination host.
  • The present disclosure also provides a computing device comprising a memory, a processor, and a communication component.
  • The memory is configured to store one or more computer instructions.
  • The processor is coupled to the memory and the communication component and is configured to execute the aforementioned virtual machine memory data migration method.
  • The present disclosure also provides a computer-readable storage medium storing a computer program. When the computer program is executed by one or more processors, it causes the one or more processors to execute the aforementioned virtual machine memory data migration method.
  • The present disclosure also provides a computer program product comprising a computer program. When the computer program is executed by one or more processors, it causes the one or more processors to execute the aforementioned virtual machine memory data migration method.
  • The present disclosure also provides a computer program product comprising a non-volatile computer-readable storage medium storing a computer program. When the computer program is executed by a processor, it implements the aforementioned virtual machine memory data migration method.
  • Virtual machine migration refers to the process of migrating a virtual machine from a source host to a destination host. Virtual machine migration is categorized into hot migration and cold migration. Hot migration involves migrating a running virtual machine from a source host to a destination host. The downtime required for the virtual machine during hot migration is typically minimal, making it virtually imperceptible to users.
  • Cold migration involves migrating a stopped virtual machine from a source host to a destination host.
  • Cold migration requires a relatively long downtime for the virtual machine.
  • Memory data migration involves migrating the virtual machine's memory data from the source host to the destination host.
  • Cold migration typically does not involve memory data migration, since it requires shutting the virtual machine down and memory is volatile.
  • Hot migration does not require shutting the virtual machine down. The virtual machine's memory data must therefore be migrated fully and correctly to the destination host to ensure the virtual machine operates properly there after the migration completes.
  • Memory virtualization is a virtualization technology that allows physical memory to be expanded into a larger logical memory space, allowing programs to access more memory resources.
  • The inventors discovered that, currently, during virtual machine migration the virtual machine manager on the source host traverses the virtual memory pages occupied by the virtual machine and indiscriminately initiates memory access requests to them. A memory access request to a virtual memory page that is in a page fault state triggers a page fault recovery operation in the source host's operating system, and that operation occupies physical memory on the source host. Therefore, as described in the background, memory data migration can cause significant fluctuations in the source host's physical memory usage. The inventors also found that virtual machine migration is often prompted by insufficient physical memory on the source host; migrating some virtual machines out is intended to alleviate exactly that shortage.
  • The full first-migration phase can be understood as the process of traversing the virtual machine's full memory data and migrating it from the source host to the destination host after migration begins.
  • The dirty-page migration phase can be understood as follows: after a virtual memory page completes its first migration, the virtual machine is still running normally on the source host, and operations inside the virtual machine may change the memory data behind some virtual memory pages. Virtual memory pages whose memory data has changed are dirty pages and must be migrated to the destination host.
  • The dirty-page migration phase may require multiple rounds of migration, continuing until the latest round is detected to have taken less than a specified time threshold; once that round completes, the dirty-page migration phase ends.
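Under the assumption that the duration of each round can be measured, the round loop just described can be sketched as below; `run_round` and `time_threshold` are hypothetical names, not terms from the disclosure.

```python
def dirty_page_phase(run_round, time_threshold):
    """Sketch of the multi-round dirty-page phase.

    run_round() migrates one round of dirty pages and returns the
    round's measured duration. The phase ends after the first round
    that completes in under the threshold (by then, few pages are
    still being dirtied, so the final catch-up copy is short).
    """
    rounds = 0
    while True:
        duration = run_round()
        rounds += 1
        if duration < time_threshold:
            return rounds
```

As the dirty set shrinks round over round, durations typically fall until one round crosses under the threshold and the loop exits.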
  • This embodiment proposes improvements to the aforementioned full first-migration phase to reduce the physical memory usage that phase causes. It is worth noting that during the full first-migration phase, a first migration must be performed for the virtual memory pages occupied by the virtual machine. For ease of description, this embodiment uses the target virtual memory page as an example to explain the first-migration process in detail. It should be understood that the virtual machine memory data migration method provided in this embodiment applies to any virtual memory page occupied by the virtual machine to be migrated. Referring to FIG. 1, step 100 proposes that the page fault state of the target virtual memory page be detected during its first migration.
  • FIG. 3 is a logical diagram of an optional implementation of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure.
  • An exemplary implementation of step 100 may be: the virtual machine manager initiates a page fault status query instruction for the target virtual memory page to a kernel interface of the source host that supports memory data migration.
  • The page fault status query instruction triggers the kernel interface to return page fault status description information corresponding to the target virtual memory page; in response to the description information indicating that the target virtual memory page is in a page fault state, the virtual machine manager determines that the target virtual memory page is in a page fault state.
  • The source host's physical memory is managed in kernel mode, so memory data migration requires support from certain interfaces in the source host's kernel mode. In this embodiment, these interfaces are described as kernel interfaces on the source host that support memory data migration. The virtual machine manager cooperates with these kernel interfaces to complete memory data migration.
  • The kernel interfaces used to support memory data migration can be kernel interfaces provided by the source host's operating system, such as the memory manager (Memory Management, MM).
  • The virtual machine manager can issue the page fault status query instruction in the instruction format agreed upon with the kernel interface, and the kernel interface can use that format to determine whether a received instruction is a page fault status query instruction.
  • The agreed format may, for example, carry a special identifier or field in the instruction, which is not limited here.
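A minimal sketch of such an agreed-upon instruction format follows. The layout (a 32-bit identifier followed by a 32-bit page number, little-endian) and the magic value are assumptions for illustration only; the disclosure leaves the format open.

```python
import struct

# Hypothetical agreed format: [32-bit identifier][32-bit page number].
PFQ_MAGIC = 0x50465131  # assumed special identifier marking a query

def build_page_fault_query(page_number):
    """VMM side: pack a page fault status query instruction."""
    return struct.pack("<II", PFQ_MAGIC, page_number)

def is_page_fault_query(instruction):
    """Kernel-interface side: recognize the query by its format alone."""
    if len(instruction) != 8:
        return False
    magic, _page = struct.unpack("<II", instruction)
    return magic == PFQ_MAGIC
```

The kernel side never needs out-of-band signaling: the special identifier field inside the instruction is enough to classify it.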
  • After the first migration of the target virtual memory page completes, if the virtual machine manager detects that the page has become dirty, it can obtain the dirty page data from the physical memory page mapped to the target virtual memory page and then migrate that dirty page data to the destination host.
  • A dirty page is a modified memory page. It should be understood that, while managing a virtual machine, the virtual machine manager monitors memory access requests initiated by the virtual machine for virtual memory pages. As mentioned above, when the target virtual memory page is in a page fault state, such a memory access request triggers a page fault recovery operation that allocates a physical memory page for the target virtual memory page, so that the corresponding memory data can be stored in that physical memory page.
  • The page fault recovery operation in this case is caused by a memory access request initiated by the virtual machine itself.
  • Such a memory access request is required for the normal operation of the virtual machine.
  • The resulting occupation of physical memory pages on the source host is therefore reasonable and is not interfered with by this embodiment. Moreover, once the virtual machine manager observes that the target virtual memory page has become dirty, the page can no longer be in a page fault state. The virtual machine manager can therefore initiate a normal memory access request for the target virtual memory page, obtain the corresponding dirty page data, and migrate it to the destination host.
  • Consequently, the virtual machine manager does not cause any incremental usage of physical memory pages on the source host when migrating dirty page data.
  • This embodiment improves the memory data migration scheme used during virtual machine migration by detecting the page fault state of each virtual memory page of the virtual machine during the first migration.
  • For virtual memory pages in a page fault state, the virtual machine manager on the source host retrieves the corresponding memory data without triggering a page fault recovery operation. This prevents page fault recovery operations from occupying the source host's physical memory, thereby reducing the amount of source-host physical memory consumed by memory data migration. Consequently, memory data migration no longer causes significant fluctuations in the source host's physical memory usage, which avoids exacerbating memory-run problems on the source host.
  • In step 101, various data acquisition schemes can be used to acquire the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation.
  • The inventors observed that a virtual memory page may be in a page fault state for several reasons, and the actual storage location of the page's memory data varies with the cause. An exemplary data acquisition scheme therefore proposes that, for different page fault causes, matching access logic be used to reach the actual storage location and acquire the memory data.
  • The causes of a page fault may include, but are not limited to, memory swapping and delayed (lazy) memory allocation.
  • This data acquisition scheme proposes that, without any page fault recovery operation: in response to the memory data of the target virtual memory page having been swapped out to the source host's swap space, the memory data is retrieved from the swap space; and in response to the target virtual memory page not yet having been allocated a physical memory page on the source host, empty data is used as the memory data corresponding to the target virtual memory page.
  • This scheme thus covers the possible actual storage locations of the memory data for the different page fault causes: either the data has been swapped out to swap space, or no actual storage location has been allocated at all (i.e., no memory data exists for the virtual memory page). Several technical concepts involved are explained below.
  • Some virtual memory pages occupied by a virtual machine may not have been accessed by the virtual machine yet, so no physical memory pages are allocated for them. Only when the virtual machine first accesses such a virtual memory page does the host's kernel mode trigger allocation of a physical memory page. It follows that a virtual memory page with no physical memory allocated has no associated memory data.
  • The scheme also provides access logic for acquiring memory data from the different actual storage locations. If the memory data of the target virtual memory page resides in swap space, the virtual machine manager can retrieve it from the swap space.
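The cause-to-location dispatch described above can be sketched as a small selector. The cause labels, page size, and `read_from_swap` callback are all assumptions for illustration, not names from the disclosure.

```python
EMPTY_PAGE = b"\x00" * 4096  # assumed 4 KiB page size, for illustration

def acquire_without_recovery(cause, page_id, read_from_swap):
    """Choose access logic by page-fault cause (sketch).

    Data that was swapped out is read back from swap space; a page
    that never received physical memory has no data, so empty content
    is migrated in its place. Neither path allocates a physical page.
    """
    if cause == "swapped_out":
        return read_from_swap(page_id)
    if cause == "never_allocated":
        return EMPTY_PAGE
    raise ValueError(f"unknown page-fault cause: {cause}")
```

Keeping the dispatch explicit makes it easy to add further causes later without touching the migration loop itself.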
  • This embodiment provides a preferred implementation for obtaining the memory data corresponding to the target virtual memory page from the swap space.
  • In this preferred implementation, the virtual machine manager initiates a data acquisition request for the target virtual memory page to a kernel interface on the source host that supports memory data migration. The request triggers the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space, after which the virtual machine manager obtains that memory data from the kernel interface.
  • The virtual machine manager therefore does not need to access the swap space directly; it relies on the source host's kernel interface to read the required memory data from the swap space and then obtains that data from the kernel interface. Referring to FIG. 3, to make it easier for the virtual machine manager to obtain the required memory data from the kernel interface, this preferred implementation further proposes presetting a cache space to support data transfer between the virtual machine manager and the kernel interface.
  • The virtual machine manager allocates a cache address within the preset cache space for the target virtual memory page, and includes both the identifier of the target virtual memory page and the cache address in the data acquisition request. This triggers the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space and store it at that cache address within the preset cache space.
  • Using the page identifier carried in the request, the kernel interface can locate the memory data corresponding to the target virtual memory page in the swap space and read it accurately. The virtual machine manager can then read that memory data back from the cache address.
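The three-step cache handshake above can be sketched as follows; `PresetCache`, its slot layout, and `kernel_read_swap` are illustrative assumptions standing in for the shared cache space and the kernel interface's swap read.

```python
class PresetCache:
    """Sketch of the preset cache space shared by the virtual machine
    manager and the kernel interface (slot-per-page layout assumed)."""
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.free = list(range(num_slots))

    def alloc_address(self):
        return self.free.pop()

def fetch_via_cache(page_id, cache, kernel_read_swap):
    # 1. VMM allocates a cache address for the target page.
    addr = cache.alloc_address()
    # 2. The data acquisition request carries (page_id, addr); the
    #    kernel interface reads swap and stores at that address.
    cache.slots[addr] = kernel_read_swap(page_id)
    # 3. VMM reads the memory data back from the cache address.
    return cache.slots[addr]
```

The cache address doubles as the rendezvous point: the request names it, the kernel fills it, and the manager reads it, so no extra reply channel is needed for the payload.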
  • The virtual machine manager can parse the page fault status description information returned by the kernel interface for the target virtual memory page to obtain the page fault type: in response to the description information indicating that the page fault type is a memory swap type, it determines that the memory data corresponding to the target virtual memory page has been swapped out to the swap space; in response to the description information indicating that the page fault type is a delayed allocation type, it determines that the target virtual memory page has not yet been allocated a physical memory page.
  • Figure 4 is a schematic diagram of the structure of page fault status description information provided in an exemplary embodiment of the present disclosure.
  • The page fault status description information may include a page fault status identification field and a page fault type field. In response to the page fault status identification field taking a first value, the target virtual memory page is in a page fault state; in response to the field taking a second value, it is not in a page fault state.
  • For example, the first value may be 1 and the second value 0. Likewise, in response to the page fault type field taking a third value, the page fault type of the target virtual memory page is the memory swap type; in response to the field taking a fourth value, the page fault type is the delayed allocation type.
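A compact encoding of the two fields could look like the sketch below. The concrete bit layout (status in bit 0, type in the higher bits) and the type values are assumptions consistent with the example values in the text, not the layout of Figure 4 itself.

```python
# Assumed field values: bit 0 is the page fault status identification
# field (1 = faulted, 0 = not faulted); higher bits carry the type.
SWAP_TYPE, DELAYED_ALLOC_TYPE = 1, 2

def encode_status(faulted, fault_type=0):
    """Kernel side: pack the description info into one word."""
    return (1 if faulted else 0) | (fault_type << 1)

def decode_status(word):
    """VMM side: unpack the word back into its two fields."""
    return {"faulted": bool(word & 1), "fault_type": word >> 1}
```

Encoding both fields in one word keeps the reply to the query instruction fixed-size, which simplifies the VMM/kernel-interface protocol.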
  • In this way, the virtual machine manager can interact fully with the kernel interface on the source host that supports memory data migration, using it both as a reference for detecting page faults and as the means of retrieving memory data for faulted virtual memory pages.
  • This enables the virtual machine manager to detect virtual memory page faults more efficiently and, without triggering a page fault recovery operation, to obtain the memory data corresponding to a faulted virtual memory page more efficiently.
  • FIG. 5 is a flowchart of a virtual machine memory data migration method provided by another exemplary embodiment of the present disclosure. This method can be applied to a kernel interface in a source host that supports memory data migration.
  • The kernel interface retains its original memory-data-migration-related functions, such as responding to memory access requests for virtual memory pages.
  • Those original functions are not described in detail herein.
  • The processing logic shown in FIG. 5 is added to the kernel interface so that it can cooperate with the virtual machine manager, supporting the virtual machine manager in avoiding page fault recovery operations during memory data migration.
  • Generating the page fault status description information for a target virtual memory page may include: querying a page table; determining the page fault state of the target virtual memory page from the value of the presence flag bit associated with it; and generating the page fault status description information accordingly.
  • The page fault status description information may further indicate the page fault type.
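The kernel-side generation step can be sketched as below. The page-table entry layout (a `present` flag plus an optional `swap_slot`) is an assumption used to distinguish the two fault types; real page-table formats differ by architecture.

```python
def generate_status_description(page_table, page_id):
    """Kernel-side sketch: derive the page fault state from the
    presence flag of the page-table entry, and the fault type from
    whether a swap slot is recorded (entry layout is assumed)."""
    entry = page_table.get(page_id)
    if entry is not None and entry["present"]:
        return {"faulted": False, "fault_type": None}
    if entry is not None and entry.get("swap_slot") is not None:
        return {"faulted": True, "fault_type": "memory_swap"}
    # No entry (or no swap slot): the page was never allocated.
    return {"faulted": True, "fault_type": "delayed_allocation"}
```

The presence flag alone decides the fault state; the swap-slot record then separates "swapped out" from "never allocated", which is exactly the distinction the VMM needs to pick its access logic.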
  • FIG. 6 is a schematic diagram of the structure of a computing device provided by yet another exemplary embodiment of the present disclosure. As shown in FIG. 6, the computing device may be the source host of the virtual machine to be migrated.
  • The computing device may include a memory 60, a processor 61, and a communication component 62.
  • The processor 61 is coupled to the memory 60 and the communication component 62 and is configured to execute the computer program in the memory 60.
  • The processor 61 can execute the computer program in the memory 60 to implement a virtual machine manager on the source host. In this case, the processor 61 can be configured to: detect the page fault state of the target virtual memory page when performing the first migration of that page; in response to the target virtual memory page being in a page fault state, obtain the memory data corresponding to the page without triggering a page fault recovery operation; and migrate the obtained memory data to the destination host.
  • The processor 61 may be configured to: initiate a page fault status query instruction for the target virtual memory page to a kernel interface on the source host that supports memory data migration, wherein the query instruction triggers the kernel interface to return page fault status description information corresponding to the target virtual memory page; and, in response to the description information indicating that the target virtual memory page is in a page fault state, determine that the target virtual memory page is in a page fault state.
  • The page fault status description information may further indicate the page fault type corresponding to the target virtual memory page.
  • The processor 61 may be configured to: initiate a data acquisition request for the target virtual memory page to the kernel interface, wherein the request triggers the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space; and obtain that memory data from the kernel interface.
  • The processor 61 may further be configured to: in response to the target virtual memory page not being in a page fault state, acquire the memory data from the physical memory page mapped to the target virtual memory page. In an optional embodiment, the processor 61 may further be configured to: after completing the first migration of the target virtual memory page, in response to detecting that the page has become dirty, obtain the dirty page data from the physical memory page mapped to it and migrate that dirty page data to the destination host. In other designs, the processor 61 may execute a computer program in the memory 60 to implement a kernel interface on the source host that supports memory data migration.
  • The memory in FIG. 6 stores computer programs and can also store various other data to support operations on the computing platform, such as instructions for any application or method running on the platform, contact data, phone book data, messages, images, and videos.
  • The memory can be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disk.
  • The communication component in FIG. 6 is configured to facilitate wired or wireless communication between the device in which it is located and other devices.
  • That device can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/Long-Term Evolution (LTE), or 5G.
  • The embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, compact disc read-only memory (CD-ROM), and optical storage) containing computer-usable program code.
  • each process and/or block in the flowcharts and/or block diagrams, as well as combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions, executed by the processor of the computer or other programmable data processing device, produce means for implementing the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flow charts and/or one or more blocks in a block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing the computer or other programmable device to execute a series of operational steps to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flow charts and/or one or more blocks in a block diagram.
  • For virtual memory pages in a page fault state, the source host's virtual machine manager retrieves the corresponding memory data without triggering a page fault recovery operation. This prevents page fault recovery operations from consuming the source host's physical memory, reducing the amount of source-host physical memory consumed by memory data migration. Consequently, memory data migration no longer causes significant fluctuations in the source host's physical memory usage, alleviating the memory-run problem on the source host.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the present application provide a virtual machine memory data migration method, device, computer program product, and storage medium. The method comprises: when performing a first migration for each virtual memory page of a virtual machine, detecting the page fault state of the virtual memory page; for a virtual memory page in a page fault state, a virtual machine manager on the source host acquiring the corresponding memory data without triggering a page fault recovery operation; and migrating the acquired memory data to a destination host.

Description

Virtual Machine Memory Data Migration Method, Device, Computer Program Product, and Storage Medium

Cross-Reference: This disclosure claims priority to the Chinese patent application filed with the China Patent Office on March 27, 2024, with application number 202410362467.4 and entitled "A Virtual Machine Memory Data Migration Method, Device, Computer Program Product, and Storage Medium," the entire contents of which are incorporated herein by reference.

Technical Field: This disclosure relates to the field of cloud computing technology, and in particular to a virtual machine memory data migration method, device, computer program product, and storage medium.

Background: Virtual machine hot migration is the process of migrating a running virtual machine from a source host to a destination host. The workload on the virtual machine need not be interrupted during migration, so users perceive nothing. Memory data migration is an important part of the hot migration process. At present, memory data migration causes relatively large fluctuations in the physical memory usage on the source host, which may lead to a serious memory-run problem on the source host and degrade its memory performance.

Summary: Various aspects of the present disclosure provide a virtual machine memory data migration method, device, computer program product, and storage medium for reducing the amount of source-host physical memory occupied by memory data migration.

An embodiment of the present disclosure provides a virtual machine memory data migration method applicable to a virtual machine manager in a source host, comprising: when performing a first migration of a target virtual memory page, detecting the page fault state of the target virtual memory page; in response to the target virtual memory page being in a page fault state, obtaining the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; and migrating the obtained memory data to a destination host.

An embodiment of the present disclosure further provides a virtual machine memory data migration method applicable to a kernel interface in a source host that supports memory data migration, comprising: receiving a page fault status query instruction initiated by a virtual machine manager in the source host when performing a first migration of a target virtual memory page; generating page fault status description information for the target virtual memory page according to the query instruction, wherein the description information indicates the page fault state of the target virtual memory page; and providing the description information to the virtual machine manager, so that after detecting that the target virtual memory page is in a page fault state, the virtual machine manager obtains the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation and migrates the obtained memory data to the destination host.

An embodiment of the present disclosure further provides a computing device comprising a memory, a processor, and a communication component. The memory is configured to store one or more computer instructions; the processor, coupled to the memory and the communication component, is configured to execute the aforementioned virtual machine memory data migration method.

An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to execute the aforementioned virtual machine memory data migration method.

An embodiment of the present disclosure further provides a computer program product comprising a computer program that, when executed by one or more processors, causes the one or more processors to execute the aforementioned method. A further embodiment provides a computer program product comprising a non-volatile computer-readable storage medium storing a computer program that, when executed by a processor, implements the aforementioned method. A further embodiment provides a computer program that, when executed by a processor, implements the aforementioned method.

In the embodiments of the present disclosure, the memory data migration scheme used during virtual machine migration is improved: when the first migration is performed for each virtual memory page of the virtual machine, the page fault state of the virtual memory page is detected. For virtual memory pages in a page fault state, the virtual machine manager on the source host obtains the corresponding memory data without triggering a page fault recovery operation. In this way, even if some virtual memory pages are in a page fault state during virtual machine migration, no page fault recovery operations are triggered, which avoids the physical memory occupation such operations would cause on the source host and reduces the amount of source-host physical memory occupied by memory data migration. Memory data migration therefore no longer causes significant fluctuations in the source host's physical memory usage, avoiding any aggravation of the memory-run problem on the source host.

Brief Description of the Drawings: The drawings described here provide a further understanding of the present disclosure and constitute a part of it; the illustrative embodiments and their descriptions explain the disclosure and do not unduly limit it. In the drawings: FIG. 1 is a flowchart of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure; FIG. 2 is a logical diagram of a virtual machine memory data migration method provided by an exemplary embodiment; FIG. 3 is a logical diagram of an optional implementation of a virtual machine memory data migration method provided by an exemplary embodiment; FIG. 4 is a schematic diagram of the structure of page fault status description information provided by an exemplary embodiment; FIG. 5 is a flowchart of a virtual machine memory data migration method provided by another exemplary embodiment; FIG. 6 is a schematic diagram of the structure of a computing device provided by yet another exemplary embodiment.

Detailed Description: To make the objectives, technical solutions, and advantages of the present disclosure clearer, the technical solutions of the present disclosure will be described clearly and completely below in conjunction with its specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

Before the technical solutions provided by the embodiments of the present disclosure are described in detail, several technical concepts involved are explained as follows.

Memory run: a phenomenon of physical memory shortage on a host (a run on memory, by analogy with a bank run). The physical memory on a host is usually overcommitted to the virtual machines running on it, mainly because the virtual machines do not normally use their full memory allocations at the same time; time-shared multiplexing is expected to satisfy each virtual machine's memory needs while improving physical memory utilization. In some cases, however, the memory usage of the virtual machines rises markedly, and the total memory they actually need exceeds the total physical memory on the host, producing a memory run. A memory run may prevent some virtual machines from using memory normally and is a fairly serious performance problem on a host.

Virtual machine migration: the technique of migrating a virtual machine from a source host to a destination host. Virtual machine migration is divided into hot migration and cold migration. Hot migration migrates a running virtual machine from the source host to the destination host; the downtime required is usually short enough to be almost imperceptible to users. Cold migration migrates a stopped virtual machine from the source host to the destination host and requires a relatively long shutdown.

Memory data migration: during virtual machine migration, the virtual machine's memory data must be migrated from the source host to the destination host. Because cold migration shuts the virtual machine down and memory is volatile, cold migration usually does not involve memory data migration. Hot migration does not shut the virtual machine down, so the virtual machine's memory data must be migrated fully and correctly to the destination host to guarantee that the virtual machine runs correctly there after migration completes.

Memory virtualization: a virtualization technique that expands physical memory into a logically larger memory space, letting programs access more memory resources. It divides the virtual address space into fixed-size pages (virtual memory pages) and maps those pages onto physical memory, so that each process or resource (for example, a virtual machine) can have its own virtual memory space.

Page fault: a memory exception derived from memory virtualization. A page fault usually means that a virtual memory page is not mapped to a physical memory page, so the memory data corresponding to the virtual memory page cannot be accessed. This exception is recoverable; the operation that recovers from it is the page fault recovery operation. A page fault recovery operation reallocates a physical memory page for the faulted virtual memory page and restores the memory data into the reallocated physical memory page.

During their research the inventors found that, at present, when migrating memory data during virtual machine migration, the virtual machine manager on the source host traverses the virtual memory pages occupied by the virtual machine and indiscriminately initiates memory access requests to them. A memory access request to a faulted virtual memory page triggers a page fault recovery operation in the source host's operating system, and that operation occupies physical memory on the source host. Therefore, as introduced in the background, memory data migration causes relatively large fluctuations in the source host's physical memory usage. The inventors further found that virtual machine migration is usually prompted by insufficient physical memory on the source host, in the hope that migrating some virtual machines away will relieve the shortage; under these circumstances, memory data migration itself occupies a considerable amount of the source host's physical memory, worsening the shortage and further aggravating the memory run on the source host.

For this reason, this embodiment proposes a virtual machine memory data migration method that, by reworking the memory data migration stage, can effectively reduce the physical memory occupation that memory data migration causes.

The technical solutions provided by the embodiments of the present disclosure are described in detail below with reference to the drawings.

FIG. 1 is a flowchart of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure, and FIG. 2 is a logical diagram of such a method. The method can be executed by the virtual machine manager on the source host; the virtual machine manager can be implemented as software, hardware, or a combination of the two and can be integrated into the source host. Referring to FIG. 1, the method may include:

Step 100: when performing a first migration of a target virtual memory page, detect the page fault state of the target virtual memory page;

Step 101: in response to the target virtual memory page being in a page fault state, obtain the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation;

Step 102: migrate the obtained memory data to the destination host.

In this embodiment, the Virtual Machine Manager (VMM) may be a hypervisor, or Quick Emulator (QEMU) combined with a Kernel-based Virtual Machine (KVM), among others. A virtual machine manager generally contains components in kernel mode and components in user mode. In this embodiment, the user-mode components of the virtual machine manager (for example, the aforementioned QEMU component) are described as user-mode components, and the kernel-mode components (for example, the aforementioned KVM component) as kernel-mode components. The virtual machine memory data migration method provided by this embodiment can be executed mainly by the user-mode component of the virtual machine manager.

The method suits the hot migration scenario mentioned above; of course, if a cold migration scenario also involves memory data migration, the method applies there as well. That is, this embodiment does not limit the application scenario.

During their research the inventors found that the memory data migration stage roughly comprises two phases: a full first-migration phase and a dirty-page migration phase. The full first-migration phase can be understood as traversing the virtual machine's full memory data after migration starts and migrating all of it from the source host to the destination host. The dirty-page migration phase can be understood as follows: after a virtual memory page completes its first migration, the virtual machine is still running normally on the source host, so operations inside the virtual machine may change the memory data of some virtual memory pages; pages whose memory data has changed are dirty pages and must be migrated to the destination host. Moreover, the dirty-page phase may need multiple rounds of migration, continuing until the latest round is detected to have taken less than a specified time threshold; once that round completes, the dirty-page phase ends.

On this basis, this embodiment proposes improving the aforementioned full first-migration phase to reduce the physical memory occupation it causes. It is worth noting that the full first-migration phase must perform a first migration for the virtual memory pages occupied by the virtual machine; for ease of description, this embodiment takes the target virtual memory page as an example to describe the first-migration process in detail. It should be understood that the method applies to any virtual memory page occupied by the virtual machine to be migrated.

Referring to FIG. 1, step 100 proposes detecting the page fault state of the target virtual memory page when performing a first migration of it. It can be understood that, in this embodiment, the first migration does not directly initiate a memory access request for the target virtual memory page; the page fault state is detected first. Several implementations can be used in step 100 to detect this state. FIG. 3 is a logical diagram of an optional implementation. Referring to FIG. 3, one exemplary implementation of step 100 is: the virtual machine manager initiates a page fault status query instruction for the target virtual memory page to a kernel interface on the source host that supports memory data migration; the query instruction triggers the kernel interface to return page fault status description information for the target virtual memory page; in response to the description information indicating that the target virtual memory page is in a page fault state, the manager determines that the page is in a page fault state.

It should be understood that the source host's physical memory is managed in kernel mode, so memory data migration needs the support of certain kernel-mode interfaces on the source host; this embodiment describes such interfaces as kernel interfaces on the source host that support memory data migration. The virtual machine manager cooperates with these kernel interfaces to complete memory data migration. The kernel interface may be one provided by the source host's operating system, for example the memory manager (Memory Management, MM), or an interface provided by the kernel-mode component of the virtual machine manager, for example the memory access interface provided by the KVM component; the provider of the kernel interface is not limited here. Correspondingly, the virtual machine manager has permission to call the relevant interfaces provided by the source host's kernel mode; for example, the QEMU component can call the interfaces provided by the KVM component as well as some kernel interfaces provided by the source host's operating system.

In this exemplary implementation, the communication protocol between the virtual machine manager and the kernel interface can be adapted so that both sides share a consistent understanding of the page fault status query instruction. In practice, the virtual machine manager issues the query instruction in the instruction format agreed with the kernel interface, and the kernel interface uses that format to decide whether a received instruction is a page fault status query instruction. The agreed format may, for example, carry a special identifier or field in the instruction, which is not limited here. During their research the inventors found that the virtual machine manage
器用于对源主机上各虚拟机进行管理, 因此, 对于虚拟机管理器来说, 可获知待迁移的虚拟机在源主机提供的虚拟内存空间中所占用的虚拟内存页有哪些。 其中, 虚拟内存页的地址即为主机虚拟内存地址 (Host virtual address, 简称为 HVA) 。 基于此, 在该示例性实现方式中, 可在缺页查询指令中携带目标虚拟内存页的标识, 例如, 前述的 HVA等, 以使该缺页查询指令指向目标虚拟内存页。 另外, 在该示例性实现方式中, 还可在上述内核接口中引入用于缺页状态查询的处理逻辑, 这样, 上述内核接口在接收到缺页状态查询指令后, 将执行用于缺页状态查询的处理逻辑。 上述内核接口中用于缺页状态查询的处理逻辑, 可包括: 响应于虚拟机管理器在对目标虚拟内存页执行第一次迁移时发起的缺页状态查询指令, 为目标虚拟内存页生成缺页状态描述信息; 将缺页状态描述信息提供至虚拟机管理器, 以作为对缺页状态查询指令的响应结果。 发明人在研究过程中还发现, 上述的内核接口在内存数据迁移过程中需要负责主机虚拟内存地址 HVA和主机物理内存地址 (Host physical address, 简称为 HPA) 之间的地址转换工作, 而地址转换工作则是依赖页表所完成的, 例如, 内存管理器 MM 中维护有内存映射 (MMap) , 即为一种页表。 在上述内核接口所维护的页表中, 为虚拟内存页维护的页表项中包含存在标记位 (可以用 Present标记位进行表示) , 当该存在标记位 =1时, 表示从 HVA到 HPA可进行有效的转换, 或者说表示 HVA所映射的 HPA存在, 因此, 可确定相应的虚拟内存页未处于缺页状态; 当该存在标记位 =0 时, 则表示从 HVA到 HPA无法进行有效的转换, 或者说表示 HVA所映射的 HPA不存在, 因此, 可确定相应的虚拟内存页处于缺页状态。 在此基础上, 上述的内核接口可查询页表; 基于目标虚拟内存页所关联的存在标记位上的取值, 确定出目标虚拟内存页的缺页状态。 相应地, 内核接口为目标虚拟内存页所生成的缺页状态描述信息中可指示出目标虚拟内存页的缺页状态。 在该示例性实现方式中, 对虚拟机管理器来说, 可解析上述内核接口所返回的缺页状态描述信息; 响应于缺页状态描述信息中指示目标虚拟内存页处于缺页状态, 确定检测到目标虚拟内存页处于缺页状态。 可以理解的是, 在该示例性实现方式中, 虚拟机管理器可通过调用源主机中用于支持内存数据迁移的内核接口, 来触发内核接口去查询目标虚拟内存页的缺页状态, 从而虚拟机管理器可从上述内核接口处获取到目标虚拟内存页的缺页状态, 且可保证准确性。 本实施例中, 除了上述示例性实现方式之外, 在步骤 100中, 虚拟机管理器还可采用其它实现方式来检测目标虚拟内存页的缺页状态。 例如, 虚拟机管理器可从上述内核接口中读取目标虚拟内存页对应的页表项, 基于此, 虚拟机管理器可基于页表项中的存在标记位中的取值来检测出目标虚拟内存页的缺页状态。 又例如, 源主机的内核态中通常还有其它接口能够提供虚拟内存页与物理内存页之间的映射关系, 虚拟机管理器还可与源主机内核态中这类不是用于支持内存数据迁移的接口进行通信, 以从这类接口中获取到所需的信息, 从而检测出目标虚拟内存页的缺页状态。 在此不作展开详述, 也不作更多示例。 继续参考图 1, 在步骤 101 中提出, 响应于目标虚拟内存页处于缺页状态, 在未触发缺页恢复操作的情况下, 获取目标虚拟内存页对应的内存数据。 发明人在研究过程中发现, 缺页恢复操作是因对处于缺页状态的虚拟内存页发起了内存访问请求而导致的。 为此, 在步骤 101中, 虚拟机管理器在确定目标虚拟内存页处于缺页状态后, 不再发起对目标虚拟内存页的内存访问请求, 这样, 即不会触发缺页恢复操作。 据此, 本实施例中不再采用内存访问的方式来获取已被换出的内存数据, 即可避免触发缺页恢复操作。 而除了内存访问的方式之外, 其它能够获取到已被换出的内存数据的数据获取方式, 均可适用于本实施例中, 以用于获取到已被换出的内存数据。 应当理解的是, 本实施例对步骤 101 中可采用的数据获取方式不做限定, 将在后续实施例中提供示例性的数据获取方式。 另外, 如前文提及的, 源主机中是在内核态进行内存管理的, 因此, 源主机的内核态中管理有缺页的虚拟内存页所对应的内存数据的实际存储位置。 在此基础上, 在步骤 101 中, 
基于虚拟机管理器已经具备的与源主机内核态之间的通信能力, 虚拟机 管理器可无 障碍地确定出对目标虚拟 内存页对应的内存数据的实际存 储位置, 进而可 实施能够触 达该实际存储位置的访问逻辑 , 从而获取到目标虚拟内存页对应的内存数 据。 可知,本实施例中,在步骤 101中,响应于目标虚拟内存页存在对应的内存数据, 虚拟机管理 器可从内存数据的实际存储位 置进行数据获取, 而不再通过内存访问请求 的方式来进 行数据获取。 这不仅可保证虚拟机管理器能够获取到缺页的虚 拟内存页所 对应的 内存数据, 且保证不会触发缺页恢复操作。 因此, 虚拟机管理器获取缺页的虚 拟内存页所对 应的内存数据时, 不会引发对源主机中物理内存页的增 量占用。 继续参考图 1 , 在步骤 102中, 可将所获取到的内存数据, 迁移至目的主机中。 本实施例中, 从整个全量第一次迁移阶段来看, 虚拟机管理器可遍历检测待迁移 的虚拟机 所占用的各个虚拟内存 页的缺页状态。 对于未处于缺页状态的虚拟 内存页, 虚拟机管理 器可按照传统迁移方式, 发起对这类虚拟内存页的内存访 问请求, 以从这 类虚拟 内存页所映射的物理内存页中获取 到对应的内存数据, 可以理解的是, 对这类 虚拟内存 页发起的内存访问请求 本来就不会引发对源主 机中物理内存页的增量 占用。 对于处于缺 页状态的虚拟内存 页, 虚拟机管理器将不再按照传统发起内存访 问请求, 而是改为从 内存数据的实际存储位置 中进行数据获取, 这种数据获取方式的改变, 不 仅可保证虚拟 机管理器能够获取到缺 页的虚拟内存页所对应的 内存数据, 且可保证不 会触发缺 页恢复操作,因此,也不会引发对源主机中物理内存页的增量占 用。这使得, 整个全量 第一次迁移阶段中, 虚拟机管理器不再会触发缺页恢复操作 , 页就不会再引 发对源主机 中物理内存页的增量占用。 呼应于前文提到的, 内存数据迁移环节所包含的另一个阶段 ---脏页迁移阶段。 本 实施例中 , 在完成针对目标虚拟内存页的第一次迁移后, 响应于监测到目标虚拟内存 页变为脏 页, 虚拟机管理器可从目标虚拟内存页所映射的物理内存 页中, 获取目标虚 拟内存页对应 的脏页数据; 将为目标虚拟内存页所获取到的脏页数据 , 迁移至目的主 机中。 其中, 这里涉及到的技术概念一脏页, 是指被修改过的内存页。 这里应当理解的是, 虚拟机管理器在对虚拟机进行管理 的过程中, 会监测到虚拟 机对虚拟 内存页发起的内存访问请求 , 而正如前文提及的, 响应于目标虚拟内存页处 于缺页状 态, 则内存访问请求会导致缺页恢复操作, 以为目标虚拟内存页分配物理内 存页, 相应的内存数据可存储至物理 内存页中。 值得强调的是, 这种情况下发生的缺 页恢复操作是 由于虚拟机中发起的 内存访问请求而导致的, 这种内存访问请求是虚拟 机正常运行过程 中所需的, 因此而引发的对源主机中物理内存页的 占用是合理的, 本 实施例并不会 干预。 这样, 响应于虚拟机管理器监测到目标虚拟内存页变为脏页, 目 标虚拟 内存页必然已经不再处于缺页状 态了, 对应虚拟机管理器来说, 可正常地发起 对 目标虚拟内存页的内存访问请求, 以获取到目标虚拟内存页对应的脏页数据, 并迁 移至 目的主机中。 可知, 本实施例中, 在脏页迁移阶段中, 虚拟机管理器在执行脏页数据迁移时, 也不会 引发对源主机中物理内存页的增量 占用。 综上, 本实施例中, 对虚拟机迁移过程中的内存数据迁移方案进行了改进, 提出 针对虚拟机 中各虚拟内存页执行第一次 迁移时, 检测虚拟内存页的缺页状态。 对于处 于缺页状 态的虚拟内存页, 源主机中的虚拟机管理器将在未触发缺页恢 复操作的情况 下, 为这类虚拟内存页获取对应的内存数 据。 这样, 在虚拟机迁移过程中, 即使存在 部分虚拟 内存页处于缺页状态, 也不会再引发缺页恢复操作, 从而可避免缺页恢复操 作对源主机 的物理内存造成的占用, 降低内存数据迁移所导致的对源主机 物理内存的 占用量。 据此, 内存数据迁移不会再导致源主机中物理 内存的使用量发生明显 波动, 从而可避免加 剧源主机中的内存挤兑 问题。 在上述或下述实施例 中, 步骤 101 中可采用多种数据获取方案来实现在未触发缺 页恢复操作 的情况下, 获取目标虚拟内存页对应的内存数据。 发明人在研究过程中发现 , 导致虚拟内存页处于缺页状态的原因有很多, 不同原 因下, 虚拟内存页对应的内存数据的实 际存储位置可能不同, 因此, 在一种示例性的 数据获取 方案中提出: 可针对不同原因导致的缺页状态, 采用相适配的访问逻辑来触 达内存数据 的实际存储位置, 以获取到内存数据。 其中, 导致缺页状态的原因可包括 但不限于 
内存交换及内存延时分配等 。 为此, 在该数据获取方案中提出: 在未触发缺页恢复操作的情况下, 响应于目标 虚拟内存 页对应的内存数据已被换 出至源主机对应的交换空间, 从交换空间中获取目 标虚拟 内存页对应的内存数据; 响应于目标虚拟内存页尚未在源主机 中分配到物理内 存页, 将空文件作为目标虚拟内存页对应 的内存数据。 在该数据获取方案中, 针对导致缺页状态的不同原 因, 提供了几种内存数据可能 对应的实 际存储位置: 一种是被换出至交换空间中, 另一种是并未分配实际存储位置 (即虚拟内存页下并不存在内存数据 )。这里, 涉及到几个技术概念,在此进行解释。 内存交换 (swap) , 是指内存交换技术, 在主机上物理内存不足时, 将部分内存数据 保存至交换 空间中, 以释放出主机上的物理内存。 交换空间, 也称为 swap 空间, 是 计算机 系统中用于虚拟内存的一种技术 , 它允许操作系统将磁盘空间作为临时的内存 使用。 当主机上物理内存不足时, swap空间可以保存暂时不活跃的内存页, 以释放物 理内存供其他 程序使用。 swap空间通常位于硬盘上。 内存延时分配, 是一种内存分配 机制。 对于主机上的虚拟机来说, 在内存分配之前仅仅得到内存空 间的使用承诺, 而 不是把物理 内存真正分配给虚拟机, 在虚拟机需要使用内存时, 在主机的内核态中才 去分配一个 或一组物理内存页给到虚拟机 。 换句话说, 虚拟机所占用的部分虚拟内存 页可能尚未被 虚拟机访问过, 这种虚拟内存页就不会分配到物理 内存页, 而在虚拟机 发起对这种虚 拟内存页的访问时, 才会触发主机的内核态去为它分配物 理内存页。 可 以理解的是 , 响应于虚拟内存页尚未分配到物理内存, 则表征虚拟机在该虚拟内存页 下并未发生 内存数据。 在该数据获取方案中, 还针对不同的实际存储位置, 分别提供了获取内存数据的 访问逻辑 。 其中, 在目标虚拟内存页对应的内存数据位于交换空间中的情况下, 虚拟 机管理器可从 交换空间中获取到 目标虚拟内存页对应的内存数据。 而在目标虚拟内存 页尚未在 源主机中分配到物理内存页 的情况下, 说明目标虚拟内存页下并未发生内存 数据, 虚拟机管理器可直接将空文件作 为目标虚拟内存页对应的 内存数据。 进一步, 本实施例中, 还提供了一种从交换空间中获取目标虚拟内存页对应的内 存数据的优选 实现方案。 在该优选实现方案中提出: 虚拟机管理器可向源主机中用于 支持内存数据 迁移的内核接口发起针对 目标虚拟内存页的数据获取请 求, 数据获取请 求用于触 发该内核接口从交换空间中读取 目标虚拟内存页对应的 内存数据; 获取该内 核接口提供 的目标虚拟内存页对应的 内存数据。 也即是, 在该优选实现方案中, 虚拟机管理器无需直接访问交换空间, 而是可利 用源主机 中的上述内核接口来从交换 空间中读取出所需的内存数据 , 这样, 虚拟机管 理器可从上 述内核接口获取到 目标虚拟内存页对应的内存数据。 参考图 3 , 为了更便于虚拟机管理器从上述内核接口获取到所需的内存数据, 在 该优选实现 方案中, 进一步提出: 为了支持虚拟机管理器和上述内核接口之间进行数 据传递, 可预设缓存空间, 在此基础上, 虚拟机管理器可在预设缓存空间中, 为目标 虚拟内存 页分配缓存地址; 将目标虚拟内存页的标识和缓存地址携带在数 据获取请求 中, 以触发内核接口将从交互空间中读取 目标虚拟内存页对应的内存数 据, 并存储至 预设缓存 空间内的缓存地址中。 对于内核接口来说, 可按照数据获取请求中携带的目 标虚拟内存 页的标识, 在交换空间中命中目标虚拟内存页对应的内存数 据, 从而准确 地读取到 目标虚拟内存页对应的内存数据 。 基于此, 虚拟机管理器可从该缓存地址中 读取 目标虚拟内存页对应的内存数据。 这可有效提升虚拟机管理器从交换 空间中获取 目标虚拟 内存页对应的内存数据的效率 , 且可降低虚拟机管理器中的逻辑复杂度, 充 分利用起源主 机中的上述内核接口。 应当理解的是, 上述几种内存数据可能对应的实际存储位 置仅是示例性的, 本实 施例中并不 限于此。 而且, 针对上述几种内存数据可能对应的实际存储位置所提供的 访问逻辑也是 示例性的, 本实施例也并不限于此, 保证虚拟机管理器能够采用合适的 访问逻辑触达 至缺页的虚拟内存对应的 内存数据的实际存储位置即 可。 进一步, 为了使虚拟机管理器能够更便捷, 更高效地确定出导致目标虚拟内存页 处于缺页状 态的原因。 本实施例中, 还提出了一种优选方案。 该优选方案可继承前文 中提到 的向源主机中用于 支持内存数据迁移 的内核接口发起缺 页状态查询指令的基 本构思。 基于此, 在该优选方案中提出, 
可在上述内核接口返回的缺页状态描述信息中指示缺页类型。 这里, 缺页类型与前述的导致缺页状态的原因相对应。 例如, 响应于导致缺页状态的原因为内存交换, 则相应的缺页类型即为内存交换类, 响应于导致缺页状态的原因为内存延时分配, 则相应的缺页类型即为延时分配类。 在此基础上, 该优选方案中: 虚拟机管理器可通过解析上述内核接口为目标虚拟内存页所返回的缺页状态描述信息, 获知目标虚拟内存页对应的缺页类型; 响应于缺页状态描述信息指示目标虚拟内存页对应的缺页类型为内存交换类, 确定目标虚拟内存页对应的内存数据已被换出至交换空间; 响应于缺页状态描述信息指示目标虚拟内存页对应的缺页类型为延时分配类, 确定目标虚拟内存页尚未分配到物理内存页。 图 4为本公开一示例性实施例提供的一种缺页状态描述信息的结构示意图。 参考图 4, 缺页状态描述信息中可包含缺页状态标识字段和缺页类型字段。 基于此, 响应于缺页状态标识字段的取值为第一取值, 指示目标虚拟内存页处于缺页状态; 响应于缺页状态标识字段的取值为第二取值, 指示目标虚拟内存页未处于缺页状态。 实际应用中, 第一取值可为 1, 第二取值可为 0。 以及, 响应于缺页类型字段的取值为第三取值, 指示目标虚拟内存页的缺页类型为内存交换类; 响应于缺页类型字段的取值为第四取值, 指示目标虚拟内存页的缺页类型为延时分配类。 图 4中还示出了缺页状态描述信息中的其它字段, 如信息类型字段, 用于指示出当前信息为缺页状态描述信息, 以供虚拟机管理器识别。 在此对缺页状态描述信息中可包含的其它字段不做更多示例。 应当理解的是, 除了上述优选方案之外, 本实施例中还可采用其它实现方案来支持虚拟机管理器确定出导致目标虚拟内存页处于缺页状态的原因, 例如, 虚拟机管理器可向源主机的内核态申请页表监测权限, 以保持监测页表中的修改事件, 从而虚拟机管理器可基于修改事件而获知哪些虚拟内存页发生了哪种导致缺页的事件, 即可为缺页的虚拟内存页确定出导致缺页状态的原因。 在此不做更多实现方案的示例。 综上, 本实施例中, 虚拟机管理器可与源主机中用于支持内存数据迁移的内核接口充分交互, 以利用该内核接口来为虚拟机管理器提供检测缺页状态的参考依据, 以及利用该内核接口来为缺页的虚拟内存页读取到内存数据。 这使得虚拟机管理器能够更加高效地检测出虚拟内存页的缺页状态, 以及在未触发缺页恢复操作的情况下, 更加高效地获取到缺页的虚拟内存页所对应的内存数据。 图 5 为本公开另一示例性实施例提供的一种虚拟机内存数据迁移方法的流程图。 该方法可适用于源主机中用于支持内存数据迁移的内核接口, 参考图 5, 该方法可包括: 步骤 500、 接收源主机中的虚拟机管理器在对目标虚拟内存页执行第一次迁移时发起的缺页状态查询指令; 步骤 501、 根据缺页状态查询指令, 为目标虚拟内存页生成缺页状态描述信息, 其中, 缺页状态描述信息用于指示目标虚拟内存页的缺页状态; 步骤 502、 将缺页状态描述信息提供至虚拟机管理器, 以使虚拟机管理器在检测到目标虚拟内存页处于缺页状态后, 在未触发缺页恢复操作的情况下, 获取到目标虚拟内存页对应的内存数据, 并将获取到的内存数据迁移至目的主机中。 应当理解的是, 本实施例中, 该内核接口仍然保持原本所承担的内存数据迁移相关工作, 例如, 响应对虚拟内存页的内存访问请求等, 在此对该内核接口原本所承担的内存数据迁移相关工作不做展开详述。 本实施例中, 在该内核接口中增加了图 5所示的处理逻辑, 以使该内核接口能够配合虚拟机管理器, 来支持虚拟机管理器能够避免在内存数据迁移过程中触发缺页恢复操作。 以下仅对该内核接口中的部分技术细节进行再次明示: 在一可选实施例中, 为目标虚拟内存页生成缺页状态描述信息, 可包括: 查询页表; 基于目标虚拟内存页所关联的存在标记位上的取值, 确定出目标虚拟内存页的缺页状态, 以为目标虚拟内存页生成缺页状态描述信息。 在一可选实施例中, 缺页状态描述信息还用于指示缺页类型。 可在缺页状态描述信息中指示目标虚拟内存页对应的缺页类型为内存交换类, 以供虚拟机管理器确定目标虚拟内存页对应的内存数据已被换出至交换空间; 可在缺页状态描述信息中指示目标虚拟内存页对应的缺页类型为延时分配类, 以供虚拟机管理器确定目标虚拟内存页尚未分配到物理内存页。 在一可选实施例中, 缺页状态描述信息中包含缺页状态标识字段和缺页类型字段; 响应于缺页状态标识字段的取值为第一取值
, 指示目标虚拟内存页处于缺页状态; 响 应于缺页状 态标识字段的取值为第二取值 , 指示目标虚拟内存页未处于缺页状态; 响 应于缺页 类型字段的取值为第三取值 , 指示目标虚拟内存页的缺页类型为内存交换类; 响应于缺 页类型字段的取值为第四取值 , 指示目标虚拟内存页的缺页类型为延时分配 类。 在一可选实施例中, 可响应于虚拟机管理器针对 目标虚拟内存页发起的数据获取 请求, 从交换空间中读取 目标虚拟内存页对应的内存数据 , 并提供至虚拟机管理器。 在一可选实施例中, 响应于虚拟机管理器发起的数据获取 请求中携带有目标虚拟 内存页的标识 和缓存地址, 则将从交换空间中读取到的目标虚拟内存 页对应的内存数 据, 存储至该缓存地址, 以供虚拟机管理器从该缓存地址中读取到 目标虚拟内存页对 应的内存数据 。 值得说明的是, 关于该内核接口中更多的具 体技术细节, 可参考前文中的描述, 在此不做展 开详述, 但这不应造成对本公开保护范围的损失。 另外, 在上述实施例及附图中的描述的一些流程中, 包含了按照特定顺序出现的 多个操作 , 但是应该清楚了解, 这些操作可以不按照其在本文中出现的顺序来执行或 并行执行 , 操作的序号如 801、 802 等, 仅仅是用于区分开各个不同的操作, 序号本 身不代表任何 的执行顺序。 另外, 这些流程可以包括更多或更少的操作, 并且这些操 作可以按顺序 执行或并行执行。需要说明的是,本文中的 “第一” 、 “第二” 等描述, 是用于 区分不同的取值等, 不代表先后顺序, 也不限定 “第一 " 和 “第二” 是不同的 类型。 图 6为本公开又一示例性实施例提供 的一种计算设备的结构示意 图。如图 6所示, 该计算设备 可以是待迁移的虚拟所在的 源主机, 该计算设备可包括: 存储器 60、 处理 器 61 以及通信组件 62。 处理器 61 , 与存储器 60和通信组件 62耦合, 用于执行存储 器 60中的计算机程序。 在一些设计方案中, 处理器 61可执行存储器 60中的计算机程序, 以实现为源主 机中的虚拟机 管理器。 这种请求下, 处理器 61可用于: 在针对目标虚拟内存 页执行第一次迁移的情况下, 检测目标虚拟内存页的缺页状 态; 响应于目标虚拟内存页处 于缺页状态, 在未触发缺页恢复操作的情况下, 获取目 标虚拟 内存页对应的内存数据; 将获取到的内存数据, 迁移至目的主机中。 在一可选实施例中, 处理器 61 在未触发缺页恢复操作的情况下, 获取目标虚拟 内存页对应 的内存数据时, 可用于: 在未触发缺页恢复操作 的情况下, 响应于目标虚拟内存页对应的内存数据已被换 出至源主机 交换空间, 从交换空间中获取目标虚拟内存页对应的 内存数据; 响应于目标虚拟内存页尚 未在源主机中分配到物理 内存页, 将空文件作为目标虚 拟内存页对应 的内存数据。 在一可选实施例中, 处理器 61在检测目标虚拟内存页的缺页状 态时, 可用于: 向源主机中用于支持 内存数据迁移的内核接口, 发起针对目标虚拟内存页的缺页 状态查询指令 , 其中, 缺页状态查询指令用于触发内核接口返回目标虚拟内存页对应 的缺页状 态描述信息; 响应于缺页状态描述信息 中指示目标虚拟内存页处于缺 页状态, 确定检测到目标 虚拟内存 页处于缺页状态。 在一可选实施例中, 缺页状态描述信息还用于指示 目标虚拟内存页对应的缺页类 型, 处理器 61还可用于: 响应于缺页类型为内存交换 类, 确定目标虚拟内存页对应的内存数据已被换出至 交换空间 ; 响应于缺页类型为延时分配 类, 确定目标虚拟内存页尚未分配到物理内存页。 在一可选实施例中, 缺页状态描述信息中包含缺页状态标 识字段和缺页类型字段; 响应于缺页状态标识字段 的取值为第一取值, 指示目标虚拟内存页处于缺页状态; 响应于缺 页状态标识字段的取值 为第二取值, 指示目标虚拟内存页未处于缺 页状态; 响应于缺页类型字段的取值 为第三取值, 指示目标虚拟内存页的缺页类型为内存 交换类 ; 响应于缺页类型字段的取值为第四取值, 指示目标虚拟内存页的缺页类型为 延时分配 类。 在一可选实施例中, 处理器 61 在从交换空间中获取目标虚拟内存页对应 的内存 数据时, 可用于: 向内核接口发起针对 目标虚拟内存页的数据获取请求 , 其中, 数据获取请求用于 触发内核接 口从交换空间中读取 目标虚拟内存页对应的内存数据 ; 获取内核接口提供的 目标虚拟内存页对应的内存数据 。 在一可选实施例中, 处理器 61 在向内核接口发起针对目标虚拟内存页的数 据获 取请求时 , 可用于: 在预设缓存空间中, 为目标虚拟内存页分配缓存地址 ; 将目标虚拟内存页的标识 和缓存地址携带在数据获取 请求中, 
以触发内核接口将 从交换空 间中读取目标虚拟内存页对应 的内存数据, 并将读取到的内存数据存储至预 设缓存空 间内的缓存地址中; 获取内核接口提供的 目标虚拟内存页对应的内存数据 , 包括: 从缓存地址中读取 目标虚拟内存页对应的内存数据 。 在一可选实施例中, 处理器 61还可用于: 响应于目标虚拟内存页未处 于缺页状态, 从目标虚拟内存页所映射的物理内存页 中获取 目标虚拟内存页对应的内存数据 。 在一可选实施例中, 处理器 61还可用于: 在完成针对目标虚拟 内存页的第一次迁移后, 响应于监测到目标虚拟内存页变为 脏页, 从目标虚拟内存页所映射的物理 内存页中, 获取目标虚拟内存页对应的脏页数 据; 将为目标虚拟内存页所获 取到的脏页数据, 迁移至目的主机中。 在另一些设计方案中, 处理器 61可执行存储器 60中的计算机程序, 以实现为源 主机中用于 支持内存数据迁移的内核接 口。 这种情况下, 处理器 61可用于: 接收源主机中的 虚拟机管理器在对 目标虚拟内存 页执行第一次迁移时 发起的缺 页状态查询 指令; 根据缺页状态查询指令 , 为目标虚拟内存页生成缺页状态描述信息, 其中, 缺页 状态描述信 息用于指示目标虚拟内存 页的缺页状态; 将缺页状态描述信息提供 至虚拟机管理器, 以使虚拟机管理器在检测到目标虚拟 内存页处于缺 页状态后, 在未触发缺页恢复操作的情况下, 获取到目标虚拟内存页对 应的内存数据 , 并将获取到的内存数据迁移至目的主机中。 进一步, 如图 6所示, 该计算设备还包括: 电源组件 63等其它组件。 图 6中仅 示意性给 出部分组件, 并不意味着计算设备仅包括图 6所示组件。 值得说明的是, 上述关于计算设备各实施例中的技术细 节, 可参考前述的方法实 施例中关于虚 拟机管理器和用于支持 内存数据迁移的内核接口的相 关描述, 为节省篇 幅, 在此不再赘述, 但这不应造成本公开保护范围的损失。 相应地, 本公开实施例还提供一种存储有计算机程序 的计算机可读存储介质, 计 算机程序被执 行时能够实现上述方法实 施例中的各步骤。 相应地, 本公开实施例还提供一种计算机程序产品, 包括计算机程序, 当计算机 程序被一个 或多个处理器执行时, 致使一个或多个处理器执行 : 在针对目标虚拟内存 页执行第一次迁移的情况下, 检测目标虚拟内存页的缺页状 态; 响应于目标虚拟内存页处 于缺页状态, 在未触发缺页恢复操作的情况下, 获取目 标虚拟 内存页对应的内存数据; 将为目标虚拟内存页所获 取到的内存数据, 迁移至目的主机中。 本实施例中提供的计算机程 序产品, 可在针对虚拟机中各虚拟内存页执行第一次 迁移时, 检测虚拟内存页的缺页状态。 对于处于缺页状态的虚拟内存 页, 源主机中的 虚拟机管理 器将在未触发缺页恢复操作 的情况下, 为这类虚拟内存页获取对应的内存 数据。 这样, 在虚拟机迁移过程中, 即使存在部分虚拟内存页处于缺页状态, 也不会 再引发缺 页恢复操作, 从而可避免缺页恢复操作对源主机的物理 内存造成的占用, 降 低内存数据 迁移所导致的对源主机物理 内存的占用量。 据此, 内存数据迁移不会再导 致源主机 中物理内存的使用量发生明显 波动, 从而可避免加剧源主机中的内存挤兑问 题。 上述图 6中的存储器, 用于存储计算机程序, 并可被配置为存储其它各种数据以 支持在计 算平台上的操作。 这些数据的示例包括用于在计算平台上操作 的任何应用程 序或方 法的指令, 联系人数据, 电话簿数据, 消息, 图片, 视频等。 存储器可以由任 何类型 的易失性或非易 失性存储设备或者 它们的组合实现, 如静态随机存取存 储器 ( Static Random Access Memory , 简称为 SRAM) , 电可擦除可编程只读存储器 (Electrically Erasable Programmable Read-Only Memory, 简称为 EEPROM) , 可擦除 可编程只 读存储器(Electrically Programmable Read-Only Memory, 简称为 EPROM) , 可编程只 读存储器 (Programmable Read-Only Memory, 简称为 PROM) , 只读存储器 ((Read-Only Memory, 简称为 ROM) , 磁存储器, 快闪存储器, 磁盘或光盘。 上述图 6中的通信组件, 被配置为便于通信组件所在设备和其他 设备之间有线或 无线方 式的通信。通信组件所在设备可以接入基于通信标准 的无线网络,如 WiFi、2G、 3G、 4G/长期演进技术 (Long-Term Evolution, 简称为 L TE) 、 
5G等移动通信网络, 或它们 的组合。 在一个示例性实施例中, 通信组件经由广播信道接收来自外部广播管 理系统 的广播信号或广播相关信息 。 在一个示例性实施例中, 所述通信组件还包括近 场通信 (Near Field Communication, 简称为 NFC) 模块, 以促进短程通信。 例如, 在 NFC 模块可基于射 频识别 (Radio Frequency Identification, 简称为 RFID) 技术, 红外 数据协会 (Infrared Data Association, 简称为 I rDA) 技术, 超宽带 (Ultra Wideband, 简称为 UWB) 技术, 蓝牙 (Bluetooth, 简称为 BT) 技术和其他技术来实现。 上述图 6中的电源组件, 为电源组件所在设备的各种组件提供 电力。 电源组件可 以包括 电源管理系统, 一个或多个电源, 及其他与为电源组件所在设备生成、 管理和 分配电 力相关联的组件。 本领域内的技术人 员应明白, 本公开的实施例可提供为方法、 系统、 或计算机程 序产品 。 因此, 本公开可采用完全硬件实施例、 完全软件实施例、 或结合软件和硬件 方面的实 施例的形式。 而且, 本公开可采用在一个或多个其中包含有计算机可用程序 代码的 计算机可用存储介质 (包括但不限于磁盘存储器、 光盘只读存储器 (Compact Disc Read-Only Memory, 简称为 CD-ROM) , 光学存储器等) 上实施的计算机程序产 品的形 式。 本公开是参照根据本公 开实施例的方法、 设备 (系统) 、 和计算机程序产品的流 程图和 /或方框图来描述的。 应理解可由计算机程序指令实现流程图和 /或方框图中 的每一流程 和/或 方框、 以及流程图和 /或方框图中的流程和/或 方框的结合。 可提 供这些计 算机程序指令到通用计算机 、 专用计算机、 嵌入式处理机或其他可编程数据 处理设备 的处理器以产生一个机器 , 使得通过计算机或其他可编程数据处理设备的处 理器执 行的指令产生用 于实现在流程图一个 流程或多个流程 和 /或方框图一个方框 或多个 方框中指定的功能的装置。 这些计算机程序 指令也可存储在 能引导计算机或其他 可编程数据处理 设备以特 定方式工作 的计算机可读存储器中, 使得存储在该计算机可读存储器 中的指令产生包 括指令装置 的制造品, 该指令装置实现在流程图一个流程或多个流程和 /或方框图一 个方框或多个 方框中指定的功能。 这些计算机程序指令也 可装载到计算机或其他可编程 数据处理设备上, 使得在计 算机或其他 可编程设备上执行一系列操作 步骤以产生计算机实现的 处理, 从而在计算 机或其他 可编程设备上执 行的指令提供用于 实现在流程图一个 流程或多个流程和 / 或方框图一个 方框或多个方框中指定的功能 的步骤。 还需要说明的是, 术语 “包括” 、 “包含” 或者其任何其他变体意在涵盖非排他 性的包含 ,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素, 而且还包括 没有明确列出的其他要素, 或者是还包括为这种过程、 方法、 商品或者设 备所固有 的要素。 在没有更多限制的情况下, 由语句 “包括一个 … … " 限定的要素, 并不排除在 包括所述要素的过程、 方法、 商品或者设备中还存在另外的相同要素。 需要说明的是, 本公开所涉及的用户信息 (包括但不限于用户设备信息、 用户个 人信息等 ) 和数据 (包括但不限于用于分析的数据、 存储的数据、 展示的数据等) , 均为经用 户授权或者经过各方充分授权 的信息和数据, 并且相关数据的收集、 使用和 处理需要遵 守相关国家和地区的相关 法律法规和标准, 并提供有相应的操作入口, 供 用户选择授权 或者拒绝。 以上所述仅为本公开的 实施例而已, 并不用于限制本公开。 对于本领域技术人员 来说,本公开可以有各种更 改和变化。凡在本公开的精神和原理之 内所作的任何修改、 等同替换 、 改进等, 均应包含在本公开的保护范围之内。 工业实用性 本公开实施例提供的方 案提出针对虚拟机中各虚拟 内存页执行第一次迁移时, 检 测虚拟 内存页的缺页状态。 对于处于缺页状态的虚拟内存页, 源主机中的虚拟机管理 器将在未触 发缺页恢复操作的情况下 ,为这类虚拟内存页获取对应的内存数据。这样, 在虚拟机迁移 过程中, 即使存在部分虚拟内存页处于缺页状态, 也不会再引发缺页恢 复操作, 从而可避免缺页恢复操作对源主机 的物理内存造成的占用, 降低内存数据迁 移所导致 的对源主机物理内存的占用量 。 据此, 内存数据迁移不会再导致源主机中物 理内存的使 用量发生明显波动, 从而解决了源主机中内存挤兑的技 
术问题。 A Virtual Machine Memory Data Migration Method, Device, Computer Program Product, and Storage Medium Cross-Reference This disclosure claims priority to a Chinese patent application filed with the China Patent Office on March 27, 2024, with application number 202410362467.4, entitled "A Virtual Machine Memory Data Migration Method, Device, Computer Program Product, and Storage Medium," the entire contents of which are incorporated herein by reference. Technical Field This disclosure relates to the field of cloud computing technology, and more particularly to a virtual machine memory data migration method, device, computer program product, and storage medium. Background: Virtual machine live migration refers to the process of migrating a running virtual machine from a source host to a destination host. The migration process does not interrupt the virtual machine's workload and is imperceptible to users. Memory data migration is a key step in the live migration process. Currently, memory data migration can cause significant fluctuations in physical memory usage on the source host, potentially leading to severe memory runs on the source host and degrading its memory performance. SUMMARY OF THE INVENTION Various aspects of the present disclosure provide a virtual machine memory data migration method, apparatus, computer program product, and storage medium for reducing the amount of physical memory occupied on the source host due to memory data migration. Embodiments of the present disclosure provide a virtual machine memory data migration method, applicable to a virtual machine manager on a source host. 
The method comprises: when performing a first migration of a target virtual memory page, detecting a page fault status of the target virtual memory page; in response to the target virtual memory page being in the page fault state, obtaining memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; and migrating the obtained memory data to a destination host. The present disclosure also provides a method for migrating virtual machine memory data, applicable to a kernel interface in a source host that supports memory data migration. The method comprises: receiving a page fault status query instruction initiated by a virtual machine manager in the source host when performing a first migration of a target virtual memory page; generating page fault status description information for the target virtual memory page based on the page fault status query instruction, wherein the page fault status description information indicates the page fault status of the target virtual memory page; and providing the page fault status description information to the virtual machine manager, so that upon detecting that the target virtual memory page is in the page fault state, the virtual machine manager obtains the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation, and migrates the obtained memory data to the destination host. The present disclosure also provides a computing device comprising a memory, a processor, and a communication component. The memory is configured to store one or more computer instructions. The processor is coupled to the memory and the communication component and configured to execute the aforementioned virtual machine memory data migration method. The present disclosure also provides a computer-readable storage medium storing a computer program. 
When the computer program is executed by one or more processors, it causes the one or more processors to execute the aforementioned virtual machine memory data migration method. The present disclosure also provides a computer program product comprising a computer program. When the computer program is executed by one or more processors, it causes the one or more processors to execute the aforementioned virtual machine memory data migration method. The present disclosure also provides a computer program product comprising a non-volatile computer-readable storage medium storing a computer program. When the computer program is executed by a processor, it implements the aforementioned virtual machine memory data migration method. The present disclosure also provides a computer program. When the computer program is executed by a processor, it implements the aforementioned virtual machine memory data migration method. In an embodiment of the present disclosure, an improvement is made to the memory data migration scheme used during virtual machine migration: when the first migration is performed for each virtual memory page of the virtual machine, the page fault status of that page is detected. For virtual memory pages in the page fault state, the virtual machine manager on the source host retrieves the corresponding memory data without triggering a page fault recovery operation. Thus, during virtual machine migration, even if some virtual memory pages are in the page fault state, no page fault recovery operation is triggered. This prevents page fault recovery operations from consuming the source host's physical memory and reduces the amount of physical memory consumed by memory data migration. Consequently, memory data migration no longer causes significant fluctuations in physical memory usage on the source host, thereby preventing exacerbation of memory run issues on the source host. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings described herein are intended to provide a further understanding of the present disclosure and constitute a part of this disclosure. 
The illustrative embodiments of this disclosure and their description are provided to explain the present disclosure and are not intended to limit it. In the accompanying drawings: Figure 1 is a flowchart of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure; Figure 2 is a logical diagram of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure; Figure 3 is a logical diagram of an optional implementation of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure; Figure 4 is a structural diagram of page fault status description information provided by an exemplary embodiment of the present disclosure; Figure 5 is a flowchart of a virtual machine memory data migration method provided by another exemplary embodiment of the present disclosure; Figure 6 is a structural diagram of a computing device provided by yet another exemplary embodiment of the present disclosure. DETAILED DESCRIPTION To further clarify the objectives, technical solutions, and advantages of this disclosure, the technical solutions of this disclosure will be described clearly and completely below in conjunction with specific embodiments of this disclosure and the corresponding figures. Obviously, the described embodiments represent only a portion of the embodiments of this disclosure, not all of them. All other embodiments derived by persons of ordinary skill in the art based on the embodiments of this disclosure without inventive effort fall within the scope of protection of this disclosure. Before providing a detailed description of the technical solutions provided by each embodiment of this disclosure, several technical concepts involved in this disclosure are explained below. A memory run is a phenomenon in which physical memory becomes insufficient on a host. 
Physical memory on a host is typically over-allocated to the virtual machines running on it, primarily because the virtual machines do not normally use their full memory allotments at the same time. Time-sharing the physical memory is therefore expected to satisfy each virtual machine's memory needs while improving physical memory utilization. However, in some cases the memory usage of the virtual machines may increase significantly, and the total memory they actually require may exceed the total physical memory on the host, resulting in a memory run. Memory run issues can leave some virtual machines unable to use memory properly, which is a serious performance problem on the host. Virtual machine migration refers to the process of migrating a virtual machine from a source host to a destination host. Virtual machine migration is categorized into live (hot) migration and cold migration. Live migration moves a running virtual machine from a source host to a destination host; the downtime required during live migration is typically short enough to be virtually imperceptible to users. Cold migration moves a stopped virtual machine from a source host to a destination host and requires a relatively long downtime. Memory data migration involves migrating the virtual machine's memory data from the source host to the destination host. Typically, cold migration does not involve memory data migration, because it requires stopping the virtual machine and memory is volatile. Live migration does not require stopping the virtual machine; therefore, the virtual machine's memory data must be fully and correctly migrated to the destination host to ensure that the virtual machine runs properly on the destination host after the migration is complete. 
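The overcommit arithmetic behind the memory run condition described above can be sketched numerically. The function below is purely illustrative; its name, the MiB units, and the returned fields are assumptions for this sketch and do not appear in the disclosure.

```python
def memory_run_risk(vm_demands_mib, host_physical_mib):
    """Illustrative check for the 'memory run' condition: physical memory is
    overcommitted across VMs, and a run occurs when the VMs' actual combined
    demand exceeds the host's total physical memory."""
    total_demand = sum(vm_demands_mib)
    return {
        "total_demand_mib": total_demand,
        "memory_run": total_demand > host_physical_mib,
        # How far demand exceeds supply (0 when there is still headroom).
        "shortfall_mib": max(0, total_demand - host_physical_mib),
    }
```

For example, three VMs demanding 48000, 40000, and 36000 MiB on a 96 GiB (98304 MiB) host yield a shortfall of 25696 MiB, which is exactly the situation that migrating one VM away is meant to relieve.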
Memory virtualization is a virtualization technology that allows physical memory to be presented as a logically larger memory space, letting programs access more memory resources. It divides the virtual address space into fixed-size pages (called virtual memory pages) and maps these virtual memory pages to physical memory, so that each process or resource (such as a virtual machine) can have its own virtual memory space. A page miss, also known as a page fault, is a memory exception that arises from memory virtualization. A page fault generally means that a virtual memory page is not mapped to a physical memory page, so the memory data corresponding to the virtual memory page cannot be accessed. This memory exception is recoverable, however, and the operation that recovers from a page fault is called a page fault recovery operation. A page fault recovery operation reallocates a physical memory page to the faulted virtual memory page and restores the memory data to the reallocated physical memory page. During their research, the inventors found that currently, during virtual machine migration, when the virtual machine manager on the source host migrates memory data, it traverses the virtual memory pages occupied by the virtual machine and indiscriminately issues memory access requests to them. A memory access request issued for a virtual memory page in the page fault state triggers a page fault recovery operation in the source host's operating system, and that recovery operation consumes physical memory on the source host. Therefore, as described in the background, memory data migration can cause significant fluctuations in the source host's physical memory usage. The inventors also found that virtual machine migration is often undertaken precisely because physical memory on the source host is insufficient, in the hope that migrating some virtual machines away will alleviate the shortage. 
However, in this situation, memory data migration itself requires a significant amount of physical memory on the source host, worsening the source host's physical memory shortage and further exacerbating the memory run problem. Therefore, this embodiment proposes a virtual machine memory data migration method that, by modifying the memory data migration process, can effectively reduce the physical memory usage caused by memory data migration. The technical solutions provided by the embodiments of this disclosure are detailed below in conjunction with the accompanying drawings. Figure 1 is a flowchart of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure, and Figure 2 is a logical diagram of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure. The method can be executed by a virtual machine manager on a source host. The virtual machine manager can be implemented as software, hardware, or a combination of software and hardware, and can be integrated into the source host. Referring to Figure 1, the method may include: Step 100: when performing a first migration of a target virtual memory page, detecting the page fault status of the target virtual memory page; Step 101: in response to the target virtual memory page being in the page fault state, obtaining memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; Step 102: migrating the obtained memory data to the destination host. In this embodiment, the virtual machine manager (VMM) can be a hypervisor, a Quick Emulator (QEMU) and Kernel-based Virtual Machine (KVM) combination, or the like. Furthermore, a virtual machine manager typically includes components that run in kernel mode and components that run in user mode. 
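Steps 100-102 above can be sketched as a single first-pass loop. This is a minimal illustrative sketch, not the disclosure's implementation: all callback names (`is_faulted`, `read_mapped`, `read_without_fault`, `send`) are assumptions standing in for the virtual machine manager's interactions with the kernel interface and the destination host.

```python
def first_pass_migrate(pages, is_faulted, read_mapped, read_without_fault, send):
    """Sketch of steps 100-102: before touching a page during its first
    migration, check its page fault status.

    - is_faulted(page): True if the page is in the page fault state (step 100).
    - read_without_fault(page): fetch the page's data from its actual backing
      location (e.g. swap space) without issuing a memory access, so no page
      fault recovery operation is triggered on the source host (step 101).
    - read_mapped(page): ordinary read from the mapped physical page.
    - send(page, data): transfer the data to the destination host (step 102).
    """
    for page in pages:
        if is_faulted(page):
            data = read_without_fault(page)  # no page fault recovery triggered
        else:
            data = read_mapped(page)         # mapped page: a plain read suffices
        send(page, data)
```

The key property of the loop is that a page in the page fault state is never touched through a memory access request, so the source host never allocates a new physical page on its behalf.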
In this embodiment, the components of the virtual machine manager that run in user mode are referred to as user-mode components, such as the aforementioned QEMU component; the components that run in kernel mode are referred to as kernel-mode components, such as the aforementioned KVM component. The virtual machine memory data migration method provided in this embodiment can be executed primarily by the user-mode components of the virtual machine manager. The method is applicable to the aforementioned virtual machine live migration scenario. Of course, if memory data migration is also required in a virtual machine cold migration scenario, the method provided in this embodiment is equally applicable there. In other words, this embodiment does not limit the application scenario. During research, the inventors found that memory data migration generally includes two migration phases: a full first migration phase and a dirty page migration phase. The full first migration phase can be understood as traversing the virtual machine's full memory data after the migration begins and migrating it from the source host to the destination host. The dirty page migration phase can be understood as follows: after a virtual memory page completes its first migration, the virtual machine continues running normally on the source host, so operations within the virtual machine may change the memory data corresponding to some virtual memory pages. Virtual memory pages whose memory data has changed are dirty pages, and dirty pages must be migrated to the destination host. Furthermore, the dirty page migration phase may require multiple rounds of migration, until the latest round of dirty page migration is detected to take less than a specified time threshold. 
Upon completion of the latest round of dirty page migration, the dirty page migration phase ends. Based on this, this embodiment proposes improvements to the aforementioned full first migration phase to reduce the physical memory usage caused by this phase. It is worth noting that during the full first migration phase, the first migration must be performed on the virtual memory pages occupied by the virtual machine. For ease of description, this embodiment will use the target virtual memory page as an example to explain the first migration process in detail. It should be understood that the virtual machine memory data migration method provided in this embodiment is applicable to any virtual memory page occupied by the virtual machine to be migrated. Referring to Figure 1 , in step 100, it is proposed that during the first migration of the target virtual memory page, the page fault status of the target virtual memory page be detected. It will be appreciated that in this embodiment, when performing the first migration for a target virtual memory page, a memory access request for the target virtual memory page is not directly initiated. Instead, the page fault status of the target virtual memory page is first detected. In this embodiment, various implementations can be used to detect the page fault status of the target virtual memory page in step 100. FIG3 is a logical diagram of an optional implementation of a virtual machine memory data migration method provided by an exemplary embodiment of the present disclosure. Referring to FIG3 , an exemplary implementation of step 100 may include: the virtual machine manager may initiate a page fault status query instruction for the target virtual memory page to a kernel interface of the source host that supports memory data migration. 
The page fault status query instruction is used to trigger the kernel interface to return page fault status description information corresponding to the target virtual memory page; and in response to the page fault status description information indicating that the target virtual memory page is in a page fault state, the virtual machine manager determines that the target virtual memory page is in a page fault state. It should be understood that the source host's physical memory is managed in kernel mode. Therefore, memory data migration requires support from certain interfaces in the source host's kernel mode. In this embodiment, these interfaces are described as kernel interfaces in the source host used to support memory data migration. The virtual machine manager works in conjunction with these kernel interfaces to complete memory data migration. The kernel interfaces in the source host used to support memory data migration can be kernel interfaces provided by the source host's operating system, such as the Memory Manager (MM). Of course, the kernel interfaces in the source host used to support memory data migration can also be interfaces provided by kernel mode components of the virtual machine manager, such as the memory access interface provided by the KVM component. The provider of the kernel interfaces in the source host used to support memory data migration is not limited herein. Accordingly, the virtual machine manager has call permissions for relevant interfaces provided in the source host's kernel mode. For example, the QEMU component has call permissions for various interfaces provided by the KVM component. It also has call permissions for certain kernel interfaces provided by the source host's operating system. In this exemplary implementation, the communication protocol between the virtual machine manager and the kernel interface can be modified so that both parties reach a consensus on the page fault status query instruction. 
In actual applications, the virtual machine manager can issue the page fault status query instruction in accordance with the instruction format agreed upon with the kernel interface, and the kernel interface can use this agreed format to determine whether a received instruction is a page fault status query instruction. The agreed format here may include, for example, a special identifier or field in the instruction, which is not limited here. During research, the inventors discovered that a virtual machine manager manages virtual machines on a source host. Therefore, the virtual machine manager can determine which virtual memory pages are occupied by the virtual machine to be migrated within the virtual memory space provided by the source host. The address of a virtual memory page is the host virtual address (HVA). Based on this, in this exemplary implementation, the page fault status query instruction may include the identifier of the target virtual memory page, such as the aforementioned HVA, so that the page fault status query instruction is directed to the target virtual memory page. Furthermore, in this exemplary implementation, processing logic for page fault status queries may be incorporated into the kernel interface. Upon receiving a page fault status query instruction, the kernel interface will execute this processing logic. The processing logic for page fault status queries in the kernel interface may include: generating page fault status description information for the target virtual memory page in response to a page fault status query instruction issued by the virtual machine manager when performing the first migration of the target virtual memory page; and providing the page fault status description information to the virtual machine manager as a response to the page fault status query instruction.
The inventors also discovered during their research that the kernel interface is responsible for address translation between host virtual memory addresses (HVAs) and host physical memory addresses (HPAs) during memory data migration. This address translation relies on page tables. For example, the memory manager (MM) maintains a memory map (MMap), which is a type of page table. In the page table maintained by the kernel interface, the page table entry for a virtual memory page includes a presence flag (represented by the "Present" flag). When the presence flag is 1, it indicates that a valid translation from the HVA to the HPA is possible, or that the HPA mapped by the HVA exists; therefore, it can be determined that the corresponding virtual memory page is not in a page fault state. When the presence flag is 0, it indicates that a valid translation from the HVA to the HPA is not possible, or that the HPA mapped by the HVA does not exist; therefore, it can be determined that the corresponding virtual memory page is in a page fault state. Based on this, the kernel interface can query the page table and determine the page fault state of the target virtual memory page based on the value of the presence flag associated with the target virtual memory page. Accordingly, the page fault state description information generated by the kernel interface for the target virtual memory page can indicate the page fault state of the target virtual memory page. In this exemplary implementation, the virtual machine manager can parse the page fault status description information returned by the kernel interface; in response to the page fault status description information indicating that the target virtual memory page is in a page fault state, the virtual machine manager determines that the target virtual memory page is in a page fault state.
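The presence-flag check described above can be sketched as follows; the bit layout, constant, and function name are illustrative assumptions, not taken from any real kernel's page table format:

```python
# Hypothetical page table entry layout: bit 0 holds the "Present" flag.
PTE_PRESENT = 0x1

def in_page_fault_state(pte: int) -> bool:
    """Present == 1: a valid HVA -> HPA translation exists (no page fault).
    Present == 0: no HPA is mapped, so the page is in a page fault state."""
    return (pte & PTE_PRESENT) == 0

# A mapped entry (some HPA bits plus Present = 1) is not faulted;
# a zero entry has no valid translation and is faulted.
print(in_page_fault_state(0x7F3A1000 | PTE_PRESENT))  # False
print(in_page_fault_state(0x0))                       # True
```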
It will be appreciated that in this exemplary implementation, the virtual machine manager can trigger the kernel interface to query the page fault status of the target virtual memory page by calling a kernel interface on the source host that supports memory data migration. Thus, the virtual machine manager can obtain the page fault status of the target virtual memory page from the kernel interface, ensuring accuracy. In this embodiment, in addition to the exemplary implementation described above, the virtual machine manager can also use other implementations to detect the page fault status of the target virtual memory page in step 100. For example, the virtual machine manager can read the page table entry corresponding to the target virtual memory page from the kernel interface, and then detect the page fault status of the target virtual memory page based on the value of the presence flag bit in that page table entry. For another example, the source host's kernel state typically has other interfaces that provide a mapping relationship between virtual memory pages and physical memory pages. The virtual machine manager can also communicate with these interfaces in the source host's kernel state that are not used to support memory data migration to obtain the required information from these interfaces, thereby detecting the page fault status of the target virtual memory page. This will not be described in detail here, nor will further examples be provided. Continuing with FIG. 1 , step 101 proposes that, in response to the target virtual memory page being in a page fault state, memory data corresponding to the target virtual memory page is obtained without triggering a page fault recovery operation.
During research, the inventors discovered that a page fault recovery operation is caused by a memory access request being initiated for a virtual memory page in a page fault state. Therefore, in step 101, after determining that the target virtual memory page is in a page fault state, the virtual machine manager no longer initiates a memory access request for the target virtual memory page, thereby preventing the triggering of a page fault recovery operation. Therefore, this embodiment no longer uses memory access to retrieve swapped-out memory data, thereby avoiding triggering a page fault recovery operation. In addition to memory access, other data acquisition methods capable of retrieving swapped-out memory data are applicable to this embodiment for retrieving swapped-out memory data. It should be understood that this embodiment does not limit the data acquisition methods that can be used in step 101; exemplary data acquisition methods will be provided in subsequent embodiments. Furthermore, as mentioned above, the source host performs memory management in kernel mode. Therefore, the actual storage location of the memory data corresponding to the virtual memory page with the page fault is managed in kernel mode in the source host. Based on this, in step 101, based on the virtual machine manager's existing communication capabilities with the source host's kernel mode, the virtual machine manager can seamlessly determine the actual storage location of the memory data corresponding to the target virtual memory page and then implement access logic capable of reaching this actual storage location, thereby retrieving the memory data corresponding to the target virtual memory page. As can be seen, in this embodiment, in step 101, in response to the presence of corresponding memory data in the target virtual memory page, the virtual machine manager can retrieve the data from the actual storage location of the memory data, rather than through a memory access request. 
This ensures that the virtual machine manager can retrieve the memory data corresponding to the missing virtual memory page and also prevents triggering a page fault recovery operation. Therefore, when the virtual machine manager retrieves the memory data corresponding to the missing virtual memory page, it does not cause incremental occupation of physical memory pages on the source host. Continuing with Figure 1, in step 102, the retrieved memory data can be migrated to the destination host. In this embodiment, during the entire first full migration phase, the virtual machine manager can traverse and detect the page fault status of each virtual memory page occupied by the virtual machine to be migrated. For virtual memory pages that are not in the page fault state, the virtual machine manager can initiate memory access requests for these virtual memory pages according to the traditional migration method to obtain the corresponding memory data from the physical memory pages mapped to these virtual memory pages. It is understandable that the memory access requests initiated for these virtual memory pages will not cause incremental occupation of physical memory pages on the source host. For virtual memory pages in the page fault state, the virtual machine manager will no longer initiate memory access requests according to the traditional method, but will instead obtain data from the actual storage location of the memory data. This change in data acquisition method not only ensures that the virtual machine manager can obtain the memory data corresponding to the missing virtual memory page, but also avoids triggering a page fault recovery operation, and therefore does not cause incremental occupation of physical memory pages on the source host.
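The traversal logic of the first full migration phase described above can be sketched as follows; all helper callables are hypothetical stand-ins for the mechanisms discussed in the text:

```python
def first_full_migration(pages, is_faulted, read_via_memory_access,
                         fetch_without_fault, send_to_destination):
    """Sketch of the full first migration phase: faulted pages are read from
    their actual storage location instead of via a memory access request."""
    for hva in pages:
        if is_faulted(hva):
            # No memory access request here, so no page fault recovery
            # operation is triggered and no physical page is allocated.
            data = fetch_without_fault(hva)
        else:
            # Mapped page: a normal memory access request is harmless.
            data = read_via_memory_access(hva)
        send_to_destination(hva, data)

# Hypothetical usage with stub callables:
sent = []
first_full_migration(
    pages=[0x1000, 0x2000],
    is_faulted=lambda hva: hva == 0x2000,
    read_via_memory_access=lambda hva: b"resident",
    fetch_without_fault=lambda hva: b"swapped-or-empty",
    send_to_destination=lambda hva, data: sent.append((hva, data)),
)
print(sent)
```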
This ensures that during the entire first full migration phase, the virtual machine manager no longer triggers page fault recovery operations and therefore no longer causes incremental occupation of physical memory pages on the source host. The dirty page migration phase mentioned above, another phase included in the memory data migration process, is described next. In this embodiment, after completing the first migration for the target virtual memory page, in response to detecting that the target virtual memory page has become dirty, the virtual machine manager can obtain the dirty page data corresponding to the target virtual memory page from the physical memory page mapped to it, and then migrate the obtained dirty page data for the target virtual memory page to the destination host. The technical concept "dirty page" here refers to a modified memory page. It should be understood that, while managing a virtual machine, the virtual machine manager monitors memory access requests initiated by the virtual machine for virtual memory pages. As mentioned above, in response to the target virtual memory page being in a page fault state, a memory access request triggers a page fault recovery operation to allocate a physical memory page for the target virtual memory page, allowing the corresponding memory data to be stored in the physical memory page. It is worth emphasizing that the page fault recovery operation in this case is caused by a memory access request initiated by the virtual machine. This memory access request is required for the normal operation of the virtual machine. The resulting occupation of physical memory pages on the source host is reasonable and is not interfered with by this embodiment. Therefore, in response to the virtual machine manager detecting that the target virtual memory page has become dirty, the target virtual memory page must no longer be in a page fault state.
Therefore, the virtual machine manager can normally initiate a memory access request for the target virtual memory page to obtain the dirty page data corresponding to the target virtual memory page and migrate it to the destination host. As can be seen, in this embodiment, during the dirty page migration phase, the virtual machine manager does not cause incremental occupation of physical memory pages on the source host when performing dirty page data migration. In summary, this embodiment improves the memory data migration solution during virtual machine migration by detecting the page fault status of each virtual memory page in the virtual machine during the first migration. For virtual memory pages in the page fault state, the virtual machine manager on the source host retrieves the corresponding memory data for these virtual memory pages without triggering a page fault recovery operation. This prevents page fault recovery operations from affecting the source host's physical memory usage, thereby reducing the amount of source host physical memory usage caused by memory data migration. Consequently, memory data migration no longer causes significant fluctuations in the source host's physical memory usage, thereby preventing exacerbation of memory squeeze issues on the source host. In the above or following embodiments, various data acquisition schemes can be used in step 101 to acquire the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation. During research, the inventors discovered that there are many reasons why a virtual memory page may be in a page fault state. Depending on the cause, the actual storage location of the memory data corresponding to the virtual memory page may vary. Therefore, an exemplary data acquisition scheme proposes that, in response to different page fault conditions, appropriate access logic can be used to access the actual storage location of the memory data and acquire the memory data.
The causes of the page fault condition may include, but are not limited to, memory swapping and delayed memory allocation. To this end, this data acquisition solution proposes: Without triggering a page fault recovery operation, in response to the memory data corresponding to the target virtual memory page being swapped out to the swap space corresponding to the source host, the memory data corresponding to the target virtual memory page is retrieved from the swap space; in response to the target virtual memory page not yet being allocated a physical memory page on the source host, an empty file is used as the memory data corresponding to the target virtual memory page. This data acquisition solution provides several possible actual storage locations of the memory data for different causes of page faults: one is being swapped out to the swap space, and the other is no actual storage location being allocated (i.e., no memory data exists under the virtual memory page). This involves several technical concepts, which are explained here. Memory swapping refers to a memory exchange technology: when physical memory on a host is insufficient, some memory data is saved to the swap space to free up physical memory on the host. Swap space is a virtual memory technology used in computer systems, allowing the operating system to use disk space as temporary memory. When physical memory on a host is insufficient, swap space can store temporarily inactive memory pages, freeing up physical memory for other programs. Swap space is typically located on a hard disk. Delayed memory allocation is a memory allocation mechanism: before memory allocation, virtual machines on a host only receive a commitment to use memory space, rather than actually being allocated physical memory. When the virtual machine needs memory, one or a group of physical memory pages are allocated to the virtual machine in the host kernel state.
In other words, some virtual memory pages occupied by a virtual machine may not have been accessed by the virtual machine yet, and therefore these virtual memory pages will not be allocated to physical memory pages. Only when the virtual machine initiates access to these virtual memory pages will the host kernel state trigger the allocation of physical memory pages. It can be understood that the fact that a virtual memory page has not yet been allocated to physical memory indicates that the virtual machine has no memory data associated with that virtual memory page. This data acquisition solution also provides access logic for acquiring memory data based on different actual storage locations. If the memory data corresponding to the target virtual memory page is located in swap space, the virtual machine manager can retrieve the memory data corresponding to the target virtual memory page from the swap space. If the target virtual memory page has not yet been allocated to a physical memory page on the source host, this indicates that no memory data exists under the target virtual memory page. The virtual machine manager can directly use an empty file as the memory data corresponding to the target virtual memory page. Furthermore, this embodiment provides a preferred implementation scheme for obtaining the memory data corresponding to the target virtual memory page from the swap space. This preferred implementation scheme proposes that the virtual machine manager initiate a data acquisition request for the target virtual memory page to a kernel interface on the source host that supports memory data migration. The data acquisition request triggers the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space, thereby obtaining the memory data corresponding to the target virtual memory page provided by the kernel interface. 
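The cause-dependent access logic just described can be sketched as follows; the string labels for the two page fault causes and the helper callable are illustrative assumptions:

```python
def acquire_memory_data(hva, fault_cause, read_from_swap):
    """Dispatch on the cause of the page fault, as described above."""
    if fault_cause == "memory_swap":
        # The data was swapped out: read it back from swap space
        # (here via a hypothetical callable) without a memory access.
        return read_from_swap(hva)
    if fault_cause == "delayed_allocation":
        # No physical page was ever allocated, so no memory data exists;
        # an empty file (empty bytes here) stands in for the page.
        return b""
    raise ValueError("page is not in a page fault state")

print(acquire_memory_data(0x3000, "delayed_allocation", None))  # b''
```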
In other words, in this preferred implementation scheme, the virtual machine manager does not need to directly access the swap space. Instead, it can use the kernel interface on the source host to read the required memory data from the swap space. Thus, the virtual machine manager can obtain the memory data corresponding to the target virtual memory page from the kernel interface. Referring to FIG. 3, in order to facilitate the virtual machine manager obtaining the required memory data from the above kernel interface, this preferred implementation further proposes that, to support data transfer between the virtual machine manager and the above kernel interface, a cache space can be preset. On this basis, the virtual machine manager can allocate a cache address for the target virtual memory page within the preset cache space; the identifier and cache address of the target virtual memory page are included in the data acquisition request, triggering the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space and store it at the cache address within the preset cache space. The kernel interface can use the identifier of the target virtual memory page carried in the data acquisition request to find the memory data corresponding to the target virtual memory page in the swap space, thereby accurately reading the memory data corresponding to the target virtual memory page. Based on this, the virtual machine manager can read the memory data corresponding to the target virtual memory page from the cache address. This effectively improves the efficiency of the virtual machine manager in obtaining the memory data corresponding to the target virtual memory page from the swap space, reduces the logic complexity within the virtual machine manager, and fully utilizes the kernel interface in the source host.
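The cache-space handoff between the virtual machine manager and the kernel interface can be sketched as follows; the dict-based "swap space" and "cache space" and the trivial allocator are stand-ins for the real kernel structures, not part of the disclosed scheme:

```python
# Hypothetical swap space: HVA -> swapped-out page contents.
SWAP_SPACE = {0x5000: b"page-data"}

def kernel_handle_request(request, cache_space):
    """Kernel-interface side: locate the page in swap space by its
    identifier and store it at the cache address named in the request."""
    hva, cache_addr = request["hva"], request["cache_addr"]
    cache_space[cache_addr] = SWAP_SPACE[hva]

def vmm_fetch_from_swap(hva, cache_space):
    """VMM side: allocate a cache address, issue the data acquisition
    request, then read the page data back from that cache address."""
    cache_addr = len(cache_space)  # trivial allocator for the sketch
    kernel_handle_request({"hva": hva, "cache_addr": cache_addr}, cache_space)
    return cache_space[cache_addr]

cache = {}
print(vmm_fetch_from_swap(0x5000, cache))  # b'page-data'
```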
It should be understood that the above-mentioned possible actual storage locations for the memory data are merely illustrative and are not intended to be limiting in this embodiment. Furthermore, the access logic provided for the above-mentioned possible actual storage locations for the memory data is also illustrative and is not intended to be limiting in this embodiment. The key requirement is to ensure that the virtual machine manager can use appropriate access logic to reach the actual storage location of the memory data corresponding to the missing virtual memory page. Furthermore, to enable the virtual machine manager to more conveniently and efficiently determine the cause of the target virtual memory page being in a page fault state, this embodiment also proposes a preferred solution. This preferred solution inherits the aforementioned basic concept of initiating a page fault status query instruction to the kernel interface used to support memory data migration in the source host. Based on this, this preferred solution proposes that the page fault type be indicated in the page fault status description information returned by the kernel interface. Here, the page fault type corresponds to the aforementioned cause of the page fault state. For example, if the cause of the page fault state is memory swapping, the corresponding page fault type is memory swapping; if the cause of the page fault state is delayed memory allocation, the corresponding page fault type is delayed allocation. 
Based on this, in this preferred solution: the virtual machine manager can parse the page fault status description information returned by the kernel interface for the target virtual memory page to obtain the page fault type corresponding to the target virtual memory page; in response to the page fault status description information indicating that the page fault type corresponding to the target virtual memory page is the memory swap type, determine that the memory data corresponding to the target virtual memory page has been swapped out to the swap space; and in response to the page fault status description information indicating that the page fault type corresponding to the target virtual memory page is the delayed allocation type, determine that the target virtual memory page has not yet been allocated a physical memory page. Figure 4 is a schematic diagram of the structure of page fault status description information provided in an exemplary embodiment of the present disclosure. Referring to Figure 4, the page fault status description information may include a page fault status identification field and a page fault type field. Based on this, in response to the page fault status identification field taking a first value, it indicates that the target virtual memory page is in a page fault state; in response to the page fault status identification field taking a second value, it indicates that the target virtual memory page is not in a page fault state. In practical applications, the first value may be 1 and the second value may be 0. Furthermore, in response to the page fault type field taking a third value, it indicates that the target virtual memory page's page fault type is the memory swap type; in response to the page fault type field taking a fourth value, it indicates that the target virtual memory page's page fault type is the delayed allocation type.
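The decoding of the description information of Figure 4 can be sketched as follows; the concrete third and fourth values (here 2 and 3) are assumptions, since the text leaves them open:

```python
# Field values: first value 1 = faulted, second value 0 = not faulted
# (per the text); 2 and 3 for the type field are assumed placeholders.
MEMORY_SWAP_TYPE = 2
DELAYED_ALLOCATION_TYPE = 3

def parse_description(status_flag, fault_type_flag):
    """Return (in_page_fault_state, fault_type_label_or_None)."""
    if status_flag == 0:
        return False, None
    labels = {MEMORY_SWAP_TYPE: "memory_swap",
              DELAYED_ALLOCATION_TYPE: "delayed_allocation"}
    return True, labels[fault_type_flag]

print(parse_description(1, MEMORY_SWAP_TYPE))  # (True, 'memory_swap')
print(parse_description(0, 0))                 # (False, None)
```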
FIG4 also illustrates other fields in the page fault status description information, such as the information type field, which indicates that the current information is a page fault status description information for identification by the virtual machine manager. Further examples of other fields that may be included in the page fault status description information are not provided herein. It should be understood that, in addition to the preferred solution described above, other implementation solutions may also be adopted in this embodiment. The virtual machine manager can determine the cause of the target virtual memory page's page fault. For example, the virtual machine manager can request page table monitoring permissions from the source host's kernel state to monitor modification events in the page table. Based on these modification events, the virtual machine manager can determine which virtual memory pages have experienced which event that caused the page fault, thereby determining the cause of the page fault for the faulted virtual memory page. Further implementation examples are not provided here. In summary, in this embodiment, the virtual machine manager can fully interact with the kernel interface supporting memory data migration in the source host, utilizing this kernel interface to provide the virtual machine manager with a reference for detecting page faults and to retrieve memory data for the faulted virtual memory page. This enables the virtual machine manager to more efficiently detect virtual memory page faults and, without triggering a page fault recovery operation, more efficiently obtain the memory data corresponding to the faulted virtual memory page. Figure 5 is a flowchart of a virtual machine memory data migration method provided by another exemplary embodiment of the present disclosure. This method can be applied to a kernel interface in a source host for supporting memory data migration. Referring to FIG. 
5, the method may include: Step 500: receiving a page fault status query instruction issued by a virtual machine manager in the source host when performing a first migration of a target virtual memory page; Step 501: generating page fault status description information for the target virtual memory page based on the page fault status query instruction, wherein the page fault status description information indicates the page fault status of the target virtual memory page; Step 502: providing the page fault status description information to the virtual machine manager so that, upon detecting that the target virtual memory page is in a page fault state, the virtual machine manager retrieves the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation and migrates the retrieved memory data to the destination host. It should be understood that in this embodiment, the kernel interface still maintains its original memory data migration-related functions, such as responding to memory access requests for virtual memory pages. The memory data migration-related functions originally performed by the kernel interface will not be described in detail herein. In this embodiment, the processing logic shown in FIG. 5 is added to the kernel interface to enable the kernel interface to cooperate with the virtual machine manager, supporting the virtual machine manager in avoiding triggering page fault recovery operations during memory data migration. The following clarifies only some technical details of the kernel interface: In an optional embodiment, generating page fault status description information for the target virtual memory page may include: querying a page table; determining the page fault status of the target virtual memory page based on the value of the presence flag bit associated with the target virtual memory page; and generating the page fault status description information for the target virtual memory page.
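Steps 500 to 502 on the kernel-interface side can be sketched as follows, with a dict standing in for the page table and the presence flag modeled as bit 0 of the entry; all names and the response layout are illustrative:

```python
PTE_PRESENT = 0x1  # assumed: bit 0 of a page table entry is "Present"

def handle_status_query(hva, page_table):
    """Kernel-interface sketch: query the page table (step 501) and build
    the page fault status description information returned in step 502."""
    pte = page_table.get(hva, 0)
    faulted = (pte & PTE_PRESENT) == 0
    return {"info_type": "page_fault_status", "hva": hva, "faulted": faulted}

page_table = {0x6000: 0xDEAD000 | PTE_PRESENT}  # one mapped page
print(handle_status_query(0x6000, page_table)["faulted"])  # False
print(handle_status_query(0x7000, page_table)["faulted"])  # True
```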
In an optional embodiment, the page fault status description information further indicates the page fault type. The page fault status description information may indicate that the page fault type corresponding to the target virtual memory page is the memory swap type, allowing the virtual machine manager to determine that the memory data corresponding to the target virtual memory page has been swapped out to the swap space; and the page fault status description information may indicate that the page fault type corresponding to the target virtual memory page is the delayed allocation type, allowing the virtual machine manager to determine that the target virtual memory page has not yet been allocated a physical memory page. In an optional embodiment, the page fault status description information includes a page fault status identification field and a page fault type field. In response to the page fault status identification field taking a first value, the target virtual memory page is in a page fault state; in response to the page fault status identification field taking a second value, the target virtual memory page is not in a page fault state; in response to the page fault type field taking a third value, the page fault type of the target virtual memory page is the memory swap type; and in response to the page fault type field taking a fourth value, the page fault type of the target virtual memory page is the delayed allocation type. In an optional embodiment, in response to a data acquisition request initiated by the virtual machine manager for the target virtual memory page, memory data corresponding to the target virtual memory page may be read from the swap space and provided to the virtual machine manager.
In an optional embodiment, in response to the data acquisition request initiated by the virtual machine manager carrying the target virtual memory page identifier and cache address, the memory data corresponding to the target virtual memory page read from the swap space is stored in the cache address, so that the virtual machine manager can read the memory data corresponding to the target virtual memory page from the cache address. It is worth noting that for more specific technical details regarding the kernel interface, please refer to the previous description and will not be elaborated upon here. However, this should not diminish the scope of protection of the present disclosure. Furthermore, while some processes described in the above embodiments and accompanying figures include multiple operations that appear in a specific order, it should be understood that these operations may be executed in a different order than those presented herein or in parallel. Operation sequence numbers, such as 801 and 802, are merely used to distinguish between different operations and do not represent any specific execution order. Furthermore, these processes may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should be noted that terms such as "first" and "second" are used herein to distinguish between different values and do not represent a sequential order, nor do they limit "first" and "second" to different types. FIG6 is a schematic diagram of the structure of a computing device provided by another exemplary embodiment of the present disclosure. As shown in FIG6 , the computing device may be the source host for the virtual machine to be migrated. The computing device may include a memory 60, a processor 61, and a communication component 62. The processor 61 is coupled to the memory 60 and the communication component 62, and is configured to execute the computer program in the memory 60. 
In some design solutions, the processor 61 can execute the computer program in the memory 60 to implement a virtual machine manager in the source host. In this case, the processor 61 can be configured to: detect the page fault state of the target virtual memory page when performing the first migration for the target virtual memory page; in response to the target virtual memory page being in the page fault state, obtain the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; and migrate the obtained memory data to the destination host. In an optional embodiment, when obtaining the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation, the processor 61 can be configured to: without triggering a page fault recovery operation, in response to the memory data corresponding to the target virtual memory page having been swapped out to the swap space of the source host, obtain the memory data corresponding to the target virtual memory page from the swap space; and in response to the target virtual memory page not being allocated a physical memory page on the source host, use an empty file as the memory data corresponding to the target virtual memory page. In an optional embodiment, when detecting the page fault status of the target virtual memory page, the processor 61 may be configured to: initiate a page fault status query instruction for the target virtual memory page to a kernel interface on the source host that supports memory data migration, wherein the page fault status query instruction triggers the kernel interface to return page fault status description information corresponding to the target virtual memory page; and, in response to the page fault status description information indicating that the target virtual memory page is in a page fault state, determine that the target virtual memory page is in the page fault state. 
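The three-way decision the virtual machine manager makes during the first migration pass can be sketched as follows. The callback names (`query_status`, `read_physical`, `read_swap`, `send`) are hypothetical stand-ins for the kernel interface and the transport to the destination host; the disclosure does not prescribe these names or signatures.

```python
PAGE_SIZE = 4096
ZERO_PAGE = bytes(PAGE_SIZE)  # stand-in for "empty file" data for one page

def first_pass_migrate(pages, query_status, read_physical, read_swap, send):
    """One possible shape of the first-migration loop described above.

    For each page: a non-faulted page is read from its mapped physical
    page; a swapped-out page is fetched from swap via the kernel interface
    (no fault recovery is triggered, so no physical page is consumed);
    a delayed-allocation page contributes empty data.
    """
    for page in pages:
        faulted, ftype = query_status(page)
        if not faulted:
            data = read_physical(page)   # normal resident-page path
        elif ftype == "swap":
            data = read_swap(page)       # kernel reads swap directly
        else:                            # delayed allocation: nothing backs it
            data = ZERO_PAGE
        send(page, data)
```

Note that neither branch for a faulted page touches the guest's physical memory mapping, which is the source of the memory-usage savings the embodiments describe.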
In an optional embodiment, the page fault status description information further indicates a page fault type corresponding to the target virtual memory page. The processor 61 may be further configured to: in response to the page fault type being a memory swap type, determine that the memory data corresponding to the target virtual memory page has been swapped out to the swap space; and in response to the page fault type being a delayed allocation type, determine that the target virtual memory page has not yet been allocated to a physical memory page. In an optional embodiment, the page fault status description information includes a page fault status identification field and a page fault type field. In response to the page fault status identification field taking a first value, it indicates that the target virtual memory page is in a page fault state. In response to the page fault status identification field taking a second value, it indicates that the target virtual memory page is not in a page fault state. In response to the page fault type field taking a third value, it indicates that the page fault type of the target virtual memory page is a memory swap fault. In response to the page fault type field taking a fourth value, it indicates that the page fault type of the target virtual memory page is a delayed allocation fault. In an optional embodiment, when obtaining memory data corresponding to the target virtual memory page from the swap space, the processor 61 may be configured to: initiate a data acquisition request for the target virtual memory page to a kernel interface, wherein the data acquisition request is configured to trigger the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space; and obtain the memory data corresponding to the target virtual memory page provided by the kernel interface. 
In an optional embodiment, when initiating a data acquisition request for a target virtual memory page to the kernel interface, the processor 61 may be configured to: allocate a cache address for the target virtual memory page in a preset cache space; include an identifier and cache address of the target virtual memory page in the data acquisition request to trigger the kernel interface to read memory data corresponding to the target virtual memory page from the swap space and store the read memory data at the cache address within the preset cache space; and acquire the memory data corresponding to the target virtual memory page provided by the kernel interface, including: reading the memory data corresponding to the target virtual memory page from the cache address. In an optional embodiment, the processor 61 may be further configured to: in response to the target virtual memory page not being in a page fault state, acquire the memory data corresponding to the target virtual memory page from the physical memory page mapped to the target virtual memory page. In an optional embodiment, the processor 61 may be further configured to: after completing the first migration for the target virtual memory page, in response to detecting that the target virtual memory page has become a dirty page, obtain dirty page data corresponding to the target virtual memory page from the physical memory page mapped to the target virtual memory page; and migrate the obtained dirty page data for the target virtual memory page to the destination host. In other designs, the processor 61 may execute a computer program in the memory 60 to implement a kernel interface in the source host for supporting memory data migration. 
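The cache-address flow above (allocate a slot in a preset cache space, carry the page identifier and cache address in the request, have the kernel interface fill the slot, then read the data back) can be sketched with a toy in-process model. `PresetCache`, `fetch_via_cache`, and `kernel_read_swap` are illustrative names introduced here, not interfaces from the disclosure.

```python
class PresetCache:
    """Toy model of the preset cache space: the manager allocates a slot,
    the kernel interface writes into it, the manager reads it back."""
    def __init__(self, slots: int, page_size: int = 4096):
        self.buf = bytearray(slots * page_size)
        self.page_size = page_size
        self.next_free = 0

    def alloc(self) -> int:
        """Hand out the next free cache address (a byte offset here)."""
        addr = self.next_free * self.page_size
        self.next_free += 1
        return addr

def fetch_via_cache(page_id, cache, kernel_read_swap):
    # 1. allocate a cache address for the target virtual memory page
    addr = cache.alloc()
    # 2. the data acquisition request carries (page_id, addr); the kernel
    #    interface reads the page from swap and stores it at that address
    cache.buf[addr:addr + cache.page_size] = kernel_read_swap(page_id)
    # 3. the manager reads the memory data back from the cache address
    return bytes(cache.buf[addr:addr + cache.page_size])
```

The point of the indirection is that the kernel interface never has to map the data into the guest's address space: it lands in migration-private cache memory instead.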
In this case, processor 61 may be configured to: receive a page fault status query instruction initiated by the virtual machine manager in the source host when performing the first migration of the target virtual memory page; generate page fault status description information for the target virtual memory page based on the page fault status query instruction, wherein the page fault status description information indicates the page fault status of the target virtual memory page; and provide the page fault status description information to the virtual machine manager so that upon detecting that the target virtual memory page is in a page fault state, the virtual machine manager, without triggering a page fault recovery operation, retrieves memory data corresponding to the target virtual memory page and migrates the retrieved memory data to the destination host. Furthermore, as shown in FIG6 , the computing device also includes other components, such as a power supply component 63. FIG6 only schematically illustrates some components and does not mean that the computing device only includes the components shown in FIG6 . It is worth noting that the technical details of the above-mentioned computing device embodiments can be found in the description of the virtual machine manager and the kernel interface for supporting memory data migration in the aforementioned method embodiments. To save space, these details will not be repeated here, but this should not compromise the scope of protection of the present disclosure. Accordingly, embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program. When executed, the computer program can implement each step of the above-described method embodiment. Accordingly, embodiments of the present disclosure further provide a computer program product, including the computer program. 
When executed by one or more processors, the computer program causes the one or more processors to: detect a page fault status of a target virtual memory page during a first migration of the target virtual memory page; in response to the target virtual memory page being in a page fault state, retrieve memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; and migrate the retrieved memory data for the target virtual memory page to a destination host. The computer program product provided in this embodiment can detect the page fault status of each virtual memory page in a virtual machine during a first migration. For virtual memory pages in a page fault state, the virtual machine manager on the source host retrieves the corresponding memory data without triggering a page fault recovery operation. In this way, even if some virtual memory pages are in a page fault state during virtual machine migration, page fault recovery operations are not triggered. This prevents page fault recovery operations from consuming the source host's physical memory, reducing the amount of physical memory the source host uses for memory data migration. Consequently, memory data migration no longer causes significant fluctuations in physical memory usage on the source host, thus preventing exacerbated memory run issues on the source host. The memory in FIG. 6 is used to store computer programs and can be configured to store various other data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phone book data, messages, images, videos, etc. 
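The dirty-page follow-up described in the embodiments above (re-migrating pages written after their first pass) can be sketched with a simple dirty-page log. Real hypervisors typically rely on hardware dirty logging or kernel mechanisms such as Linux's soft-dirty page tracking; the class and function names here are illustrative only.

```python
class DirtyTracker:
    """Minimal dirty-page log: the guest write path marks pages, and the
    migration loop drains and re-sends them after the first full pass."""
    def __init__(self):
        self.dirty = set()

    def mark_write(self, page_id):
        self.dirty.add(page_id)

    def drain(self):
        """Atomically take the current dirty set and reset it."""
        pages, self.dirty = self.dirty, set()
        return pages

def migrate_dirty(tracker, read_physical, send):
    # A dirty page was written, so it is necessarily resident: its data is
    # read from the mapped physical page and re-sent. Repeat until the set
    # converges (real systems also bound the number of rounds).
    rounds = 0
    while True:
        pages = tracker.drain()
        if not pages:
            break
        for page in sorted(pages):
            send(page, read_physical(page))
        rounds += 1
    return rounds
```

Unlike the first pass, this stage never consults the page fault status: only pages that were actually written, and are therefore backed by physical memory, are revisited.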
The memory can be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The communication component in FIG. 6 is configured to facilitate wired or wireless communication between the device where the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/Long-Term Evolution (LTE), 5G, and other mobile communication networks, or a combination thereof. In an exemplary embodiment, the communication component receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies. The power supply component in FIG. 6 provides power to the various components of the device where it is located, and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for that device. Those skilled in the art should understand that the embodiments of the present disclosure may be provided as methods, systems, or computer program products. 
Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, compact disc read-only memory (CD-ROM), optical storage, etc.) containing computer-usable program code. The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each process and/or block in the flowcharts and/or block diagrams, as well as combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions, executed by the processor of the computer or other programmable data processing device, produce means for implementing the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams. 
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing the computer or other programmable device to execute a series of operational steps to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams. It should also be noted that the terms "comprise," "include," or any other variations thereof are intended to encompass non-exclusive inclusion, such that a process, method, product, or device comprising a list of elements may include not only those elements but also other elements not expressly listed, or elements inherent to such process, method, product, or device. In the absence of further restrictions, elements defined by the phrase "comprising a..." do not preclude the presence of other identical elements in the process, method, product, or device comprising the elements. It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data used for analysis, storage, and display) involved in this disclosure are all authorized by the user or fully authorized by all parties. The collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation portals are provided for users to choose to authorize or deny. The above description is merely an embodiment of the present disclosure and is not intended to limit the present disclosure. Persons skilled in the art will readily appreciate that various modifications and variations of the present disclosure are possible. Any modifications, equivalent substitutions, improvements, etc. 
made within the spirit and principles of the present disclosure are intended to be included within the scope of protection of the present disclosure. Industrial Applicability: The solution provided in the embodiments of the present disclosure detects the page fault status of each virtual memory page in a virtual machine during the first migration. For virtual memory pages in the page fault state, the source host's virtual machine manager retrieves the corresponding memory data without triggering a page fault recovery operation. This prevents page fault recovery operations from consuming the source host's physical memory, thus reducing the amount of source host physical memory consumed by memory data migration. Consequently, memory data migration no longer causes significant fluctuations in the source host's physical memory usage, thereby resolving the technical issue of memory runs on the source host.

Claims

1. A method for migrating virtual machine memory data, applicable to a virtual machine manager in a source host, the method comprising: when performing a first migration on a target virtual memory page, detecting a page fault state of the target virtual memory page; in response to the target virtual memory page being in the page fault state, obtaining memory data corresponding to the target virtual memory page without triggering a page fault recovery operation; and migrating the obtained memory data to a destination host.
2. The method according to claim 1, wherein obtaining the memory data corresponding to the target virtual memory page without triggering a page fault recovery operation comprises: without triggering the page fault recovery operation, in response to the memory data corresponding to the target virtual memory page having been swapped out to a swap space corresponding to the source host, obtaining the memory data corresponding to the target virtual memory page from the swap space; and in response to the target virtual memory page not being allocated a physical memory page in the source host, using an empty file as the memory data corresponding to the target virtual memory page.
3. The method according to claim 2, wherein detecting the page fault state of the target virtual memory page comprises: initiating, to a kernel interface in the source host for supporting memory data migration, a page fault state query instruction for the target virtual memory page, wherein the page fault state query instruction is used to trigger the kernel interface to return page fault state description information corresponding to the target virtual memory page; and in response to the page fault state description information indicating that the target virtual memory page is in the page fault state, determining that the target virtual memory page is detected to be in the page fault state.
4. The method according to claim 3, wherein the page fault state description information is further used to indicate a page fault type corresponding to the target virtual memory page, the method further comprising: in response to the page fault type being a memory swap type, determining that the memory data corresponding to the target virtual memory page has been swapped out to the swap space; and in response to the page fault type being a delayed allocation type, determining that the target virtual memory page has not yet been allocated the physical memory page.
5. The method according to claim 4, wherein the page fault state description information includes a page fault state identification field and a page fault type field; in response to the page fault state identification field taking a first value, indicating that the target virtual memory page is in the page fault state; in response to the page fault state identification field taking a second value, indicating that the target virtual memory page is not in the page fault state; in response to the page fault type field taking a third value, indicating that the page fault type of the target virtual memory page is the memory swap type; and in response to the page fault type field taking a fourth value, indicating that the page fault type of the target virtual memory page is the delayed allocation type.
6. The method according to claim 2, wherein obtaining the memory data corresponding to the target virtual memory page from the swap space comprises: initiating, to a kernel interface in the source host for supporting memory data migration, a data acquisition request for the target virtual memory page, wherein the data acquisition request is used to trigger the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space; and obtaining the memory data corresponding to the target virtual memory page provided by the kernel interface.
7. The method according to claim 6, wherein initiating the data acquisition request for the target virtual memory page to the kernel interface comprises: allocating, in a preset cache space, a cache address for the target virtual memory page; and carrying an identifier of the target virtual memory page and the cache address in the data acquisition request, to trigger the kernel interface to read the memory data corresponding to the target virtual memory page from the swap space and store the read memory data at the cache address in the preset cache space; and wherein obtaining the memory data corresponding to the target virtual memory page provided by the kernel interface comprises: reading the memory data corresponding to the target virtual memory page from the cache address.
8. The method according to claim 3, further comprising: in response to the target virtual memory page not being in the page fault state, obtaining the memory data corresponding to the target virtual memory page from a physical memory page to which the target virtual memory page is mapped.
9. The method according to claim 1, further comprising: after completing the first migration for the target virtual memory page, in response to monitoring that the target virtual memory page has become a dirty page, obtaining dirty page data corresponding to the target virtual memory page from the physical memory page to which the target virtual memory page is mapped; and migrating the dirty page data obtained for the target virtual memory page to the destination host.
10. A method for migrating virtual machine memory data, applicable to a kernel interface in a source host for supporting memory data migration, the method comprising: receiving a page fault state query instruction initiated by a virtual machine manager in the source host when performing a first migration on a target virtual memory page; generating, according to the page fault state query instruction, page fault state description information for the target virtual memory page, wherein the page fault state description information is used to indicate a page fault state of the target virtual memory page; and providing the page fault state description information to the virtual machine manager, so that the virtual machine manager, after detecting that the target virtual memory page is in the page fault state, obtains memory data corresponding to the target virtual memory page without triggering a page fault recovery operation and migrates the obtained memory data to a destination host.
11. The method according to claim 10, wherein the page fault state description information is further used to indicate a page fault type, the page fault type including a memory swap type or a delayed allocation type, wherein the memory swap type is used to trigger the virtual machine manager to obtain memory data for the target virtual memory page from a swap space corresponding to the source host, and the delayed allocation type is used to trigger the virtual machine manager to use an empty file as the memory data corresponding to the target virtual memory page.
12. A computing device, comprising a memory, a processor, and a communication component; the memory being configured to store one or more computer instructions; and the processor being coupled to the memory and the communication component and configured to execute the one or more computer instructions to perform the method for migrating virtual machine memory data according to any one of claims 1 to 11.
13. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by one or more processors, the one or more processors are caused to perform the method for migrating virtual machine memory data according to any one of claims 1 to 11.
14. A computer program product, comprising a computer program, wherein when the computer program is executed by one or more processors, the one or more processors are caused to perform the method for migrating virtual machine memory data according to any one of claims 1 to 11.
15. A computer program product, comprising a non-volatile computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method for migrating virtual machine memory data according to any one of claims 1 to 11 is implemented.
16. A computer program, wherein when the computer program is executed by a processor, the method for migrating virtual machine memory data according to any one of claims 1 to 11 is implemented.
PCT/IB2025/052367 2024-03-27 2025-03-05 Virtual machine memory data migration method, device, computer program product, and storage medium Pending WO2025202802A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410362467.4 2024-03-27
CN202410362467.4A CN120723363A (en) 2024-03-27 2024-03-27 A virtual machine memory data migration method, device, computer program product and storage medium

Publications (1)

Publication Number Publication Date
WO2025202802A1 true WO2025202802A1 (en) 2025-10-02

Family

ID=97167515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2025/052367 Pending WO2025202802A1 (en) 2024-03-27 2025-03-05 Virtual machine memory data migration method, device, computer program product, and storage medium

Country Status (2)

Country Link
CN (1) CN120723363A (en)
WO (1) WO2025202802A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9183089B1 (en) * 2008-12-15 2015-11-10 Open Invention Network, Llc System and method for hybrid kernel and user-space checkpointing using a chacter device
CN108713189A (en) * 2016-06-10 2018-10-26 谷歌有限责任公司 speculative virtual machine execution
US20210019206A1 (en) * 2019-07-17 2021-01-21 Memverge, Inc. Multi-level caching to deploy local volatile memory, local persistent memory, and remote persistent memory
US20230023696A1 (en) * 2021-07-20 2023-01-26 Vmware, Inc. Migrating virtual machines in cluster memory systems


Also Published As

Publication number Publication date
CN120723363A (en) 2025-09-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25779222

Country of ref document: EP

Kind code of ref document: A1