
WO2022061859A1 - Application restore based on volatile memory storage across system resets - Google Patents


Info

Publication number
WO2022061859A1
WO2022061859A1
Authority
WO
WIPO (PCT)
Prior art keywords
restore data
computing system
memory
volatile memory
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/118297
Other languages
French (fr)
Inventor
Fei Li
Shuo LIU
Zhuangzhi LI
Zhi JIN
Di Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to PCT/CN2020/118297
Publication of WO2022061859A1
Anticipated expiration
Current legal status: Ceased

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1415Saving, restoring, recovering or retrying at system level
    • G06F11/1438Restarting or rejuvenating

Definitions

  • Embodiments generally relate to accelerated system restore after warm restarts. More particularly, embodiments relate to volatile memory storage (e.g., based on a Ram-based Persist Filesystem (RPFS)) that allows snapshots and/or checkpoints of one or more applications to persist across warm reboots so that the applications may be rapidly resumed from memory.
  • Various triggers may cause system resets of computing systems. Some examples may include updates to operating system (OS) components, firmware components, user actuation, etc.
  • A full system reboot may result in system changes (e.g., rebooting of firmware, Basic Input/Output Systems (BIOS), Unified Extensible Firmware Interfaces (UEFI), OS kernels, OS frameworks, OS services, etc.).
  • Applications may be restarted during the full system reboot. Such application restarts may be particularly pronounced in cloud environments.
  • Significant time may be spent restoring the applications to a previous running status, especially when the “applications” include virtual machines (VMs) and/or containers (e.g., software that packages code and the dependencies of the code so that an application may run quickly and reliably in the container).
  • FIG. 1 is a process flow diagram of an example of a restoration process according to an embodiment
  • FIG. 2 is a flowchart of an example of a method of restarting one or more applications according to an embodiment
  • FIG. 3 is a process flow diagram of an example of using Ram-based Persist Filesystem to accelerate a reboot according to an embodiment
  • FIG. 4 is a process flow diagram of an example of Ram-based Persist Filesystem driver operations according to an embodiment
  • FIG. 5 is a block diagram of an example of a deduplicated memory system according to an embodiment
  • FIG. 6 is a process flow diagram of an example of a Ram-based Persist Filesystem data flow according to an embodiment
  • FIG. 7 is a process flow diagram of an example of a progressively accelerated parallel compression mechanism according to an embodiment
  • FIG. 8 is a block diagram of an example of a performance-enhanced computing system according to an embodiment
  • FIG. 9 is an illustration of an example of a semiconductor apparatus according to an embodiment
  • FIG. 10 is a block diagram of an example of a processor according to an embodiment.
  • FIG. 11 is a block diagram of an example of a multi-processor based computing system according to an embodiment.
  • A Ram-based Persist Filesystem (RPFS) is provided to allow for an enhanced and low-latency reconstruction of an application 118 (e.g., a VM and/or container application).
  • An RPFS and a driver of the RPFS may create a storage area directly in a volatile memory 104 (e.g., random-access memory (RAM), double data rate (DDR) memory, etc.), similar to a partition on a disk drive, that persists across reboots and/or warm restarts (e.g., restarts during which power is maintained).
  • a system program 108 may store restore data 106 (e.g., snapshots and/or checkpoints) of the application 118 to a volatile memory 104.
  • The restore data 106 may be stored by the system program 108 in response to one or more of a request from the application 118, a firmware update, a software update or an identification that the computing device 102 is to be restarted.
  • the restore data 106 may be sustained across warm reboots of the computing device 102 so that the application 118 may be rapidly resumed from volatile memory 104.
  • Storing the restore data 106 to the volatile memory 104 may accelerate the reboot and reconstruction time as compared to storing the restore data 106 to long-term, non-volatile storage such as hard disk drives, network-based disk/file systems, etc., due to the slower read/write speeds and transfer latencies of such storage.
  • some embodiments may further enhance efficiency of memory usage and reduce cost by using RPFS with memory deduplication (explained below) and data compression.
  • Memory deduplication may identify and merge identical memory pages to reduce the storage space occupied by the restore data 106 in the volatile memory 104.
  • Some embodiments may further employ compression algorithms to compress the restore data 106 to reduce storage space in the volatile memory 104.
  • Some embodiments may employ an RPFS in the volatile memory 104 to maintain application snapshots in the restore data 106 in a valid state that is retained across warm reboots. For example, before a reset and/or restart of a computing device 102, snapshots and/or checkpoints of applications (e.g., VMs, containers, etc.) may be saved as restore data 106 to the volatile memory 104 that employs the RPFS and driver (e.g., based on DDR technology). During the reboot, the restore data 106 in the volatile memory 104 may remain substantially unaltered through the BIOS/UEFI and OS initializations while the application 118 is being restored.
  • An RPFS driver may reconstruct the RPFS (e.g., the storage locations of data associated with applications) from the RPFS memory so that applications may be rapidly resumed from memory. For example, some embodiments may reconstruct file system metadata from a firmware-reserved volatile memory portion so that applications may access their data files from the file system and resume their states rapidly.
  • Applications (which may include VMs, containers, services, etc.) may be recovered after reboot with less latency and more efficiency, up to around 80 times faster than restarting the applications from a disk.
  • the RPFS mechanism and storage may be transparent to applications.
  • Applications may already support snapshot and/or checkpointing technology and may use the RPFS in conjunction with such technology, reducing the modifications needed for the applications to support the RPFS technology.
  • an amount of the volatile memory 104 used by RPFS may be dynamically shared with normal memory usage to reduce memory costs.
  • a system program 108 may store restore data 106, 114 to volatile memory 104.
  • the system program 108 may be an application manager of the application 118, an independent application and/or an operating system (OS) of the computing device 102.
  • the system program 108 may store the restore data 106 in response to a specific trigger being identified, such as an initiation of a shut-down or warm-restart of the computing device 102.
  • the system program 108 may further provide a notification 112 to a reboot program 110.
  • the notification may include an identification of the restore data 106, such as memory addresses (e.g., data pointers) of the restore data 106, and/or an instruction to not overwrite the restore data 106 during reboot.
  • the reboot program 110 may be BIOS or UEFI that is responsible for rebooting the computing device 102 and/or executing a boot process.
  • the reboot program 110 may avoid overwriting the restore data 106 during the reboot process and ensure that power is provided to the volatile memory 104 during the reboot process.
  • If the volatile memory 104 does not receive sufficient power, the data in the volatile memory 104, including the restore data 106, may be lost. That is, the volatile memory 104 may require power throughout the boot process to maintain stored information, including the restore data 106.
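  • As a rough illustration of the notification described above, the following C sketch shows one hypothetical way the system program 108 might describe the preserved regions to the reboot program 110. The structure layout and the firmware_set_preserved_regions() hook are assumptions made for this sketch, not an interface defined by the disclosure.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical descriptor for one region of restore data that the
 * reboot program (BIOS/UEFI) must not overwrite during the warm reboot. */
struct preserved_region {
    uint64_t phys_addr;   /* physical start address of the restore data */
    uint64_t length;      /* length of the region in bytes */
};

/* Hypothetical notification handed to the reboot program before reboot. */
struct preserve_notification {
    uint32_t region_count;
    struct preserved_region regions[64];  /* illustrative fixed bound */
};

/* Assumed firmware hook; a real system would use a platform-specific
 * mechanism (e.g., a runtime variable or mailbox) to pass this list. */
extern int firmware_set_preserved_regions(const struct preserve_notification *n);

/* Build the notification from the restore-data pointers and pass it on. */
static int notify_reboot_program(const uint64_t *addrs, const uint64_t *lens,
                                 size_t count)
{
    struct preserve_notification n = { 0 };

    if (count > 64)
        return -1;                      /* illustrative capacity check */
    for (size_t i = 0; i < count; i++) {
        n.regions[i].phys_addr = addrs[i];
        n.regions[i].length    = lens[i];
    }
    n.region_count = (uint32_t)count;
    return firmware_set_preserved_regions(&n);
}
```
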
  • the process 100 may then reboot 116.
  • The system program 108 (e.g., the RPFS driver) may verify the authenticity of the restore data 106, 122 (e.g., via checksum authentications and validations) to confirm that the restore data 106 has not been tampered with and does not contain malicious code.
  • The reboot program 110 may be considered “upstream” in the chain of trust, and maintains the preserved data, executes the reboot, and bootloads another OS instance.
  • the system program 108 may verify and check the restore data 106 to avoid memory errors, corruptions, etc. that may cause a failure to reboot. For example, such verification and checking may be carried out by the RPFS driver in an OS context.
  • The RPFS driver, which may be part of the system program 108, may first compute a checksum of the restore data 106 when the restore data 106 is stored in the volatile memory 104, and store the checksum in a secure, non-volatile space.
  • The system program 108 (e.g., the RPFS driver) may pass data pointers (e.g., memory addresses) to the reboot program 110, as described above, when the notification is provided 112.
  • The reboot program 110 reports the preserved data pointers back to the system program 108 (e.g., the RPFS driver).
  • The system program 108 (e.g., the RPFS driver) then rechecks the data against the saved checksum.
  • the system program 108 (e.g., the RPFS driver) may use acceleration techniques to accelerate the calculation, so that the boot speed is improved.
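  • A minimal sketch of that save-and-recheck sequence appears below, assuming a simple software checksum and hypothetical secure_store_checksum()/secure_load_checksum() helpers for the secure, non-volatile space; the disclosure does not mandate a particular checksum algorithm or acceleration technique.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helpers for the secure, non-volatile space mentioned above. */
extern void secure_store_checksum(uint64_t checksum);
extern uint64_t secure_load_checksum(void);

/* Simple illustrative checksum over the restore data; a real driver could
 * use a stronger digest and hardware acceleration to speed up the boot path. */
static uint64_t restore_data_checksum(const uint8_t *data, size_t len)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (sum << 1 | sum >> 63) ^ data[i];  /* rotate-and-xor mix */
    return sum;
}

/* Before the reboot: checksum the restore data and save the result. */
static void save_restore_checksum(const uint8_t *data, size_t len)
{
    secure_store_checksum(restore_data_checksum(data, len));
}

/* After the reboot: recompute and compare before trusting the restore data. */
static int verify_restore_data(const uint8_t *data, size_t len)
{
    return restore_data_checksum(data, len) == secure_load_checksum();
}
```
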
  • the system program 108 may then restore the application 118 from the restore data 106.
  • the system program 108 may include an RPFS driver that reconstructs file system views from the restore data 106 for an OS management application to access.
  • the OS management application may read the snapshot and/or checkpoint from the reconstructed file system and resume the target applications, which in this example is application 118.
  • By resuming the application 118 from the restore data 106 stored in the volatile memory 104, latency may be reduced and the reboot process may execute with enhanced efficiency.
  • some embodiments may reduce power consumption since storage accesses to non-volatile memory may be reduced. Thus, efficiency may be enhanced, power consumption may be reduced, and applications may be securely restarted.
  • FIG. 2 shows a method 300 of restarting one or more applications.
  • The method 300 may generally be implemented in the computing device 102 (FIG. 1), already discussed. More particularly, the method 300 may be implemented as one or more modules in a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), FPGAs, complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
  • computer program code to carry out operations shown in the method 300 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • logic instructions might include assembler instructions, ISA instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc. ) .
  • Illustrated processing block 302 identifies restore data stored in a volatile memory, where the restore data is associated with the one or more applications. Illustrated processing block 304 generates a list associated with the restore data, where the list is to include memory locations of the restore data. Illustrated processing block 306 identifies that the computing system is to be rebooted. Illustrated processing block 308, after the computing system has been rebooted, restores the one or more applications based on the restore data and the list.
  • the method 300 stores the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system. In some embodiments, the method 300 stores the restore data into the volatile memory in response to a request from the one or more applications. In some embodiments, the method 300 provides a notification to a program that is to reboot the computing system, where the notification indicates the memory locations and that the restore data is to be preserved across the reboot. In some embodiments, the program is to be one of a Basic Input Output System or a Unified Extensible Firmware Interface.
  • the method 300 conducts a verification of authenticity data (e.g., checksum data) associated with the restore data to verify that the restore data was not tampered with and is not malicious, and reconstructs a filesystem associated with the restore data in response to the verification.
  • the volatile memory is a double-data rate memory.
  • the restore data is to be one or more of a checkpoint or a snapshot.
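  • The sketch below illustrates how the illustrated processing blocks 302-308 might be orchestrated; the helper functions are placeholders for the operations described above rather than APIs defined by this disclosure, and for readability the pre-reboot and post-reboot steps are shown in a single function even though block 308 would run in the new OS instance after the reboot.

```c
#include <stdint.h>
#include <stddef.h>

/* Placeholder types and helpers standing in for blocks 302-308 above. */
struct restore_list { uint64_t *locations; size_t count; };

extern struct restore_list *identify_restore_data(void);        /* block 302 */
extern void build_location_list(struct restore_list *list);     /* block 304 */
extern int  reboot_requested(void);                             /* block 306 */
extern void warm_reboot_preserving(struct restore_list *list);
extern void restore_applications(const struct restore_list *l); /* block 308 */

static void method_300(void)
{
    /* Block 302: find the restore data already held in volatile memory. */
    struct restore_list *list = identify_restore_data();

    /* Block 304: record the memory locations of that restore data. */
    build_location_list(list);

    /* Block 306: detect that the computing system is to be rebooted. */
    if (reboot_requested()) {
        /* Hand the list to the reboot program and perform the warm reboot. */
        warm_reboot_preserving(list);

        /* Block 308: after the reboot, restore the applications from the
         * preserved restore data and the preserved list (in practice this
         * runs in the newly booted OS instance). */
        restore_applications(list);
    }
}
```
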
  • FIG. 3 shows a process 400 of using RPFS and a driver of the RPFS to accelerate a reboot.
  • The process 400 may be readily operated with the embodiments described herein, such as the process 100 (FIG. 1) and/or method 300 (FIG. 2). While a snapshot is discussed below, it will be understood that any restore data (e.g., a checkpoint) may be readily substituted for the snapshot.
  • a management agent 402 may trigger a reset 408. For example, the management agent 402 may initiate a software or firmware update, and then notify an OS 404 to reboot to activate the update.
  • the OS 404 may store snapshots to an RPFS 410.
  • A storage application (e.g., a VM manager, a container manager, and/or any application that is capable of saving states and restoring the states later) may store a snapshot and/or checkpoint of an application to the RPFS.
  • An RPFS driver may receive the snapshot and store the snapshot according to the RPFS. Storing such a volume of snapshot data may normally consume an extensive amount of memory, which may lead to detrimental performance.
  • The RPFS driver may utilize memory de-duplication and accelerators to compress the data (e.g., QuickAssist technology) in addition to utilizing the RPFS, so that the actual memory used by the RPFS is substantially reduced.
  • An RPFS driver operating in the OS 404 may construct a list of the memory pages in use by the RPFS.
  • the RPFS driver may provide a notification to a reboot program 406 of the memory pages used by RPFS for preservation and request a warm reset 412 to avoid memory erasures of volatile memory that stores the snapshot.
  • The reboot program 406 (e.g., BIOS and/or UEFI) may preserve the memory pages according to the list so that the memory pages are not overwritten.
  • The reboot program 406 may reboot without overwriting the memory pages marked for preservation, and may pass verification data (e.g., a checksum of the RPFS metadata) to the operating system 404, 414.
  • The RPFS driver verifies the memory pages based on the verification data and reconstructs the RPFS during initialization 416. For example, during RPFS driver initialization, the RPFS driver may reconstruct the RPFS if metadata information associated with the memory pages exists and the memory pages pass verification against the checksum data of the verification data. The RPFS driver may then build the file system view for OS applications to access. The application may then be resumed from the snapshots on the RPFS.
  • FIG. 4 illustrates a process 450 that may be implemented by an RPFS driver 468.
  • The process 450 may be readily operated with the embodiments described herein, such as the process 100 (FIG. 1), method 300 (FIG. 2) and/or process 400 (FIG. 3).
  • the RPFS driver 468 may implement aspects of the RPFS processes to accelerate reboot times.
  • the RPFS driver 468 may provide a filesystem interface to allow user space access to the filesystem with memory as a backend “storage. ”
  • the RPFS driver 468 may dynamically allocate memory on-demand as opposed to statically assigned memory spaces in the physical memory 474 so as to allow other applications access to the physical memory 474.
  • the RPFS driver 468 may maintain RPFS metadata information 470 and RPFS memory Preservation List 472 to save the RPFS data 478 so that the RPFS in the physical memory 474 may be reconstructed across a warm reboot.
  • The RPFS metadata information 470 may include the metadata address 470a and the metadata checksum 470b, which may be a checksum of the whole page list, including both the RPFS memory preservation list 472 and the associated memory pages (e.g., snapshot memory locations).
  • The RPFS memory preservation list 472 may include the memory list addresses stored in scatterlist[0] 472a through scatterlist[n] 472n.
  • Scatterlist[0] 472a through scatterlist[n] 472n may form a preserved-memory list that includes every page used by the RPFS (e.g., the addresses of the restore data stored by the RPFS).
  • the RPFS memory preservation list 472 may be provided to reboot software so that the reboot software does not overwrite restore data.
  • the RPFS metadata information 470 may be stored to a non-volatile, secure portion of the physical memory 474 so as to be preserved across reboots.
  • the RPFS memory preservation list 472 may be stored into a volatile portion of the physical memory 474.
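  • One hypothetical C layout for the RPFS metadata information 470 and the RPFS memory preservation list 472 is sketched below; the field names, the fixed scatterlist bound and the split between non-volatile and volatile storage are assumptions chosen to mirror the description above.

```c
#include <stdint.h>

#define RPFS_MAX_SCATTERLISTS 128   /* illustrative bound */

/* One scatterlist entry: a run of physical pages holding RPFS restore data. */
struct rpfs_scatterlist {
    uint64_t page_addr;   /* physical address of the first page in the run */
    uint32_t page_count;  /* number of contiguous pages in the run */
};

/* RPFS memory preservation list 472: every page used by the RPFS.
 * Kept in a volatile portion of physical memory that the reboot
 * program is asked not to overwrite. */
struct rpfs_preservation_list {
    uint32_t nr_entries;
    struct rpfs_scatterlist entries[RPFS_MAX_SCATTERLISTS];
};

/* RPFS metadata information 470: kept in a secure, non-volatile portion
 * so that it survives the reboot unconditionally. */
struct rpfs_metadata_info {
    uint64_t list_phys_addr;  /* metadata address 470a: where the list lives */
    uint64_t list_checksum;   /* metadata checksum 470b: over the list and
                               * its referenced memory pages */
};
```
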
  • The first VM 452, the second VM 454 and the container C1 456 (which may be executing on a computing system that is to be rebooted) save snapshots 476 to an RPFS, which is illustrated in the physical memory 474 (e.g., in volatile portions).
  • The RPFS driver 468 may allocate dedicated firmware-preserved data blocks to maintain the files across reboots.
  • the snapshots are saved and scattered in physical memory 474.
  • the first VM 452 stores a first VM snapshot portion one 458a and a first VM snapshot portion two 458b.
  • the second VM 454 stores a second VM snapshot portion one 460a, a second VM snapshot portion two 460b, and a second VM snapshot portion three 460c.
  • The container C1 456 stores C1 snapshot portion one 464a and C1 snapshot portion two 464b.
  • Process 450 may then reboot 480 a computing system that executes the first VM 452, the second VM 454 and the container C1 456.
  • the RPFS driver 468 may then restore (e.g., reconstruct) the RPFS 482 based on the RPFS metadata information 470 and the RPFS memory preservation list 472.
  • The RPFS metadata information 470 and the RPFS memory preservation list 472 may be passed to a reboot program during the saving of the RPFS data 478 and read back from the reboot program during the reboot 480.
  • The RPFS driver 468 may thus still be able to reconstruct the file system view (e.g., the RPFS) based on the RPFS metadata information 470 and the RPFS memory preservation list 472.
  • the RPFS memory preservation list 472 may be stored as the RPFS metadata chunk 462 (e.g., in volatile memory) .
  • The RPFS driver 468 may then restore the snapshots 484 based on the data stored in the physical memory 474 to reconstruct the previous content from scatterlist[0] 472a through scatterlist[n] 472n.
  • OS management software may read snapshot files via the RPFS driver 468 so that the first VM 452, the second VM 454 and the container C1 456 may be resumed from the snapshots.
  • the physical memory 474 may be a volatile memory.
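  • A sketch of that reconstruction step follows, reusing minimal mirrors of the structures sketched earlier. The map_preserved_pages() and rpfs_register_file() helpers, the one-file-per-scatterlist-entry mapping and the 4 KiB page size are all simplifying assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Minimal mirrors of the structures sketched in the FIG. 4 discussion. */
struct rpfs_scatterlist { uint64_t page_addr; uint32_t page_count; };
struct rpfs_preservation_list {
    uint32_t nr_entries;
    struct rpfs_scatterlist entries[128];
};

/* Hypothetical helpers: map preserved physical pages into the driver's
 * address space and register a reconstructed file with the OS view. */
extern void *map_preserved_pages(uint64_t phys, uint32_t pages);
extern int   rpfs_register_file(const char *name, void *data, size_t len);

/* Rebuild the file system view from the preserved scatterlists so that OS
 * management software can read the snapshot files and resume the VMs and
 * the container. The naming scheme and one-entry-per-file mapping are
 * purely illustrative; assume 4 KiB pages. */
static int rpfs_reconstruct(const struct rpfs_preservation_list *list)
{
    char name[32];

    for (uint32_t i = 0; i < list->nr_entries; i++) {
        const struct rpfs_scatterlist *sl = &list->entries[i];
        void *data = map_preserved_pages(sl->page_addr, sl->page_count);

        if (!data)
            return -1;
        /* e.g., "snapshot0", "snapshot1", ... */
        snprintf(name, sizeof(name), "snapshot%u", (unsigned)i);
        if (rpfs_register_file(name, data, (size_t)sl->page_count * 4096) < 0)
            return -1;
    }
    return 0;
}
```
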
  • FIG. 5 illustrates a deduplicated memory system 500 that includes memory deduplication combined with RPFS systems.
  • The deduplicated memory system 500 may be readily operated with the embodiments described herein, such as the process 100 (FIG. 1), method 300 (FIG. 2), process 400 (FIG. 3), and/or process 450 (FIG. 4).
  • Memory deduplication may increase the effective capacity of the physical memory 522 by storing only unique values in memory. In the deduplicated memory system 500, memory may no longer be treated as a linear array.
  • the deduplicated memory system 500 may be organized into two regions, including a physical memory address space 506 (e.g., a translation table that is an indirection table that maps each system address (SA) to a data line) and a physical memory 522 that is a data region (e.g., a region where data lines are stored) .
  • The content of the first VM 502 (or a container in some embodiments) and its snapshot data may be nearly identical to the running memory content, except for some small running-state information.
  • deduplication may be effectively used with RPFS to avoid redundant data storage.
  • operational memories and snapshots may be redirected to the same data that is stored only once in the physical memory 522.
  • the first VM 502 may store first VM operational memory portion one 508, first VM operational memory portion two 510, and first VM operational memory portion three 512 in the physical memory address space 506.
  • The first VM snapshot 504 may store first VM 1st snapshot portion 518, first VM 2nd snapshot portion 514, first VM 3rd snapshot portion 516 and first VM running state information 520.
  • The first VM operational memory portion one 508 and the first VM 1st snapshot portion 518 may be nearly identical, and thus it is possible to store only one copy of the first VM operational memory portion one 508 in the physical memory 522, with that single copy corresponding to both the first VM operational memory portion one 508 and the first VM 1st snapshot portion 518.
  • Both the first VM 1st snapshot portion 518 and the first VM operational memory portion one 508 may therefore be redirected from the physical memory address space 506 to the first VM operational memory portion one 508 in the physical memory 522.
  • Similarly, both the first VM operational memory portion two 510 and the first VM 2nd snapshot portion 514 in the physical memory address space 506 may direct to the first VM operational memory portion two 510 in the physical memory 522.
  • Likewise, both the first VM operational memory portion three 512 and the first VM 3rd snapshot portion 516 in the physical memory address space 506 may direct to the first VM operational memory portion three 512 in the physical memory 522.
  • The first VM running state information 520 in the physical memory address space 506 may be redirected to the first VM running state information 520 stored in the physical memory 522.
  • the first VM 502 may be rebuilt from the first VM operational memory portion one 508, the first VM operational memory portion two 510, and first VM operational memory portion three 512.
  • The first VM 1st snapshot portion 518, the first VM 2nd snapshot portion 514 and the first VM 3rd snapshot portion 516 that are stored in the physical memory address space 506 thus occupy a substantially reduced number of memory cells, since duplicative data does not need to be stored.
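  • The sketch below shows the kind of indirection the translation table provides: two system addresses whose contents match end up referencing a single data line. The fixed sizes, linear search and reference counting are illustrative simplifications (real deduplication hardware would use hashing and would also release references on overwrite).

```c
#include <stdint.h>
#include <string.h>

#define NUM_SYSTEM_LINES 1024   /* illustrative sizes only */
#define NUM_DATA_LINES    512
#define LINE_BYTES         64

/* Data region (physical memory 522): each unique data line stored once. */
static uint8_t  data_lines[NUM_DATA_LINES][LINE_BYTES];
static uint32_t refcount[NUM_DATA_LINES];

/* Translation table (physical memory address space 506): each system
 * address maps to the data line that holds its contents. */
static uint32_t translation[NUM_SYSTEM_LINES];

/* Find an existing identical data line, or store a new one. Returns the
 * data-line index, or UINT32_MAX if the data region is full. */
static uint32_t dedup_store(const uint8_t line[LINE_BYTES])
{
    for (uint32_t i = 0; i < NUM_DATA_LINES; i++) {
        if (refcount[i] && memcmp(data_lines[i], line, LINE_BYTES) == 0) {
            refcount[i]++;           /* duplicate: just add a reference */
            return i;
        }
    }
    for (uint32_t i = 0; i < NUM_DATA_LINES; i++) {
        if (refcount[i] == 0) {      /* free slot: store the unique line */
            memcpy(data_lines[i], line, LINE_BYTES);
            refcount[i] = 1;
            return i;
        }
    }
    return UINT32_MAX;
}

/* Write a line at a system address: snapshot pages that match operational
 * memory end up pointing at the same data line, as in FIG. 5. */
static int dedup_write(uint32_t system_addr, const uint8_t line[LINE_BYTES])
{
    uint32_t idx = dedup_store(line);

    if (idx == UINT32_MAX)
        return -1;
    translation[system_addr] = idx;
    return 0;
}
```
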
  • A de-duplicated memory system, such as the deduplicated memory system 500, may report more system addresses (e.g., a second memory address space) to the OS than the original memory system 500 (e.g., a first memory address space that is less than the second memory address space by an address range) actually has (e.g., the total configured memory size may exceed the amount of available physical memory).
  • the overcommit ratio may be 200%.
  • Firmware may report around 300% system address space, with a basic 200% overcommit plus an extra 100% overcommit for RPFS snapshot saving.
  • The extra 100% overcommitted system address range may be owned by the RPFS driver and may not be used by other OS components or applications.
  • An RPFS driver may save application snapshots into the extra 100% overcommitted system ranges, where they are fully merged with the normal application data.
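  • The arithmetic for that reporting scheme is sketched below using the percentages from the example above (a 200% base overcommit plus an extra 100% RPFS-only range); the 128 GiB figure and the structure name are illustrative assumptions.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative address-space reporting: a 200% base overcommit plus an
 * extra 100% reserved for RPFS snapshot saving. The percentages come from
 * the example above; real ratios would be platform-configurable. */
struct address_plan {
    uint64_t reported_total;   /* system address space reported to the OS */
    uint64_t general_range;    /* usable by the OS and applications */
    uint64_t rpfs_range;       /* owned by the RPFS driver only */
};

static struct address_plan plan_address_space(uint64_t physical_bytes)
{
    struct address_plan p;

    p.general_range  = physical_bytes * 2;              /* 200% base overcommit */
    p.rpfs_range     = physical_bytes;                   /* extra 100% for RPFS  */
    p.reported_total = p.general_range + p.rpfs_range;   /* ~300% total          */
    return p;
}

int main(void)
{
    /* e.g., 128 GiB of physical memory reported as ~384 GiB of addresses. */
    struct address_plan p = plan_address_space(128ULL << 30);

    printf("reported %llu GiB, general %llu GiB, RPFS-only %llu GiB\n",
           (unsigned long long)(p.reported_total >> 30),
           (unsigned long long)(p.general_range >> 30),
           (unsigned long long)(p.rpfs_range >> 30));
    return 0;
}
```
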
  • the deduplication may be executed within a non-uniform memory access (NUMA) node.
  • The RPFS (e.g., a driver of the RPFS) may also be aware of the NUMA node, and may provide memory space in the same NUMA node to the VM and/or container for saving snapshot files of the VM and/or container. Doing so may allow memory deduplication to be fully leveraged and may enhance a maximum compaction ratio.
  • the extra snapshotting space is per-NUMA node.
  • The RPFS may be considered “NUMA aware” to the extent that the RPFS may save each page into the snapshotting space of the same NUMA node in which the original page of the corresponding application associated with the snapshot is stored.
  • New entries in the physical memory address space 506 may be generated, but with nearly no added consumption of the physical memory 522, as the new snapshots have been deduplicated and redirect to the operational memories that are used for execution of the first VM 502. Then, after restoring the snapshots from the RPFS, the newly added translation table entries may be reclaimed by clearing the memory.
  • memory de-duplication may be carried out within NUMA nodes. For example, in a system with multiple NUMA nodes, some embodiments may fully release the potential of memory compaction by configuring the RPFS to be NUMA aware.
  • the RPFS driver may own one extra overcommit range per NUMA node, which may correspond to physical memory 522.
  • the RPFS driver may check the original NUMA node of the page and write to the overcommit range of the same NUMA node, thus guaranteeing the snapshot page and the original page may be merged.
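  • A minimal sketch of that NUMA-aware placement is given below; the per-node range bookkeeping and the numa_node_of_page() helper are assumptions standing in for whatever platform facilities an actual driver would use.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_NUMA_NODES 8   /* illustrative bound */

/* Hypothetical per-node snapshotting range owned by the RPFS driver
 * (one extra overcommit range per NUMA node, as described above). */
struct node_snapshot_range {
    uint64_t next_free;   /* next free system address in this node's range */
    uint64_t end;         /* exclusive end of this node's range */
};

static struct node_snapshot_range snap_range[MAX_NUMA_NODES];

/* Assumed platform helper: which NUMA node backs a given physical page. */
extern int numa_node_of_page(uint64_t page_phys_addr);

/* Pick a destination address in the snapshotting range of the same NUMA
 * node as the original application page, so that deduplication can merge
 * the snapshot page with the original page. Returns 0 on failure. */
static uint64_t place_snapshot_page(uint64_t orig_page_phys, uint64_t page_size)
{
    int node = numa_node_of_page(orig_page_phys);

    if (node < 0 || node >= MAX_NUMA_NODES)
        return 0;
    if (snap_range[node].next_free + page_size > snap_range[node].end)
        return 0;                                   /* node range exhausted */

    uint64_t dest = snap_range[node].next_free;
    snap_range[node].next_free += page_size;
    return dest;
}
```
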
  • FIG. 6 shows an RPFS data flow 600 that uses compression technology (e.g., QuickAssist Technology (QAT)) to compress VM or container snapshots/checkpoints during saving.
  • the RPFS restore flow with compression may be the opposite of flow 600.
  • A VM or container manager 602 may generate snapshots and/or checkpoints and then request the RPFS driver 604 to save the data.
  • The RPFS driver 604 may submit a request to the compression driver/library 606 to compress the snapshot data and save it to physical memory 610 (e.g., a DDR memory).
  • The compression accelerator 608 may access the snapshot and/or checkpoint data from the memory 610 (e.g., via direct memory access), compress the data and transmit the compressed data back to the physical memory 610 (e.g., an RPFS memory).
  • The RPFS driver 604 may receive a compression-done signal from the compression driver/library 606 and the compression accelerator 608, and notify the application manager 602 that the snapshot/checkpoint data saving has completed.
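  • The shape of that save path is sketched below with a generic, hypothetical asynchronous compression API; it is not the QuickAssist interface, and the compression_submit() and notify_manager_save_complete() calls are placeholders for whatever driver/library a real implementation would use.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical request describing one snapshot/checkpoint compression job. */
struct compress_request {
    const void *src;          /* snapshot/checkpoint data in memory 610 */
    size_t      src_len;
    void       *dst;          /* destination RPFS memory */
    size_t      dst_cap;
    void      (*done)(void *cookie, size_t compressed_len);
    void       *cookie;
};

/* Assumed asynchronous API: copies the request before returning and later
 * invokes req->done when the accelerator has finished. */
extern int compression_submit(const struct compress_request *req);
extern void notify_manager_save_complete(void *manager);   /* assumed */

struct save_ctx { void *manager; };

/* Completion callback: corresponds to the "compression done" signal the
 * RPFS driver 604 receives before notifying the application manager 602. */
static void on_compress_done(void *cookie, size_t compressed_len)
{
    struct save_ctx *ctx = cookie;

    (void)compressed_len;                 /* could be recorded in metadata */
    notify_manager_save_complete(ctx->manager);
}

/* RPFS driver path: forward the snapshot to the compression accelerator,
 * which reads it from memory (e.g., via DMA) and writes the compressed
 * result back into RPFS memory. */
static int rpfs_save_compressed(struct save_ctx *ctx,
                                const void *snap, size_t len,
                                void *rpfs_dst, size_t dst_cap)
{
    struct compress_request req = {
        .src = snap, .src_len = len,
        .dst = rpfs_dst, .dst_cap = dst_cap,
        .done = on_compress_done, .cookie = ctx,
    };
    return compression_submit(&req);
}
```
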
  • FIG. 7 shows an example of a flow 640 of a progressively accelerated parallel compression mechanism.
  • the flow 640 may be readily implemented in conjunction with any of the embodiments described herein, such as the process 100 (FIG. 1) , method 300 (FIG. 2) , process 400 (FIG. 3) , process 450 (FIG. 4) , memory system 500 (FIG. 5) and/or flow 600 (FIG. 6) .
  • physical memory 642 may include snapshots.
  • Each of the VM 1 - VM n boxes represents a VM 1 - VM n snapshot that is saved to the physical memory 642 through the RPFS.
  • Each of the VM 1 - VM n snapshots has a 2 gigabyte (G) size, and the compression ratio may be around 50%.
  • Each of the VM 1 - VM n snapshots may therefore be compressed to a 1G size by different techniques, such as QAT.
  • the RPFS saving flow and compression flow may be carried out together to save the compressed snapshots in-place, as described in the paragraph below.
  • the VM1 snapshot may be compressed (e.g., using QAT) to 1G and saved to 1G free memory through RPFS.
  • The slot that was originally occupied by the VM 1 snapshot is now free, and the compressed VM 1 snapshot is stored in the location that was originally the 1G of free memory.
  • The VM 2 and VM 3 snapshots may then be compressed in parallel and saved to the 2G of free memory space.
  • 4G of memory is released by the compression of VM 2 and VM 3, and therefore operation 648 may then compress the VM 4, VM 5, VM 6 and VM 7 snapshots in parallel and save them to the 4G of free RPFS space.
  • Operation 650 may compress eight VM snapshots in parallel, then sixteen VMs. Thereafter, thirty-two VMs may be compressed, and so forth. Thus, if a total of N VMs are provided, only on the order of log2(N) rounds of snapshot compression may be needed to substantially reduce the occupied memory space.
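  • The small simulation below reproduces that schedule under the stated assumptions (2G snapshots, roughly 50% compression, 1G initially free); the round count it prints grows roughly as log2(N), matching the 1, 2, 4, 8 progression described above.

```c
#include <stdio.h>

/* Simulate the progressively accelerated schedule: each snapshot is 2G and
 * compresses to 1G, so every completed round frees enough space to compress
 * twice as many snapshots in the next round (1, 2, 4, 8, ...). */
static int compression_rounds(int total_snapshots)
{
    int free_gb   = 1;   /* assume 1G initially free, enough for one 1G output */
    int remaining = total_snapshots;
    int rounds    = 0;

    while (remaining > 0) {
        int batch = free_gb;                 /* one 1G output per free 1G */
        if (batch > remaining)
            batch = remaining;

        remaining -= batch;
        /* Each compressed snapshot consumes 1G of free space for its output
         * but frees its 2G source slot, for a net gain of 1G per snapshot. */
        free_gb += batch;
        rounds++;
    }
    return rounds;
}

int main(void)
{
    /* For N snapshots, the number of rounds grows roughly as log2(N). */
    for (int n = 1; n <= 64; n *= 2)
        printf("%2d snapshots -> %d rounds\n", n, compression_rounds(n));
    return 0;
}
```
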
  • The system 150 may generally be part of an electronic device/platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server), communications functionality (e.g., smart phone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), robotic functionality (e.g., autonomous robot), etc., or any combination thereof.
  • the system 150 includes a host processor 152 (e.g., CPU) having an integrated memory controller (IMC) 154 that is coupled to a system memory 156.
  • the illustrated system 150 also includes an input output (IO) module 158 implemented together with the host processor 152 and a graphics processor 160 (e.g., GPU) on a semiconductor die 162 as a system on chip (SoC) .
  • the illustrated IO module 158 communicates with, for example, a display 164 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display) , a network controller 166 (e.g., wired and/or wireless) , and mass storage 168 (e.g., hard disk drive/HDD, optical disk, solid state drive/SSD, flash memory) .
  • the host processor 152, the graphics processor 160 and/or the IO module 158 may execute instructions 170 retrieved from the system memory 156 and/or the mass storage 168.
  • The computing system 150 is operated in a system restart stage, and the instructions 170 include executable program instructions to perform one or more aspects of the process 100 (FIG. 1), method 300 (FIG. 2), process 400 (FIG. 3), process 450 (FIG. 4), memory system 500 (FIG. 5), flow 600 (FIG. 6) and/or flow 640 (FIG. 7).
  • FIG. 9 shows a semiconductor apparatus 172 (e.g., chip, die, package) .
  • the illustrated apparatus 172 includes one or more substrates 174 (e.g., silicon, sapphire, gallium arsenide) and logic 176 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate (s) 174.
  • The apparatus 172 is operated in an application development stage, and the logic 176 performs one or more aspects of the process 100 (FIG. 1), method 300 (FIG. 2), process 400 (FIG. 3), process 450 (FIG. 4), memory system 500 (FIG. 5), flow 600 (FIG. 6) and/or flow 640 (FIG. 7).
  • the logic 176 may be implemented at least partly in configurable logic or fixed-functionality hardware logic.
  • the logic 176 includes transistor channel regions that are positioned (e.g., embedded) within the substrate (s) 174.
  • the interface between the logic 176 and the substrate (s) 174 may not be an abrupt junction.
  • the logic 176 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate (s) 174.
  • FIG. 10 illustrates a processor core 200 according to one embodiment.
  • the processor core 200 may be the core for any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP) , a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 10, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 10.
  • the processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or “logical processor” ) per core.
  • FIG. 10 also illustrates a memory 270 coupled to the processor core 200.
  • the memory 270 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art.
  • The memory 270 may include one or more code 213 instruction(s) to be executed by the processor core 200, wherein the code 213 may implement one or more aspects of the process 100 (FIG. 1), method 300 (FIG. 2), process 400 (FIG. 3), process 450 (FIG. 4), memory system 500 (FIG. 5), flow 600 (FIG. 6) and/or flow 640 (FIG. 7).
  • the processor core 200 follows a program sequence of instructions indicated by the code 213.
  • Each instruction may enter a front end portion 210 and be processed by one or more decoders 220.
  • the decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction.
  • The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operation corresponding to the instruction for execution.
  • the processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function.
  • the illustrated execution logic 250 performs the operations specified by code instructions.
  • back end logic 260 retires the instructions of the code 213.
  • the processor core 200 allows out of order execution but requires in order retirement of instructions.
  • Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like) . In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.
  • a processing element may include other elements on chip with the processor core 200.
  • a processing element may include memory control logic along with the processor core 200.
  • the processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic.
  • the processing element may also include one or more caches.
  • Referring now to FIG. 11, shown is a block diagram of a computing system 1000 in accordance with an embodiment. Shown in FIG. 11 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.
  • the system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 11 may be implemented as a multi-drop bus rather than point-to-point interconnect.
  • each of processing elements 1070 and 1080 may be multicore processors, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b) .
  • processor cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 10.
  • Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b.
  • the shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively.
  • the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor.
  • the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2) , level 3 (L3) , level 4 (L4) , or other levels of cache, a last level cache (LLC) , and/or combinations thereof.
  • processing elements 1070, 1080 may be present in a given processor.
  • processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array.
  • Additional processing element(s) may include additional processor(s) that are the same as a first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element.
  • There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080.
  • the various processing elements 1070, 1080 may reside in the same die package.
  • the first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078.
  • the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088.
  • MC's 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors.
  • While the MC 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, in alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.
  • The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076 and 1086, respectively.
  • the I/O subsystem 1090 includes P-P interfaces 1094 and 1098.
  • I/O subsystem 1090 includes an interface 1092 to couple I/O subsystem 1090 with a high performance graphics engine 1038.
  • bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090.
  • a point-to-point interconnect may couple these components.
  • I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096.
  • The first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.
  • various I/O devices 1014 may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020.
  • the second bus 1020 may be a low pin count (LPC) bus.
  • Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device (s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment.
  • The illustrated code 1030 may implement one or more aspects of the process 100 (FIG. 1), method 300 (FIG. 2), process 400 (FIG. 3), process 450 (FIG. 4), memory system 500 (FIG. 5), flow 600 (FIG. 6) and/or flow 640 (FIG. 7).
  • an audio I/O 1024 may be coupled to second bus 1020 and a battery 1010 may supply power to the computing system 1000.
  • Instead of the point-to-point architecture of FIG. 11, a system may implement a multi-drop bus or another such communication topology.
  • the elements of FIG. 11 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 11.
  • Example 1 includes an efficiency-enhanced computing system comprising a processor coupled to a volatile memory and to execute one or more applications, and a memory including a set of executable program instructions, which when executed by the processor, cause the computing system to identify restore data stored in the volatile memory, wherein the restore data is associated with the one or more applications, generate a list associated with the restore data, wherein the list is to include memory locations of the restore data, identify that the computing system is to be rebooted, and after the computing system has been rebooted, cause the one or more applications to be restored based on the restore data and the list.
  • Example 2 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  • Example 3 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to provide a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, store the restore data in volatile memory, and pass the list to the program.
  • Example 4 includes the computing system of Example 3, wherein the program is to: trigger a warm reboot of the system, supply power to the volatile memory during the warm reboot, preserve the list across the warm reboot, provide a memory location of the list after the warm reboot, and wherein the instructions, when executed, further cause the computing system to receive the memory location of the list from the program, and reconstruct a filesystem associated with the restore data based on the list and the restore data.
  • Example 5 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to conduct a verification of authenticity data associated with the restore data, and reconstruct a filesystem associated with the restore data in response to the verification.
  • Example 6 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to identify that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and maintain the address range for the restore data.
  • Example 7 includes the computing system of any one of Examples 1 to 6, wherein the restore data is to be checkpoints or snapshots, and the instructions, when executed, further cause the computing system to execute an iterative compression process that is to include compression of a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, compression of a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and storage of the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  • Example 8 includes the computing system of any one of Examples 1 to 6, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to a request from the one or more applications.
  • Example 9 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to identify restore data stored in a volatile memory of the computing system, wherein the restore data is to be associated with one or more applications, generate a list associated with the restore data, wherein the list is to include memory locations of the restore data, identify that the computing system is to be rebooted, and after the computing system has been rebooted, cause the one or more applications to be restored based on the restore data and the list.
  • Example 10 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  • Example 11 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to provide a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, store the restore data in volatile memory, and pass the list to the program.
  • Example 12 includes the at least one computer readable storage medium of Example 11, wherein the program is to trigger a warm reboot of the system, supply power to the volatile memory during the warm reboot, preserve the list across the warm reboot, provide a memory location of the list after the warm reboot, and wherein the instructions, when executed, further cause the computing system to receive the memory location of the list from the program, and reconstruct a filesystem associated with the restore data based on the list and the restore data.
  • Example 13 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to conduct a verification of authenticity data associated with the restore data, and reconstruct a filesystem associated with the restore data in response to the verification.
  • Example 14 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to identify that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and maintain the address range for the restore data.
  • Example 15 includes the at least one computer readable storage medium of any one of Examples 9 to 14, wherein the restore data is checkpoints or snapshots, wherein the instructions, when executed, further cause the computing system to execute an iterative compression process that is to include compression of a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, compression of a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and storage of the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  • Example 16 includes the at least one computer readable storage medium of any one of Examples 9 to 14, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to a request from the one or more applications.
  • Example 17 includes a method of restoring one or more applications operating on a computing system, comprising identifying restore data stored in a volatile memory of the computing system, wherein the restore data is associated with the one or more applications, generating a list associated with the restore data, wherein the list includes memory locations of the restore data, identifying that the computing system is to be rebooted, and after the computing system has been rebooted, causing the one or more applications to be restored based on the restore data and the list.
  • Example 18 includes the method of Example 17, further comprising storing the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  • Example 19 includes the method of Example 17, further comprising providing a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, storing the restore data in volatile memory, and passing the list to the program.
  • Example 20 includes the method of Example 19, wherein the program triggers a warm reboot of the system, supplies power to the volatile memory during the warm reboot, preserves the list across the warm reboot and provides a memory location of the list after the warm reboot, and the method further comprises receiving the memory location of the list from the program, and reconstructing a filesystem associated with the restore data based on the list and the restore data.
  • Example 21 includes the method of Example 17, wherein the method further comprises conducting a verification of authenticity data associated with the restore data, and reconstructing a filesystem associated with the restore data in response to the verification.
  • Example 22 includes the method of Example 17, further comprising identifying that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and maintaining the address range for the restore data.
  • Example 23 includes the method of any one of Examples 17 to 22, wherein the restore data is checkpoints or snapshots, and the method further comprises executing an iterative compression process that includes compressing a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, compressing a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and storing the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  • Example 24 includes the method of any one of Examples 17 to 22, further comprising storing the restore data into the volatile memory in response to a request from the one or more applications.
  • Example 25 includes an efficiency-enhanced computing system comprising means for identifying restore data stored in a volatile memory of the computing system, wherein the restore data is associated with the one or more applications, means for generating a list associated with the restore data, wherein the list includes memory locations of the restore data, means for identifying that the computing system is to be rebooted, and after the computing system has been rebooted, means for causing the one or more applications to be restored based on the restore data and the list.
  • Example 26 includes the computing system of Example 25, further comprising means for storing the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  • Example 27 includes the computing system of Example 25, further comprising means for providing a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, means for storing the restore data in volatile memory, and means for passing the list to the program.
  • Example 28 includes the computing system of Example 27, wherein the program triggers a warm reboot of the system, supplies power to the volatile memory during the warm reboot, preserves the list across the warm reboot and provides a memory location of the list after the warm reboot, and the system further comprises means for receiving the memory location of the list from the program, and means for reconstructing a filesystem associated with the restore data based on the list and the restore data.
  • Example 29 includes the computing system of Example 25, wherein the system further comprises means for conducting a verification of authenticity data associated with the restore data, and means for reconstructing a filesystem associated with the restore data in response to the verification.
  • Example 30 includes the computing system of Example 25, further comprising means for identifying that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and means for maintaining the address range for the restore data.
  • Example 31 includes the computing system of any one of Examples 25 to 30, wherein the restore data is checkpoints or snapshots, and the system further comprises means for executing an iterative compression process that includes means for compressing a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, means for compressing a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and means for storing the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  • Example 32 includes the computing system of any one of Examples 25 to 30, further comprising means for storing the restore data into the volatile memory in response to a request from the one or more applications.
  • Thus, technology described herein may provide for an enhanced rebooting process that executes with lower latency and increased efficiency.
  • Some embodiments may store restore data of applications to a volatile memory and restart the applications based on the restore data stored in the volatile memory.
  • Embodiments are applicable for use with all types of semiconductor integrated circuit ( “IC” ) chips.
  • IC semiconductor integrated circuit
  • Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs) , memory chips, network chips, systems on chip (SoCs) , SSD/NAND controller ASICs, and the like.
  • PLAs programmable logic arrays
  • SoCs systems on chip
  • SSD/NAND controller ASICs solid state drive/NAND controller ASICs
  • signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner.
  • Any represented signal lines may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
  • Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured.
  • well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art.
  • Coupled may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections.
  • first may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
  • a list of items joined by the term “one or more of” may mean any combination of the listed terms.
  • the phrases “one or more of A, B or C” may mean A, B, C; A and B; A and C; B and C; or A, B and C.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Retry When Errors Occur (AREA)

Abstract

Systems, apparatuses and methods may provide for technology to resume applications. The technology identifies restore data stored in a volatile memory. The restore data is associated with the one or more applications. The technology generates a list associated with the restore data. The list includes memory locations of the restore data. The technology identifies that the computing system is to be rebooted. After the computing system has been rebooted, the technology restores the one or more applications based on the restore data and the list.

Description

APPLICATION RESTORE BASED ON VOLATILE MEMORY STORAGE ACROSS SYSTEM RESETS TECHNICAL FIELD
Embodiments generally relate to an accelerated system restore after warm restarts. More particularly, embodiments relate to volatile memory storage (e.g., based on a Ram-based Persist Filesystem) to allow snapshots and/or checkpoints of one or more applications to sustain across the warm reboots so that they may be rapidly resumed from memory.
BACKGROUND
Various triggers may cause system resets of computing systems. Some examples may include updates to operating system (OS) components, firmware components, user actuation, etc. A full system reboot may result in system changes (e.g., rebooting of firmware, Basic Input/Output Systems, Unified Extensible Firmware Interfaces, OS kernels, and OS frameworks, OS services, etc. ) . Moreover, applications may be restarted during the full system reboot. Such application restarts may be pronounced in cloud environments. For example, in a cloud environment, time may be spent to restore the applications to a previous running status, especially when “applications” include a virtual machines (VMs) and/or containers (e.g., software that packages code and dependencies of the code an application may runs quickly and reliably in the container) .
BRIEF DESCRIPTION OF THE DRAWINGS
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
FIG. 1 is a process flow diagram of an example of a restoration process according to an embodiment;
FIG. 2 is a flowchart of an example of a method of restarting one or more applications according to an embodiment;
FIG. 3 is a process flow diagram of an example of using Ram-based Persist Filesystem to accelerate a reboot according to an embodiment;
FIG. 4 is a process flow diagram of an example of Ram-based Persist Filesystem driver operations according to an embodiment;
FIG. 5 is a block diagram of an example of a deduplicated memory system according to an embodiment;
FIG. 6 is a process flow diagram of an example of a Ram-based Persist Filesystem data flow according to an embodiment;
FIG. 7 is a process flow diagram of an example of a progressively accelerated parallel compression mechanism according to an embodiment;
FIG. 8 is a block diagram of an example of a performance-enhanced computing system according to an embodiment;
FIG. 9 is an illustration of an example of a semiconductor apparatus according to an embodiment;
FIG. 10 is a block diagram of an example of a processor according to an embodiment; and
FIG. 11 is a block diagram of an example of a multi-processor based computing system according to an embodiment.
DESCRIPTION OF EMBODIMENTS
Turning now to FIG. 1, a reduced latency restoration process 100 is illustrated. In some embodiments, a Ram-based Persist Filesystem (RPFS) is provided to allow for an enhanced and low latency reconstruction of an application 118 (e.g., VM and/or container application) . A RPFS and driver of the RPFS may create a storage area directly in a volatile memory 104 (e.g., random-access memory (RAM) , double data rate (DDR) memory, etc. ) similar to a partition on a disk drive, that persists across reboots and/or warm restarts (e.g., power is maintained) .
A system program 108 (e.g., an RPFS driver) may store restore data 106 (e.g., snapshots and/or checkpoints) of the application 118 to a volatile memory 104. In some embodiments, the system program 108 may store the restore data 106 in response to one or more of a request from the application 118, a firmware update, a software update or an identification that the computing device 102 is to be restarted. The restore data 106 may be sustained across warm reboots of the computing device 102 so that the application 118 may be rapidly resumed from the volatile memory 104. Storing the restore data 106 to the volatile memory 104 may accelerate the reboot and reconstruction time as compared to storing the restore data 106 to long-term, non-volatile storage such as a hard disk drive, network-based disk/file systems, etc., due to the read/write speeds and transfer latencies of such storage.
Further, some embodiments may enhance the efficiency of memory usage and reduce cost by using RPFS with memory deduplication (explained below) and data compression. For example, memory deduplication may identify and merge identical memory pages to reduce the storage space that the restore data 106 occupies in the volatile memory 104. Some embodiments may further employ compression algorithms to compress the restore data 106 to reduce its storage space in the volatile memory 104.
Thus, some embodiments may employ a RPFS in the volatile memory 104 to maintain application snapshots in the restore data 106 in a valid state that is retained across the warm reboots. For example, before a reset and/or restart of a computing device 102, snapshots and/or checkpoints of applications (e.g., VMs, containers, etc.) may be saved as restore data 106 to the volatile memory 104 that employs the RPFS and driver (e.g., based on DDR technology). During the reboot, the restore data 106 in the volatile memory 104 may be substantially unaltered while the system is being restored (e.g., by the BIOS/UEFI and OS initializations). After the system reset and/or reboot, a RPFS driver may re-construct the RPFS (e.g., storage locations of data associated with applications) from the RPFS memory so that applications may be rapidly resumed from memory. For example, some embodiments may reconstruct file system metadata from a firmware-reserved volatile memory portion so that applications may access their data files from the file system and resume states rapidly.
Thus, applications (which may include VMs, containers, services, etc.) may be recovered with lower latency and greater efficiency after a reboot, up to around 80 times faster than restarting the applications from a disk. Furthermore, the RPFS mechanism and storage may be transparent to applications. Applications may already support snapshot and/or checkpointing technology and may use RPFS in conjunction with such technology, reducing the modifications needed for the applications to support RPFS. Furthermore, the amount of the volatile memory 104 used by RPFS may be dynamically shared with normal memory usage to reduce memory costs.
For example, in FIG. 1, a system program 108 may store restore  data  106, 114 to volatile memory 104. The system program 108 may be an application manager of the application 118, an independent application and/or an operating system (OS) of the computing device 102. In some embodiments, the system program 108 may store the restore data 106 in response to a specific trigger being identified, such as an initiation of a shut-down or warm-restart of the computing device 102.
The system program 108 may further provide a notification 112 to a reboot program 110. The notification may include an identification of the restore data 106, such as memory addresses (e.g., data pointers) of the restore data 106, and/or an instruction to not overwrite the restore data 106 during reboot. The reboot program 110 may be BIOS or UEFI that is responsible for rebooting the computing device 102 and/or executing a boot process. Thus, the reboot program 110 may avoid overwriting the restore data 106 during the reboot process and ensure that power is provided to the volatile memory 104 during the reboot process. In detail, if the volatile memory 104 does not receive sufficient power, the data in the volatile memory 104, including the restore data 106, may be lost. That is, the volatile memory 104 may require power throughout the boot process to maintain stored information including the restore data 106.
The process 100 may then reboot 116. The system program 108 (e.g., the RPFS driver) may verify the authenticity of the restore data 106, 122 (e.g., checksum authentications and validations) to verify that the restore data 106 has not been tampered with and is not malicious code. In some embodiments, the reboot program 110 may be considered "upstream" in the chain of trust, and maintains the preserved data, executes the reboot, and bootloads another OS instance. The system program 108 may verify and check the restore data 106 to avoid memory errors, corruptions, etc. that may cause a failure to reboot. For example, such verification and checking may be carried out by the RPFS driver in an OS context.
For example, to execute a verification process on the restore data 120, the RPFS driver, which may be part of the system program 108, may first compute a checksum of the restore data 106 when the restore data 106 is stored in the volatile memory 104, and store the checksum in a secure, non-volatile space. The system program 108 (e.g., the RPFS driver) may pass data pointers (e.g., memory addresses) to the reboot program 110 as described above when the notification is provided 112. When the computing device 102 resumes, the reboot program 110 reports the preserved data pointer to the system program 108 (e.g., the RPFS driver). The system program 108 (e.g., the RPFS driver) rechecks the data against the saved checksum, and may use acceleration techniques to speed up the calculation so that the boot speed is improved.
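As a rough illustration of this checksum handoff, the C sketch below shows one way the save-and-verify steps could be arranged. The structure and function names (rpfs_digest, rpfs_save_checksum, rpfs_verify) and the simple rotating checksum are hypothetical assumptions for illustration and are not taken from the disclosure or from any real driver interface.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical record kept in a secure, non-volatile space. */
struct rpfs_digest {
    uint64_t data_ptr;   /* physical address of the preserved restore data */
    uint64_t length;     /* number of bytes covered by the checksum        */
    uint32_t checksum;   /* digest computed when the data was stored       */
};

/* Simple rotating checksum; a real driver could offload this work to an
 * accelerator to improve boot speed, as noted above. */
static uint32_t rpfs_checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; ++i)
        sum = (sum << 1) ^ buf[i];
    return sum;
}

/* Called when the restore data is stored: record its location and digest. */
void rpfs_save_checksum(struct rpfs_digest *d, const void *data, size_t len)
{
    d->data_ptr = (uint64_t)(uintptr_t)data;
    d->length   = len;
    d->checksum = rpfs_checksum(data, len);
}

/* Called after the reboot program reports the preserved pointer back. */
int rpfs_verify(const struct rpfs_digest *d, const void *reported_ptr)
{
    if ((uint64_t)(uintptr_t)reported_ptr != d->data_ptr)
        return -1;  /* pointer does not match: treat as corruption */
    return rpfs_checksum(reported_ptr, d->length) == d->checksum ? 0 : -1;
}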
After the restore data 106 has been verified, the system program 108 may then restore the application 118 from the restore data 106. For example, the system program  108 may include an RPFS driver that reconstructs file system views from the restore data 106 for an OS management application to access. The OS management application may read the snapshot and/or checkpoint from the reconstructed file system and resume the target applications, which in this example is application 118. By resuming the application 118 from the restore data 106 stored in the volatile memory 104, latency may be reduced, and the reboot process may execute with enhanced efficiency. Moreover, some embodiments may reduce power consumption since storage accesses to non-volatile memory may be reduced. Thus, efficiency may be enhanced, power consumption may be reduced, and applications may be securely restarted.
FIG. 2 shows a method 300 of restarting one or more applications. The method 300 may generally be implemented in the computing device 102 (FIG. 1), already discussed. More particularly, the method 300 may be implemented as one or more modules in a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), FPGAs, complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
For example, computer program code to carry out operations shown in the method 300 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, ISA instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc. ) .
Illustrated processing block 302 identifies restore data stored in a volatile memory, where the restore data is associated with the one or more applications. Illustrated processing block 304 generates a list associated with the restore data, where the list is to include memory locations of the restore data. Illustrated processing block  306 identifies that the computing system is to be rebooted. Illustrated processing block 308, after the computing system has been rebooted, restores the one or more applications based on the restore data and the list.
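A minimal C skeleton of how processing blocks 302-308 could be sequenced is shown below. All identifiers are hypothetical, the helpers are placeholder stubs with example values, and the actual identification, notification and restore mechanics are omitted.

#include <stdio.h>

/* Hypothetical skeleton mirroring processing blocks 302-308 of method 300. */
struct restore_entry { unsigned long addr; unsigned long pages; };

struct restore_list {
    struct restore_entry entries[16]; /* memory locations of the restore data */
    unsigned int         count;
};

/* Block 302: locate restore data already resident in volatile memory. */
static int identify_restore_data(struct restore_list *list)
{
    list->count = 1;
    list->entries[0].addr  = 0x100000UL; /* example location only */
    list->entries[0].pages = 256;
    return 0;
}

/* Blocks 304/306: record the locations and hand them to the reboot program. */
static void notify_reboot_program(const struct restore_list *list)
{
    printf("preserving %u range(s) across the warm reboot\n", list->count);
}

/* Block 308: after the reboot, resume the applications from the list. */
static int restore_applications(const struct restore_list *list)
{
    printf("restoring applications from %u preserved range(s)\n", list->count);
    return 0;
}

int main(void)
{
    struct restore_list list;

    if (identify_restore_data(&list))    /* block 302 */
        return 1;
    notify_reboot_program(&list);        /* blocks 304 and 306 */
    return restore_applications(&list);  /* block 308 */
}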
In some embodiments, the method 300 stores the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system. In some embodiments, the method 300 stores the restore data into the volatile memory in response to a request from the one or more applications. In some embodiments, the method 300 provides a notification to a program that is to reboot the computing system, where the notification indicates the memory locations and that the restore data is to be preserved across the reboot. In some embodiments, the program is to be one of a Basic Input Output System or a Unified Extensible Firmware Interface. In some embodiments, the method 300 conducts a verification of authenticity data (e.g., checksum data) associated with the restore data to verify that the restore data was not tampered with and is not malicious, and reconstructs a filesystem associated with the restore data in response to the verification. In some embodiments, the volatile memory is a double-data rate memory. In some embodiments, the restore data is to be one or more of a checkpoint or a snapshot.
FIG. 3 shows a process 400 of using RPFS and a driver of the RPFS to accelerate a reboot. The process 400 may be readily operated with the embodiments described herein, such as the process 100 (FIG. 1) and/or the method 300 (FIG. 2). While a snapshot is discussed below, it will be understood that any restore data (e.g., a checkpoint) may be readily substituted for the snapshot. A management agent 402 may trigger a reset 408. For example, the management agent 402 may initiate a software or firmware update, and then notify an OS 404 to reboot to activate the update.
The OS 404 may store snapshots to an RPFS 410. For example, a storage application (e.g., a VM manager, container manager, and/or any application that is capable of saving states and restoring them later) may cause a snapshot and/or checkpoint of an application to be stored to the RPFS. For example, an RPFS driver of the RPFS may receive the snapshot and store the snapshot according to the RPFS. Storing such a volume of data for the snapshot may normally consume an extensive amount of memory, which may degrade performance. Thus, in some embodiments, the RPFS driver may utilize memory deduplication and accelerators to compress the data (e.g., QuickAssist technology) in addition to utilizing RPFS, so that the actual memory used by RPFS is substantially reduced.
During the OS 404 shutdown, a RPFS driver operating in the OS 404 may construct an in-use memory page list of the RPFS. For example, the RPFS driver may provide a notification to a reboot program 406 of the memory pages used by the RPFS for preservation and request a warm reset 412 to avoid erasure of the volatile memory that stores the snapshot. During reboot, the reboot program (e.g., BIOS and/or UEFI) may preserve the memory pages according to the list so that they will not be overwritten. For example, the reboot program 406 may reboot without overwriting the memory pages marked for preservation and pass verification data (e.g., a checksum of the RPFS metadata) to the operating system 404, 414.
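A minimal sketch of this shutdown-side handoff is shown below, assuming a hypothetical firmware interface for registering preserved page ranges and requesting the warm reset. The stubbed firmware calls are placeholders and do not correspond to any real BIOS/UEFI API.

#include <stdint.h>
#include <stdio.h>

#define RPFS_MAX_RANGES 64 /* illustrative limit for this sketch */

/* One physically contiguous run of pages used by the RPFS. */
struct rpfs_range {
    uint64_t phys_addr;
    uint64_t num_pages;
};

/* In-use page list handed to the reboot program before the warm reset. */
struct rpfs_preserve_list {
    uint32_t          count;
    struct rpfs_range ranges[RPFS_MAX_RANGES];
};

/* Placeholder standing in for a real BIOS/UEFI interface (e.g., a runtime
 * service or mailbox register); here it only prints what it was given. */
static int firmware_register_preserved_ranges(const struct rpfs_preserve_list *l)
{
    printf("firmware asked to preserve %u range(s)\n", (unsigned)l->count);
    return 0;
}

static void firmware_request_warm_reset(void)
{
    printf("warm reset requested; volatile memory stays powered\n");
}

/* RPFS driver shutdown hook: publish the page list, then request the reset. */
static int rpfs_prepare_warm_reset(const struct rpfs_preserve_list *list)
{
    if (firmware_register_preserved_ranges(list))
        return -1; /* pages would not survive the reboot */
    firmware_request_warm_reset();
    return 0;
}

int main(void)
{
    struct rpfs_preserve_list list = { 1, { { 0x100000, 256 } } };
    return rpfs_prepare_warm_reset(&list);
}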
The RPFS driver verifies the memory pages based on the verification data and reconstructs the RPFS during initialization 416. For example, during RPFS driver initialization, the RPFS driver may reconstruct the RPFS if metadata information associated with the memory pages exists and if the memory pages pass verification based on checksum data of the verification data. The RPFS driver may then proceed to build the file system view for the OS application's access. The application may then be resumed from the snapshots on the RPFS.
FIG. 4 illustrates a process 450 that may be implemented by an RPFS driver 468. The process 450 may be readily operated with the embodiments described herein, such as the process 100 (FIG. 1), the method 300 (FIG. 2) and/or the process 400 (FIG. 3). As already discussed, the RPFS driver 468 may implement aspects of the RPFS processes to accelerate reboot times. The RPFS driver 468 may provide a filesystem interface to allow user space access to the filesystem with memory as a backend "storage." The RPFS driver 468 may dynamically allocate memory on-demand, as opposed to statically assigned memory spaces in the physical memory 474, so as to allow other applications access to the physical memory 474.
The RPFS driver 468 may maintain RPFS metadata information 470 and an RPFS memory preservation list 472 to save the RPFS data 478 so that the RPFS in the physical memory 474 may be reconstructed across a warm reboot. As illustrated, the RPFS metadata information 470 may include the metadata address 470a and the metadata checksum 470b, which may be a checksum of the whole page list, including both the RPFS memory preservation list 472 and the associated memory pages (e.g., snapshot memory locations), while the RPFS memory preservation list 472 may include the addresses of the preserved memory pages as stored in scatterlist[0] 472a-scatterlist[n] 472n. The scatterlist[0] 472a-scatterlist[n] 472n may be a preserved memory list that includes every page used by the RPFS (e.g., addresses of the restore data stored by the RPFS). The RPFS memory preservation list 472 may be provided to reboot software so that the reboot software does not overwrite the restore data. The RPFS metadata information 470 may be stored to a non-volatile, secure portion of the physical memory 474 so as to be preserved across reboots. The RPFS memory preservation list 472 may be stored into a volatile portion of the physical memory 474.
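One possible in-memory layout mirroring the RPFS metadata information 470 and the RPFS memory preservation list 472 is sketched below in C. The field names follow the reference numerals of FIG. 4; the exact format is an assumption for illustration, not a data structure defined by the disclosure.

#include <stdint.h>
#include <stdio.h>

/* One scatterlist entry: a run of pages holding RPFS data. */
struct rpfs_scatter_entry {
    uint64_t phys_addr;  /* where the preserved pages live           */
    uint32_t num_pages;  /* length of this run                       */
    uint32_t flags;      /* e.g., metadata page vs. snapshot payload */
};

/* 472: preservation list kept in a volatile portion of physical memory. */
struct rpfs_preservation_list {
    uint32_t                  nr_entries;
    struct rpfs_scatter_entry entry[];  /* scatterlist[0] .. scatterlist[n] */
};

/* 470: small record kept in secure, non-volatile storage across reboots. */
struct rpfs_metadata_info {
    uint64_t metadata_address;  /* 470a: where the preservation list lives    */
    uint32_t metadata_checksum; /* 470b: checksum over the list and its pages */
};

int main(void)
{
    printf("metadata record size: %zu bytes\n", sizeof(struct rpfs_metadata_info));
    return 0;
}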
In this example, the first VM 452, the second VM 454 and the container C1 456 (which may be executing on a computing system that is to be rebooted) save snapshots 476 to a RPFS, which is illustrated in the physical memory 474 (e.g., volatile portions). In some embodiments, the RPFS driver 468 may allocate dedicated firmware-preserved data blocks to maintain the files across reboots.
As a result, the snapshots are saved and scattered in the physical memory 474. In this example, the first VM 452 stores a first VM snapshot portion one 458a and a first VM snapshot portion two 458b. The second VM 454 stores a second VM snapshot portion one 460a, a second VM snapshot portion two 460b, and a second VM snapshot portion three 460c. The container C1 stores C1 snapshot portion one 464a and C1 snapshot portion two 464b.
Process 450 may then reboot 480 a computing system that executes the first VM 452, the second VM 454 and the container C1 456. The RPFS driver 468 may then restore (e.g., reconstruct) the RPFS 482 based on the RPFS metadata information 470 and the RPFS memory preservation list 472. For example, the RPFS metadata information 470 and the RPFS memory preservation list 472 may be passed to a reboot program during the saving of the RPFS data 478 and read back from the reboot program during the reboot 480. Thus, at a warm reset, though the OS may be newly rebooted such that the RPFS driver is newly initialized, the RPFS driver 468 may still be able to reconstruct the file system view (e.g., the RPFS) based on the RPFS metadata information 470 and the RPFS memory preservation list 472.
While the metadata information 470 is saved in a secure, non-volatile storage, the RPFS memory preservation list 472 may be stored as the RPFS metadata chunk 462 (e.g., in volatile memory) . The RPFS driver 468 may then restore the snapshots 484 based on the data stored in the physical memory 474 to reconstruct the previous content from the scatterlist [0] 472a-scatterlist [n] 472n. OS management software may read snapshot files via the RPFS driver 468 so that the first VM 452, the second VM 454  and the container C1 456 may be resumed from the snapshots. In some embodiments, the physical memory 474 may be a volatile memory.
FIG. 5 illustrates a deduplicated memory system 500 that combines memory deduplication with RPFS systems. The deduplicated memory system 500 may be readily operated with the embodiments described herein, such as the process 100 (FIG. 1), the method 300 (FIG. 2), the process 400 (FIG. 3), and/or the process 450 (FIG. 4). Memory deduplication may increase the effective capacity of the physical memory 522 by storing only unique values in memory. In the deduplicated memory system 500, memory may no longer be treated as a linear array. Rather, the deduplicated memory system 500 may be organized into two regions, including a physical memory address space 506 (e.g., a translation table, which is an indirection table that maps each system address (SA) to a data line) and a physical memory 522 that is a data region (e.g., a region where data lines are stored).
In an RPFS usage scenario, content of the first VM 502, or container in some embodiments, and snapshot data may be nearly exactly the same as running memory content except some small running state information. Thus, deduplication may be effectively used with RPFS to avoid redundant data storage. For example, operational memories and snapshots may be redirected to the same data that is stored only once in the physical memory 522.
For example, the first VM 502 may store first VM operational memory portion one 508, first VM operational memory portion two 510, and first VM operational memory portion three 512 in the physical memory address space 506. The first VM snapshot 504 may store a first VM 1st snapshot portion 518, a first VM 2nd snapshot portion 514, a first VM 3rd snapshot portion 516 and first VM running state information 520. The first VM operational memory portion one 508 and the first VM 1st snapshot portion 518 may be nearly identical, and thus it is possible to store only one copy of the first VM operational memory portion one 508 in the physical memory 522, which contains contents that correspond to both the first VM operational memory portion one 508 and the first VM 1st snapshot portion 518. Both the first VM 1st snapshot portion 518 and the first VM operational memory portion one 508 may therefore be redirected from the physical memory address space 506 to the single copy of the first VM operational memory portion one 508 in the physical memory 522.
Likewise, both the first VM operational memory portion two 510 and the first VM 2nd snapshot portion 514 in the physical memory address space 506 may be directed to the first VM operational memory portion two 510 in the physical memory 522. Similarly, both the first VM operational memory portion three 512 and the first VM 3rd snapshot portion 516 in the physical memory address space 506 may be directed to the first VM operational memory portion three 512 in the physical memory 522. The first VM running state information 520 in the physical memory address space 506 may be directed to the first VM running state information 520 in the physical memory 522.
Thus, in the event a system reboot occurs, the first VM 502 may be rebuilt from the first VM operational memory portion one 508, the first VM operational memory portion two 510, and the first VM operational memory portion three 512. With RPFS and deduplication operating together, the first VM 1st snapshot portion 518, the first VM 2nd snapshot portion 514 and the first VM 3rd snapshot portion 516 that are stored in the physical memory address space 506 occupy a substantially reduced amount of memory cells, since duplicative data does not need to be stored. For example, a deduplicated memory system, such as the deduplicated memory system 500, will report more system addresses (e.g., a second memory address space) to the OS than the memory system actually has (e.g., a first memory address space that is less than the second memory address space by an address range), such that the total configured memory size may exceed the amount of available physical memory. In some embodiments, the overcommit ratio may be 200%. In some embodiments, particularly for an RPFS case, firmware may report around 300% system address space, with a basic 200% overcommit plus an extra 100% overcommit for RPFS snapshot saving. The extra 100% overcommitted system address range may be owned by the RPFS driver and may not be used by other OS components or applications. During snapshot saving, an RPFS driver may save application snapshots into the extra 100% overcommitted system ranges, where they are fully merged with the normal application data.
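The translation-table behavior described above can be illustrated with a small toy model in C. The array sizes and mappings are illustrative only and stand in for the indirection table of a real deduplicated memory system; running the model simply shows a snapshot system line and an operational system line resolving to the same data line.

#include <stdint.h>
#include <stdio.h>

/* Toy model of the two regions in FIG. 5: a translation table mapping
 * system addresses to data lines, and the data region itself. */
#define NUM_SYSTEM_LINES 8 /* reported (overcommitted) address space */
#define NUM_DATA_LINES   4 /* actual physical data region            */

static uint64_t data_region[NUM_DATA_LINES];

/* translation[sa] holds the index of the data line backing system line sa. */
static uint32_t translation[NUM_SYSTEM_LINES];

int main(void)
{
    /* Operational memory of the first VM occupies data lines 0..2. */
    translation[0] = 0; /* operational memory portion one   */
    translation[1] = 1; /* operational memory portion two   */
    translation[2] = 2; /* operational memory portion three */

    /* The snapshot lands in the extra overcommitted system range but is
     * deduplicated onto the same data lines, so almost no new data is
     * stored; only the running state information needs its own line. */
    translation[4] = 0; /* 1st snapshot portion             */
    translation[5] = 1; /* 2nd snapshot portion             */
    translation[6] = 2; /* 3rd snapshot portion             */
    translation[7] = 3; /* running state information        */

    data_region[3] = 0x1234; /* unique running state contents (example) */

    printf("snapshot line 4 shares data line %u with operational line 0\n",
           (unsigned)translation[4]);
    return 0;
}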
In some embodiments, to enhance performance and reduce latency, the deduplication may be executed within a non-uniform memory access (NUMA) node. The RPFS (e.g., a driver) may also be aware of the NUMA node, and may provide memory space in the same NUMA node to the VM and/or container for saving snapshot files of the VM and/or container. Doing so may allow memory deduplication to be fully leveraged and may enhance the maximum compaction ratio. In some embodiments, on a NUMA system, the extra snapshotting space is per-NUMA node. Thus, the RPFS may be considered "NUMA aware" to the extent that the RPFS may save pages into the snapshotting space of the same NUMA node in which the original page of the corresponding application associated with the snapshot is stored.
For example, as shown in FIG. 5, when new snapshots of the first VM 502 are to be saved to the physical memory 522, new entries in the physical memory address space 506 (e.g., the translation table) may be generated, but with nearly no added consumption of the physical memory 522, as the new snapshots have been deduplicated and redirected to the operational memories that are used for execution of the first VM 502. Then, after restoring the snapshots from the RPFS, the newly added translation table entries may be reclaimed by clearing the memory. For example, in some embodiments, memory deduplication may be carried out within NUMA nodes. For example, in a system with multiple NUMA nodes, some embodiments may fully realize the potential of memory compaction by configuring the RPFS to be NUMA aware. For example, the RPFS driver may own one extra overcommit range per NUMA node, which may correspond to the physical memory 522. When there is a page written to the RPFS, the RPFS driver may check the original NUMA node of the page and write to the overcommit range of the same NUMA node, thus guaranteeing that the snapshot page and the original page may be merged.
FIG. 6 shows an RPFS data flow 600 that uses compression technology (e.g., QuickAssist Technology (QAT)) to compress VM or container snapshots/checkpoints during a save. The flow 600 may be readily implemented in conjunction with any of the embodiments described herein, such as the process 100 (FIG. 1), the method 300 (FIG. 2), the process 400 (FIG. 3), the process 450 (FIG. 4), and/or the memory system 500 (FIG. 5).
The RPFS restore flow with compression may be the reverse of the flow 600. In detail, a VM or container manager 602 may generate snapshots and/or checkpoints and then request the RPFS driver 604 to save the data. The RPFS driver 604 may submit a request to the compression driver/library 606 to compress the snapshot data and save it to the physical memory 610 (e.g., a DDR memory). The compression accelerator 608 may access the snapshot and/or checkpoint data (e.g., via direct memory access) from the memory 610, compress the data and transmit the compressed data back to the physical memory 610 (e.g., an RPFS memory). The RPFS driver 604 may receive a compression-done signal from the compression driver/library 606 and the compression accelerator 608 and notify the manager 602 that the snapshot/checkpoint data saving has completed.
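As a rough sketch of this save-side handoff, the C fragment below models the request/callback pattern between the RPFS driver and a compression engine. The callback type, the compression_submit placeholder and the assumed ~50% compression ratio are hypothetical and do not represent an actual QAT or driver API.

#include <stddef.h>
#include <stdio.h>

typedef void (*compress_done_cb)(void *ctx, size_t compressed_len);

/* Placeholder: a real driver would hand the buffer to a hardware accelerator
 * via DMA and invoke the callback on completion. */
static void compression_submit(const void *src, size_t len,
                               void *dst, compress_done_cb cb, void *ctx)
{
    (void)src;
    (void)dst;
    cb(ctx, len / 2); /* assume roughly a 50% ratio for illustration */
}

struct save_request {
    const void *snapshot;  /* snapshot/checkpoint produced by the manager  */
    size_t      length;
    void       *rpfs_dst;  /* destination within the RPFS memory           */
    int         completed; /* set when the "compression done" signal fires */
};

static void on_compress_done(void *ctx, size_t compressed_len)
{
    struct save_request *req = ctx;
    (void)compressed_len;
    req->completed = 1; /* notify the VM/container manager */
}

/* RPFS driver entry point used by the VM or container manager. */
static void rpfs_save_compressed(struct save_request *req)
{
    compression_submit(req->snapshot, req->length,
                       req->rpfs_dst, on_compress_done, req);
}

int main(void)
{
    char snapshot[64] = {0};
    char rpfs_area[64];
    struct save_request req = { snapshot, sizeof snapshot, rpfs_area, 0 };

    rpfs_save_compressed(&req);
    printf("save completed: %d\n", req.completed);
    return 0;
}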
FIG. 7 shows an example of a flow 640 of a progressively accelerated parallel compression mechanism. The flow 640 may be readily implemented in conjunction with any of the embodiments described herein, such as the process 100 (FIG. 1) , method  300 (FIG. 2) , process 400 (FIG. 3) , process 450 (FIG. 4) , memory system 500 (FIG. 5) and/or flow 600 (FIG. 6) .
As illustrated, physical memory 642 may include snapshots. In detail, each of the VM 1-VM n boxes represents a VM 1-VM n snapshot that is saved to the physical memory 642 through the RPFS. Each of the VM 1-VM n snapshots has a 2 gigabyte (G) size, and the compression ratio may be around 50%. Thus, each of the VM 1-VM n snapshots may be compressed to a 1G size by different techniques, such as QAT. In some embodiments, the RPFS saving flow and the compression flow may be carried out together to save the compressed snapshots in-place, as described below.
In operation 644, when the 2G VM1 snapshot is to be saved to the RPFS, and assuming that the system has 1G of free memory space to support the start of an iterative in-place saving flow (which may exponentially compress more data as more free space emerges), the VM1 snapshot may be compressed (e.g., using QAT) to 1G and saved to the 1G of free memory through the RPFS. Thus, the slot that was originally occupied by the VM1 snapshot is now free, and the compressed VM1 snapshot is stored in the originally free 1G location.
In operation 646, as the 2G of memory space for VM1 was released after operation 644, the VM2 and VM3 snapshots may be compressed in parallel and saved to the 2G of free memory space. After operation 646, 4G of memory has been released by the compression of VM2 and VM3, and therefore operation 648 may compress the VM4, VM5, VM6 and VM7 snapshots in parallel and save them to the 4G of free RPFS space. After operation 648, operation 650 may compress 8 VM snapshots in parallel, then 16, then 32, and so forth. Thus, if a total of N VMs are provided, only about log2 (N) rounds of compression of the snapshots may be needed to substantially reduce the occupied memory space.
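The doubling behavior of this iterative in-place flow can be simulated with a few lines of C; the snapshot count and the assumption that each round exactly doubles the batch that can be compressed in parallel are illustrative only.

#include <stdio.h>

/* Illustrative count of the doubling rounds in FIG. 7: with 1G of free space
 * and 2G snapshots compressing to roughly 1G, each round frees enough space
 * to compress twice as many snapshots in parallel in the next round. */
int main(void)
{
    int total_snapshots = 64; /* N snapshots to compress (example value) */
    int batch = 1;            /* round 1 compresses a single snapshot    */
    int done = 0, rounds = 0;

    while (done < total_snapshots) {
        int this_round = batch;
        if (done + this_round > total_snapshots)
            this_round = total_snapshots - done;

        printf("round %d: compress %d snapshot(s) in parallel\n",
               rounds + 1, this_round);

        done  += this_round;
        batch *= 2; /* freed space doubles the next batch */
        rounds++;
    }

    printf("%d snapshots compressed in %d doubling rounds\n",
           total_snapshots, rounds);
    return 0;
}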
Turning now to FIG. 8, a performance-enhanced computing system 150 is shown. The system 150 may generally be part of an electronic device/platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server), communications functionality (e.g., smart phone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), robotic functionality (e.g., autonomous robot), etc., or any combination thereof. In the illustrated example, the system 150 includes a host processor 152 (e.g., CPU) having an integrated memory controller (IMC) 154 that is coupled to a system memory 156.
The illustrated system 150 also includes an input output (IO) module 158 implemented together with the host processor 152 and a graphics processor 160 (e.g., GPU) on a semiconductor die 162 as a system on chip (SoC) . The illustrated IO module 158 communicates with, for example, a display 164 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display) , a network controller 166 (e.g., wired and/or wireless) , and mass storage 168 (e.g., hard disk drive/HDD, optical disk, solid state drive/SSD, flash memory) .
The host processor 152, the graphics processor 160 and/or the IO module 158 may execute instructions 170 retrieved from the system memory 156 and/or the mass storage 168. In an embodiment, the computing system 150 is operated in a restart stage of the system 150 and the instructions 170 include executable program instructions to perform one or more aspects of the process 100 (FIG. 1), the method 300 (FIG. 2), the process 400 (FIG. 3), the process 450 (FIG. 4), the memory system 500 (FIG. 5), the flow 600 (FIG. 6) and/or the flow 640 (FIG. 7).
FIG. 9 shows a semiconductor apparatus 172 (e.g., chip, die, package). The illustrated apparatus 172 includes one or more substrates 174 (e.g., silicon, sapphire, gallium arsenide) and logic 176 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 174. In an embodiment, the apparatus 172 is operated in an application development stage and the logic 176 performs one or more aspects of the process 100 (FIG. 1), the method 300 (FIG. 2), the process 400 (FIG. 3), the process 450 (FIG. 4), the memory system 500 (FIG. 5), the flow 600 (FIG. 6) and/or the flow 640 (FIG. 7).
The logic 176 may be implemented at least partly in configurable logic or fixed-functionality hardware logic. In one example, the logic 176 includes transistor channel regions that are positioned (e.g., embedded) within the substrate (s) 174. Thus, the interface between the logic 176 and the substrate (s) 174 may not be an abrupt junction. The logic 176 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate (s) 174.
FIG. 10 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core for any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP) , a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 10,  a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 10. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or “logical processor” ) per core.
FIG. 10 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more code 213 instruction(s) to be executed by the processor core 200, wherein the code 213 may implement one or more aspects of the process 100 (FIG. 1), the method 300 (FIG. 2), the process 400 (FIG. 3), the process 450 (FIG. 4), the memory system 500 (FIG. 5), the flow 600 (FIG. 6) and/or the flow 640 (FIG. 7). The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operation corresponding to the convert instruction for execution.
The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions.
After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out of order execution but requires in order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like) . In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.
Although not illustrated in FIG. 10, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.
Referring now to FIG. 11, shown is a block diagram of a computing system 1000 embodiment in accordance with an embodiment. Shown in FIG. 11 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two  processing elements  1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.
The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 11 may be implemented as a multi-drop bus rather than point-to-point interconnect.
As shown in FIG. 11, each of  processing elements  1070 and 1080 may be multicore processors, including first and second processor cores (i.e.,  processor cores  1074a and 1074b and  processor cores  1084a and 1084b) .  Such cores  1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 10.
Each  processing element  1070, 1080 may include at least one shared  cache  1896a, 1896b. The shared  cache  1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the  cores  1074a, 1074b and 1084a, 1084b, respectively. For example, the shared  cache  1896a, 1896b may locally cache data stored in a  memory  1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared  cache  1896a, 1896b may include one or more mid-level caches, such as level 2 (L2) , level 3 (L3) , level 4 (L4) , or other levels of cache, a last level cache (LLC) , and/or combinations thereof.
While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of the processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, additional processing element(s) may include additional processor(s) that are the same as a first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.
The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 11, the MCs 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MCs 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.
The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076 and 1086, respectively. As shown in FIG. 11, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, the I/O subsystem 1090 includes an interface 1092 to couple the I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, a bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternatively, a point-to-point interconnect may couple these components.
In turn, the I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.
As shown in FIG. 11, various I/O devices 1014 (e.g., biometric scanners, speakers, cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device(s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment. The illustrated code 1030 may implement one or more aspects of the process 100 (FIG. 1), the method 300 (FIG. 2), the process 400 (FIG. 3), the process 450 (FIG. 4), the memory system 500 (FIG. 5), the flow 600 (FIG. 6) and/or the flow 640 (FIG. 7). Further, an audio I/O 1024 may be coupled to the second bus 1020 and a battery 1010 may supply power to the computing system 1000.
Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of FIG. 11, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 11 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 11.
Additional Notes and Examples:
Example 1 includes an efficiency-enhanced computing system comprising a processor coupled to a volatile memory and to execute one or more applications, and a memory including a set of executable program instructions, which when executed by the processor, cause the computing system to identify restore data stored in the volatile memory, wherein the restore data is associated with the one or more applications, generate a list associated with the restore data, wherein the list is to include memory locations of the restore data, identify that the computing system is to be rebooted, and after the computing system has been rebooted, cause the one or more applications to be restored based on the restore data and the list.
Example 2 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
Example 3 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to provide a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, store the restore data in volatile memory, and pass the list to the program.
Example 4 includes the computing system of Example 3, wherein the program is to: trigger a warm reboot of the system, supply power to the volatile memory during the warm reboot, preserve the list across the warm reboot, provide a memory location of the list after the warm reboot, and wherein the instructions, when executed, further cause the computing system to receive the memory location of the list from the program, and reconstruct a filesystem associated with the restore data based on the list and the restore data. Example 5 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to conduct a verification of authenticity data associated with the restore data, and reconstruct a filesystem associated with the restore data in response to the verification.
Example 6 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to identify that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and maintain the address range for the restore data.
Example 7 includes the computing system of any one of Examples 1 to 6, wherein the restore data is to be checkpoints or snapshots, and the instructions, when executed, further cause the computing system to execute an iterative compression process that is to include compression of a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, compression of a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and storage of the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
Example 8 includes the computing system of any one of Examples 1 to 6, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to a request from the one or more applications.
Example 9 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to identify restore data stored in a volatile memory of the computing system, wherein the restore data is to be associated with one or more applications, generate a list associated with the restore data, wherein the list is to include memory locations of the restore data, identify that the computing system is to be rebooted, and after the  computing system has been rebooted, cause the one or more applications to be restored based on the restore data and the list.
Example 10 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
Example 11 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to:
provide a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, store the restore data in volatile memory, and pass the list to the program.
Example 12 includes the at least one computer readable storage medium of Example 11, wherein the program is to trigger a warm reboot of the system, supply power to the volatile memory during the warm reboot, preserve the list across the warm reboot, provide a memory location of the list after the warm reboot, and wherein the instructions, when executed, further cause the computing system to receive the memory location of the list from the program, and reconstruct a filesystem associated with the restore data based on the list and the restore data.
Example 13 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to conduct a verification of authenticity data associated with the restore data, and reconstruct a filesystem associated with the restore data in response to the verification.
Example 14 includes the at least one computer readable storage medium of Example 9, wherein the instructions, when executed, further cause the computing system to identify that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and maintain the address range for the restore data.
Example 15 includes the at least one computer readable storage medium of any one of Examples 9 to 14, wherein the restore data is checkpoints or snapshots, wherein the instructions, when executed, further cause the computing system to execute an iterative compression process that is to include compression of a first plurality of the  checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, compression of a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and storage of the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
Example 16 includes the at least one computer readable storage medium of any one of Examples 9 to 14, wherein the instructions, when executed, further cause the computing system to store the restore data into the volatile memory in response to a request from the one or more applications.
Example 17 includes a method of restoring one or more applications operating on a computing system, comprising identifying restore data stored in a volatile memory of the computing system, wherein the restore data is associated with the one or more applications, generating a list associated with the restore data, wherein the list includes memory locations of the restore data, identifying that the computing system is to be rebooted, and after the computing system has been rebooted, causing the one or more applications to be restored based on the restore data and the list.
Example 18 includes the method of Example 17, further comprising storing the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
Example 19 includes the method of Example 17, further comprising providing a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, storing the restore data in volatile memory, and passing the list to the program.
Example 20 includes the method of Example 19, wherein the program triggers a warm reboot of the system, supplies power to the volatile memory during the warm reboot, preserves the list across the warm reboot and provide a memory location of the list after the warm reboot, and the method further comprises receiving the memory location of the list from the program, and reconstructing a filesystem associated with the restore data based on the list and the restore data.
Example 21 includes the method of Example 17, wherein the method further comprises conducting a verification of authenticity data associated with the restore data, and reconstructing a filesystem associated with the restore data in response to the verification.
Example 22 includes the method of Example 17, further comprising identifying that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and maintaining the address range for the restore data.
Example 23 includes the method of any one of Examples 17 to 22, wherein the restore data is checkpoints or snapshots, and the method further comprises executing an iterative compression process that includes compressing a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, compressing a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and storing the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
Example 24 includes the method of any one of Examples 17 to 22, further comprising storing the restore data into the volatile memory in response to a request from the one or more applications.
Example 25 includes an efficiency-enhanced computing system comprising means for identifying restore data stored in a volatile memory of the computing system, wherein the restore data is associated with the one or more applications, means for generating a list associated with the restore data, wherein the list includes memory locations of the restore data, means for identifying that the computing system is to be rebooted, and after the computing system has been rebooted, means for causing the one or more applications to be restored based on the restore data and the list.
Example 26 includes the computing system of Example 25, further comprising means for storing the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
Example 27 includes the computing system of Example 25, further comprising means for providing a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot, means for storing the restore data in volatile memory, and means for passing the list to the program.
Example 28 includes the computing system of Example 27, wherein the program triggers a warm reboot of the system, supplies power to the volatile memory during the warm reboot, preserves the list across the warm reboot and provides a memory location of the list after the warm reboot, and the system further comprises means for receiving the memory location of the list from the program, and means for reconstructing a filesystem associated with the restore data based on the list and the restore data.
Example 29 includes the computing system of Example 25, wherein the system further comprises means for conducting a verification of authenticity data associated with the restore data, and means for reconstructing a filesystem associated with the restore data in response to the verification.
Example 30 includes the computing system of Example 25, further comprising means for identifying that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range, and means for maintaining the address range for the restore data.
Example 31 includes the computing system of any one of Examples 25 to 30, wherein the restore data is checkpoints or snapshots, and the computing system further comprises means for executing an iterative compression process that includes means for compressing a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory, means for compressing a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel, and means for storing the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
Example 32 includes the computing system of any one of Examples 25 to 30, further comprising means for storing the restore data into the volatile memory in response to a request from the one or more applications.
Thus, technology described herein may provide for an enhanced rebooting process that executes with lower latency and increased efficiency. For example, some embodiments may store restore data of one or more applications in a volatile memory and, after a reboot, restore the applications based on the restore data preserved in the volatile memory.
Embodiments are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
As used in this application and in the claims, a list of items joined by the term “one or more of” may mean any combination of the listed terms. For example, the phrase “one or more of A, B or C” may mean A; B; C; A and B; A and C; B and C; or A, B and C.
Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms.  Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.

Claims (24)

  1. An efficiency-enhanced computing system comprising:
    a processor coupled to a volatile memory and to execute one or more applications; and
    a memory including a set of executable program instructions, which when executed by the processor, cause the computing system to:
    identify restore data stored in the volatile memory, wherein the restore data is associated with the one or more applications;
    generate a list associated with the restore data, wherein the list is to include memory locations of the restore data;
    identify that the computing system is to be rebooted; and
    after the computing system has been rebooted, cause the one or more applications to be restored based on the restore data and the list.
  2. The computing system of claim 1, wherein the instructions, when executed, further cause the computing system to:
    store the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  3. The computing system of claim 1, wherein the instructions, when executed, further cause the computing system to:
    provide a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot;
    store the restore data in volatile memory; and
    pass the list to the program.
  4. The computing system of claim 3, wherein the program is to:
    trigger a warm reboot of the system;
    supply power to the volatile memory during the warm reboot;
    preserve the list across the warm reboot;
    provide a memory location of the list after the warm reboot; and
    wherein the instructions, when executed, further cause the computing system to:
    receive the memory location of the list from the program; and
    reconstruct a filesystem associated with the restore data based on the list and the restore data.
  5. The computing system of claim 1, wherein the instructions, when executed, further cause the computing system to:
    conduct a verification of authenticity data associated with the restore data; and
    reconstruct a filesystem associated with the restore data in response to the verification.
  6. The computing system of claim 1, wherein the instructions, when executed, further cause the computing system to:
    identify that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range; and
    maintain the address range for the restore data.
  7. The computing system of any one of claims 1 to 6, wherein the restore data is to be checkpoints or snapshots, and the instructions, when executed, further cause the computing system to execute an iterative compression process that is to include:
    compression of a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory;
    compression of a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel; and
    storage of the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  8. The computing system of any one of claims 1 to 6, wherein the instructions, when executed, further cause the computing system to:
    store the restore data into the volatile memory in response to a request from the one or more applications.
  9. At least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to:
    identify restore data stored in a volatile memory of the computing system, wherein the restore data is to be associated with one or more applications;
    generate a list associated with the restore data, wherein the list is to include memory locations of the restore data;
    identify that the computing system is to be rebooted; and
    after the computing system has been rebooted, cause the one or more applications to be restored based on the restore data and the list.
  10. The at least one computer readable storage medium of claim 9, wherein the instructions, when executed, further cause the computing system to:
    store the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  11. The at least one computer readable storage medium of claim 9, wherein the instructions, when executed, further cause the computing system to:
    provide a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot;
    store the restore data in volatile memory; and
    pass the list to the program.
  12. The at least one computer readable storage medium of claim 11, wherein the program is to:
    trigger a warm reboot of the system;
    supply power to the volatile memory during the warm reboot;
    preserve the list across the warm reboot;
    provide a memory location of the list after the warm reboot; and
    wherein the instructions, when executed, further cause the computing system to:
    receive the memory location of the list from the program; and
    reconstruct a filesystem associated with the restore data based on the list and the restore data.
  13. The at least one computer readable storage medium of claim 9, wherein the instructions, when executed, further cause the computing system to:
    conduct a verification of authenticity data associated with the restore data; and
    reconstruct a filesystem associated with the restore data in response to the verification.
  14. The at least one computer readable storage medium of claim 9, wherein the instructions, when executed, further cause the computing system to:
    identify that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range; and
    maintain the address range for the restore data.
  15. The at least one computer readable storage medium of any one of claims 9 to 14, wherein the restore data is checkpoints or snapshots, wherein the instructions, when executed, further cause the computing system to execute an iterative compression process that is to include:
    compression of a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory;
    compression of a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel; and
    storage of the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  16. The at least one computer readable storage medium of any one of claims 9 to 14, wherein the instructions, when executed, further cause the computing system to:
    store the restore data into the volatile memory in response to a request from the one or more applications.
  17. A method of restoring one or more applications operating on a computing system, comprising:
    identifying restore data stored in a volatile memory of the computing system, wherein the restore data is associated with the one or more applications;
    generating a list associated with the restore data, wherein the list includes memory locations of the restore data;
    identifying that the computing system is to be rebooted; and
    after the computing system has been rebooted, causing the one or more applications to be restored based on the restore data and the list.
  18. The method of claim 17, further comprising:
    storing the restore data into the volatile memory in response to one or more of a software update, a firmware update or a restart of the computing system.
  19. The method of claim 17, further comprising:
    providing a notification to a program that is to reboot the computing system, wherein the notification is to indicate the memory locations and that the restore data is to be preserved across the reboot;
    storing the restore data in volatile memory; and
    passing the list to the program.
  20. The method of claim 19, wherein the program triggers a warm reboot of the system, supplies power to the volatile memory during the warm reboot, preserves the list across the warm reboot and provides a memory location of the list after the warm reboot; and
    the method further comprises:
    receiving the memory location of the list from the program; and
    reconstructing a filesystem associated with the restore data based on the list and the restore data.
  21. The method of claim 17, wherein the method further comprises:
    conducting a verification of authenticity data associated with the restore data; and
    reconstructing a filesystem associated with the restore data in response to the verification.
  22. The method of claim 17, further comprising:
    identifying that a first memory address space is to be reported as a second memory address space that is to be greater than the first memory address space by an address range; and
    maintaining the address range for the restore data.
  23. The method of any one of claims 17 to 22, wherein the restore data is checkpoints or snapshots, and the method further comprises executing an iterative compression process that includes:
    compressing a first plurality of the checkpoints or the snapshots stored in the volatile memory in parallel to free storage space of the volatile memory;
    compressing a second plurality of the checkpoints or the snapshots stored in the volatile memory in parallel; and
    storing the compressed second plurality of checkpoints or the snapshots into the free storage space of the volatile memory, wherein the second plurality is to be greater than the first plurality.
  24. The method of any one of claims 17 to 22, further comprising:
    storing the restore data into the volatile memory in response to a request from the one or more applications.
PCT/CN2020/118297 2020-09-28 2020-09-28 Application restore based on volatile memory storage across system resets Ceased WO2022061859A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/118297 WO2022061859A1 (en) 2020-09-28 2020-09-28 Application restore based on volatile memory storage across system resets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/118297 WO2022061859A1 (en) 2020-09-28 2020-09-28 Application restore based on volatile memory storage across system resets

Publications (1)

Publication Number Publication Date
WO2022061859A1 true WO2022061859A1 (en) 2022-03-31

Family

ID=80844863

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118297 Ceased WO2022061859A1 (en) 2020-09-28 2020-09-28 Application restore based on volatile memory storage across system resets

Country Status (1)

Country Link
WO (1) WO2022061859A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070162779A1 (en) * 2006-01-12 2007-07-12 Microsoft Corporation Capturing and restoring application state after unexpected application shutdown
US20130346793A1 (en) * 2010-12-13 2013-12-26 Fusion-Io, Inc. Preserving data of a volatile memory
US9767015B1 (en) * 2013-11-01 2017-09-19 Amazon Technologies, Inc. Enhanced operating system integrity using non-volatile system memory
US20150178097A1 (en) * 2013-12-20 2015-06-25 Microsoft Corporation Memory-Preserving Reboot
US20170132070A1 (en) * 2014-05-12 2017-05-11 International Business Machines Corporation Restoring an application from a system dump file

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240028335A1 (en) * 2022-07-19 2024-01-25 Microsoft Technology Licensing, Llc Application state synchronization across computing environments to an alternate application
US20240419434A1 (en) * 2023-06-16 2024-12-19 Dell Products L.P. Firmware distribution method for an information handling system
US12474919B2 (en) * 2023-06-16 2025-11-18 Dell Products L.P. Firmware distribution method for an information handling system

Similar Documents

Publication Publication Date Title
US11556327B2 (en) SOC-assisted resilient boot
US8627012B1 (en) System and method for improving cache performance
CN103098043B (en) Method and system for on-demand virtual machine image streaming
US8930947B1 (en) System and method for live migration of a virtual machine with dedicated cache
US9235524B1 (en) System and method for improving cache performance
US9563513B2 (en) O(1) virtual machine (VM) snapshot management
US9811276B1 (en) Archiving memory in memory centric architecture
US20210064234A1 (en) Systems, devices, and methods for implementing in-memory computing
US11354233B2 (en) Method and system for facilitating fast crash recovery in a storage device
US10496492B2 (en) Virtual machine backup with efficient checkpoint handling based on a consistent state of the virtual machine of history data and a backup type of a current consistent state of the virtual machine
US12117908B2 (en) Restoring persistent application data from non-volatile memory after a system crash or system reboot
US10474539B1 (en) Browsing federated backups
US10678431B1 (en) System and method for intelligent data movements between non-deduplicated and deduplicated tiers in a primary storage array
US10216630B1 (en) Smart namespace SSD cache warmup for storage systems
US10705733B1 (en) System and method of improving deduplicated storage tier management for primary storage arrays by including workload aggregation statistics
US11301338B2 (en) Recovery on virtual machines with existing snapshots
US10180800B2 (en) Automated secure data and firmware migration between removable storage devices that supports boot partitions and replay protected memory blocks
WO2022061859A1 (en) Application restore based on volatile memory storage across system resets
US11847030B2 (en) Prioritizing virtual machines for backup protection at a virtual machine disk level
US9053033B1 (en) System and method for cache content sharing
US9009416B1 (en) System and method for managing cache system content directories
US11513902B1 (en) System and method of dynamic system resource allocation for primary storage systems with virtualized embedded data protection
US12481506B2 (en) Embedded payload metadata signatures for tracking dispersed basic input output system components during operating system and pre-boot operations
EP4180936B1 (en) Virtualized system and method of controlling access to nonvolatile memory device in virtualization environment
US11221985B2 (en) Metadata space efficient snapshot operation in page storage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20954710

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20954710

Country of ref document: EP

Kind code of ref document: A1