US20180074971A1 - Ddr storage adapter - Google Patents
- Publication number: US20180074971A1
- Application number: US15/262,434
- Authority
- US
- United States
- Prior art keywords
- memory
- buffer
- pages
- dimm
- physical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F12/1009—Address translation using page tables, e.g. page table structures
- G06F12/0871—Allocation or management of cache space
- G06F12/1045—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
- G06F12/12—Replacement control
- G06F12/122—Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
- G06F12/128—Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
- G06F2212/1024—Latency reduction
- G06F2212/65—Details of virtual memory and virtual address translation
- G06F2212/69—
- G06F2212/70—Details relating to dynamic memory management
Definitions
- the memory buffer comprises at least one of a host buffer and a DIMM buffer.
- the mapping includes detecting if a physical page of the DIMM buffer is available, and if the physical page of the DIMM buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the DIMM buffer.
- the mapping includes detecting if a physical page of the host buffer is available, and if the physical page of the host buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the host buffer.
- FIG. 6 is a flowchart of method steps for enabling software application access to a DASS, according to one embodiment of the invention.
Abstract
Description
- The present invention generally relates to adapting storage technologies that are not intrinsically compatible with the Double Data Rate (DDR) memory interface.
- Today, operating systems (OSs) provide software mechanisms to allow virtual memory backed by traditional physical block access storage devices such as hard disk drives (HDDs) and solid state drives (SSDs). These mechanisms allow for expanded application memory availability by making the available physical dynamic random-access memory (DRAM) of a host system act as a cache for a much larger traditional physical block access storage device (i.e. HDDs or SSDs). However, traditional physical block access storage devices such as HDDs and SSDs remain reliant on legacy communications interfaces, such as Serial AT Attachment (SATA) with the overhead of a storage software stack to perform input/output (I/O) operations.
- Modern storage technologies can produce storage devices whose access latency is significantly lower than traditional spinning disk storage devices (i.e. HDDs) and even flash-based storage devices. However, these modern storage technologies have not yet achieved a low-enough latency to render them compatible with DDR specifications. The latency overhead of the software stack that current OSs have in place to support block and file system access is disproportionate and acts as a significant performance penalty for these modern low-latency storage devices.
- What is needed, therefore, is an improved computing environment having access to new, high-performance persistent memory technologies over the memory interface, without modification of existing standards or host system software, and without the overhead of a storage software stack to perform I/O operations.
- In one embodiment, a method of accessing a persistent memory over a memory interface includes allocating a virtual address range comprising virtual memory pages to be associated with physical pages of a memory buffer and marking each page table entry associated with the virtual address range as not having a corresponding one of the physical pages of the memory buffer. The method further includes generating a page fault when one or more of the virtual memory pages within the virtual address range is accessed, and mapping page table entries of the virtual memory pages to the physical pages of the memory buffer. The method further includes transferring data between a physical page of the persistent memory and one of the physical pages of the memory buffer mapped to a corresponding one of the virtual memory pages.
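The claimed sequence — allocate a virtual range with no backing pages, fault on first access, install a buffer-page mapping, then transfer data — can be illustrated with a short sketch. All class and variable names here are invented for the example; this is not the patent's implementation.

```python
# Illustrative sketch of the fault-driven mapping flow: every PTE in the
# allocated virtual range starts "not present", the first access faults,
# and the fault handler installs a free memory-buffer page.

class PageTableEntry:
    def __init__(self):
        self.present = False       # marked "physical page not present"
        self.physical_page = None

class DemoMapper:
    def __init__(self, num_virtual_pages, buffer_pages):
        # Allocate the virtual range; every PTE starts out not-present.
        self.ptes = [PageTableEntry() for _ in range(num_virtual_pages)]
        self.free_buffer_pages = list(buffer_pages)
        self.faults = 0

    def access(self, vpage):
        pte = self.ptes[vpage]
        if not pte.present:        # the MMU would raise a page fault here
            self.faults += 1
            # Fault handler: map a free buffer page; the data transfer
            # between persistent memory and this buffer page would occur
            # before the access is retried.
            pte.physical_page = self.free_buffer_pages.pop(0)
            pte.present = True
        return pte.physical_page

mapper = DemoMapper(num_virtual_pages=4, buffer_pages=[10, 11, 12, 13])
first = mapper.access(0)   # faults once, then maps buffer page 10
second = mapper.access(0)  # hit: no further fault
```

A second access to the same virtual page completes without faulting, which is what makes subsequent Load/Store instructions run at memory speed.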
- In one embodiment, the persistent memory has a latency that is higher than a maximum specified latency of the memory interface. In one embodiment, the memory interface is a double data rate (DDR)-compliant interface. In one embodiment, the persistent memory comprises non-volatile flash memory devices. In one embodiment, the persistent memory comprises magnetic recording media. In one embodiment, the persistent memory comprises a dual in-line memory module (DIMM)-attached storage subsystem (DASS). In one embodiment, the DASS comprises an SSD. In another embodiment, the DASS comprises an HDD. In yet a further embodiment, the DASS comprises a solid state hybrid drive (SSHD).
- In one embodiment, the memory buffer comprises at least one of a host buffer and a DIMM buffer. In one embodiment, the mapping includes detecting if a physical page of the DIMM buffer is available, and if the physical page of the DIMM buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the DIMM buffer. In yet a further embodiment, the mapping includes detecting if a physical page of the host buffer is available, and if the physical page of the host buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the host buffer.
- In one embodiment, if the physical page of the host buffer is unavailable, the method further includes evicting a selected one of the physical pages of the DIMM buffer and updating the page table entry of the one of the virtual memory pages with the selected one of the physical pages of the DIMM buffer. In one embodiment, evicting the selected one of the physical pages of the DIMM buffer includes determining whether the selected one of the physical pages of the DIMM buffer corresponds to a dirty page table entry and, if so, transferring data from the selected one of the physical pages of the DIMM buffer to a physical page of the persistent memory.
- In one embodiment, the selected one of the physical pages of the DIMM buffer is selected using an eviction algorithm. In one embodiment, the eviction algorithm is one of a Least Recently Used (LRU) algorithm, an Adaptive Replacement Cache (ARC) algorithm, or a Least Frequently Used (LFU) algorithm.
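Of the policies named above, LRU is the simplest to sketch; ARC or LFU could be substituted. The pool below is a generic illustration, not code from the patent.

```python
from collections import OrderedDict

# Minimal LRU victim selection over buffer pages: the least recently
# touched page is evicted when the pool is full.

class LRUBufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # insertion/touch order = recency

    def touch(self, page):
        self.pages.move_to_end(page)        # mark as most recently used

    def insert(self, page, data=None):
        victim = None
        if len(self.pages) >= self.capacity:
            # The least recently used entry sits at the front.
            victim, _ = self.pages.popitem(last=False)
        self.pages[page] = data
        return victim                        # caller writes back if dirty

pool = LRUBufferPool(capacity=2)
pool.insert("A")
pool.insert("B")
pool.touch("A")            # "A" becomes most recent; "B" is now LRU
victim = pool.insert("C")  # evicts "B"
```

Returning the victim rather than discarding it lets the caller decide whether a write-back to the persistent memory is needed, matching the dirty-page handling described above.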
- In one embodiment, a computer system includes a memory management unit communicatively coupled to a memory buffer having physical pages, a memory interface controller, and a persistent memory via a memory interface. The memory management unit is configured to generate a page fault in response to a request to access one or more virtual memory pages of a virtual address range not having a corresponding one or more of the physical pages of the memory buffer. The memory interface controller is configured to transfer data between a physical page of the persistent memory and one of the physical pages of the memory buffer corresponding to one of the one or more virtual memory pages.
- In one embodiment, the memory buffer comprises at least one of a host buffer and a DIMM buffer. In one embodiment, the persistent memory has a latency that is higher than a maximum specified latency of the memory interface. In one embodiment, the memory interface is a DDR-compliant interface. In one embodiment, the persistent memory comprises non-volatile flash memory devices. In one embodiment, the persistent memory comprises magnetic recording media. In one embodiment, the persistent memory comprises a DASS. In one embodiment, the DASS comprises an SSD. In another embodiment, the DASS comprises an HDD. In yet a further embodiment, the DASS comprises a solid state hybrid drive (SSHD).
- FIG. 1 is a block diagram of a host system, according to one embodiment of the invention.
- FIG. 2 is a block diagram of hardware and software components enabling access to a DASS, according to one embodiment of the invention.
- FIG. 3 is a block diagram of software applications accessing a DASS over a memory interface, according to one embodiment of the invention.
- FIG. 4 is a flowchart of method steps to trigger a page fault in order to map a requested virtual address range to access a DASS, according to one embodiment of the invention.
- FIG. 5 is a flowchart of method steps for updating PTEs to un-map a virtual address range previously mapped to access a DASS, according to one embodiment of the invention.
- FIG. 6 is a flowchart of method steps for enabling software application access to a DASS, according to one embodiment of the invention.
- FIG. 1 is a block diagram of a host system 100, according to one embodiment of the invention. As shown in FIG. 1, the host system 100 comprises a Central Processing Unit (CPU) 104 having a Memory Management Unit (MMU) 106. The MMU 106 is communicatively coupled to a plurality of DIMM connectors 108 (108a, 108b, 108c, and 108d) and is responsible for managing access to memory connected to the DIMM connectors 108. In one embodiment, the MMU 106 is communicatively coupled to the plurality of DIMM connectors 108 via a DDR-compliant interface. While only four DIMM connectors 108a, 108b, 108c, and 108d are shown in FIG. 1 for simplicity, in other embodiments, the number of DIMM connectors 108 may be one or more.
- In one embodiment, one or more memory modules may be connected to the DIMM connectors 108a, 108b, 108c, or 108d. The memory modules may comprise any suitable memory devices, including DRAM, static random-access memory (SRAM), magnetoresistive random-access memory (MRAM), or the like, and may serve as a host memory buffer for the host system 100. One or more DASSs may also be attached to the DIMM connectors 108a, 108b, 108c, or 108d. The DASSs may serve as a persistent memory for applications running on the host system 100. The DASSs may include one or more DIMM-connected memory buffers. The one or more DIMM buffers may be any suitable memory devices, including DRAM, SRAM, MRAM, etc. In one embodiment, the DASSs comprise non-volatile flash memory. In one embodiment, the DASSs comprise magnetic recording media. In one embodiment, the DASSs comprise an SSD. In another embodiment, the DASSs comprise an HDD. In yet a further embodiment, the DASSs comprise an SSHD.
- FIG. 2 is a block diagram of hardware and software components enabling access to a DASS 218, according to one embodiment of the invention. As shown in FIG. 2, the hardware components include a CPU/MMU 210 communicatively coupled to a pool of I/O memory buffers 212 that includes DIMM buffers 215 associated with the DASS 218 mounted to one or more DIMM connectors (such as the DIMM connectors 108 shown in FIG. 1), host buffers 217 (also mounted to one or more DIMM connectors, such as the DIMM connectors 108 shown in FIG. 1), and a controller command buffer 213 of the DIMM controller 216. The DIMM interface 220 exposes the DIMM controller command buffer 213 used by the software components to communicate with the DIMM controller 216. The DIMM buffers 215, the DIMM controller 216, and the DASS 218 are connected to the DIMM interface 220.
- In one embodiment, the DIMM interface 220 operates in accordance with the DDR standards, such as DDR3 or DDR4. In one embodiment, the host buffers 217 may comprise DDR memory modules. In one embodiment, the DIMM buffers 215 comprise DDR memory modules. In other embodiments, the host buffers 217 and the DIMM buffers 215 may comprise any suitable volatile memory modules capable of operating according to the DDR standards. In one embodiment, the DASS 218 comprises non-volatile flash memory. In one embodiment, the DASS 218 comprises magnetic recording media. In one embodiment, the DASS 218 comprises an SSD. In another embodiment, the DASS 218 comprises an HDD. In yet a further embodiment, the DASS 218 comprises an SSHD.
- In operation, software applications 202 allocate a virtual address range for use by the software applications 202. The software applications 202 may use Load/Store CPU instructions over the DIMM interface 220. The software applications 202 call the OS Agent 206 to provide memory-mapping functions to map some or all of the DASS 218 physical address space to the allocated virtual memory address range. This is because virtual memory is, as its name implies, virtual: it must be backed by a physical memory. The relationship between the virtual memory address space and the physical memory address space is stored as Page Table Entries (PTEs) 208.
- Given that the DASS 218 is not inherently compliant with the DIMM interface 220 standards (e.g. the DDR standards), the software applications 202 cannot directly access the physical address spaces of the DASS 218 over the DIMM interface 220. Rather, the OS Agent 206 acts on a page fault that is generated by the CPU/MMU 210 in response to an attempt by the software applications 202 to access a location within the allocated virtual address range that is not yet mapped to a physical address space. The CPU/MMU 210 will generate a page fault signal when a Load/Store instruction to read or write a virtual memory address fails because the PTE 208 associated with the address is indicated as "not valid" or "physical page not present," or the like, and the OS Agent 206 receives the page fault signal and is responsible for resolving the fault.
- Following the page fault, the OS Agent 206 allocates one or more pages of the memory buffers 212 to be used for mapping the virtual address range. The OS Agent 206 may allocate pages of the memory buffers 212 in a number of different ways. In one embodiment, the OS Agent 206 allocates pages of the memory buffers 212 based on its record of current mappings in effect using the memory buffers 212. In another embodiment, the OS Agent 206 allocates pages of the memory buffers 212 based on a free memory buffer page list.
- In yet another embodiment, the OS Agent 206 allocates pages of the memory buffers 212 using an eviction algorithm to evict a previous mapping of the memory buffers 212. The eviction algorithm selects a location in the memory buffers 212 to be re-used for mapping the virtual address range. The eviction algorithm may be any suitable algorithm, such as an LRU algorithm, an ARC algorithm, or an LFU algorithm.
- Once the memory buffers 212 have been allocated, the OS Agent 206 sends a request to the DIMM controller proxy service 204 to write out dirty (modified but unwritten) page(s) from the allocated memory buffers 212 to the DASS 218 if a previous mapping is being evicted (i.e. the allocated pages of the memory buffers 212 were previously mapped to another virtual address space), and to read the page(s) from the DASS 218 into the allocated memory buffers 212.
- The DIMM controller proxy service 204 uses a DIMM controller protocol to communicate with the DIMM controller 216 via the DIMM controller command buffer 213. In one embodiment, if the pages of the memory buffers 212 allocated to the virtual memory address space requested by the software applications 202 belong to the DIMM buffers 215, a DASS interface function copies the page(s) to/from the specified DIMM buffer 215. In one embodiment, if the pages of the memory buffers 212 allocated to the virtual memory address space requested by the software applications 202 belong to the host buffers 217, the DIMM controller 216 copies the page to/from an internal field-programmable gate array (FPGA) buffer. In other embodiments, the DIMM controller 216 may comprise an application-specific integrated circuit (ASIC) or a TOSHIBA FIT FAST STRUCTURED ARRAY (FFSA).
- Once the page(s) have been copied to/from the DASS 218, the DIMM controller proxy service 204 completes the mapping operation by causing the page to be copied from the FPGA buffer to the host buffer, if the memory buffers 212 allocated to the virtual memory address space are host buffer 217 pages, and sends a response to the OS Agent 206 that the mapping operation has been completed. The OS Agent 206 updates the PTEs 208. In one embodiment, where the memory buffers 212 were allocated using an eviction algorithm, the OS Agent 206 also invalidates any PTEs that referred to memory buffers 212 that were selected for eviction. The OS Agent 206 then updates the PTEs to associate the page(s) of the requested virtual address range for the software applications 202 with the allocated memory buffers 212.
- Now that physical memory buffers 212 are allocated to the virtual address range, the page fault condition has been resolved, and the OS Agent 206 can allow the software applications 202 to resume. By generating the page fault, mapping the virtual memory spaces to system memory buffers 212, such as host buffers 217 or DIMM buffers 215, and copying pages to/from the DASS 218 to the mapped memory buffers 212, the software applications 202 may access the DASS 218 over the memory interface without modifying existing standards or the software applications 202. Thus, persistent memory storage devices that would otherwise be incompatible because their latency exceeds the maximum specified by a host system's memory interface can still be accessed via the host system's memory interface without the overhead of a legacy storage software stack to perform I/O operations.
- While FIG. 2 shows and describes an embodiment of the present invention in the form of a single software application 202 and a single DASS 218, in other embodiments, a plurality of software applications 202 may access the DASS 218 or a plurality of DASSs 218 in the manner shown and described in connection with FIG. 2.
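The buffer-allocation sequence just described — take a page from the free list when one exists, otherwise evict an existing mapping and write its contents back to the DASS if dirty — can be condensed into a short sketch. The dictionaries and the `select_victim` callback below are illustrative stand-ins, not structures from the patent.

```python
# Sketch of free-list-first allocation with eviction and dirty write-back.

def allocate_buffer_page(free_list, mappings, dass, select_victim):
    if free_list:
        return free_list.pop()                 # no eviction required
    victim_vaddr = select_victim(mappings)     # e.g. an LRU/ARC/LFU choice
    victim = mappings.pop(victim_vaddr)        # invalidate the old mapping
    if victim["dirty"]:
        dass[victim_vaddr] = victim["data"]    # write dirty data out first
    return victim["page"]                      # buffer page is now reusable

dass = {}
mappings = {"v1": {"page": 100, "dirty": True, "data": "old"},
            "v2": {"page": 101, "dirty": False, "data": None}}
# With an empty free list, a victim's page is reused after write-back.
page = allocate_buffer_page([], mappings, dass, lambda m: "v1")
```

The write-back happens before the page is handed out, mirroring the order in which the OS Agent and the DIMM controller proxy service cooperate above.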
- FIG. 3 is a block diagram of software applications 302 and 304 accessing a DASS 320 over a memory interface 300, according to one embodiment of the invention. As shown in FIG. 3, software applications 302 and 304 have a virtual address space 306 and 308, respectively. Software application 302 allocates a virtual address range 303 and software application 304 allocates a virtual address range 305. The virtual address ranges 303 and 305 have corresponding PTEs 315 of page table 310 and PTEs 313 of page table 312, respectively. The PTEs 315 and 313 corresponding to the virtual address ranges 303 and 305 are mapped to DIMM buffers 318 of a DIMM-connected volatile memory device 316 associated with the DASS 320.
- The mapping of the DIMM buffers 318 to the virtual address ranges 303 and 305 may be accomplished using page faults in the manner shown and described in FIG. 2. As previously described in connection with FIG. 2, the mapping of the virtual address ranges 303 and 305 to physical memory addresses of a memory buffer is not limited to the DIMM buffers 318, and can be, for example, host buffers 314. The mapped DIMM buffers 318 can then be copied to/from pages 322 of the DASS 320, in effect allowing the software applications 302 and 304 to access the DASS 320.
- FIG. 4 is a flowchart of method steps 400 to trigger a page fault in order to map a requested virtual address range to access a DASS, according to one embodiment of the invention. At step 402, an OS Agent, for example the OS Agent 206 shown and described in FIG. 2, above, receives a request for a virtual address range from a software application. At step 404, the OS Agent marks each PTE within the requested virtual address range as "physical page not present" or "not valid," or the like. In this manner, a page fault will be triggered when the application attempts to read or write to the virtual address range, due to the lack of corresponding physical pages backing the virtual address range, allowing the OS Agent to map the virtual address range to memory buffer page(s) as shown and described in FIG. 2.
- FIG. 5 is a flowchart of method steps 500 for updating PTEs to un-map a virtual address range previously mapped to access a DASS, according to one embodiment of the invention. At step 502, a PTE's physical buffer page corresponding to the virtual address range is determined to be dirty or not. As previously discussed, a dirty page refers to a modified page that has not been written to the DASS. If the PTE's physical buffer page is not dirty, then at step 504, it is determined whether the PTE's corresponding physical buffer page is a host system DRAM page (i.e. a host buffer) or not. As previously discussed, the host buffer can comprise any suitable memory device, including SRAM and MRAM, and is not limited to DRAM. If not, then the physical buffer page is a DIMM buffer page, and at step 512, the DIMM buffer page corresponding to the PTE is placed back on a free buffer list. If yes, then at step 514, the host system DRAM page is released from the PTE. At step 516, the PTE is then marked "physical page not present" or "not valid," or the like.
- Alternatively, if at step 502 the PTE's physical buffer page is dirty, then at step 506, it is determined whether the PTE's corresponding physical buffer page is a host system DRAM page or not. If not, then the physical buffer page is a DIMM buffer page, and at step 508, data from the DIMM buffer page is moved to a page of the DASS, and the DIMM buffer page is placed back on the free buffer list at step 512. If yes, then at step 510, data from the host system DRAM page is moved to a page of the DASS, and the host system DRAM page is released from the PTE at step 514. Again, at step 516, the PTE is then marked "physical page not present" or "not valid," or the like.
- The method steps 500 are then repeated for every other PTE corresponding to the requested virtual address range.
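The FIG. 5 handling of a single PTE can be sketched as follows: flush to the DASS if dirty (steps 508/510), return a DIMM buffer page to the free list or release a host page (steps 512/514), then mark the PTE not present (step 516). The data structures below are illustrative, not the patent's.

```python
# Sketch of un-mapping one PTE per the FIG. 5 flow.

def unmap_pte(pte, free_dimm_list, dass):
    if pte["dirty"]:
        dass[pte["page"]] = pte["data"]     # move dirty data to a DASS page
    if not pte["is_host_page"]:
        free_dimm_list.append(pte["page"])  # DIMM page back on the free list
    # A host system DRAM page would simply be released back to the OS here.
    pte["present"] = False                  # "physical page not present"
    pte["page"] = None

dass = {}
free_list = []
pte = {"dirty": True, "page": 7, "data": "payload",
       "is_host_page": False, "present": True}
unmap_pte(pte, free_list, dass)
```

Note the ordering: the write-back uses the page reference before the PTE is cleared, matching the step sequence in the flowchart.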
- FIG. 6 is a flowchart of method steps 600 for enabling software application access to a DASS, according to one embodiment of the invention. At step 602, a software application requests access to the DASS using a Load/Store CPU instruction. At step 604, the CPU's MMU locates the PTEs corresponding to the requested virtual address range. At step 606, the MMU determines whether the PTEs corresponding to the requested virtual address range have been marked as "physical page not present" or "not valid," or the like, or whether they have been marked as "physical page present" or "valid," or the like. If the PTEs have been marked as "physical page present," then at step 608, the Load/Store instruction completes successfully and the software application reads or writes data to the physical pages corresponding to the virtual address range.
- If the PTEs have been marked as "physical page not present," then as previously described, the MMU generates a page fault signal at step 610. In this case the OS Agent takes over, and at step 612, it is determined whether the page fault resides within the virtual address range to be associated with the DASS. In one embodiment, the OS Agent can be, for example, the OS Agent 206 shown and described in connection with FIG. 2. If the page fault does not reside within the virtual address range to be associated with the DASS, then at step 614, the page fault is passed to another system component to handle, as it is unrelated to the request by the software application to access the DASS. If, however, the page fault does in fact correspond to the virtual address range to be associated with the DASS, then at step 616, it is determined whether a free DIMM buffer page is available.
- If a free DIMM buffer page is unavailable, then at step 618, it is determined whether the host system DRAM usage limit has been reached (i.e. there are no available host buffer pages to be allocated to the requested virtual address range). If not, and there are free host system DRAM buffer pages, then at step 622, the free host system DRAM buffer pages are allocated to the virtual address range and data (either write or read, depending on the Load/Store CPU instruction) is moved into the allocated host system DRAM buffer pages. At step 636, the PTEs are updated with the allocated host system DRAM buffer pages and marked as "physical page present" or "valid," or the like. Now that the virtual address range is backed by physical buffer pages, the Load/Store CPU instruction is re-tried at step 602, and at step 604 the MMU locates the PTEs corresponding to the virtual address range, which have now been marked as "physical page present"; the method proceeds to step 606 and on to step 608, where the instruction completes successfully and is carried out.
- Alternatively, if at step 618 it is determined that the host system DRAM usage limit has been reached, then at step 624, an eviction algorithm is used to identify one or more "victim" DIMM page buffers (depending on the virtual address range) to evict. As previously mentioned, the eviction algorithm may be any suitable algorithm, such as an LRU algorithm, an ARC algorithm, or an LFU algorithm. Once the victim DIMM page buffers have been identified, at step 630, the PTEs corresponding to the victim DIMM page buffers are marked as "physical page not present" or "not valid," or the like. At step 632, it is determined whether the victim DIMM page buffers are dirty. If they are, then at step 634, the data from the victim DIMM page buffers is moved to page buffers of the DASS, and at step 636, the PTEs corresponding to the virtual address range are updated with the allocated victim DIMM buffer pages and marked as "physical page present" or "valid," or the like. If the victim DIMM page buffers are determined to be not dirty at step 632, then there is no need to move the data at step 634, and at step 636, the PTEs corresponding to the virtual address range are updated with the allocated victim DIMM buffer pages and marked as "physical page present" or "valid," or the like.
- Again, now that the virtual address range is backed by physical buffer pages, the Load/Store CPU instruction is re-tried at step 602, and at step 604 the MMU locates the PTEs corresponding to the virtual address range, which have now been marked as "physical page present"; the method proceeds to step 606 and on to step 608, where the instruction completes successfully and is carried out. Following the method steps 600, data can be written to or read from the allocated memory buffers by the DASS, as shown in FIG. 3, for example, in effect enabling the software application to access the DASS over the memory interface.
- Other objects, advantages and embodiments of the various aspects of the present invention will be apparent to those who are skilled in the field of the invention and are within the scope of the description and the accompanying Figures. For example, but without limitation, structural or functional elements might be rearranged, or method steps reordered, consistent with the present invention. Similarly, principles according to the present invention could be applied to other examples, which, even if not specifically described here in detail, would nevertheless be within the scope of the present invention.
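The FIG. 6 decision flow for a single access can be condensed into one function: a present PTE completes immediately; otherwise the fault handler takes a free DIMM page, then a free host page, then evicts a mapped victim (writing it back if dirty), and the instruction is retried. Everything below is a simplified illustration, not the patent's implementation; a real system would use one of the named eviction policies instead of the first-mapped stand-in.

```python
# End-to-end sketch of the FIG. 6 flow for one Load instruction.

def load(vaddr, ptes, free_dimm, free_host, dass):
    pte = ptes.setdefault(vaddr, {"present": False, "page": None, "dirty": False})
    if pte["present"]:
        return pte["page"]                       # step 608: completes directly
    # Page fault path (step 610 onward).
    if free_dimm:
        pte["page"] = free_dimm.pop()            # free DIMM buffer page
    elif free_host:
        pte["page"] = free_host.pop()            # free host buffer page
    else:
        # Evict a mapped victim; a real system would use LRU/ARC/LFU here.
        victim_addr = next(a for a, p in ptes.items() if p["present"])
        victim = ptes[victim_addr]
        if victim["dirty"]:
            dass[victim_addr] = victim["page"]   # write back before reuse
        victim["present"] = False                # step 630
        pte["page"] = victim["page"]
    pte["present"] = True                        # step 636
    return load(vaddr, ptes, free_dimm, free_host, dass)  # retry (step 602)

ptes, dass = {}, {}
first = load("a", ptes, [1], [], dass)    # takes the free DIMM page
second = load("b", ptes, [], [], dass)    # no free pages: evicts "a"
```

The retry at the end mirrors the flowchart: once the PTE is present, re-executing the Load takes the fast path and completes.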
Claims (20)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/262,434 US9916256B1 (en) | 2016-09-12 | 2016-09-12 | DDR storage adapter |
| US15/888,483 US10430346B2 (en) | 2016-09-12 | 2018-02-05 | DDR storage adapter |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/262,434 US9916256B1 (en) | 2016-09-12 | 2016-09-12 | DDR storage adapter |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/888,483 Continuation US10430346B2 (en) | 2016-09-12 | 2018-02-05 | DDR storage adapter |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US9916256B1 (en) | 2018-03-13 |
| US20180074971A1 (en) | 2018-03-15 |
Family
ID=61525573
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/262,434 Active US9916256B1 (en) | 2016-09-12 | 2016-09-12 | DDR storage adapter |
| US15/888,483 Active US10430346B2 (en) | 2016-09-12 | 2018-02-05 | DDR storage adapter |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/888,483 Active US10430346B2 (en) | 2016-09-12 | 2018-02-05 | DDR storage adapter |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US9916256B1 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190042460A1 (en) * | 2018-02-07 | 2019-02-07 | Intel Corporation | Method and apparatus to accelerate shutdown and startup of a solid-state drive |
| WO2020061098A1 (en) * | 2018-09-17 | 2020-03-26 | Micron Technology, Inc. | Cache operations in a hybrid dual in-line memory module |
| US10802748B2 (en) * | 2018-08-02 | 2020-10-13 | MemVerge, Inc | Cost-effective deployments of a PMEM-based DMO system |
| US11061609B2 (en) | 2018-08-02 | 2021-07-13 | MemVerge, Inc | Distributed memory object method and system enabling memory-speed data access in a distributed environment |
| US11134055B2 (en) | 2018-08-02 | 2021-09-28 | Memverge, Inc. | Naming service in a distributed memory object architecture |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112835820B (en) * | 2019-11-22 | 2025-09-30 | 北京忆芯科技有限公司 | Method and storage device for quickly accessing HMB |
| KR102400977B1 (en) * | 2020-05-29 | 2022-05-25 | 성균관대학교산학협력단 | Method for processing page fault by a processor |
| CN112463665B (en) * | 2020-10-30 | 2022-07-26 | 中国船舶重工集团公司第七0九研究所 | Switching method and device for multi-channel video memory interleaving mode |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4520441A (en) * | 1980-12-15 | 1985-05-28 | Hitachi, Ltd. | Data processing system |
| US6112285A (en) * | 1997-09-23 | 2000-08-29 | Silicon Graphics, Inc. | Method, system and computer program product for virtual memory support for managing translation look aside buffers with multiple page size support |
| US7475183B2 (en) * | 2005-12-12 | 2009-01-06 | Microsoft Corporation | Large page optimizations in a virtual machine environment |
| US20090113111A1 (en) * | 2007-10-30 | 2009-04-30 | Vmware, Inc. | Secure identification of execution contexts |
| US20110161620A1 (en) * | 2009-12-29 | 2011-06-30 | Advanced Micro Devices, Inc. | Systems and methods implementing shared page tables for sharing memory resources managed by a main operating system with accelerator devices |
| US8719543B2 (en) * | 2009-12-29 | 2014-05-06 | Advanced Micro Devices, Inc. | Systems and methods implementing non-shared page tables for sharing memory resources managed by a main operating system with accelerator devices |
| US8943296B2 (en) * | 2011-04-28 | 2015-01-27 | Vmware, Inc. | Virtual address mapping using rule based aliasing to achieve fine grained page translation |
| US9063877B2 (en) * | 2013-03-29 | 2015-06-23 | Kabushiki Kaisha Toshiba | Storage system, storage controller, and method for managing mapping between local address and physical address |
| US9740637B2 (en) * | 2007-10-30 | 2017-08-22 | Vmware, Inc. | Cryptographic multi-shadowing with integrity verification |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7716411B2 (en) | 2006-06-07 | 2010-05-11 | Microsoft Corporation | Hybrid memory device with single interface |
| US8738840B2 (en) * | 2008-03-31 | 2014-05-27 | Spansion Llc | Operating system based DRAM/FLASH management scheme |
- 2016-09-12: US application 15/262,434, patent US9916256B1 (en), status Active
- 2018-02-05: US application 15/888,483, patent US10430346B2 (en), status Active
Patent Citations (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4520441A (en) * | 1980-12-15 | 1985-05-28 | Hitachi, Ltd. | Data processing system |
| US6112285A (en) * | 1997-09-23 | 2000-08-29 | Silicon Graphics, Inc. | Method, system and computer program product for virtual memory support for managing translation look aside buffers with multiple page size support |
| US7475183B2 (en) * | 2005-12-12 | 2009-01-06 | Microsoft Corporation | Large page optimizations in a virtual machine environment |
| US8555081B2 (en) * | 2007-10-30 | 2013-10-08 | Vmware, Inc. | Cryptographic multi-shadowing with integrity verification |
| US9336033B2 (en) * | 2007-10-30 | 2016-05-10 | Vmware, Inc. | Secure identification of execution contexts |
| US20090113216A1 (en) * | 2007-10-30 | 2009-04-30 | Vmware, Inc. | Cryptographic multi-shadowing with integrity verification |
| US9740637B2 (en) * | 2007-10-30 | 2017-08-22 | Vmware, Inc. | Cryptographic multi-shadowing with integrity verification |
| US8261265B2 (en) * | 2007-10-30 | 2012-09-04 | Vmware, Inc. | Transparent VMM-assisted user-mode execution control transfer |
| US20090113111A1 (en) * | 2007-10-30 | 2009-04-30 | Vmware, Inc. | Secure identification of execution contexts |
| US8607013B2 (en) * | 2007-10-30 | 2013-12-10 | Vmware, Inc. | Providing VMM access to guest virtual memory |
| US20090113110A1 (en) * | 2007-10-30 | 2009-04-30 | Vmware, Inc. | Providing VMM Access to Guest Virtual Memory |
| US8819676B2 (en) * | 2007-10-30 | 2014-08-26 | Vmware, Inc. | Transparent memory-mapped emulation of I/O calls |
| US9658878B2 (en) * | 2007-10-30 | 2017-05-23 | Vmware, Inc. | Transparent memory-mapped emulation of I/O calls |
| US8719543B2 (en) * | 2009-12-29 | 2014-05-06 | Advanced Micro Devices, Inc. | Systems and methods implementing non-shared page tables for sharing memory resources managed by a main operating system with accelerator devices |
| US20110161620A1 (en) * | 2009-12-29 | 2011-06-30 | Advanced Micro Devices, Inc. | Systems and methods implementing shared page tables for sharing memory resources managed by a main operating system with accelerator devices |
| US8943296B2 (en) * | 2011-04-28 | 2015-01-27 | Vmware, Inc. | Virtual address mapping using rule based aliasing to achieve fine grained page translation |
| US9063877B2 (en) * | 2013-03-29 | 2015-06-23 | Kabushiki Kaisha Toshiba | Storage system, storage controller, and method for managing mapping between local address and physical address |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190042460A1 (en) * | 2018-02-07 | 2019-02-07 | Intel Corporation | Method and apparatus to accelerate shutdown and startup of a solid-state drive |
| US10802748B2 (en) * | 2018-08-02 | 2020-10-13 | MemVerge, Inc | Cost-effective deployments of a PMEM-based DMO system |
| US11061609B2 (en) | 2018-08-02 | 2021-07-13 | MemVerge, Inc | Distributed memory object method and system enabling memory-speed data access in a distributed environment |
| US11134055B2 (en) | 2018-08-02 | 2021-09-28 | Memverge, Inc. | Naming service in a distributed memory object architecture |
| WO2020061098A1 (en) * | 2018-09-17 | 2020-03-26 | Micron Technology, Inc. | Cache operations in a hybrid dual in-line memory module |
| US11169920B2 (en) | 2018-09-17 | 2021-11-09 | Micron Technology, Inc. | Cache operations in a hybrid dual in-line memory module |
| US11561902B2 (en) | 2018-09-17 | 2023-01-24 | Micron Technology, Inc. | Cache operations in a hybrid dual in-line memory module |
Also Published As
| Publication number | Publication date |
|---|---|
| US20180157597A1 (en) | 2018-06-07 |
| US9916256B1 (en) | 2018-03-13 |
| US10430346B2 (en) | 2019-10-01 |
Similar Documents
| Publication | Title |
|---|---|
| US10430346B2 (en) | DDR storage adapter |
| US10152428B1 (en) | Virtual memory service levels |
| US9910602B2 (en) | Device and memory system for storing and recovering page table data upon power loss |
| US9323659B2 (en) | Cache management including solid state device virtualization |
| US8627040B2 (en) | Processor-bus-connected flash storage paging device using a virtual memory mapping table and page faults |
| US9047200B2 (en) | Dynamic redundancy mapping of cache data in flash-based caching systems |
| US10769062B2 (en) | Fine granularity translation layer for data storage devices |
| US11016905B1 (en) | Storage class memory access |
| KR102168193B1 (en) | System and method for integrating overprovisioned memory devices |
| WO2012109679A2 (en) | Apparatus, system, and method for application direct virtual memory management |
| CN107391391A (en) | Method, system, and solid-state drive for implementing data copy in the FTL of a solid-state drive |
| US20170228191A1 (en) | Systems and methods for suppressing latency in non-volatile solid state devices |
| US20220382478A1 (en) | Systems, methods, and apparatus for page migration in memory systems |
| US11449423B1 (en) | Enhancing cache dirty information |
| US9785552B2 (en) | Computer system including virtual memory or cache |
| US20240403241A1 (en) | Systems, methods, and apparatus for cache operation in storage devices |
| US12105968B2 (en) | Systems, methods, and devices for page relocation for garbage collection |
| US20240061786A1 (en) | Systems, methods, and apparatus for accessing data in versions of memory pages |
| US10241906B1 (en) | Memory subsystem to augment physical memory of a computing system |
| JP4792065B2 (en) | Data storage method |
| US12222854B2 (en) | Snapshotting pending memory writes using non-volatile memory |
| TW202520078A | Storage-side page tables for memory systems |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: TOSHIBA MEMORY CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KABUSHIKI KAISHA TOSHIBA;REEL/FRAME:043397/0380 Effective date: 20170706 |
|
| AS | Assignment |
Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAXEY, DAVID;KAMATH, NIDISH;AGRAWAL, VIKAS;SIGNING DATES FROM 20160910 TO 20160916;REEL/FRAME:043836/0860 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: K.K. PANGEA, JAPAN Free format text: MERGER;ASSIGNOR:TOSHIBA MEMORY CORPORATION;REEL/FRAME:055659/0471 Effective date: 20180801 Owner name: KIOXIA CORPORATION, JAPAN Free format text: CHANGE OF NAME AND ADDRESS;ASSIGNOR:TOSHIBA MEMORY CORPORATION;REEL/FRAME:055669/0001 Effective date: 20191001 Owner name: TOSHIBA MEMORY CORPORATION, JAPAN Free format text: CHANGE OF NAME AND ADDRESS;ASSIGNOR:K.K. PANGEA;REEL/FRAME:055669/0401 Effective date: 20180801 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |