US20150212744A1 - Method and system of eviction stage population of a flash memory cache of a multilayer cache system - Google Patents
- Publication number
- US20150212744A1 (application US 14/164,248)
- Authority
- US
- United States
- Prior art keywords
- cache
- memory
- data
- primary cache
- multilayer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
- G06F12/0833—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means in combination with broadcast means (e.g. for invalidation or updating)
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/0897—Caches characterised by their organisation or structure with two or more cache hierarchy levels
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/128—Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1072—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers for memories with random access ports synchronised on clock signal pulse trains, e.g. synchronous memories, self timed memories
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2206/00—Indexing scheme related to dedicated interfaces for computers
- G06F2206/10—Indexing scheme related to storage interfaces for computers, indexing schema related to group G06F3/06
- G06F2206/1014—One time programmable [OTP] memory, e.g. PROM, WORM
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1032—Reliability improvement, data loss prevention, degraded operation etc
- G06F2212/1036—Life time enhancement
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/22—Employing cache memory using specific memory technology
- G06F2212/225—Hybrid cache memory, e.g. having both volatile and non-volatile portions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/62—Details of cache specific to multiprocessor cache arrangements
-
- G06F2212/69—
Definitions
- the schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
- FIG. 1 depicts, in block diagram format, an example of a computer system 100 implementing eviction stage population of a flash memory cache (e.g. a secondary cache) of a multilayer cache, according to some embodiments.
- computer system 100 can include a central processing unit (CPU) 102 .
- CPU 102 can be a hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.
- CPU 102 can be communicatively coupled with a dynamic random-access memory (DRAM) memory device 104 (and/or other type memory device used to store data or programs on a temporary or permanent basis for use in a computer).
- DRAM memory 104 can include a primary cache 112 populated with data from a data storage system (e.g. as indicated with step 112 ) such as a hard disk drive (HDD) and/or remote network storage 108 .
- DRAM memory 104 can be communicatively coupled with a solid-state storage device such as flash memory device 106.
- Additional caches can be stored in various secondary systems such as flash memory device 106 (e.g. secondary cache 116 ).
- primary cache 112 can be analyzed and various pages thereof selected according to one or more specified metrics (e.g. see infra). Accordingly, in some embodiments, the population phase of secondary cache 116 in the multilayer cache system of computer system 100 can be moved from a fetch stage to an eviction stage (e.g. triggered by an eviction process of primary cache 112).
- an eviction process can refer to the process by which old, relatively unused, and/or excessively voluminous data can be dropped from the cache, allowing the cache to remain within a memory budget.
- two or more secondary caches can be populated by a primary cache in a random access memory.
- one secondary cache can be populated during an eviction stage of a primary cache and another secondary cache can be populated based on other metrics and/or triggers (e.g. based on metric and/or triggers that facilitate a ‘big’ data computing process).
- the secondary cache can be remote and reside in other nodes of a distributed database cluster (e.g. infra).
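The write-wear argument above can be illustrated with a small simulation. The following sketch is illustrative only (the function names, the LRU policy, and the trace are assumptions, not taken from the patent): it counts flash writes for the same access trace under fetch-stage population (every primary-cache miss also writes to flash) versus eviction-stage population (flash is written only when a page is actually evicted from the primary cache).

```python
from collections import OrderedDict

def flash_writes(accesses, primary_size, populate_on_fetch):
    """Count flash (secondary-cache) writes for a trace of page accesses.

    With populate_on_fetch=True every primary-cache miss also writes the
    page to flash; with False, a page is written to flash only when it is
    evicted from the (LRU) primary cache.
    """
    primary = OrderedDict()          # LRU primary cache: page -> None
    writes = 0
    for page in accesses:
        if page in primary:
            primary.move_to_end(page)        # refresh recency on a hit
            continue
        if populate_on_fetch:
            writes += 1                      # miss -> immediate flash write
        if len(primary) >= primary_size:
            primary.popitem(last=False)      # evict the LRU page
            if not populate_on_fetch:
                writes += 1                  # eviction-stage population
        primary[page] = None
    return writes

trace = [1, 2, 1, 3, 1, 4, 1, 5, 1, 6]
print(flash_writes(trace, primary_size=3, populate_on_fetch=True))   # 6: every miss
print(flash_writes(trace, primary_size=3, populate_on_fetch=False))  # 3: evictions only
```

Pages still resident in the primary cache when the trace ends are never written to flash under the eviction-stage model, which is one source of the reduced wear the application describes.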
- system 100 can be implemented in a system with SSD cards in a server to layer virtualization methods.
- system 100 can be implemented in a system with a remote SSD appliance (e.g. can be remotely accessed via a computer network) that is outside of a server (with the CPU and primary cache) and a storage system (with the hard disk drive).
- Software in the server can implement the population of the secondary cache store in the remote SSD appliance.
- system 100 can be implemented in a central (e.g. monolithic) storage environment and/or distributed storage systems (local or remote) (e.g. see FIG. 7 ).
- the local CPU can view the remote secondary cache's SSD appliance as a backend storage.
- FIG. 2 illustrates an example process 200 of populating a flash memory cache of a multilayer cache during an eviction process of a primary cache (e.g. in RAM memory), according to some embodiments.
- the flash memory cache can be a secondary cache in a multilayer cache system (e.g. see FIG. 1 ).
- the population phase of the flash memory cache can occur after the fetch phase of the primary cache from a backend storage (e.g. be triggered by a later eviction operation performed on the primary cache).
- the primary cache can be populated directly from the secondary storage device (e.g. skipping a secondary cache in a flash storage device).
- a backend storage device can be a secondary storage system such as a hard disk device and the like.
- data in the primary cache of a multilayer cache is selected to populate secondary (or other non-primary) cache(s). This data can be selected based on various metrics such as recency of use by an application, size, a time stamp threshold, an analysis of the history of access to the data, etc.
- a trigger event can be detected. In one example, the trigger event can be an eviction process of data in the primary cache.
- the data selected in step 202 can be populated to the secondary cache (or other non-primary cache(s)) in step 206 .
- Process 200 can then be repeated.
- the size of the data sets can be varied based on various factors such as type of computing system, type of data, project type (e.g. ‘big’ data projects can include larger data sets), and the like.
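Steps 202 through 206 of process 200 can be sketched as follows. This is a minimal illustration under stated assumptions (the function names, the recency-window metric, and the byte budget are hypothetical; the patent leaves the selection metrics open):

```python
import time

def select_subset(primary, recency_window_s=300.0, max_bytes=64 * 1024):
    """Step 202: pick pages worth keeping in the secondary cache.

    `primary` maps key -> (data, last_access_ts).  Pages touched within
    the recency window (cf. the 'five-minute rule') and fitting the byte
    budget are selected; the rest will simply be dropped at eviction.
    """
    now = time.time()
    chosen, used = [], 0
    # Most recently used first, so hot data wins the byte budget.
    for key, (data, ts) in sorted(primary.items(), key=lambda kv: -kv[1][1]):
        if now - ts <= recency_window_s and used + len(data) <= max_bytes:
            chosen.append(key)
            used += len(data)
    return chosen

def on_eviction(primary, secondary):
    """Steps 204-206: the eviction trigger populates the secondary cache."""
    for key in select_subset(primary):
        data, _ = primary[key]
        secondary[key] = data        # written once, at eviction time
    primary.clear()                  # the eviction itself

now = time.time()
primary = {"hot": (b"x" * 100, now), "cold": (b"y" * 100, now - 600)}
secondary = {}
on_eviction(primary, secondary)
print(sorted(secondary))   # only the recently touched page survives
```

The trigger here is modeled as a direct function call; in practice it could equally be a memory-pressure threshold or a timer.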
- FIG. 3 depicts an example process 300 of migrating memory pages cached in a primary cache to a secondary cache in an SSD device during an eviction stage of the primary cache, according to some embodiments.
- a memory page can be a fixed-length contiguous block of memory (e.g. virtual memory).
- garbage collection can be a form of automatic memory management.
- a garbage collector in a memory management module (not shown) can reclaim memory occupied by objects that are no longer in use by the program (i.e. ‘garbage’).
- in the context of garbage collection in an SSD device, data can be written to the flash memory in units of pages.
- a memory page can be made up of multiple cells of the flash memory.
- the flash memory may be set to be erased in larger units called blocks (e.g. made up of multiple pages). Accordingly, in step 302, a probable lifespan of each memory page in a primary cache can be determined. The probable lifespan can be determined based on such factors as analysis of historical lifespans of other memory pages with similar data, recency of access of the data in the memory pages (e.g. the ‘five-minute rule’), etc. In step 304, various memory pages with lifespans within a specified range can be associated together. The size of this association can be based on the size of the block units of flash memory in the SSD device that stores the secondary cache. In step 306, a trigger event can be detected. In one example, the trigger event can be an eviction process of data in the primary cache.
- in step 308, the associated memory pages can be written to the block of flash memory that stores the secondary cache.
- garbage collection processes in the flash memory can be more efficient because each block is more likely to include all and/or greater amounts of valid data and/or memory pages with similar lifetimes.
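Steps 302, 304, and 308 of process 300 can be sketched as a bucketing problem. The following is an illustrative sketch (the bucketing function, the bucket width, and the block geometry are assumptions made for the example; lifespan estimation itself is treated as a given input):

```python
from collections import defaultdict

def group_by_lifespan(pages, bucket_width_s=60.0, pages_per_block=4):
    """Assign each page to a lifespan bucket, then cut each bucket into
    flash-block-sized groups.

    `pages` maps page_id -> estimated lifespan in seconds (step 302).
    `pages_per_block` mirrors the SSD's pages-per-erase-block geometry.
    """
    buckets = defaultdict(list)
    for page_id, lifespan in pages.items():
        buckets[int(lifespan // bucket_width_s)].append(page_id)  # step 304
    blocks = []
    for bucket in buckets.values():
        for i in range(0, len(bucket), pages_per_block):          # step 308
            blocks.append(bucket[i:i + pages_per_block])
    return blocks

# Pages with similar estimated lifespans land in the same erase block, so
# they tend to become invalid together and the block can later be erased
# without copying out still-valid pages.
pages = {"p1": 10, "p2": 20, "p3": 15, "p4": 70, "p5": 80}
print(group_by_lifespan(pages, bucket_width_s=60, pages_per_block=4))
```

Here p1, p2, and p3 (lifespans under one minute) share a block, while p4 and p5 share another, which is exactly the condition that makes block-level erasure cheap.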
- FIG. 4 depicts an exemplary process 400 of reducing storage of metadata in a secondary cache stored in a flash memory of an SSD device, according to some embodiments.
- a set of contiguous memory pages in a primary cache can be identified.
- the contiguous memory pages can be associated (e.g. assigned a common eviction time, associated for migration to a common secondary cache, etc.).
- a trigger event can be detected.
- the trigger event can be an eviction process of data in the primary cache.
- the associated contiguous memory pages can be written to a secondary cache in a flash memory of the SSD device.
- the grouping of the contiguous memory pages can reduce the amount of metadata about the contiguous memory pages also stored in the secondary cache.
- the metadata in the address table can be decreased utilizing process 400.
- Memory pages can be stored in the primary cache in a DRAM device in four (4) kilobyte groupings and evicted in sixty-four (64) kilobyte groupings as a unit. This 64 kilobyte unit can then be utilized as the page size for the secondary cache.
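The metadata saving can be made concrete with a back-of-the-envelope calculation. In this sketch (the function name and the fixed 4 KB / 64 KB geometry are taken from the example above; the per-entry accounting is an illustrative assumption), grouping sixteen contiguous 4 KB pages into one 64 KB secondary-cache page cuts the address table by a factor of sixteen:

```python
PAGE = 4 * 1024          # primary-cache page size
UNIT = 64 * 1024         # eviction unit / secondary-cache page size

def address_table_entries(n_pages, grouped):
    """Number of address-table entries needed to map n_pages of 4 KB
    primary-cache pages into the secondary cache.

    Ungrouped: one entry per 4 KB page.  Grouped (process 400): contiguous
    pages are evicted as one 64 KB unit, so one entry covers 16 pages.
    """
    if not grouped:
        return n_pages
    pages_per_unit = UNIT // PAGE            # 16
    return -(-n_pages // pages_per_unit)     # ceiling division

print(address_table_entries(1024, grouped=False))  # 1024 entries (4 MB of pages)
print(address_table_entries(1024, grouped=True))   # 64 entries
```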
- data that is accessed sequentially may not be cached in the secondary cache. For example, it can be determined whether data in the primary cache is accessed sequentially. If so, this data may not be stored in the secondary cache. When sequential data is discovered in the secondary cache, the memory pages already in the secondary cache can be overwritten and a smaller sample of the data can be retained for sequential access. For example, it is noted that in some embodiments, data that is accessed in a sequential manner may benefit less from long-term caching. Rotating-media hard drives may be better suited to handle sequential access. In this case, a pre-fetch algorithm can be used to detect sequential streams and/or read-ahead the data on demand to reduce read latency.
- some embodiments can avoid storing sequential data in a secondary cache to avoid unnecessary wear in the solid-state device. Moreover, by delaying the population phase of a secondary (and/or other non-primary cache) cache, the probability of detection of sequential access can be increased. In this way, the amount of sequentially-accessed data being stored in the secondary cache can be decreased.
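A simple sequential-stream detector along these lines can be sketched as follows. This is an illustrative sketch only (the run-length heuristic, function names, and threshold are assumptions; the patent does not specify a detection algorithm): pages whose addresses extend a run of consecutive accesses bypass the secondary cache, sparing the flash those writes.

```python
def is_sequential(addresses, min_run=4):
    """Return True when the tail of the access trace is a run of
    consecutive page addresses at least `min_run` long."""
    if len(addresses) < min_run:
        return False
    tail = addresses[-min_run:]
    return all(b == a + 1 for a, b in zip(tail, tail[1:]))

def maybe_cache(secondary, trace, page, data):
    """Skip the secondary cache for sequentially streamed pages; the
    backing hard disk handles sequential reads well on its own."""
    trace.append(page)
    if not is_sequential(trace):
        secondary[page] = data

secondary, trace = {}, []
for page in [7, 100, 101, 102, 103, 104]:
    maybe_cache(secondary, trace, page, b"...")
print(sorted(secondary))   # -> [7, 100, 101, 102]: the sequential tail is not cached
```

Note that the detector only fires once the run is long enough, so the first few pages of a stream still land in the secondary cache; delaying population to the eviction stage (as the preceding paragraph observes) gives the detector more time to fire before any flash write happens.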
- FIG. 5 depicts an exemplary computing system 500 that can be configured to perform several of the processes provided herein.
- computing system 500 can include, for example, a processor, memory, storage, and I/O devices (e.g., monitor, keyboard, disk drive, Internet connection, etc.).
- computing system 500 can include circuitry or other specialized hardware for carrying out some or all aspects of the processes.
- computing system 500 can be configured as a system that includes one or more units, each of which is configured to carry out some aspects of the processes either in software, hardware, or some combination thereof.
- FIG. 5 depicts a computing system 500 with a number of components that can be used to perform any of the processes described herein.
- the main system 502 includes a motherboard 504 having an I/O section 506 , one or more central processing units (CPU) 505 , and a memory section 510 , which can have a flash memory card 512 related to it.
- the I/O section 506 can be connected to a display 514 , a keyboard and/or other attendee input (not shown), a disk storage unit 516 , and a media drive unit 518 .
- the media drive unit 518 can read/write a computer-readable medium 520 , which can include programs 522 and/or data.
- Computing system 500 can include a web browser.
- computing system 500 can be configured to include additional systems in order to fulfill various functionalities.
- Display 514 can include a touch-screen system.
- system 500 can be included in and/or be utilized by the various systems and/or methods described herein.
- a value judgment can refer to a judgment based upon a particular set of values or on a particular value system.
- FIG. 6 is a block diagram of a sample computing environment 600 that can be utilized to implement some embodiments.
- the system 600 further illustrates a system that includes one or more client(s) 602 .
- the client(s) 602 can be hardware and/or software (e.g., threads, processes, computing devices).
- the system 600 also includes one or more server(s) 604 .
- the server(s) 604 can also be hardware and/or software (e.g., threads, processes, computing devices).
- One possible communication between a client 602 and a server 604 may be in the form of a data packet adapted to be transmitted between two or more computer processes.
- the system 600 includes a communication framework 610 that can be employed to facilitate communications between the client(s) 602 and the server(s) 604 .
- the client(s) 602 are connected to one or more client data store(s) 606 that can be employed to store information local to the client(s) 602 .
- the server(s) 604 are connected to one or more server data store(s) 608 that can be employed to store information local to the server(s) 604 .
- FIG. 7 depicts an example distributed database system (DDBS) 700 that implements the multilayer caching processes provided herein, according to some embodiments.
- DDBS 700 can implement processes 200 , 300 and 400 as well as those provided in FIG. 1 .
- DDBS 700 can be a modified version of system 100 in a distributed database system environment.
- a secondary cache can be in a different node than the primary cache.
- a secondary cache can be stored in one or more other nodes (e.g. either completely or partially replicated in multiple nodes).
- each node 702 A-B can include a primary cache 704 A-B and a secondary cache 706 A-B respectively.
- the primary cache 704 A in node 702 A can utilize a remote secondary cache such as the secondary cache 706 B in node 702 B (e.g. to implement process 200, 300 and/or 400 and/or any modifications thereof).
- the particular multilayer caching implementation of the present figure is provided by way of example and can be modified to implement other permutations of other multilayer caching implementations (e.g. with three layers, four layers, five layers, etc.).
- DDBS 700 can be implemented in various distributed database and/or distributed file systems (e.g. Hadoop®, Cassandra®, OpenStack® data systems, various other ‘big data’ applications, etc.).
- the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
- the machine-readable medium can be a non-transitory form of machine-readable medium.
Abstract
In one exemplary aspect, a primary cache is maintained in a main memory of a computer system. The primary cache is populated with a set of data from a secondary data storage system. A secondary cache is maintained in another memory of the computer system. A subset of data is selected from the set of data in the primary cache. A trigger event is detected. The secondary cache is populated with the subset of data selected from the set of data in the primary cache. Optionally, a lifespan of each memory page in the primary cache can be estimated. Memory pages with lifespans within a specified lifespan range can be associated. A set of associated memory pages with lifespans within the specified lifespan range can be written to a block in the flash memory system. The main memory of the computer system can include a dynamic random-access memory (DRAM) memory system. The other memory of the computer system can include a flash memory system in a solid-state storage device.
Description
- 1. Field
- This application relates generally to computer memory management, and more specifically to a system, article of manufacture and method for eviction stage population of a flash memory cache of a multilayer cache system.
- 2. Related Art
- Flash memory can be an electronic non-volatile computer storage medium that can be electrically erased and reprogrammed. While it can be read and/or programmed a byte or a word at a time in a random access fashion, some forms of flash memory can only be erased a unit block at a time. Additionally, some forms of flash memory may have a finite number of program-erase cycles before the wear begins to deteriorate the integrity of the storage.
- In some forms of multilayer caching, data may be fetched from lower layers (e.g. a secondary cache) to populate a higher layer (e.g. a primary cache). The lower layer may fetch data from secondary storage (e.g. a hard-disk drive). This model can result in inefficient and/or unnecessary writes in the flash memory of the secondary cache. These unnecessary writes can prematurely degrade the flash memory of the lower layer caches. There is therefore a need and an opportunity to improve the methods and systems whereby a secondary cache implemented in a flash memory can be populated.
- In one aspect, a primary cache is maintained in a main memory of a computer system. The primary cache is populated with a set of data from a secondary data storage system. A secondary cache is maintained in another memory of the computer system. A subset of data is selected from the set of data in the primary cache. A trigger event is detected. The secondary cache is populated with the subset of data selected from the set of data in the primary cache.
- Optionally, a lifespan of each memory page in the primary cache can be estimated. Memory pages with lifespans within a specified lifespan range can be associated. A set of associated memory pages with lifespans within the specified lifespan range can be written to a block in the flash memory system. The main memory of the computer system can include a dynamic random-access memory (DRAM) memory system. The other memory of the computer system can include a flash memory system in a solid-state storage device. The secondary data storage system can include a hard-disk storage system.
- The present application can be best understood by reference to the following description taken in conjunction with the accompanying figures, in which like parts may be referred to by like numerals.
-
FIG. 1 depicts, in block diagram format, an example of a computer system implementing eviction stage population of a flash memory cache of a multilayer cache, according to some embodiments. -
FIG. 2 illustrates an example process of populating a flash memory cache of a multilayer cache during an eviction process of a primary cache (e.g. in RAM memory), according to some embodiments. -
FIG. 3 depicts an example process of migrating memory pages cached in a primary cache to a secondary cache in an SSD device during an eviction stage of the primary cache, according to some embodiments. -
FIG. 4 depicts an exemplary process of reducing storage of metadata in a secondary cache stored in a flash memory of an SSD device, according to some embodiments. -
FIG. 5 depicts a computing system with a number of components that can be used to perform any of the processes described herein. -
FIG. 6 is a block diagram of a sample computing environment that can be utilized to implement some embodiments. -
FIG. 7 depicts an example distributed database system (DDBS) that implements the multilayer caching processes provided herein according to some embodiments. - The Figures described above are a representative set, and are not exhaustive with respect to embodying the invention.
- Disclosed are a system, method, and article of manufacture for eviction stage population of a flash memory multilayer cache. The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein may be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the various embodiments.
- Reference throughout this specification to “one embodiment,” “an embodiment,” “one example,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
- Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art can recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
- The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
-
FIG. 1 depicts, in block diagram format, an example of a computer system 100 implementing eviction stage population of a flash memory cache (e.g. a secondary cache) of a multilayer cache, according to some embodiments. In the present example, computer system 100 can include a central processing unit (CPU) 102. CPU 102 can be hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system. CPU 102 can be communicatively coupled with a dynamic random-access memory (DRAM) memory device 104 (and/or other type of memory device used to store data or programs on a temporary or permanent basis for use in a computer). DRAM memory 104 can include a primary cache 112 populated with data from a data storage system (e.g. as indicated with step 112) such as a hard disk drive (HDD) and/or remote network storage 108. DRAM memory 104 can be communicatively coupled with a solid-state storage device such as flash memory device 106. Additional caches can be stored in various secondary systems such as flash memory device 106 (e.g. secondary cache 116). For example, in step 114, primary cache 112 can be analyzed and various pages thereof selected according to one or more specified metrics (e.g. see infra). Accordingly, in some embodiments, the population phase of secondary cache 116 in the multilayer cache system of computer system 100 can be moved from a fetch stage (e.g. the stage when a cache is populated from an HDD) to the eviction stage. As used herein, in some examples, an eviction process can refer to the process by which old, relatively unused, and/or excessively voluminous data can be dropped from the cache, allowing the cache to remain within a memory budget. - It is further noted that the system and methods of
FIG. 1 are provided by way of example. In another example, two or more secondary caches can be populated by a primary cache in a random access memory. In still another example, one secondary cache can be populated during an eviction stage of a primary cache and another secondary cache can be populated based on other metrics and/or triggers (e.g. based on metrics and/or triggers that facilitate a ‘big’ data computing process). It is also noted that the secondary cache can be remote and reside in other nodes of a distributed database cluster (e.g. infra). In some embodiments, system 100 can be implemented in a system with SSD cards in a server to layer virtualization methods. In some embodiments, system 100 can be implemented in a system with a remote SSD appliance (e.g. one that can be remotely accessed via a computer network) that is outside of a server (with the CPU and primary cache) and a storage system (with the hard disk drive). Software in the server can implement the population of the secondary cache stored in the remote SSD appliance. Accordingly, system 100 can be implemented in a central (e.g. monolithic) storage environment and/or distributed storage systems (local or remote) (e.g. see FIG. 7). In one example of a remote distributed storage system, the local CPU can view the remote secondary cache's SSD appliance as backend storage. -
FIG. 2 illustrates an example process 200 of populating a flash memory cache of a multilayer cache during an eviction process of a primary cache (e.g. in RAM memory), according to some embodiments. The flash memory cache can be a secondary cache in a multilayer cache system (e.g. see FIG. 1). In process 200, the population phase of the flash memory cache can occur after the fetch phase of the primary cache from a backend storage (e.g. it can be triggered by a later eviction operation performed on the primary cache). It is noted that the primary cache can be populated directly from the secondary storage device (e.g. skipping a secondary cache in a flash storage device). As used herein, a backend storage device can be a secondary storage system such as a hard disk device and the like. In step 202 of process 200, data in the primary cache of a multilayer cache is selected to populate the secondary (or other non-primary) cache(s). This data can be selected based on various metrics such as recency of use by an application, size, a time stamp threshold, an analysis of the history of access to the data, etc. In step 204, a trigger event can be detected. In one example, the trigger event can be an eviction process of data in the primary cache. Upon detection of the trigger event, the data selected in step 202 can be populated to the secondary cache (or other non-primary cache(s)) in step 206. Process 200 can then be repeated. Furthermore, the size of the data sets can be varied based on various factors such as type of computing system, type of data, project type (e.g. ‘big’ data projects can include larger data sets), and the like. -
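As a sketch of the selection metric of step 202, the following Python function scores pages by recency under a byte budget; the thresholds, field names, and function name are illustrative assumptions, not values from the specification:

```python
import time

def select_for_secondary(pages, now=None, recency_window=300.0, max_bytes=1 << 20):
    """Pick pages worth copying into the secondary cache (step 202, sketched).

    A page qualifies if it was accessed within `recency_window` seconds and
    its size fits the remaining byte budget.  Both thresholds are
    illustrative only."""
    now = time.time() if now is None else now
    chosen, budget = [], max_bytes
    # Visit most-recently-used pages first so the hottest pages get the budget.
    for page in sorted(pages, key=lambda p: p["last_access"], reverse=True):
        if now - page["last_access"] <= recency_window and page["size"] <= budget:
            chosen.append(page["id"])
            budget -= page["size"]
    return chosen
```

A list like this could then be handed to the eviction-stage population step (step 206) once the trigger event of step 204 fires.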
FIG. 3 depicts an example process 300 of migrating memory pages cached in a primary cache to a secondary cache in an SSD device during an eviction stage of the primary cache, according to some embodiments. As used herein, a memory page can be a fixed-length contiguous block of memory (e.g. virtual memory). As used herein, garbage collection (GC) can be a form of automatic memory management. A garbage collector in a memory management module (not shown) can reclaim memory occupied by objects that are no longer in use by the program (i.e. ‘garbage’). During garbage collection in an SSD device, data can be written to the flash memory in units of pages. A memory page can be made up of multiple cells of the flash memory. Additionally, the flash memory may be set to be erased in larger units called blocks (e.g. made up of multiple pages). Accordingly, in step 302, a probable lifespan of each memory page in a primary cache can be determined. The probable lifespan can be determined based on such factors as analysis of historical lifespans of other memory pages with similar data, recency of access of the data in the memory pages (e.g. the ‘five-minute rule’), etc. In step 304, various memory pages with lifespans within a specified range can be associated together. The size of this association can be based on the size of the block units of flash memory in the SSD device that stores the secondary cache. In step 306, a trigger event can be detected. In one example, the trigger event can be an eviction process of data in the primary cache. In step 308, associated memory pages can be written to the block of flash memory that stores the secondary cache. In this way, garbage collection processes in the flash memory can be more efficient because each block is more likely to include all and/or greater amounts of valid data and/or memory pages with similar lifetimes. -
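Steps 302-308 can be sketched as a grouping pass over evicted pages; the bucket width and pages-per-block parameters are hypothetical values chosen only for illustration:

```python
def group_pages_by_lifespan(pages, bucket_width, pages_per_block):
    """Group pages so that pages with similar estimated lifespans land in
    the same flash block (steps 302-308, sketched).

    `pages` is a list of (page_id, estimated_lifespan) pairs; `bucket_width`
    is the lifespan range that counts as "similar"; `pages_per_block` is how
    many pages fit one flash erase block.  All parameters are illustrative."""
    buckets = {}
    for page_id, lifespan in pages:
        buckets.setdefault(int(lifespan // bucket_width), []).append(page_id)
    blocks = []
    for _, ids in sorted(buckets.items()):
        # Split each lifespan bucket into block-sized write units.
        for i in range(0, len(ids), pages_per_block):
            blocks.append(ids[i:i + pages_per_block])
    return blocks
```

Writing each returned group to a single erase block means the pages in that block tend to expire together, which is what makes the SSD's garbage collection cheaper.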
FIG. 4 depicts an exemplary process 400 of reducing storage of metadata in a secondary cache stored in a flash memory of an SSD device, according to some embodiments. In step 402 of process 400, a set of contiguous memory pages in a primary cache can be identified. In step 404, the contiguous memory pages can be associated (e.g. assigned a common eviction time, associated for migration to a common secondary cache, etc.). In step 406, a trigger event can be detected. In one example, the trigger event can be an eviction process of data in the primary cache. In step 408, the associated contiguous memory pages can be written to a secondary cache in a flash memory of the SSD device. In this way, the grouping of the contiguous memory pages can reduce the amount of metadata about the contiguous memory pages also stored in the secondary cache. In one example, the metadata is an address table whose size can be decreased by utilizing process 400. Memory pages can be stored in the primary cache in a DRAM device in four (4) kilobyte groupings and evicted in sixty-four (64) kilobyte groupings as a unit. This 64-kilobyte unit can then be utilized as the page size for the secondary cache. - It is noted that data that is accessed sequentially may not be cached in the secondary cache. For example, it can be determined whether data in the primary cache is accessed sequentially. If so, then this data may not be stored in the secondary cache. When sequential data is discovered in the secondary cache, the memory pages already in the secondary cache can be overwritten and a smaller sample of the data can be retained for sequential access. For example, it is noted that in some embodiments, data that is accessed in a sequential manner may benefit less from long-term caching. Rotating-media hard drives may be better suited to handle sequential access. In this case, a pre-fetch algorithm can be used to detect sequential streams and/or read-ahead the data on demand to reduce read latency.
Accordingly, some embodiments can avoid storing sequential data in a secondary cache to avoid unnecessary wear in the solid-state device. Moreover, by delaying the population phase of a secondary (and/or other non-primary cache) cache, the probability of detection of sequential access can be increased. In this way, the amount of sequentially-accessed data being stored in the secondary cache can be decreased.
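The sequential-stream detection described above might be sketched as a simple run-length heuristic; the page size and run threshold are illustrative assumptions, not values from the specification:

```python
def is_sequential_stream(accesses, page_size=4096, min_run=4):
    """Heuristic sequential-stream detector (sketched).

    `accesses` is a list of page addresses in access order.  If at least
    `min_run` consecutive accesses step forward by exactly one page, the
    stream is judged sequential and its data can be excluded from the
    secondary cache to avoid unnecessary flash wear."""
    run = 1
    for prev, cur in zip(accesses, accesses[1:]):
        run = run + 1 if cur == prev + page_size else 1
        if run >= min_run:
            return True
    return False
```

Delaying secondary-cache population to the eviction stage gives a detector like this more access history to examine, which is why the probability of catching sequential streams increases.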
-
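The metadata reduction of process 400 can be sketched as coalescing runs of contiguous 4-kilobyte pages into 64-kilobyte units, so the secondary cache's address table holds one entry per unit rather than one per page; the function name and sizes follow the example above but are otherwise illustrative:

```python
def coalesce_contiguous(page_addrs, small=4096, large=65536):
    """Merge runs of contiguous small pages (e.g. 4 KB) into larger
    eviction units (e.g. 64 KB), as in process 400 (sketched).

    Returns (start_address, length) pairs: one address-table entry per
    unit instead of one per page."""
    per_unit = large // small
    units, run = [], []
    for addr in sorted(page_addrs):
        # Close the current run if the next page is not contiguous
        # or the run already fills a full unit.
        if run and (addr != run[-1] + small or len(run) == per_unit):
            units.append((run[0], len(run) * small))
            run = []
        run.append(addr)
    if run:
        units.append((run[0], len(run) * small))
    return units
```

For example, three contiguous 4 KB pages collapse to a single table entry, while an isolated page still costs one entry of its own.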
FIG. 5 depicts an exemplary computing system 500 that can be configured to perform several of the processes provided herein. In this context, computing system 500 can include, for example, a processor, memory, storage, and I/O devices (e.g., monitor, keyboard, disk drive, Internet connection, etc.). However, computing system 500 can include circuitry or other specialized hardware for carrying out some or all aspects of the processes. In some operational settings, computing system 500 can be configured as a system that includes one or more units, each of which is configured to carry out some aspects of the processes either in software, hardware, or some combination thereof. -
FIG. 5 depicts a computing system 500 with a number of components that can be used to perform any of the processes described herein. The main system 502 includes a motherboard 504 having an I/O section 506, one or more central processing units (CPU) 505, and a memory section 510, which can have a flash memory card 512 related to it. The I/O section 506 can be connected to a display 514, a keyboard and/or other attendee input (not shown), a disk storage unit 516, and a media drive unit 518. The media drive unit 518 can read/write a computer-readable medium 520, which can include programs 522 and/or data. Computing system 500 can include a web browser. Moreover, it is noted that computing system 500 can be configured to include additional systems in order to fulfill various functionalities. Display 514 can include a touch-screen system. In some embodiments, system 500 can be included in and/or be utilized by the various systems and/or methods described herein. -
FIG. 6 is a block diagram of a sample computing environment 600 that can be utilized to implement some embodiments. The system 600 further illustrates a system that includes one or more client(s) 602. The client(s) 602 can be hardware and/or software (e.g., threads, processes, computing devices). The system 600 also includes one or more server(s) 604. The server(s) 604 can also be hardware and/or software (e.g., threads, processes, computing devices). One possible communication between a client 602 and a server 604 may be in the form of a data packet adapted to be transmitted between two or more computer processes. The system 600 includes a communication framework 610 that can be employed to facilitate communications between the client(s) 602 and the server(s) 604. The client(s) 602 are connected to one or more client data store(s) 606 that can be employed to store information local to the client(s) 602. Similarly, the server(s) 604 are connected to one or more server data store(s) 608 that can be employed to store information local to the server(s) 604. -
FIG. 7 depicts an example distributed database system (DDBS) 700 that implements the multilayer caching processes provided herein, according to some embodiments. For example, DDBS 700 can implement processes 200, 300 and 400, as well as the system provided in FIG. 1. DDBS 700 can be a modified version of system 100 in a distributed database system environment. For example, a secondary cache can be in a different node than the primary cache. A secondary cache can be stored in one or more other nodes (e.g. either completely or partially replicated in multiple nodes). In FIG. 7, each node 702A-B can include a primary cache 704A-B and a secondary cache 706A-B, respectively. The primary cache 704A in node 702A can utilize a remote secondary cache such as the secondary cache 706B in node 702B (e.g. to implement processes 200, 300 and/or 400 and/or any modifications thereof). It is noted that the particular multilayer caching implementation of the present figure is provided by way of example and can be modified to implement other permutations of multilayer caching (e.g. with three layers, four layers, five layers, etc.). DDBS 700 can be implemented in various distributed database and/or distributed file systems (e.g. Hadoop®, Cassandra®, OpenStack® data systems, various other ‘big data’ applications, etc.). - Although the present embodiments have been described with reference to specific example embodiments, various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, modules, etc. described herein can be enabled and operated using hardware circuitry, firmware, software or any combination of hardware, firmware, and software (e.g. embodied in a machine-readable medium).
- In addition, it may be appreciated that the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium.
Claims (20)
1. A method of managing a primary cache and a secondary cache in a multilayer cache system comprising:
maintaining a primary cache in a main memory of a computer system, wherein the primary cache is populated with a set of data from a secondary data storage system;
maintaining a secondary cache in another memory of the computer system;
selecting a subset of data from the set of data in the primary cache;
detecting a trigger event; and
populating the secondary cache with the subset of data selected from the set of data in the primary cache.
2. The method of claim 1 , wherein the main memory of the computer system comprises a dynamic random-access memory (DRAM) memory system.
3. The method of claim 1 , wherein the other memory of the computer system comprises a flash memory system in a solid-state storage device.
4. The method of claim 1 , wherein the secondary data storage system comprises a hard-disk storage system.
5. The method of claim 1 , wherein the trigger event comprises an eviction stage implemented in the primary cache.
6. The method of claim 5 further comprising:
determining a probable lifespan of each memory page in the primary cache.
7. The method of claim 6 further comprising:
associating memory pages with lifespans within a specified lifespan range.
8. The method of claim 7 further comprising:
writing a set of associated memory pages with lifespans within the specified lifespan range to a block in the flash memory system.
9. The method of claim 1 further comprising:
identifying a set of contiguous memory pages in the primary cache; and
grouping the set of contiguous memory pages in the secondary cache when the contiguous memory pages are in the subset of data from the primary cache written to the secondary cache.
10. A computerized multilayer-cache system comprising:
a processor configured to execute instructions;
a memory containing instructions that, when executed on the processor, cause the processor to perform operations that:
maintain a primary cache in a main memory of a computer system, wherein the primary cache is populated with a set of data from a secondary data storage system;
maintain a secondary cache in another memory of the computer system;
select a subset of data from the set of data in the primary cache;
detect a trigger event; and
populate the secondary cache with the subset of data selected from the set of data in the primary cache.
11. The computerized multilayer-cache system of claim 10 , wherein the main memory of the computer system comprises a dynamic random-access memory (DRAM) memory system.
12. The computerized multilayer-cache system of claim 10 , wherein the other memory of the computer system comprises a flash memory system in a solid-state storage device.
13. The computerized multilayer-cache system of claim 10 , wherein the other memory of the computer system comprises a flash memory system in a solid-state storage device.
14. The computerized multilayer-cache system of claim 10 , wherein the trigger event comprises an eviction process implemented in the primary cache.
15. The computerized multilayer-cache system of claim 10 , wherein the memory contains instructions that, when executed on the processor, cause the processor to perform operations that:
estimate a lifespan of each memory page in the primary cache;
associate memory pages with lifespans within a specified lifespan range; and
write a set of associated memory pages with lifespans within the specified lifespan range to a block in the flash memory system.
16. The computerized multilayer-cache system of claim 15 , wherein the memory contains instructions that, when executed on the processor, cause the processor to perform operations that:
identify a set of contiguous memory pages in the primary cache; and
group the set of contiguous memory pages together in the secondary cache when the contiguous memory pages are written to the secondary cache.
17. A method of a multilayer cache system comprising:
obtaining one or more memory pages from a secondary storage system;
writing the memory pages to a primary cache in a random access memory of a computing system;
identifying a subset of memory pages to write to another cache of the multilayer cache system;
evicting the memory pages from the primary cache; and
writing the subset of memory pages to the other cache after evicting the memory pages from the primary cache.
18. The method of claim 17 ,
wherein the subset of memory pages written to the other cache are selected based on a recency of use time of each memory page by an application program,
wherein a set of sequentially-accessed data detected in the primary cache is removed from the subset of memory pages written to the other cache, and
wherein the subset of memory pages are written from the primary cache to the other cache such that the other cache is not directly populated from the secondary storage system.
19. The method of claim 17 , wherein the computing system comprises a distributed database system (DDBS) implementing a multilayer cache system.
20. The method of claim 19 ,
wherein the primary cache is located in a first node of the DDBS, and
wherein the other cache is located in a second node of the DDBS.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/164,248 US20150212744A1 (en) | 2014-01-26 | 2014-01-26 | Method and system of eviction stage population of a flash memory cache of a multilayer cache system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/164,248 US20150212744A1 (en) | 2014-01-26 | 2014-01-26 | Method and system of eviction stage population of a flash memory cache of a multilayer cache system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20150212744A1 true US20150212744A1 (en) | 2015-07-30 |
Family
ID=53679081
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/164,248 Abandoned US20150212744A1 (en) | 2014-01-26 | 2014-01-26 | Method and system of eviction stage population of a flash memory cache of a multilayer cache system |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US20150212744A1 (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107346216A (en) * | 2017-06-30 | 2017-11-14 | 联想(北京)有限公司 | A kind of storage device and its data processing method |
| WO2020072378A1 (en) * | 2018-10-05 | 2020-04-09 | Oracle International Corporation | Secondary storage server caching |
| US11327887B2 (en) | 2017-09-14 | 2022-05-10 | Oracle International Corporation | Server-side extension of client-side caches |
| US20220222004A1 (en) * | 2015-08-24 | 2022-07-14 | Pure Storage, Inc. | Prioritizing Garbage Collection Based On The Extent To Which Data Is Deduplicated |
| US11755481B2 (en) | 2011-02-28 | 2023-09-12 | Oracle International Corporation | Universal cache management system |
| CN116909493A (en) * | 2023-09-12 | 2023-10-20 | 合肥康芯威存储技术有限公司 | Memory and control method thereof |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070050548A1 (en) * | 2005-08-26 | 2007-03-01 | Naveen Bali | Dynamic optimization of cache memory |
| US20130019057A1 (en) * | 2011-07-15 | 2013-01-17 | Violin Memory, Inc. | Flash disk array and controller |
| US20130232290A1 (en) * | 2012-03-01 | 2013-09-05 | Mark Ish | Reducing write amplification in a flash memory |
-
2014
- 2014-01-26 US US14/164,248 patent/US20150212744A1/en not_active Abandoned
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070050548A1 (en) * | 2005-08-26 | 2007-03-01 | Naveen Bali | Dynamic optimization of cache memory |
| US20130019057A1 (en) * | 2011-07-15 | 2013-01-17 | Violin Memory, Inc. | Flash disk array and controller |
| US20130232290A1 (en) * | 2012-03-01 | 2013-09-05 | Mark Ish | Reducing write amplification in a flash memory |
Non-Patent Citations (1)
| Title |
|---|
| Jiang, LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance, ACM SIGMETRICS Performance Evaluation Review - Measurement and modeling of computer systems, Volume 30 Issue 1, June 2002 Pages 31-42 * |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11755481B2 (en) | 2011-02-28 | 2023-09-12 | Oracle International Corporation | Universal cache management system |
| US20220222004A1 (en) * | 2015-08-24 | 2022-07-14 | Pure Storage, Inc. | Prioritizing Garbage Collection Based On The Extent To Which Data Is Deduplicated |
| US11868636B2 (en) * | 2015-08-24 | 2024-01-09 | Pure Storage, Inc. | Prioritizing garbage collection based on the extent to which data is deduplicated |
| CN107346216A (en) * | 2017-06-30 | 2017-11-14 | 联想(北京)有限公司 | A kind of storage device and its data processing method |
| US11327887B2 (en) | 2017-09-14 | 2022-05-10 | Oracle International Corporation | Server-side extension of client-side caches |
| WO2020072378A1 (en) * | 2018-10-05 | 2020-04-09 | Oracle International Corporation | Secondary storage server caching |
| US10831666B2 (en) * | 2018-10-05 | 2020-11-10 | Oracle International Corporation | Secondary storage server caching |
| CN113015967A (en) * | 2018-10-05 | 2021-06-22 | 甲骨文国际公司 | Secondary storage server cache |
| CN116909493A (en) * | 2023-09-12 | 2023-10-20 | 合肥康芯威存储技术有限公司 | Memory and control method thereof |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Eisenman et al. | Flashield: a hybrid key-value cache that controls flash write amplification | |
| US9772949B2 (en) | Apparatus, system and method for providing a persistent level-two cache | |
| US10739996B1 (en) | Enhanced garbage collection | |
| US9779027B2 (en) | Apparatus, system and method for managing a level-two cache of a storage appliance | |
| EP2735978B1 (en) | Storage system and management method used for metadata of cluster file system | |
| US8417878B2 (en) | Selection of units for garbage collection in flash memory | |
| CN113254358B (en) | Method and system for address table cache management | |
| CN111344684A (en) | Multi-level cache placement mechanism | |
| US20150212744A1 (en) | Method and system of eviction stage population of a flash memory cache of a multilayer cache system | |
| US9645922B2 (en) | Garbage collection in SSD drives | |
| US20130138867A1 (en) | Storing Multi-Stream Non-Linear Access Patterns in a Flash Based File-System | |
| US9507705B2 (en) | Write cache sorting | |
| JP2018133086A5 (en) | ||
| TW201232260A (en) | Semiconductor storage device | |
| US10621104B2 (en) | Variable cache for non-volatile memory | |
| CN102841854A (en) | Method and system for executing data reading based on dynamic hierarchical memory cache (hmc) awareness | |
| Chai et al. | WEC: Improving durability of SSD cache drives by caching write-efficient data | |
| US10366011B1 (en) | Content-based deduplicated storage having multilevel data cache | |
| CN109086141B (en) | Memory management method and device and computer readable storage medium | |
| JP2020013318A (en) | Database management system, memory management device, database management method and program | |
| CN102737068A (en) | Method and equipment for performing cache management on retrieval data | |
| CN109407985B (en) | Data management method and related device | |
| Wu et al. | CAGC: A content-aware garbage collection scheme for ultra-low latency flash-based SSDs | |
| US10185660B2 (en) | System and method for automated data organization in a storage system | |
| CN108664217B (en) | A caching method and system for reducing write performance jitter of solid state disk storage system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: ROBIN SYSTEMS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HELMAN, HAIM;YEDDANAPUDI, KRISHNA SATYASAI;SINGH, GURMEET;REEL/FRAME:036540/0921 Effective date: 20150904 |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |