
WO2018019119A1 - Dynamic partial-parallel data layout method and device for continuous data storage - Google Patents

Dynamic partial-parallel data layout method and device for continuous data storage

Info

Publication number
WO2018019119A1
Authority
WO
WIPO (PCT)
Prior art keywords
strip
data
storage space
storage
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/092403
Other languages
English (en)
Chinese (zh)
Inventor
谭毓安
孙志卓
李元章
于潇
薛源
张全新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Publication of WO2018019119A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Definitions

  • the present invention relates to the field of redundant array technology for independent hard disks, and in particular, to a dynamic local parallel data layout method and apparatus for continuous data storage.
  • video surveillance has become a ubiquitous security facility in modern society because it plays an irreplaceable role in forensics and identification.
  • This kind of application requires a large amount of storage space, mainly performs write operations, mainly sequential access, and has low requirements on random performance.
  • the storage system is called a continuous data storage system.
  • RAID: Redundant Arrays of Independent Disks.
  • SSD: Solid State Disk.
  • Striping A method of dividing a piece of continuous data into blocks of the same size and writing each piece of data to different disks in RAID.
  • Fault tolerance: redundant parity data is generated and saved using some operation, such as the XOR operation. When a hard disk fails and data is lost, the parity data can be used for data recovery. XOR operations are usually indicated by "⊕".
  • Distributed parity: the parity data is distributed across the disks constituting the RAID according to a certain rule.
  • Partial parallelism: only some of the hard disks in the array operate in parallel, rather than all of them, providing appropriate performance and making it easier to schedule the remaining hard disks for standby.
  • Typical RAID levels include RAID 0, RAID 5, and so on.
  • RAID0 is only striped, and does not have redundancy check capability.
  • RAID 5 writes data to the hard disks in the array in stripes; the parity data is distributed across the disks in the array. Access speed is improved through global parallelism, and a single-disk failure can be tolerated.
  • Continuous data storage systems are dominated by sequential access, which does not require high random performance and generally does not need the high throughput provided by global parallelism.
  • the invention patents ZL201010256899.5, ZL201010256665.0, ZL201010256711.7, ZL201010256908.0, ZL201010256679.2, ZL201010256699.X, ZL201010575578.1, ZL201010575625.2, ZL201010575611.0, etc. propose a variety of partially parallel data layouts; energy-efficient RAIDs using this type of locally parallel data layout are collectively referred to as S-RAID.
  • The basic idea of S-RAID is: (1) divide the storage in the array into several groups and provide appropriate performance through parallelism within a group; grouping makes it convenient to schedule some hard disks to run while the remaining hard disks stand by to save energy; (2) adopt a greedy addressing method under the sequential access pattern, ensuring that read and write operations are concentrated on certain hard disks during a given period of time, so the other hard disks can stand by for long periods to save energy.
  • the data layout of S-RAID adopts a static mapping mechanism for the storage space: when the array is created, a mapping between the Logical Block Address (LBA) and the hard disks' Physical Block Address (PBA) is established according to parameters such as the number of disk blocks, the S-RAID type, and the strip size; this mapping remains unchanged throughout the life of the S-RAID.
  • The static data layout of S-RAID suits a relatively stable workload; its degree of local parallelism cannot be adjusted dynamically to the performance requirements of fluctuating and bursty loads.
  • S-RAID must set its degree of local parallelism according to the peak-load performance demand, but that degree of parallelism is clearly excessive for the valley load. This excess performance leads to additional energy consumption, which increases significantly with the strength of load fluctuations and bursts.
  • Video data is generally compressed and transmitted and stored.
  • Existing video compression standards, such as H.264/MPEG-4, compress video based on the temporal and spatial redundancy of its content, so the video compression ratio varies within a wide range. There are more moving objects during the day, the compression ratio is relatively low, and the amount of video data generated is large; there are fewer moving objects at night, the compression ratio is relatively high, and the amount of video data generated is small.
  • a high-intensity fluctuation load is also generated.
  • Cache devices usually have no fault-tolerance mechanism, and adding fault tolerance to cache devices would further increase hardware cost and power consumption.
  • DPPDL Dynamic Partial-Parallel Data Layout
  • the object of the present invention is to solve the problem that the existing static partially parallel data layout cannot adapt well to fluctuating and bursty loads; a dynamic partial-parallel data layout (DPPDL) method and device for continuous data storage are therefore proposed.
  • Step 1.1: divide each of the N hard disks into l storage blocks of equal size, giving l × N storage blocks in total; here l is greater than or equal to 1, and N is greater than or equal to 3;
  • in step 1.1, the N storage blocks with the same starting address across the hard disks form a strip, giving l strips in total; each strip contains 1 parity storage block and N-1 data storage blocks.
  • the parity storage block is referred to as the parity block, and the data storage block as the data block;
  • Step 1.2: each data block and parity block from step 1.1 is divided into M equal-sized sub-blocks, each consisting of several consecutive storage areas; these are called data sub-blocks (denoted Strip) and parity sub-blocks (denoted PStrip) respectively;
  • Step 1.3: within each strip from step 1.1, the sub-blocks with the same starting address form a sub-strip (denoted Stripe), and the Strips in a sub-strip are XORed to generate the PStrip of that sub-strip;
  • each strip includes M sub-strips of the same size; the parity sub-block PStrip_m of sub-strip m is generated by the XOR of its N-1 data sub-blocks, see equation (1):

    PStrip_m = Strip_m,0 ⊕ Strip_m,1 ⊕ … ⊕ Strip_m,N-2    (1)
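The parity relationship of equation (1) can be sketched in Python; `bytes` objects stand in for disk sub-blocks, and the sizes are illustrative assumptions rather than values from the patent:

```python
from functools import reduce

def make_pstrip(data_strips):
    """Generate the parity sub-block (PStrip) of one sub-strip by
    XOR-ing its N-1 data sub-blocks (Strips), byte by byte."""
    assert len({len(s) for s in data_strips}) == 1, "Strips must be equal-sized"
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_strips))

# A sub-strip with N-1 = 4 data sub-blocks of 4 bytes each.
strips = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]),
          bytes([9, 10, 11, 12]), bytes([13, 14, 15, 16])]
pstrip = make_pstrip(strips)

# XOR parity tolerates a single-disk failure: XOR-ing the parity with
# the surviving strips reconstructs the lost one.
recovered = make_pstrip(strips[1:] + [pstrip])
assert recovered == strips[0]
```

The same XOR that generates the parity also recovers a lost sub-block, which is why one parity block per strip suffices for single-disk fault tolerance.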
  • the storage space dynamic mapping is specifically:
  • the disk array storage space is allocated and managed by a dynamic mapping mechanism between logical block addresses and physical block addresses, and the write data received by the disk array layer can be dynamically mapped to different numbers of hard disks; that is, according to the load's performance requirement parameter k, storage space with k-disk parallelism is dynamically allocated, where k is the number of hard disks that must write data concurrently, not counting the hard disk holding the parity of the written data; when the load is at its minimum, the data is mapped to, and written on, only one hard disk; when the load is at its maximum, the data is mapped to, and written on, all hard disks other than the one holding the parity data;
  • the free strip is an unmapped strip;
  • the strip list is a one-way circular list consisting of all strips, CurBank is the currently mapped strip, the initial value is strip 0;
  • CurStripe is a stripe in CurBank that can be mapped;
  • if the number of free strips in CurStripe is 0, CurStripe has no free strip to map, which is equivalent to CurBank having no free strip to map: go to (3); otherwise go to (5);
  • NextBank is the next strip to be mapped, adjacent in number to CurBank; its initial value is strip 1;
  • after obtaining k free strips, the storage space mapping is performed: the logical address space is mapped to a physical address space with k-disk parallelism, and the mapping relationship is recorded in the mapping table;
  • mapping table: the hard disk and the offset on that disk are determined from the strip, the sub-strip, and the position within the sub-strip where the Strip is located, and are recorded in the mapping table.
  • the mapping table is an important part of the metadata; it is stored at the end of each working disk with a version number that grows monotonically, and when power is restored the disk array loads the version with the largest number.
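The version-number rule can be sketched as follows; the dictionary layout is a hypothetical in-memory stand-in for the on-disk metadata format, which the patent does not specify:

```python
def latest_mapping_table(copies):
    """On power restore, load the mapping-table copy with the largest
    version number; each working disk stores one copy at its end."""
    return max(copies, key=lambda c: c["version"])

# Hypothetical copies read back from three working disks: the disk that
# was written last carries the newest version.
copies = [{"version": 6, "entries": {}},
          {"version": 9, "entries": {}},
          {"version": 8, "entries": {}}]
table = latest_mapping_table(copies)  # the version-9 copy is loaded
```

Picking the maximum version makes recovery robust even when some standby disks hold stale copies of the table.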
  • DPPDL performs the access competition check: if there is no access competition, or the last strip is the one causing it, the last strip is deleted; otherwise the strip causing the access competition is deleted. Finally, k strips that cause no access competition and can be accessed concurrently are obtained;
  • the access competition check determines whether two sub-blocks of the same hard disk would be accessed concurrently;
  • access competition avoidance can replace step (6) of the storage space dynamic mapping;
  • DPPDL uses the data transfer rate as a performance requirement indicator.
  • Step A: DPPDL collects statistics on the history of the disk array layer's I/O request queue;
  • Step B: DPPDL performs analysis and prediction;
  • r_n = (ta, pos, len) is the nth I/O request in the window T, where ta, pos, and len are the arrival time, the starting logical address, and the request length of request r_n, respectively; the request length of r_n is written r_n.len;
  • num is the number of I/O requests arriving in the time window T;
  • α is the performance coefficient, which can be between 1.2 and 1.5;
  • after DPPDL senses the data transmission rate demanded by the load, it determines the number of hard disks that must concurrently write data according to the data transmission rates that different degrees of hard-disk parallelism can provide in the actual application scenario.
  • DPPDL uses a dynamic partial-parallel strategy to dynamically allocate storage space with appropriate parallelism according to the performance requirements of different workloads; it not only guarantees long-term standby energy saving for most disks, but also dynamically provides appropriate local parallelism, yielding higher availability and higher energy efficiency;
  • DPPDL allocates and reclaims storage space sequentially in strips; when the number of strips is large (which is feasible for large-capacity disks), data is guaranteed to be deleted in roughly chronological order.
  • the Strip is used as the mapping unit, and an appropriate number of Strips on a Stripe are selected in parallel according to the performance requirement, dynamically providing appropriate parallelism; this resolves the conflict between dynamic local parallelism and the sequential-deletion characteristic.
  • FIG. 1 is a flow chart showing the steps of a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of strip partitioning in a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of subdivision of strip partitioning in a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of dynamic mapping of a storage space in a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of access competition generation in a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of access competition avoidance in a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of dynamic mapping of storage space after access competition avoidance in a dynamic local parallel data layout method for continuous data storage according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a dynamic local parallel data layout apparatus for continuous data storage according to an embodiment of the present invention.
  • the dynamic local parallel data layout method for continuous data storage in the embodiment of the present invention may specifically include the following steps:
  • Step 101: striping, dividing each of the N hard disks into l equal-sized storage blocks (l × N blocks in total);
  • Step 102: sense the performance requirement of the storage application;
  • Step 103 Determine whether there is access competition, if yes, proceed to step 104, otherwise perform step 105;
  • Step 104: perform access competition avoidance to optimize the dynamic mapping mechanism of the storage space;
  • Step 105 Perform allocation management on the storage space by using a dynamic mapping mechanism.
  • storage space dynamic mapping is the core
  • performance requirement perception is the basis of storage space dynamic mapping
  • stripe partitioning is the premise of storage space dynamic mapping
  • access competition avoidance is the optimization and completeness of storage space dynamic mapping.
  • the strip is divided, and the specific steps are:
  • Step 1.1: divide each of the N hard disks into l storage blocks of equal size, giving l × N storage blocks in total; here l is greater than or equal to 1, and N is greater than or equal to 3;
  • Step 1.2: each data block and parity block from step 1.1 is divided into M equal-sized sub-blocks, each consisting of several consecutive storage areas (such as disk sectors); these are called data sub-blocks (denoted Strip) and parity sub-blocks (denoted PStrip) respectively;
  • Step 1.3: within each strip, the sub-blocks with the same starting address form a sub-strip (denoted Stripe), and the Strips in a sub-strip are XORed to generate the PStrip of that sub-strip.
  • the N storage blocks having the same starting address across the hard disks form a strip, giving l strips in total; each strip includes one parity storage block and N-1 data storage blocks; the parity storage block is referred to as the parity block, and the data storage block as the data block;
  • each strip includes M sub-strips of the same size; the parity sub-block PStrip_m of sub-strip m is generated by the XOR of its N-1 data sub-blocks, as in equation (1):

    PStrip_m = Strip_m,0 ⊕ Strip_m,1 ⊕ … ⊕ Strip_m,N-2    (1)
  • the operation idea of the dynamic mapping of the storage space is:
  • the disk array storage space is allocated and managed by a dynamic mapping mechanism between logical block addresses and physical block addresses, and the write data received by the disk array layer can be dynamically mapped to different numbers of hard disks; that is, according to the load's performance requirement parameter k, storage space with k-disk parallelism is dynamically allocated, where k is the number of hard disks that must write data concurrently, not counting the hard disk holding the parity data of the written data; when the load is at its minimum, the data is mapped to, and written on, a single hard disk; when the load is at its maximum, the data is mapped to, and written on, all hard disks except the one holding the parity data;
  • Strip linked list: a one-way circular linked list consisting of all strips;
  • CurBank: the strip currently being mapped, called the current mapping strip; its initial value is strip 0;
  • NextBank: the next strip to be mapped, adjacent in number to CurBank; its initial value is strip 1;
  • CurStripe: a stripe in CurBank that can be mapped;
  • NextStripe: a stripe in NextBank that can be mapped;
  • the storage space is dynamically mapped, and the specific steps are:
  • Step (1): select the stripe with the most free strips in CurBank as CurStripe;
  • the free strip is a strip that is not mapped
  • Step (2) If the number of free strips in CurStripe is 0, indicating that CurStripe has no free strip mappable, go to step (3), otherwise go to step (5);
  • Step (3): determine whether NextBank has a free strip to map; if not, delete the stored data on NextBank to reclaim its storage space;
  • Step (4): NextBank becomes CurBank and CurStripe is reacquired; NextBank then moves back to the following strip;
  • Step (5) If the number of free strips in CurStripe is not less than k, then sequentially extract k strips from CurStripe, go to step (7), otherwise go to step (6);
  • Step (6) First take all free strips from CurStripe, and then take the remaining free strips from NextStripe to form k free strips. If NextStripe does not have enough free strips, delete the stored data on NextBank, reclaim the storage space, and re-acquire NextStripe;
  • Step (7) obtains k free strips, performs storage space mapping, maps the logical address space to a physical address space having k hard disks parallelism, and records the mapping relationship in the mapping table;
  • mapping table: the hard disk and the offset on that disk are determined from the strip, the sub-strip, and the position within the sub-strip where the Strip is located, and are recorded in the mapping table.
  • the mapping table is an important part of the metadata; it is stored at the end of each working disk with a version number that grows monotonically, and when power is restored the disk array loads the version with the largest number.
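Steps (1) through (7) can be sketched as a simplified in-memory model. The class and function names, the counts, and the handling of reclamation are illustrative assumptions (the patent's on-disk mapping-table format is not modeled, and the corner case where NextStripe also runs short after one reclaim is omitted for brevity):

```python
STRIPES_PER_BANK = 4   # M sub-strips per strip/bank (illustrative)
STRIPS_PER_STRIPE = 4  # data Strips per Stripe, i.e. N - 1 (illustrative)

class Bank:
    """One strip ('bank'): N same-address blocks, divided into stripes."""
    def __init__(self):
        # number of free (unmapped) Strips remaining in each stripe
        self.free = [STRIPS_PER_STRIPE] * STRIPES_PER_BANK

    def cur_stripe(self):
        # step (1): the stripe with the most free strips becomes CurStripe
        return max(range(STRIPES_PER_BANK), key=lambda s: self.free[s])

    def has_free(self):
        return any(self.free)

    def reclaim(self):
        # delete the stored data on this bank, freeing all of its strips
        self.free = [STRIPS_PER_STRIPE] * STRIPES_PER_BANK

def allocate(banks, state, k):
    """Steps (1)-(7): allocate k strips with k-disk parallelism.
    Returns mapping-table entries as (bank, stripe, strip_count) parts."""
    cur = state["cur"]
    nxt = (cur + 1) % len(banks)
    cs = banks[cur].cur_stripe()
    if banks[cur].free[cs] == 0:          # steps (2)-(4)
        if not banks[nxt].has_free():     # step (3): reclaim NextBank
            banks[nxt].reclaim()
        state["cur"] = cur = nxt          # step (4): NextBank -> CurBank
        nxt = (cur + 1) % len(banks)
        cs = banks[cur].cur_stripe()
    take = min(k, banks[cur].free[cs])    # steps (5)-(6)
    banks[cur].free[cs] -= take
    parts = [(cur, cs, take)]
    if take < k:                          # step (6): top up from NextStripe
        if not banks[nxt].has_free():
            banks[nxt].reclaim()
        ns = banks[nxt].cur_stripe()
        banks[nxt].free[ns] -= k - take
        parts.append((nxt, ns, k - take))
    return parts                          # step (7): record in mapping table

banks = [Bank() for _ in range(3)]
state = {"cur": 0}
parts = allocate(banks, state, 3)  # 3 strips from bank 0, stripe 0
```

Because banks are consumed and reclaimed in circular order, the oldest data is always the next to be overwritten, which matches the roughly chronological deletion order the layout promises.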
  • the access competition avoids, and the specific steps are:
  • Step 1): when DPPDL selects strips spanning 2 strips for storage space mapping, it first takes all free strips from CurStripe and then takes the remaining free strips from NextStripe, forming k+1 free strips in total; if NextStripe does not have enough free strips, the stored data on NextBank is deleted, the storage space is reclaimed, and NextStripe is reacquired;
  • Step 2): DPPDL performs an access competition check: if there is no access competition, or the last strip is the one causing it, the last strip is deleted; otherwise the strip causing the access competition is deleted; finally, k strips that cause no access competition and can be accessed concurrently are obtained;
  • the access competition check refers to whether two sub-blocks of the same hard disk are concurrently accessed.
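The check-and-delete rule above can be sketched as follows. Representing candidate strips as (bank, disk) pairs and passing in the parity disks explicitly are illustrative simplifications, not the patent's data structures:

```python
def prune(candidates, parity_disks):
    """candidates: k+1 candidate strips as (bank, disk) pairs; parity_disks:
    the disks holding the parity blocks of the banks being written.
    Returns k strips that can be accessed concurrently without competition."""
    seen = set(parity_disks)
    for i, (bank, disk) in enumerate(candidates):
        if disk in seen:
            # two sub-blocks of the same hard disk would be accessed
            # concurrently: delete the strip causing the competition
            return candidates[:i] + candidates[i + 1:]
        seen.add(disk)
    # no competition (or only the last strip causes it): delete the last strip
    return candidates[:-1]

# k = 3: four candidates across two banks; the parity of bank 1 sits on
# disk 3, so the candidate on disk 3 is the one deleted.
chosen = prune([(0, 0), (0, 1), (1, 2), (1, 3)], parity_disks={3})
```

Selecting k+1 candidates up front means one deletion always leaves exactly k competition-free strips, so no second pass is needed.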
  • the performance requirement is perceived, specifically:
  • Continuous data storage applications are not very sensitive to response time but require a stable data transfer rate, so the data transfer rate is used as the performance requirement indicator;
  • the data transmission rate requirement is specifically:
  • Step A: DPPDL collects statistics on the history of the disk array layer's I/O request queue;
  • Step B: DPPDL performs analysis and prediction;
  • the data transfer rate of the perceived load demand is expressed as:

    V = α · (Σ_{n=1..num} r_n.len) / T    (2)

  • where num is the number of I/O requests arriving in the time window T, α is the performance coefficient, which can be between 1.2 and 1.5, and the I/O requests r_n in formula (2) come from the request queue of the RAID layer;
  • after DPPDL senses the data transmission rate demanded by the load, it determines the number of hard disks that must concurrently write data according to the data transmission rates that different degrees of hard-disk parallelism can provide in the actual application scenario.
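The rate sensing of equation (2) and the subsequent choice of k can be sketched in Python. The request fields and the per-parallelism rate table are illustrative assumptions; in a real deployment the rates would be measured on the actual disks:

```python
def demanded_rate(requests, T, alpha=1.2):
    """Equation (2): data transfer rate demanded by the load over a time
    window of T seconds, scaled by the performance coefficient alpha
    (between 1.2 and 1.5)."""
    total = sum(r["len"] for r in requests)  # sum of request lengths r_n.len
    return alpha * total / T

def choose_parallelism(rate, rate_per_k):
    """Pick the smallest number k of concurrently written disks whose
    measured rate (rate_per_k[k]) covers the demanded rate."""
    for k in sorted(rate_per_k):
        if rate_per_k[k] >= rate:
            return k
    return max(rate_per_k)  # demand exceeds capacity: use all data disks

# 5-second window with three I/O requests (lengths in bytes).
reqs = [{"ta": 0.1, "pos": 0, "len": 40_000_000},
        {"ta": 2.0, "pos": 8_000, "len": 60_000_000},
        {"ta": 4.5, "pos": 16_000, "len": 50_000_000}]
rate = demanded_rate(reqs, T=5)  # 1.2 * 150 MB / 5 s = 36 MB/s
k = choose_parallelism(rate, {1: 30e6, 2: 60e6, 3: 90e6, 4: 120e6})  # k = 2
```

Choosing the smallest sufficient k is what lets the remaining disks stay on standby, which is the source of the layout's energy savings.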
  • the parity sub-block of a sub-strip is generated by the XOR operation of the sub-strip's four data sub-blocks; for example, the parity sub-block of sub-strip 1 in strip 0 is generated by the XOR of its four data sub-blocks.
  • DPPDL uses dynamic mapping mechanism to allocate and manage storage space.
  • Load A requires two disks (excluding the disk where the check data is located) in parallel, that is, concurrently writes data to disk 0 and disk 1, for three time periods, namely, time period 1, time period 2, and time period 3;
  • Load C requires 3 disks (excluding the disk where the parity data is located) in parallel, that is, it concurrently writes data to disk 2, disk 3, and disk 0 for 3 time periods, namely time period 7, time period 8, and time period 9;
  • Load D requires 1 disk (not including the disk where the parity data is located) in parallel for 5 time periods; data is written to disk 0 during time period 11 and time period 12, and to disk 1 during time period 13 and time period 14.
  • Load E requires 2 disks (excluding the disk where the parity data is located) in parallel, that is, it concurrently writes data to disk 1 and disk 2 for two time periods, namely time period 15 and time period 16.
  • the data is preferentially written to the current working disk.
  • when the current working disks can meet the performance requirement, the standby disks need not be accessed; this preserves good local parallelism while storage space with appropriate parallelism is allocated dynamically.
  • when the write load is at its minimum (such as load D), data is written to only 1 disk; when the write load is at its maximum (such as load B), data is written to 4 disks concurrently.
  • DPPDL provides a large elastic range of parallelism to meet the performance needs of different loads, and has very high energy efficiency.
  • DPPDL has access competition problems when using the above dynamic mapping mechanism to allocate and manage storage space.
  • FIG. 5 is a schematic diagram of the access competition generated in the embodiment.
  • the process of access competition avoidance is as follows:
  • load C7 needs to concurrently access three data sub-blocks spanning two strips: (1) four data sub-blocks are selected first (at the dotted line in FIG. 6); (2) an access competition check is performed, which finds that the data sub-block on disk 3 (in the dashed box marked ⊕ in FIG. 6) and the parity sub-block of strip 1 are both located on disk 3, which would cause access competition on disk 3, so that data sub-block is deleted; (3) the remaining 3 data sub-blocks of load C are written in parallel (in the dashed box labeled C7 in FIG. 6). Finally, three data sub-blocks that cause no access competition and can be accessed concurrently are obtained.
  • DPPDL needs to sense the performance requirements of the load and then dynamically adjust the number of parallel disks to provide the right performance for higher energy efficiency.
  • This example uses equation (2) to sense the data transmission rate demanded by the load, where α is 1.2 and the window time T is 5 seconds.
  • Embodiments of the present disclosure can be implemented as a device that performs the desired configuration using any suitable hardware, firmware, software, or any combination thereof.
  • FIG. 8 schematically illustrates an exemplary apparatus 1300 that can be used to implement various embodiments described in this application.
  • FIG. 8 illustrates an exemplary apparatus 1300 having one or more processors 1302, a control module (chipset) 1304 coupled to at least one of the processor(s) 1302, a memory 1306 coupled to the control module 1304, a non-volatile memory (NVM)/storage device 1308 coupled to the control module 1304, one or more input/output devices 1310 coupled to the control module 1304, and a network interface 1312 coupled to the control module 1304.
  • Processor 1302 can include one or more single or multi-core processors, and processor 1302 can comprise any combination of general purpose or special purpose processors (eg, graphics processors, application processors, baseband processors, etc.).
  • the device 1300 can be used as a server or the like of the transcoding terminal described in the embodiment of the present application.
  • apparatus 1300 can include one or more computer readable media (eg, memory 1306 or NVM/storage device 1308) having instructions 1314 and, in conjunction with the one or more computer readable media, configured to The one or more processors 1302 that execute the instructions 1314 to implement the modules to perform the actions described in this disclosure.
  • control module 1304 can include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1302 and/or to any suitable device or component in communication with control module 1304.
  • Control module 1304 can include a memory controller module to provide an interface to memory 1306.
  • the memory controller module can be a hardware module, a software module, and/or a firmware module.
  • Memory 1306 can be used, for example, to load and store data and/or instructions 1314 for device 1300.
  • memory 1306 can include any suitable volatile memory, such as a suitable DRAM.
  • memory 1306 can include double data rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
  • control module 1304 can include one or more input/output controllers to provide an interface to NVM/storage device 1308 and input/output device(s) 1310.
  • NVM/storage device 1308 can be used to store data and/or instructions 1314.
  • NVM/storage device 1308 may comprise any suitable non-volatile memory (eg, flash memory) and/or may include any suitable non-volatile storage device(s) (eg, one or more hard disk drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • NVM/storage device 1308 can include a storage resource physically part of a device on which device 1300 is installed, or it can be accessed by the device without necessarily being part of the device.
  • the NVM/storage device 1308 can be accessed via the network via the input/output device(s) 1310.
  • the input/output device(s) 1310 can provide an interface to the device 1300 to communicate with any other suitable device, and the input/output device 1310 can include a communication component, an audio component, a sensor component, and the like.
  • Network interface 1312 can provide an interface for device 1300 to communicate over one or more networks, and device 1300 can wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G, or 3G, or a combination thereof.
  • At least one of the processor(s) 1302 can be packaged with the logic of one or more controllers (eg, memory controller modules) of the control module 1304.
  • at least one of the processor(s) 1302 can be packaged with the logic of one or more controllers of the control module 1304 to form a system in package (SiP).
  • at least one of the processor(s) 1302 can be integrated on the same die as the logic of one or more controllers of the control module 1304.
  • at least one of the processor(s) 1302 can be integrated with the logic of one or more controllers of the control module 1304 on the same die to form a system on a chip (SoC).
  • Device 1300 can be, but is not limited to, a terminal device such as a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.).
  • Device 1300 can have more or fewer components and/or different architectures.
  • Device 1300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application specific integrated circuit (ASIC), and speakers.
  • LCD liquid crystal display
  • ASIC application specific integrated circuit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

Disclosed are a method and a device for dynamic partial parallel data layout (DPPDL) for continuous data storage, applicable to continuous data storage. The method is implemented through the technical solutions of strip division, dynamic storage space mapping, access contention avoidance, and performance requirement detection: first, the performance requirement of the storage application is acquired through performance requirement detection; a dynamic storage space mapping mechanism is then executed, which dynamically allocates storage space with an appropriate degree of disk parallelism according to the performance requirement; finally, the dynamic storage space mapping mechanism is optimized through access contention avoidance. Dynamic storage space mapping is the core component; performance requirement detection is the basis of dynamic storage space mapping; strip division is a precondition for dynamic storage space mapping; and access contention avoidance optimizes and completes dynamic storage space mapping. DPPDL adopts a dynamic partial parallel strategy and dynamically allocates storage space with an appropriate degree of parallelism according to the performance requirements of different loads, so that energy is saved while most hard disks remain idle for long periods and appropriate partial parallelism is provided dynamically, achieving better usability and higher energy-saving efficiency.
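The workflow summarized in the abstract — detect the performance requirement of the current load, then allocate the next strip of storage with just enough disk parallelism to satisfy it — can be sketched as follows. This is an illustrative approximation rather than the patented implementation; the per-disk bandwidth figure, array size, rotation scheme, and all function names are assumptions introduced for the example.

```python
# Sketch of the DPPDL idea: performance-requirement detection chooses a
# degree of parallelism, and dynamic storage space mapping places each
# strip on just that many disks. All parameters here are assumptions.

PER_DISK_MBPS = 100   # assumed sustained bandwidth of one disk
NUM_DISKS = 8         # assumed number of disks in the array

def required_parallelism(demand_mbps: float) -> int:
    """Map a detected bandwidth requirement to a disk count."""
    disks = -(-int(demand_mbps) // PER_DISK_MBPS)   # ceiling division
    return max(1, min(NUM_DISKS, disks))            # at least 1, at most all

def allocate_strip(strip_id: int, demand_mbps: float) -> dict:
    """Dynamic storage-space mapping: place one strip on a disk subset.

    Rotating the starting disk between strips spreads wear and reduces
    access contention between consecutively allocated strips.
    """
    k = required_parallelism(demand_mbps)
    start = (strip_id * k) % NUM_DISKS
    disks = [(start + i) % NUM_DISKS for i in range(k)]
    return {"strip": strip_id, "parallelism": k, "disks": disks}

# A light 150 MB/s load is mapped onto 2 disks, so the other 6 can idle;
# a heavy 750 MB/s load activates the whole array.
print(allocate_strip(0, 150))
print(allocate_strip(1, 750))
```

Under this scheme the energy saving falls out naturally: disks outside the subset chosen for the currently active strips can stay in standby until a heavier load raises the detected requirement.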
PCT/CN2017/092403 2016-07-26 2017-07-10 Method and device for dynamic partial parallel data layout for continuous data storage Ceased WO2018019119A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610594843.8A CN106293511B (zh) 2016-07-26 2016-07-26 Dynamic partial parallel data layout method for continuous data storage
CN201610594843.8 2016-07-26

Publications (1)

Publication Number Publication Date
WO2018019119A1 true WO2018019119A1 (fr) 2018-02-01

Family

ID=57652864

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092403 Ceased WO2018019119A1 (fr) 2016-07-26 2017-07-10 Method and device for dynamic partial parallel data layout for continuous data storage

Country Status (2)

Country Link
CN (1) CN106293511B (fr)
WO (1) WO2018019119A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110858122A (zh) * 2018-08-23 2020-03-03 杭州海康威视系统技术有限公司 Method and apparatus for storing data
CN111124296A (zh) * 2019-12-12 2020-05-08 北京浪潮数据技术有限公司 Method, apparatus, device, and storage medium for writing data to a solid-state drive
CN111338782A (zh) * 2020-03-06 2020-06-26 中国科学技术大学 Contention-aware node allocation method for shared burst data buffers
CN115599315A (zh) * 2022-12-14 2023-01-13 阿里巴巴(中国)有限公司(Cn) Data processing method, apparatus, system, device, and medium
CN116027990A (zh) * 2023-03-29 2023-04-28 苏州浪潮智能科技有限公司 RAID card and data access method, system, and storage medium therefor
CN116301662A (zh) * 2023-05-12 2023-06-23 合肥联宝信息技术有限公司 Solid-state drive power management method and solid-state drive
CN117499442A (zh) * 2023-12-27 2024-02-02 天津数智物联科技有限公司 Efficient data processing method for an IoT energy monitoring device
CN120215842A (zh) * 2025-05-28 2025-06-27 苏州元脑智能科技有限公司 Block coding determination method for a storage system, and storage system

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109725826B (zh) * 2017-10-27 2022-05-24 伊姆西Ip控股有限责任公司 Method, device, and computer-readable medium for managing a storage system
CN108073363B (zh) * 2017-12-28 2021-10-01 深圳市得一微电子有限责任公司 Data storage method, storage device, and computer-readable storage medium
CN108519926B (zh) * 2018-03-31 2020-12-29 深圳忆联信息系统有限公司 Adaptive RAID grouping calculation method and apparatus
WO2020010604A1 (fr) * 2018-07-13 2020-01-16 华为技术有限公司 Method and device for reading data from an SSD
CN109933570B (zh) * 2019-03-15 2020-02-07 中山大学 Metadata management method, system, and medium
CN110308875B (zh) * 2019-06-27 2023-07-14 深信服科技股份有限公司 Data read/write method, apparatus, device, and computer-readable storage medium
CN115202576B (zh) * 2022-06-22 2025-09-09 成都飞机工业(集团)有限责任公司 Data read/write method based on a tiered shelf-type storage structure
CN117075821B (zh) * 2023-10-13 2024-01-16 杭州优云科技有限公司 Distributed storage method, apparatus, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461914A (zh) * 2014-11-10 2015-03-25 浪潮电子信息产业股份有限公司 Adaptive optimization method for thin provisioning
CN105204785A (zh) * 2015-10-15 2015-12-30 中国科学技术大学 Disk array write mode selection method based on disk I/O queues
CN105426427A (zh) * 2015-11-04 2016-03-23 国家计算机网络与信息安全管理中心 Replica implementation method for MPP database clusters based on RAID 0 storage

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080276041A1 (en) * 2007-05-01 2008-11-06 International Business Machines Corporation Data storage array scaling method and system with minimal data movement
CN101976174B (zh) * 2010-08-19 2012-01-25 北京同有飞骥科技股份有限公司 Construction method for an energy-saving disk array with vertically arranged distributed parity

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461914A (zh) * 2014-11-10 2015-03-25 浪潮电子信息产业股份有限公司 Adaptive optimization method for thin provisioning
CN105204785A (zh) * 2015-10-15 2015-12-30 中国科学技术大学 Disk array write mode selection method based on disk I/O queues
CN105426427A (zh) * 2015-11-04 2016-03-23 国家计算机网络与信息安全管理中心 Replica implementation method for MPP database clusters based on RAID 0 storage

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, YUANZHANG ET AL.: "S-RAID 5: An Energy-Saving RAID for Sequential Access Based Applications", CHINESE JOURNAL OF COMPUTERS, vol. 36, no. 6, 30 June 2013 (2013-06-30) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110858122A (zh) * 2018-08-23 2020-03-03 杭州海康威视系统技术有限公司 Method and apparatus for storing data
CN110858122B (zh) * 2018-08-23 2023-10-20 杭州海康威视系统技术有限公司 Method and apparatus for storing data
CN111124296A (zh) * 2019-12-12 2020-05-08 北京浪潮数据技术有限公司 Method, apparatus, device, and storage medium for writing data to a solid-state drive
CN111338782A (zh) * 2020-03-06 2020-06-26 中国科学技术大学 Contention-aware node allocation method for shared burst data buffers
CN115599315A (zh) * 2022-12-14 2023-01-13 阿里巴巴(中国)有限公司(Cn) Data processing method, apparatus, system, device, and medium
CN116027990A (zh) * 2023-03-29 2023-04-28 苏州浪潮智能科技有限公司 RAID card and data access method, system, and storage medium therefor
CN116301662A (zh) * 2023-05-12 2023-06-23 合肥联宝信息技术有限公司 Solid-state drive power management method and solid-state drive
CN116301662B (zh) * 2023-05-12 2023-08-01 合肥联宝信息技术有限公司 Solid-state drive power management method and solid-state drive
CN117499442A (zh) * 2023-12-27 2024-02-02 天津数智物联科技有限公司 Efficient data processing method for an IoT energy monitoring device
CN117499442B (zh) * 2023-12-27 2024-05-10 天津数智物联科技有限公司 Efficient data processing method for an IoT energy monitoring device
CN120215842A (zh) * 2025-05-28 2025-06-27 苏州元脑智能科技有限公司 Block coding determination method for a storage system, and storage system
CN120215842B (zh) * 2025-05-28 2025-08-08 苏州元脑智能科技有限公司 Block coding determination method for a storage system, and storage system

Also Published As

Publication number Publication date
CN106293511A (zh) 2017-01-04
CN106293511B (zh) 2018-12-04

Similar Documents

Publication Publication Date Title
WO2018019119A1 (fr) Procédé et dispositif de disposition de données parallèles partielles dynamique servant à la mémorisation continue de données
US12216929B2 (en) Storage system, memory management method, and management node
US11880579B2 (en) Data migration method and apparatus
US9940261B2 (en) Zoning of logical to physical data address translation tables with parallelized log list replay
US10019352B2 (en) Systems and methods for adaptive reserve storage
JP6007329B2 (ja) ストレージコントローラ、ストレージ装置、ストレージシステム
KR102170539B1 (ko) 저장 장치에 의해 데이터를 저장하기 위한 방법 및 저장 장치
JP5944587B2 (ja) 計算機システム及び制御方法
US9612758B1 (en) Performing a pre-warm-up procedure via intelligently forecasting as to when a host computer will access certain host data
CN104484130A (zh) 一种横向扩展存储系统的构建方法
CN102982182B (zh) 一种数据存储规划方法及装置
EP3671423B1 (fr) Procédé d'accès à des données et réseau de stockage
CN103150128A (zh) 基于ssd和磁盘的可靠混合存储系统实现方法
CN103713861A (zh) 一种基于层次划分的文件处理方法及系统
CN109739696B (zh) 一种双控存储阵列固态硬盘缓存加速方法
US10853252B2 (en) Performance of read operations by coordinating read cache management and auto-tiering
US11561695B1 (en) Using drive compression in uncompressed tier
CN106775453B (zh) 一种混合存储阵列的构建方法
KR20150127434A (ko) 메모리제어장치 및 메모리제어장치의 동작 방법
US11947803B2 (en) Effective utilization of different drive capacities
CN117348789A (zh) 数据访问方法、存储设备、硬盘、存储系统及存储介质
US20250245149A1 (en) Generating a logical to physical data structure for a solid state drive using sectors of different sizes
US20250315163A1 (en) System and Method for Machine Learning-Based Forecasting of Active Data Sets in Storage Systems
TWI522797B (zh) 容錯與節能分散式儲存系統及方法
CN119225651A (zh) 固态硬盘使用方法、装置和设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17833422

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17833422

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 250719)

122 Ep: pct application non-entry in european phase

Ref document number: 17833422

Country of ref document: EP

Kind code of ref document: A1