Disclosure of Invention
The present application provides a data storage method and a computing device, which implement zeroing of a storage area by modifying the management identifier of a pre-allocated target storage area, and can greatly reduce the time an IO interface is occupied during the pre-allocation process.
In a first aspect, the present application provides a data storage method. The method can be applied to a computing device and includes: receiving a pre-allocation request from an application program, where the pre-allocation request includes a file identifier; pre-allocating a target storage area in a storage device for the file corresponding to the file identifier; and instructing a controller of the storage device to update the value of a management identifier corresponding to the target storage area to a first target value, where the first target value indicates that the target storage area has not been written with data.
In this scheme, during the pre-allocation process, the controller of the storage device modifies the management identifier corresponding to the target storage area, thereby zeroing the target storage area, so that the time the IO interface of the storage device is occupied during pre-allocation can be reduced.
In a possible implementation manner of the first aspect, when it is determined that target data corresponding to the file needs to be written to the storage device, if the size of the target data is greater than a third target value, the target data is written to the target storage area in N batches, where the amount of data written to the target storage area in each batch is less than or equal to the third target value.
In the above scheme, when the target data exceeds the third target value, the target data is written to the storage device in multiple batches. By capping the amount of data written in each batch, the time the IO interface of the storage device is occupied by a single write is reduced, which avoids the situation where a large write occupies the IO interface for so long that other IO requests are blocked and the computing device stalls.
In a possible implementation manner of the first aspect, the method further includes: when the target data is written to the target storage area, updating the value of the management identifier of the target storage area to a second target value, where the second target value indicates that the target storage area has been written with data.
In the above scheme, after the data is written to the storage device, the controller modifies the management identifier of the target storage area, so that the number of calls the data writing process makes to the IO interface of the storage device can be reduced.
In a possible implementation manner of the first aspect, the target storage area includes at least one storage block, and pre-allocating the target storage area in the storage device for the file includes adding metadata of one or more data blocks to the metadata corresponding to the file, where the metadata of each data block includes the physical address of that data block, and the physical address of each data block maps to one or more storage blocks.
In this scheme, the target storage area is associated with the file through metadata, so that the data of the file can conveniently be read from or written to the target storage area.
In a possible implementation manner of the first aspect, the storage device includes a hard disk, and the storage block includes a sector.
In the above scheme, the pre-allocation process described in the first aspect can be implemented on a hard disk, so that the time the pre-allocation process occupies the IO interface of the hard disk is reduced, and the number of times the IO interface of the hard disk is called when data is written to the hard disk is reduced.
In a second aspect, the present application also provides a data storage apparatus. The data storage apparatus can be applied to a computing device and includes a receiving module and a pre-allocation module.
The receiving module is configured to receive a pre-allocation request from an application program, where the pre-allocation request includes a file identifier.
The pre-allocation module is configured to pre-allocate a target storage area in a storage device for the file corresponding to the file identifier, and to instruct a controller of the storage device to update the value of a management identifier corresponding to the target storage area to a first target value, where the first target value indicates that the target storage area has not been written with data.
In a possible implementation manner of the second aspect, the data storage apparatus further includes a storage module, where the storage module is configured to: when it is determined that target data corresponding to the file needs to be written to the storage device, if the size of the target data is greater than a third target value, write the target data to the target storage area in N batches, where the amount of data written to the target storage area in each batch is less than or equal to the third target value.
In a possible implementation manner of the second aspect, when the target data is written to the target storage area, the controller of the storage device updates the value of the management identifier of the target storage area to a second target value, where the second target value indicates that the target storage area has been written with data.
In a possible implementation manner of the second aspect, the target storage area includes at least one storage block, and the pre-allocation module is specifically configured to add metadata of one or more data blocks to the metadata corresponding to the file, where the metadata of each data block includes the physical address of that data block, and the physical address of each data block maps to one or more storage blocks.
In a possible implementation manner of the second aspect, the storage device includes a hard disk, and the storage block includes a sector.
In a third aspect, the present application also provides a computing device. The computing device comprises a processor and a memory, the processor being configured to execute a computer program stored in the memory to implement the data storage method provided by the first aspect or any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present application also provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the data storage method provided by the above first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the data storage method provided by the first aspect or any one of the possible implementations of the first aspect.
Any of the apparatuses, computing devices, computer storage media, or computer program products provided above is configured to perform the corresponding method provided above; therefore, for the beneficial effects it can achieve, refer to the beneficial effects of the corresponding scheme in the corresponding method, and details are not repeated here.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be described below with reference to the accompanying drawings.
In describing embodiments of the present application, words such as "exemplary," "such as" or "for example" are used to mean serving as examples, illustrations or explanations. Any embodiment or design described herein as "exemplary," "such as" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary," "such as" or "for example," etc., is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate that only A exists, only B exists, or both A and B exist. In addition, unless otherwise indicated, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1a is a schematic diagram of a data storage scenario in a computing device according to an embodiment of the present application. As shown in Fig. 1a, the scenario includes an application layer, a kernel layer, and a device layer.
The application layer may include an application buffer (application buffer, APP buffer) and an IO buffer (IO buffer). The IO buffers may include, for example, a C language standard library IO buffer (clib buffer).
The kernel layer may include a page cache, IO queues, and IO drivers.
The device layer may include a storage device and a cache of the storage device. The APP buffer, the IO buffer, and the page cache are located in different storage areas of the memory of the computing device.
The application program stores data through a control flow and a data flow. Specifically, as shown in Fig. 1a, an application program may write data stored in the APP buffer into the IO buffer for buffering by calling the fwrite interface. Then, the application program can write the data buffered in the IO buffer into the device layer by calling the fclose interface, or write the data buffered in the IO buffer into the page cache by calling the fflush interface (or flush interface), so as to ensure that the data is not lost.
In addition, as shown in Fig. 1a, the application program may also call the write interface to write the data stored in the APP buffer into the page cache of the kernel layer for caching. The data cached in the page cache is then written to the device layer when the kernel calls the fsync interface. When the kernel calls the fsync interface to write data, a corresponding IO request is generated and added to the IO queue; an IO scheduling algorithm dispatches the IO requests in the IO queue to the IO driver according to a preset IO scheduling policy, and the IO driver writes the data into the device layer according to the IO request. When writing data into the device layer, taking a hard disk as an example of the storage device, the data may first be written into the disk cache, and the data in the disk cache may then be written onto the disk.
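For clarity, the two write paths above can be illustrated with the standard C library and POSIX calls they name. The following is a minimal user-space sketch, with file names chosen only for illustration:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *data = "example payload";

        /* Path 1: buffered IO. fwrite() fills the C library IO buffer;
           fflush() pushes the buffered data into the kernel page cache;
           fclose() flushes anything remaining and closes the stream. */
        FILE *fp = fopen("buffered.dat", "w");
        if (fp != NULL) {
            fwrite(data, 1, strlen(data), fp);
            fflush(fp);                      /* IO buffer -> page cache */
            fclose(fp);
        }

        /* Path 2: system calls. write() copies into the page cache;
           fsync() forces the cached data out to the device layer. */
        int fd = open("direct.dat", O_WRONLY | O_CREAT, 0644);
        if (fd >= 0) {
            write(fd, data, strlen(data));   /* APP buffer -> page cache */
            fsync(fd);                       /* page cache -> device layer */
            close(fd);
        }
        return 0;
    }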
The process by which a database stores data is described below, taking a database system as an example of the application program.
Fig. 1b is a schematic diagram of a scenario in which a database system according to an embodiment of the present application stores data. The database system in Fig. 1b may include a table flushing thread group, a main thread group, and a redo log thread group.
Each main thread in the main thread group may be used to execute a user's request, cache data from a data storage area of the storage device into the page cache of the kernel, or perform modification operations on the data in the data storage area, where the modification operations include insertion, update, deletion, and the like. Each thread in the table flushing thread group may be used to write a table file of the database system from the kernel's page cache to the storage device, and each redo log thread may be used to record a transaction log of a user's modification operations on data and write the transaction log to a log storage area of the storage device.
When each thread reads data from or writes data to the storage device, it needs to call an interface provided by the file system of the kernel. In a specific application, the file systems may be of different types, and may include, for example but not limited to, file systems that manage data files, such as ext4 and f2fs, and file systems that manage log files, such as xfs.
In order to ensure that the data of a file is contiguous on the storage device, it is often necessary to pre-allocate a storage area for the file before writing the data corresponding to the file to the storage device. For example, a table flushing thread and a redo log thread may pre-allocate a storage area for a file through the file system before writing the file's data to the storage device.
To prevent applications from reading dirty data from pre-allocated storage areas, file pre-allocation typically involves a zeroing operation on the pre-allocated target storage area. Two schemes for zeroing a pre-allocated target storage area are described below.
In scheme one, the application program actually writes zeros to the target storage area pre-allocated by the file system through the write interface and the fsync interface of the file system, thereby zeroing the target storage area. Specifically, a thread of the application may call the write interface provided by the file system to write all-zero data from the buffer of the database system into the page cache of the kernel, and the application may then call the fsync interface of the file system to write the all-zero data from the page cache to the pre-allocated target storage area. As shown in Fig. 2a, during the pre-allocation process, each storage block included in the target storage area stores data of 0, and during the data flushing process, actual data is written into each storage block included in the target storage area (shown as actual data in Fig. 2a).
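A minimal sketch of scheme one is given below; the 4 KiB block size and the helper name zero_fill are illustrative assumptions, not part of the scheme itself:

    #include <string.h>
    #include <unistd.h>

    /* Writes `length` bytes of zeros starting at `offset`, then forces
       them to the device: the "actual writing of 0" of scheme one. */
    static int zero_fill(int fd, off_t offset, size_t length) {
        char buf[4096];
        memset(buf, 0, sizeof(buf));                 /* all-zero block */
        while (length > 0) {
            size_t n = length < sizeof(buf) ? length : sizeof(buf);
            ssize_t w = pwrite(fd, buf, n, offset);  /* zeros -> page cache */
            if (w <= 0)
                return -1;
            offset += w;
            length -= (size_t)w;
        }
        return fsync(fd);                    /* page cache -> target area */
    }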
During the pre-allocation process, the fsync interface calls the IO interface to write data into the pre-allocated target storage area; if the amount of data is large, the fsync interface occupies the IO interface for a long time, so the pre-allocation process takes a long time. When multiple threads write data to the same storage device, if the fsync interface occupies the IO interface for a long time while processing the IO request of one thread, the fsync interface cannot be called by other threads, the IO interface cannot process the IO requests of those threads, and their IO requests are blocked, causing the computing device to stall.
In order to solve the problems caused by actually writing zeros to the storage area during the pre-allocation process of scheme one, scheme two is provided.
In scheme two, the application program modifies the metadata of the file through the fallocate interface of the file system to achieve zeroing of the storage area. In a specific application, the file system of the computing device records metadata for each file. The metadata of each file includes the metadata of one or more data blocks corresponding to the file. The pre-allocated target storage area includes at least one storage block in the storage device, and each data block maps to one or more storage blocks. The file system may read/write the file's data from/to the storage blocks based on the metadata of each data block corresponding to the file. The metadata of a data block may include its physical address, creation time, modification time, and so on.
In this scheme, in order to zero the storage area by modifying the file's metadata, an identification field is added to the metadata of each data block corresponding to the file, where the identification field indicates whether the storage block corresponding to the data block has been written with data. Thus, after a storage area is pre-allocated for a file, zeroing can be achieved by modifying the identification field of each data block in the file's metadata, as shown in Fig. 2b. After the modification, the value of the identification field of each data block indicates that the corresponding storage block has not been written with data. For example, after determining the pre-allocated target storage area, the file system may call the IO interface through the fallocate interface to modify the file's metadata, so that the value of the identification field of each data block mapped by the pre-allocated target storage area is 0, indicating that the storage block corresponding to each data block has not been written with data. In this way, during data reading, if the identification field of a data block indicates that no data has been written, the storage block corresponding to that data block is not read, which prevents the application program from reading the dirty data of the storage block.
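On Linux, the fallocate system call behaves in the way scheme two describes: it allocates blocks for the file and records them in the file's metadata as unwritten, so reads of the pre-allocated range return zeros without any zeros being physically written. A minimal sketch of how an application might invoke it:

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* Pre-allocates `length` bytes at `offset`. With mode 0, the file
       size is extended if needed and the new extents are marked
       unwritten in the file's metadata; no zeros reach the device. */
    static int preallocate(int fd, off_t offset, off_t length) {
        return fallocate(fd, 0, offset, length);
    }

Passing FALLOC_FL_KEEP_SIZE as the mode would allocate the blocks without changing the reported file size; either way, the zeroing is achieved purely by a metadata update.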
Compared with scheme one, scheme two does not actually write data to the pre-allocated target storage area, so the IO interface is occupied for a short time, pre-allocation takes little time, and the computing device is unlikely to stall. However, after pre-allocation is performed according to scheme two, when data is flushed, the file system needs to write the actual data into the storage area and then needs to modify the identification field in the file's metadata again, so that the identification field indicates that data has been written. That is, when a thread writes data to the pre-allocated target storage area through the file system, the file system needs to call the IO interface twice through the fsync interface: once, as shown in Fig. 2b, to modify the identification field included in the file's metadata in the storage device, and once to write the data to the pre-allocated target storage area. Therefore, when data is flushed, the IO interface is called more times and occupied longer.
Based on the problems in the above two schemes, an embodiment of the present application provides a data storage method.
In the data storage method provided by the embodiment of the present application, after a storage area is pre-allocated for a file, the management identifier of the pre-allocated target storage area is modified through the controller of the storage device, thereby zeroing the storage area. The controller of the storage device determines whether a storage area has been written with data through the management identifier of the storage area, and the modified management identifier of the storage area indicates that the storage area has not been written with data. After determining that data has been written into the storage area, the controller of the storage device can modify the management identifier itself, without the file system calling the IO interface to modify it. Thus, when an application program writes data to a storage area through the file system, the file system only needs to call the IO interface once. This embodiment not only reduces the number of IO interface calls and the IO interface occupation time during the pre-allocation process, but also reduces the number of IO interface calls during the data flushing process, thereby reducing the IO interface occupation time during data flushing.
Fig. 3 is a flowchart of a data storage method according to an embodiment of the present application. The method may be applied to a computing device and executed by a processor of the computing device for storing data of files of respective applications in the computing device.
As shown in Fig. 3, the method may include the following S301-S303. Specifically, an embodiment of the present application provides a pre-allocation interface that may be applied to various file systems in the kernel of a computing device; after running on the processor of the computing device, the pre-allocation interface performs the steps of Fig. 3. The steps are described in detail below.
In S301, the pre-allocation interface receives a pre-allocation request from an application program.
In this step, after the application program and the pre-allocation interface run on the processor of the computing device, when a thread of the application program determines that a storage area needs to be pre-allocated for a file, it may call the pre-allocation interface and send a pre-allocation request to it.
The pre-allocation request may include a file identifier and/or a pre-allocated space size, and is used to request that a storage area be pre-allocated for the file corresponding to the file identifier. When the application is the database system shown in Fig. 1b, the file identifier may include the file identifier of a table file and/or the file identifier of a log file in the database system.
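The shape of such a request can be sketched as follows; the struct and field names are hypothetical and serve only to make the two fields concrete:

    #include <stdint.h>

    /* Hypothetical layout of a pre-allocation request (S301). */
    struct prealloc_request {
        uint64_t file_id;      /* file identifier of the table or log file */
        uint64_t space_size;   /* requested pre-allocated space, in bytes;
                                  may be omitted if preset in the interface */
    };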
In S302, the pre-allocation interface pre-allocates a target storage area in the storage device for the file corresponding to the file identifier.
In this step, the computing device may include one or more storage devices, and the pre-allocation interface determines, from one of the storage devices of the computing device, a target storage area that matches the pre-allocated space size. The pre-allocated space size may also be preset in the pre-allocation interface. During the pre-allocation process, the target storage area does not store valid data of any application program in the computing device; in other words, the target storage area is an unused area.
After determining the target storage area, the pre-allocation interface pre-allocates the target storage area to the file corresponding to the file identifier in the pre-allocation request. Specifically, the metadata corresponding to the file is updated according to the location information of the target storage area. In a specific application scenario, the target storage area may include at least one storage block in the storage device. Taking a hard disk as an example of the storage device, a storage block may be a sector.
Updating the metadata corresponding to the file according to the location information of the target storage area may include adding metadata of one or more data blocks to the metadata corresponding to the file, where the metadata of each data block includes the physical address of that data block, and the physical address of each data block maps to one or more storage blocks in the target storage area.
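The per-data-block metadata added in S302 might be laid out as below; the struct and field names are assumptions made for illustration:

    #include <stdint.h>

    /* Illustrative per-data-block metadata appended to the file's
       metadata during pre-allocation (S302). */
    struct block_metadata {
        uint64_t physical_addr;   /* physical address: maps the data block
                                     to one or more storage blocks (sectors) */
        uint32_t storage_blocks;  /* number of storage blocks mapped */
        uint64_t create_time;     /* creation time */
        uint64_t modify_time;     /* modification time */
    };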
In S303, the pre-allocation interface instructs the controller of the storage device to update the value of the management identifier corresponding to the target storage area to a first target value, where the first target value indicates that the target storage area has not been written with data.
Taking a hard disk as an example of the storage device, the hard disk includes a plurality of sectors. The controller of the hard disk manages the sectors through their identifiers. The identifier of a sector may include information such as the sector's number, address, and flag bit. The flag bit of a sector marks whether the sector has been written with data. For example, the value of the flag bit may be a first target value indicating that the sector has not been written with data, or a second target value indicating that the sector has been written with data, where the first target value may be 0 and the second target value may be 1.
In this step, the target storage area may include one or more sectors, and the management identifier of the target storage area may include the flag bits of the one or more sectors. A sector corresponding to the target storage area may have been used before pre-allocation and may store invalid data. For the application currently requesting pre-allocation, the invalid data stored in such a sector is dirty data.
In this step, to prevent the application program from reading dirty data, the pre-allocation interface may call the IO interface to send a modification request to the controller of the storage device, instructing the controller to set the value of the flag bit of each sector corresponding to the target storage area to 0. The modification request may include the numbers and/or addresses of the sectors corresponding to the target storage area. After receiving the modification request, the controller of the hard disk sets the flag bit of each sector to 0 according to the number and/or address of each sector.
Among the sectors corresponding to the target storage area, only some sectors may have been written with data. Therefore, in another implementation, the pre-allocation interface may obtain the flag bit of each sector through the controller of the hard disk, determine, according to the flag bit of each sector corresponding to the target storage area, whether each sector has been written with data, and identify the target sectors, that is, the sectors that have been written with data. A modification request is then sent to the controller of the storage device, instructing it to set the flag bits of the target sectors to 0. It will be appreciated that in this case, the modification request may include the numbers and/or addresses of the target sectors.
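The flag-bit handling of S303 can be pictured as follows. The struct and function names are assumptions, and a real controller would expose this through its own command set rather than a C function:

    #include <stddef.h>
    #include <stdint.h>

    struct sector_ident {
        uint64_t number;   /* sector number */
        uint64_t address;  /* sector address */
        uint8_t  flag;     /* 0 = not written (first target value),
                              1 = written (second target value) */
    };

    /* Variant two of S303: clear the flag bit only of target sectors,
       i.e. sectors of the target area already written with data. */
    static void clear_target_sectors(struct sector_ident *sectors, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (sectors[i].flag == 1)  /* dirty: holds stale data */
                sectors[i].flag = 0;   /* now reads as "not written" */
        }
    }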
In this embodiment, after the target data is written into the target storage area, the controller of the storage device sets the value of the flag bit of each written sector of the target storage area to 1.
In the above scheme, on the one hand, when the pre-allocation interface of the file system pre-allocates a storage area for a file of the application program, it instructs, by calling the IO interface, the controller of the storage device to modify the management identifier of the pre-allocated target storage area, thereby zeroing the storage area, which reduces the time the IO interface is occupied during pre-allocation. On the other hand, since the management identifier indicating whether each storage block in the storage device has been written with data is maintained by the controller of the storage device, after the file's data is written to the target storage area, the file system does not need to call the IO interface again to instruct the hard disk controller to modify the management identifier of the storage block, so the number of times the file system calls the IO interface during data flushing can be reduced.
In a computing device, multiple threads may write data to the same storage device. The multiple threads may be different threads of the same application or threads of different applications. When the amount of data written by one thread to the storage device is large, the fsync interface is occupied for a long time, which can block the write tasks of other threads and cause the computing device to stall.
Fig. 4a is a schematic diagram of transaction blocking of a database system according to an embodiment of the present application.
In a database system, when a user requests a plurality of operations on the database system, a main thread of the database system executes the plurality of operations as a transaction, records the redo log corresponding to the transaction through a redo log thread, and writes the redo log to the storage device in real time (i.e., the redo log is flushed to disk).
The process by which a redo log thread flushes the redo log corresponding to a transaction is shown in Fig. 4a: the redo log thread determines whether the remaining capacity of the APP buffer can hold the redo log corresponding to the transaction; if so, the redo log thread writes the redo log into the APP buffer, then enters a commit phase, calls the write interface to write the redo log into the cache of the memory, and calls the fsync interface to write the redo log to the storage device. After the fsync interface writes the redo log to the storage device, the redo log thread group returns a message to the user indicating that the transaction succeeded.
Since the fsync interface can be called by only one thread at a time, it cannot be called by two threads simultaneously. That is, the fsync interface can be called by the next thread, to process that thread's data, only after the IO request of the previous thread has finished executing. Therefore, waiting may occur in the buffer-acquisition phase and the commit phase of a redo log thread, prolonging the time it takes the redo log thread to flush the log and, in turn, blocking the transaction.
The states of three transactions are described below in conjunction with Fig. 4a, taking threads 1 to 3 sequentially calling the fsync interface as an example.
When thread 1 requests to flush the redo log corresponding to transaction 1, the fsync interface is not being called by any other thread. As shown in Fig. 4a, the flush process of thread 1 proceeds normally; thread 1 does not need to wait in the buffer-acquisition phase or the commit phase, the completion time of transaction 1 is not prolonged, and transaction 1 is in a normal state.
When thread 2 requests to flush the redo log corresponding to transaction 2, if the amount of data written by thread 1 is large, the fsync interface is still being called by thread 1, and in the commit phase thread 2 must wait for thread 1's IO request to finish executing before it can call the fsync interface. That is, the commit phase of thread 2 is prolonged, so transaction 2 completes in an abnormal state.
When thread 3 requests to flush the redo log corresponding to transaction 3, if the amount of data written by thread 2 is large, the storage area of the buffer is exhausted and the fsync interface is being called by thread 2. In this case, when acquiring a buffer, thread 3 must wait for the storage area of the buffer to be released, and in the commit phase it must also wait for thread 2's IO request to finish executing before it can call the fsync interface. That is, both the buffer-acquisition phase and the commit phase of thread 3 are prolonged, so the completion time of transaction 3 is prolonged and transaction 3 is in an abnormal state.
Based on this, the embodiment of the application provides a data storage method, which can solve the above problems.
Fig. 4b is a flowchart of a data storage method according to an embodiment of the present application. The method may be applied to a computing device and executed by a processor of the computing device. As shown in Fig. 4b, the method may include S401-S405. Specifically, an embodiment of the present application further provides a write file interface that may be applied to the file system of a computing device; after the file system runs on the processor of the computing device, the write file interface performs the steps shown in Fig. 4b.
The steps shown in fig. 4b are described in detail below.
In S401, the write file interface determines the target data of the file to be written by the application program.
In this step, the write file interface may have the functions of both the write interface and the fsync interface. Taking a hard disk as an example of the storage device of the computing device, a thread of the application program can call the write file interface and send it an IO request to write target data into the storage device. Specifically, the write file interface first writes the target data from the APP buffer into the cache of the memory, and after the target data has been written into the cache of the memory, writes the target data from the cache of the memory into the storage area of the storage device.
The IO request may include information such as the identifier of the file, the offset within the file at which the target data is to be written, and the size of the target data.
In S402, the write file interface determines whether the size of the target data is greater than a third target value.
In this embodiment, the write file interface may receive the IO requests of multiple threads simultaneously and implement the function of the write interface in parallel, that is, write multiple pieces of target data into the cache in parallel. When writing each piece of target data into the cache, the write file interface compares the size of the target data with the third target value. When the size of the target data is greater than the third target value, the write file interface performs S403 to write the target data into the cache in N batches. When the size of the target data is less than or equal to the third target value, the write file interface performs S404 to write the target data into the cache at one time.
The third target value is preset in the write file interface and represents the maximum amount of data that can be written in a single call to the IO interface corresponding to the hard disk. The third target value can be determined according to the read/write rate of the IO interface corresponding to the hard disk and/or the maximum IO time set by the file system in which the write file interface resides. The write file interface compares the size of the target data with the third target value to determine whether the size of the target data exceeds the preset third target value.
In S403, the write file interface writes the target data to the cache of the memory in N batches.
In this step, taking a target data size of 16M and a third target value of 128K as an example, the write file interface first reads 128K of data starting from the offset position specified in the IO request and writes it into the cache, and then updates the offset. This process loops until the entire 16M has been written to the cache.
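A sketch of the S403 loop follows, using the same 16M/128K example; pwrite stands in for the interface's internal write into the memory cache, and the helper name is an assumption:

    #include <unistd.h>

    /* Writes `total` bytes of `data` at `offset` in slices of at most
       128K (the assumed third target value), advancing the offset after
       each slice, as in S403. For 16M of data this loops 128 times. */
    static void write_in_batches(int fd, const char *data, size_t total,
                                 off_t offset) {
        const size_t third_target = 128 * 1024;
        size_t done = 0;
        while (done < total) {
            size_t n = total - done;
            if (n > third_target)
                n = third_target;                     /* cap each slice */
            pwrite(fd, data + done, n, offset + (off_t)done);
            done += n;                                /* advance offset */
        }
    }

Capping each slice at the third target value is what bounds the time any single IO-interface call can take in the flush phase described below.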
In S404, the write file interface writes the target data to the cache of the memory at one time.
In this step, taking a target data size of 64K and a third target value of 128K as an example, the write file interface can write the target data into the cache of the memory at one time.
In S405, the write file interface writes each piece of data to the storage device in the order in which the pieces of data were written to the cache.
In this embodiment, the write file interface implements the fsync function serially, that is, it writes each piece of data to the storage device sequentially, in the order in which the pieces were written to the cache. Since the write file interface caps the size of each write to the cache at the third target value when implementing the write function, the size of each write to the storage device is likewise no more than the third target value when it implements the fsync function. Therefore, each time the write file interface writes data to the storage device, it occupies the IO interface corresponding to the storage device only briefly.
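The serial part of the interface can be pictured as a single lock around the device flush; the lock, and the use of fsync as the per-slice flush, are illustrative assumptions rather than the interface's actual internals:

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Flushes one cached slice to the storage device. Because slices
       are capped at the third target value and flushed one at a time,
       no single flush occupies the IO interface for long, and other
       threads can acquire the lock in between. */
    static void flush_slice(int fd) {
        pthread_mutex_lock(&flush_lock);
        fsync(fd);                    /* memory cache -> storage device */
        pthread_mutex_unlock(&flush_lock);
    }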
In this embodiment, the data storage method further includes S406, which is executed by the controller of the storage device.
In S406, the controller of the storage device updates the management identifier corresponding to the storage area of the storage device into which the data was written to the second target value. Taking writing data into the target storage area of the hard disk shown in Fig. 3 as an example, in this step, after the target data is written into the target storage area, the controller of the hard disk sets the value of the flag bit of each written sector of the target storage area to 1.
According to the above method, when the write file interface writes data to the storage device and the data to be written is larger than the preset third target value, the data is written into the cache of the memory and the storage device in multiple batches, which reduces the time the write file interface occupies the IO interface corresponding to the storage device during the data flushing process. Each occupation of the IO interface by the write file interface is short, and while the write file interface is not occupying the IO interface, other threads can call it to write data to the storage device, which prevents the IO requests of other threads from being blocked and avoids stalls. In other words, the write file interface can execute the write tasks of multiple threads alternately, flushing portions of the data of multiple threads in alternating batches, which improves the write capability of a single thread of an application program. When the method of Fig. 4b is applied to the scenario of Fig. 4a, the ability to write the redo logs corresponding to the transactions of thread 2 and thread 3 to the disk or hard disk is improved, preventing the transactions of thread 2 and thread 3 from being blocked.
In a computing device applying the data storage methods shown in Fig. 3 and Fig. 4b, when data is flushed, only the actual file data needs to be written to the disk or hard disk, and the file's metadata does not need to be modified, so the write capability of applications in the computing device can be further improved.
Based on the data storage methods shown in Fig. 3 and Fig. 4b, an embodiment of the present application provides a data storage apparatus, which may be applied to a computing device.
Fig. 5 is a schematic structural diagram of a data storage apparatus 500 according to an embodiment of the present application. The data storage apparatus 500 can include a receiving module 501, a pre-allocation module 502, and a storage module 503.
The receiving module 501 is configured to receive a pre-allocation request from an application program, where the pre-allocation request includes a file identifier.
The pre-allocation module 502 is configured to pre-allocate a target storage area in a storage device for the file corresponding to the file identifier, and to instruct a controller of the storage device to update the value of a management identifier corresponding to the target storage area to a first target value, where the first target value indicates that the target storage area has not been written with data.
The storage module 503 is configured to: when it is determined that the target data corresponding to the file needs to be written into the storage device, if the size of the target data is greater than a third target value, write the target data into the target storage area in N batches, where the amount of data written into the target storage area in each batch is less than or equal to the third target value.
It should be noted that, when the data storage apparatus 500 provided in the embodiment shown in Fig. 5 executes the data storage method, the above division of functional modules is used only for illustration; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above. In addition, the data storage apparatus provided in the foregoing embodiment belongs to the same concept as the data storage method embodiments shown in Fig. 3 and Fig. 4b; for its detailed implementation process, refer to the method embodiments, and details are not repeated here.
Fig. 6 is a schematic diagram of a hardware architecture of a computing device 600 according to an embodiment of the present application.
The computing device 600 shown in Fig. 6 may be a server, a notebook computer, a smart phone, or the like. Referring to Fig. 6, the computing device 600 includes a processor 601, a memory 602, a communication interface 603, and a bus 604; the processor 601, the memory 602, and the communication interface 603 are connected to each other through the bus 604. The processor 601, the memory 602, and the communication interface 603 may also be connected by connection means other than the bus 604.
The memory 602 may be any of various types of storage media, such as random access memory (RAM), read-only memory (ROM), nonvolatile RAM (NVRAM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), flash memory, optical memory, or a hard disk.
The processor 601 may be a general-purpose processor, that is, a processor that performs specific steps and/or operations by reading and executing content stored in a memory (e.g., the memory 602). For example, the general-purpose processor may be a central processing unit (CPU). The processor 601 may include at least one circuit to perform all or part of the steps of the data storage methods provided by the embodiments shown in Fig. 3 or Fig. 4b.
The communication interface 603 includes input/output (I/O) interfaces, physical interfaces, logical interfaces, and the like, for interconnecting components inside the computing device 600, as well as interfaces for interconnecting the computing device 600 with other devices (e.g., other computing devices or user devices). The physical interface may be an Ethernet interface, an optical fiber interface, an ATM interface, or the like.
The bus 604 may be any type of communication bus, such as a system bus, and is used to interconnect the processor 601, the memory 602, and the communication interface 603.
The above components may be provided on separate chips, or may be at least partially or entirely provided on the same chip. Whether the components are disposed independently on different chips or integrated on one or more chips often depends on the needs of product design. The embodiment of the present application does not limit the specific implementation form of these components.
The computing device 600 shown in fig. 6 is merely exemplary, and in implementation, computing device 600 may also include other components, which are not listed here.
In the above embodiments, the implementation may be done in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., solid state disk (SSD)), or the like.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. It should be understood that, in the embodiment of the present application, the sequence number of each process does not mean the sequence of execution, and the execution sequence of each process should be determined by the function and the internal logic of each process, and should not limit the implementation process of the embodiment of the present application.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present application in further detail, and are not to be construed as limiting the scope of the application, but are merely intended to cover any modifications, equivalents, improvements, etc. based on the teachings of the application.