WO2019183547A1 - Data management and security in a distributed storage system - Google Patents

Data management and security in a distributed storage system

Info

Publication number
WO2019183547A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
processor
metadata
segments
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2019/023689
Other languages
English (en)
Inventor
David Yanovsky
Teimuraz NAMORADZE
Vera Dmitriyevna MILOSLAVSKAYA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datomia Research Labs Ou
Datomia Inc
Original Assignee
Datomia Research Labs Ou
Datomia Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datomia Research Labs Ou, Datomia Inc filed Critical Datomia Research Labs Ou
Publication of WO2019183547A1 publication Critical patent/WO2019183547A1/fr
Priority to IL277520A priority Critical patent/IL277520A/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14Protection against unauthorised use of memory or access to memory
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database

Definitions

  • the application described herein generally relates to a distributed storage system and, more particularly, to techniques for data protection, efficiency and security in distributed storage systems.
  • a distributed storage system may require many hardware devices, which often results in component failures that require recovery operations. Moreover, components in a distributed storage system may become unavailable, such as due to poor network connectivity or performance, without necessarily completely failing.
  • redundancy measures are often introduced to protect data against storage node failures and outages, or other impediments. Such measures can include distributing data with redundancy over a set of independent storage nodes.
  • MDS: maximum distance separable
  • LDC: locally decodable codes
  • GCC: generalized concatenated codes
  • RAID: redundant arrays of independent disks
  • the number of disks within a RAID is usually limited to a relatively low number, resulting in codes having a relatively small length being employed. Accordingly, array codes such as RDP and EVENODD are not optimal for cloud storage systems and distributed storage systems in general.
  • a system and method provide secure distributed storage and transmission of electronic content over at least one communication network.
  • At least one data file is received and parsed into a plurality of segments, wherein each one of the segments has a respective size.
  • each of the plurality of segments is divided into a plurality of slices, wherein each one of the slices has a respective size.
  • a plurality of data chunks are encoded, each data chunk comprising a portion of at least two of the slices, wherein no portion comprises an entire slice.
  • the data chunks are packaged with at least metadata, and each of the packages is assigned to respective remote storage nodes. Each of the packages is transmitted to the respectively assigned remote storage node.
  • the step of packaging includes erasure coding, wherein the metadata is encoded and not visible to unauthorized users.
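As a concrete illustration of the package structure described in the steps above, a minimal Python sketch follows; the class name, field names and types are assumptions for illustration, not terminology from the application:

```python
from dataclasses import dataclass

@dataclass
class Package:
    # One package: an erasure-coded data chunk plus metadata that is
    # itself encoded, so it is not readable in plaintext by unauthorized
    # users. All field names here are illustrative assumptions.
    chunk: bytes        # codeword data chunk (portions of several slices)
    metadata: bytes     # encoded metadata (reconstruction info, identifiers)
    node_id: str = ""   # remote storage node the package is assigned to
```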
  • At least one processor abstracts the metadata with two or more of: additional metadata associated with a respective remote storage node; a configuration of a data vault; a hyperlink to an active data vault; and information representing a current state of data blocks.
  • the metadata includes information for reconstructing related segments from corresponding packages and/or information for reconstructing the at least one data file from the plurality of segments.
  • each of the packages includes at least some redundant information from at least one other package.
  • At least one processor determines at least one parameter representing at least one of available network bandwidth, geographic proximity, and node availability, wherein selection of respective remote storage nodes is made as a function of the at least one parameter.
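A minimal sketch of such parameter-driven node selection follows; the application names the parameters but not a selection function, so the weighted scoring formula and node attributes below are assumptions:

```python
def select_nodes(nodes, packages, weights=(0.5, 0.3, 0.2)):
    # Rank nodes by a weighted score of available bandwidth, geographic
    # proximity and availability, then assign packages round-robin over
    # the ranking. The weights and normalization are illustrative only.
    w_bw, w_prox, w_avail = weights
    ranked = sorted(
        nodes,
        key=lambda n: (w_bw * n["bandwidth_mbps"] / 1000.0
                       + w_prox / (1.0 + n["distance_km"])
                       + w_avail * n["availability"]),
        reverse=True,
    )
    return {p: ranked[i % len(ranked)]["id"] for i, p in enumerate(packages)}

assignments = select_nodes(
    nodes=[{"id": "node-a", "bandwidth_mbps": 900, "distance_km": 50, "availability": 0.999},
           {"id": "node-b", "bandwidth_mbps": 400, "distance_km": 10, "availability": 0.990}],
    packages=["pkg-408a", "pkg-408b", "pkg-408c"],
)
```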
  • At least one processor applies categories of data, wherein the step of encoding is based at least in part on a respective category.
  • the respective storage nodes are provided as network addressable storage.
  • At least one processor provides a graphical user interface that is configured to display at least one map showing locations of the respective storage nodes and a respective operational status of the respective storage nodes.
  • the graphical user interface includes an interactive dashboard that identifies information associated with available storage space, used space, and a number of stored data objects.
  • FIG. 1 is a schematic block diagram illustrating a distributed storage system interacting with client applications in accordance with an example implementation of the present application
  • FIG. 2 illustrates data encoding and distribution, in accordance with an example implementation
  • FIG. 3 is a simplified illustration of an example package, in accordance with an example implementation of the present application.
  • FIGs. 4A-4F are block diagrams illustrating data management in connection with generating packages including codeword data chunks that include respective slices of file segments and encoded metadata, in accordance with one or more example implementations of the present application;
  • FIG. 5 shows a flow diagram of steps associated with generating packages including encoded metadata, in accordance with an example implementation of the present application.
  • FIGs. 6A-6K illustrate example interactive data entry screens provided in one or more graphical user interfaces, in accordance with an example implementation of the present application.
  • the present application includes systems and methods for distributing data over a plurality of respective remote storage nodes.
  • One or more processors that are configured by executing code can process data, such as of one or more files, and split the data into segments, with each segment being encoded into a number of codeword chunks.
  • the processor(s) is configured to process the data such that none of the codeword chunks contains any complete one of the segments.
  • the processor(s) is configured to process the data such that each codeword chunk can be packaged with metadata to represent, for example, encoding parameters and identifiers for at least one file and/or for related segments of at least one file.
  • program modules include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • program modules can be located in both local and remote computer system storage media including memory storage devices. Accordingly, modules can be configured to communicate with and transfer data to each other.
  • Metadata for the file(s) contains information that is usable for reconstructing the related segments from corresponding packages and/or for reconstructing the file(s) from the segments.
  • packages can be respectively assigned to remote storage nodes that can be selected and correspond to an optimized workload distribution. For example, the selection of respective storage nodes can be based on various parameters, including available network bandwidth, geographic proximity, node availability or other suitable criteria.
  • Each of the packages can be transmitted to at least one respective storage node, from which it can thereafter be retrieved for future reassembly of the segment(s) and data.
  • the present application provides secure distributed storage and transmission of data for use in various contexts including, for example, streaming and other applications.
  • the dispersed storage of data, including in particular streaming media data, on cloud servers is particularly useful.
  • media content, including without limitation video or audio content, can be made available for streaming through the Internet via the secure and distributed storage systems and methods shown and described herein.
  • data that are stored within a distributed storage system can be classified in several categories, and different coding techniques can be applied to the different data categories. For example, erasure coding techniques maximize storage efficiency and can be applied to a plurality of files containing original data, and metadata can be generated, packaged, and applied to minimize access latency.
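A small sketch of such category-driven coding selection follows; the category names and the (k, n) erasure-code parameters are assumed for illustration:

```python
# Hypothetical mapping from data category to coding parameters: original
# file data favors storage efficiency (longer code, lower overhead),
# while metadata favors low access latency (shorter code, more redundancy).
CODING_BY_CATEGORY = {
    "original_data": {"scheme": "erasure", "k": 10, "n": 14},  # 40% overhead
    "metadata":      {"scheme": "erasure", "k": 4,  "n": 8},   # 100% overhead
}

def coding_params(category: str) -> dict:
    return CODING_BY_CATEGORY.get(category, CODING_BY_CATEGORY["original_data"])
```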
  • the present application provides a big data storage solution that improves, for example, security, efficiency, performance, and availability.
  • Data can be stored on large-scale storage devices set forth in multiple and disparate geographic regions.
  • erasure coding provides data integrity, and storage is provided for customers via one or more global filesystems.
  • the present application further provides for data scaling that is capable of forming highly available clusters, such as across global computing storage nodes and across a network into a customer's own private data center.
  • Fig. 1 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with an example implementation of the present application.
  • Original data 106 (e.g., files produced by client applications 109) is distributed over a set of storage nodes 103, and original data 106 is available to client applications 109 upon request. Any system producing and receiving data on the client side can be considered as an instance of a client application 109.
  • processing system 101, located on the client side, can include one or several server clusters 107 in which original data 106 are transformed into encoded chunks 108, and vice-versa.
  • a server cluster 107 can include a file system server and one or more processing servers, although a server cluster may include just an individual server.
  • Storage nodes 103 can operate independently from each other, and can be physically located in different areas.
  • Processing system 101 ensures data integrity, security, protection against failures, compression and deduplication.
  • configuration of processing system 101 is specified by configuration metadata 104 maintained within highly protected storage 102.
  • System configuration may be adjusted via an administrator application 110.
  • Example interactive data entry display screens in accordance with an example graphical user interface associated with application 110 are provided herein.
  • the present application configures one or more processing devices to partition objects into segments, and each segment can be further encoded into a number of chunks, which can be transferred to storage nodes.
  • This structure significantly simplifies storage implementation processes, without compromising data security, integrity, protection and storage performance. For example, as illustrated in the example implementation, information about data is encrypted at the client and stored securely within packages with encapsulated encoded chunks that are dispersed across storage nodes.
  • In an implementation having a plurality of application servers and data vaults, a process is implemented in a virtual machine instance that includes operations for, for example, encryption, compression and protection and, moreover, for slicing the information into respective chunks and objects.
  • the erasure codec generates various types of encoded chunks, which are spread across all the storage nodes and deployed for a vault installation.
  • metadata 104 can be encoded in a way that is visible and retrievable only by the authorized data owner. This is implemented by abstracting erasure-coded metadata 104 and network addressable storage (“NAS”) metadata, which is thereafter dispersed between different storage nodes.
  • a package can be configured to contain an encoded chunk together with related metadata 104, such as: a storage nodes configuration; a vault configuration; a link to an active vault snapshot; and a current state of data blocks used for the snapshot.
  • FIGs. 4A-4F are block diagrams illustrating data management in connection with generating packages, including codeword data chunks that include respective slices of file segments and encoded metadata 104, in accordance with one or more example implementations of the present application.
  • Fig. 4A illustrates original data 106 that may include one or more data files having a total data size of 10 GB.
  • Original data 106 is parsed into five segments 402, as illustrated in Fig. 4B, including segments 402 I, 402 II, 402 III, 402 IV and 402 V.
  • the respective original data 106 in Fig. 4B is divided into the five respective segments for illustrative purposes only, and it is to be understood that original data 106 can be divided into virtually any number of segments, with each segment being defined to have any respective size.
  • Fig. 4C illustrates continued data management in connection with an example implementation of the present application.
  • segments 402 I - 402 V are each parsed into groups of seven slices 404.
  • segment 402 I is parsed into a group of slices 404 I, and includes 404 IA, 404 IB, 404 IC, 404 ID, 404 IE, 404 IF and 404 IG.
  • each of segments 402 II, 402 III, 402 IV and 402 V is parsed into a respective slice group comprising seven slices (e.g., A - G).
  • the respective slices A-G illustrated in Fig. 4C, comprising five respective groups of slices, are provided for illustrative purposes only, and it is to be understood that segments 402 can be divided into virtually any number of slices 404, with each slice being defined to have any respective size.
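The segmentation and slicing of Figs. 4B-4C can be sketched in Python as follows; equal segment and slice sizes (with zero-padding) are assumed for brevity, although, as noted above, each segment and slice may have any respective size:

```python
def parse(data: bytes, n_segments: int = 5, slices_per_segment: int = 7) -> list:
    # Pad so the data divides evenly, then split into n_segments segments
    # and each segment into slices_per_segment slices (Figs. 4B-4C use
    # five segments and seven slices A-G per segment).
    total = n_segments * slices_per_segment
    blocks = max(-(-len(data) // total), 1)          # ceiling division
    data = data.ljust(blocks * total, b"\0")
    seg_len = len(data) // n_segments
    segments = [data[i:i + seg_len] for i in range(0, len(data), seg_len)]
    sl_len = seg_len // slices_per_segment
    return [[s[j:j + sl_len] for j in range(0, seg_len, sl_len)] for s in segments]

sliced = parse(b"example original data 106")   # sliced[segment][slice]
```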
  • Fig. 4D illustrates continued data management in connection with an example implementation of the present application.
  • slices 404 are each encoded into respective data chunks 406 I, 406 II, 406 III, 406 IV, and 406 V.
  • none of the chunks contains all slices 404 comprised in a segment 402, nor all segments 402 within original data 106.
  • the chunks contain slices 404 as a function of an encoding scheme and file splitting scheme, such as shown and described herein.
  • slices A, B, and C that are comprised in group 404 I are encoded into chunk 406 I.
  • Slices A, B, and C that are comprised in group 404 II (i.e., 404 IIA, 404 IIB, and 404 IIC) are encoded into chunk 406 II.
  • Each of slices A, B, and C from the respective slice groups 404 I, 404 II, 404 III, 404 IV, and 404 V are, accordingly, encoded in chunks 406 I, 406 II, 406 III, 406 IV, and 406 V, respectively.
  • data chunks 406 are encoded with more information than merely a few respective slices (A, B, and C).
  • additional slices 404 can be encoded in one or more chunks 406, for example randomly or in accordance with a respective algorithm. Additional slices 404 can be provided in data chunks 406 to provide, for example, for a new form of data redundancy, without the negative impact of storage overhead or bandwidth demands that are typically associated with redundant copies of data files in many storage centers.
  • small fractions of original data 106 are encoded into data chunks 406 and passed through to storage nodes relatively seamlessly, securely and extremely quickly.
  • data chunk 406 I is encoded with some file slices from groups 404 II, 404 III, 404 IV, and 404 V. More particularly, and without limiting the disclosure herein to any particular encoding scheme, data chunk 406 I is encoded with three slices from group 404 I (e.g., 404 IA, IB, IC), as well as a slice 404 IID, a slice 404 IIID, a slice 404 IVD, a slice 404 VD, and a slice 404 VH, respectively. All of the remaining chunks illustrated in Fig. 4D, i.e., chunks 406 II, 406 III, 406 IV and 406 V, are similarly encoded with file slices, thereby providing for a new form of data redundancy. Accordingly, and as shown by the non-limiting example data encoding scheme illustrated in Fig. 4D, the encoded data chunks 406 I - 406 V collectively contain all of the respective data slices 404 comprised in original data 106. By providing a degree of data redundancy in encoded data chunks 406, such as illustrated in the example set forth in Fig. 4D, reconstruction of data is more highly available in the event of failure, corruption or other unplanned negative data event.
  • the encoded data chunks 406 illustrated in Fig. 4D, comprising five respective chunks, are provided for illustrative purposes only, and it is to be understood that chunks 406 can be encoded with virtually any number of slices, with each chunk being defined to have any respective size.
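A simplified sketch of the slice-mixing idea of Fig. 4D follows, reusing parse() from the earlier sketch; the particular slice assignment below is an assumption, and a real deployment would rely on an erasure code that guarantees full coverage and recoverability of all slices:

```python
def build_chunks(sliced: list) -> list:
    # Chunk i carries slices A-C of its own group plus slice D borrowed
    # from every other group, so no chunk holds an entire segment, while
    # the borrowed slices provide a degree of redundancy across chunks.
    n = len(sliced)
    chunks = []
    for i in range(n):
        own = sliced[i][:3]                                    # slices A, B, C
        borrowed = [sliced[j][3] for j in range(n) if j != i]  # slice D of others
        chunks.append(b"".join(own + borrowed))
    return chunks

chunks = build_chunks(parse(b"example original data 106"))  # five mixed chunks
```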
  • Metadata is generated and can contain information that is usable for reconstructing the related segments from corresponding packages and/or for reconstructing the original data 106.
  • Packages can be generated in accordance with the present application and assigned to respective remote storage nodes which correspond, for example, to an optimized workload distribution. For example, the selection of respective storage nodes can be based on various parameters, including available network bandwidth, geographic proximity, node availability or other suitable criteria.
  • Each of the packages can be transmitted to at least one respective storage node, from which it can thereafter be retrieved for future reassembly of the segment(s) and data.
  • examples of information that can be provided in metadata include a location where a package is stored, a location where the original data 106 resides, a respective file system, access rules and permissions, attributes, file names, and other suitable attributes.
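For illustration, the metadata accompanying one package might carry fields such as the following; every name and value here is a hypothetical example of the kinds of information listed above:

```python
package_metadata = {
    "package_location": "node-eu-1/vault-7/pkg-408a",    # where the package is stored
    "original_data_location": "client-fs://projects/",   # where original data 106 resides
    "file_system": "object-fs",
    "permissions": {"owner": "rw", "group": "r"},
    "attributes": {"created": "2019-03-22"},
    "file_names": ["report.mp4"],
}
```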
  • the present application supports encoding data chunks 406 with metadata 410.
  • packages 408A - 408E are shown comprising data chunks 406A-406E (Fig. 4D), and metadata 410 are further encoded into the respective chunks.
  • the metadata 410 can be treated or considered as original data 106, and segmented (Fig. 4B), sliced (Fig. 4C), encoded in data chunks (Fig. 4D), and packaged (Fig. 4E).
  • Fig. 4F is a simple block diagram illustrating these respective elements.
  • metadata is created as a function of erasure coding and data distribution and is included in respective data chunks 406.
  • Fig. 5 shows a flow diagram of example steps associated with preparing and distributing data packages in accordance with an example implementation of the present patent application. After the process starts, at step 502 original data 106 is accessed.
  • Original data 106 can be accessed from a respective data storage 102, or accessed from a client application 109, or a combination thereof. Thereafter, the original data 106 is parsed into respective segments 402 (step 504). The segments 402 are parsed into respective file slices 404 (step 506). The slices 404, thereafter, are encoded into data chunks 406 (step 508). Metadata 410 is generated as a function of the segments 402, slices 404 and chunks 406 (step 510). At step 512, a determination is made whether to parse the metadata 410. The determination can be made as a function of a setting within administrator application 110, a client configuration or an analysis of the metadata, such as by processing system 101.
  • If the result of the determination at step 512 is that the metadata is to be parsed, the metadata can be parsed into at least one of segments and slices, and at least some of the metadata can be encoded into chunks 406. Thereafter, the process continues to step 516. If, in the alternative, the result of the determination at 512 is that the metadata is not to be parsed, then the process branches to step 516, and data packages 408 are generated, comprising the data chunks 406 and metadata 410. Thereafter, at step 518, the packages 408 are distributed among storage nodes and the process ends.
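Pulling the Fig. 5 steps together, a minimal end-to-end sketch follows, reusing the parse(), build_chunks() and Package sketches above; the helper names, the toy metadata and the boolean stand-in for the step-512 determination are all assumptions:

```python
def prepare_and_distribute(original_data: bytes, parse_meta: bool = True) -> list:
    sliced = parse(original_data)                          # steps 504-506
    chunks = build_chunks(sliced)                          # step 508
    metas = [("metadata-chunk-%03d" % i).encode()          # step 510
             for i in range(len(chunks))]
    if parse_meta:                                         # step 512
        # parse branch: treat metadata 410 as data in its own right -
        # segment, slice and encode it into chunks before packaging
        metas = [b"".join(build_chunks(parse(m, 2, 4))) for m in metas]
    packages = [Package(chunk=c, metadata=m)               # step 516
                for c, m in zip(chunks, metas)]
    return packages                                        # step 518: send to nodes
```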
  • Figs. 6A-6K illustrate example interactive data entry screens provided in one or more example graphical user interfaces, in accordance with an implementation of the present application.
  • Fig. 6A includes a map showing locations of storage machines, codec machines, storage nodes and the operational status of such devices, such as whether the devices are online, off-line or disabled. Other information includes a number of online and off-line cloud storage nodes, the number of storage machines and the number of online and off-line codec machines.
  • Fig. 6A also includes a dashboard, formatted as a circular gauge identifying total used space, actual used space, and a number of objects.
  • Fig. 6B illustrates an interactive display that identifies various storage vaults and locations thereof, and includes graphical screen controls (e.g., buttons) for testing the vaults. Options are available for selecting vaults, storage machines, codec machines, storage nodes, certificates, instances and users.
  • Fig. 6C shows an example data entry display screen in which storage nodes are selected, and include names of storage devices and providers, as well as options for testing and setting the status (e.g., on or off). Other options include controls for adding new storage nodes.
  • Fig. 6D illustrates an example data entry display screen that includes controls for editing information associated with a respective storage vault(s).
  • Options displayed in Fig. 6D include editable options for the storage name, the size (e.g., number of) blocks, the relative security (e.g., high security), a degree of redundancy (e.g., a percentage value), and encryption options (e.g., whether to encrypt and a respective encryption algorithm).
  • Fig. 6E identifies a plurality of object storage vaults, and includes selectable icons associated therewith. Selecting the respective icons provides, for example, information associated with data stored in a respective vault, such as file name and size (Fig. 6F).
  • Fig. 6G illustrates a display screen that provides options for generating a date-based query in connection with a history of a respective storage vault. For example, a start and end date can be submitted, and file information (e.g., date, name, size), management information (e.g., upload, wipe or other functionality) and performance information (e.g., average speed, duration and storage machine) can be provided.
  • Figs. 6H-6K show additional display screens associated with: a dashboard interface (Fig. 6H); a file manager associated with local and/or remote storage (Fig. 6I); identifications of storage vaults, including file system information, storage capacity and use, and management controls, such as to configure respective vaults (Fig. 6J); and setting options, such as in connection with a respective storage vault's volumes, protocols and status (Fig. 6K).
  • the present application can include a virtual file system that can be implemented as a virtual RAID, which can be self-managing and can exclusively store metadata associated with the encoding and distribution functionality shown and described herein.
  • metadata 410 is generated as a function of original data 106 that have been segmented, sliced and encoded into data chunks, such as shown and described herein.
  • a new layer on top of an existing platform can be created and used to store the metadata 410.
  • metadata 410 is highly significant, as it is needed for locating data packages 408 and reconstructing original data 106 based on at least a portion thereof.
  • each server in a respective node and/or vault can be configured with a virtual system that includes such a metadata database, which is regularly updated as packages 408 are generated and distributed in accordance with the teachings herein.
  • Such an architecture increases efficiency in case, for example, one or more disks or other storage devices get corrupted. A new layer on top of the existing platform can be easily reconstructed and the database re-created as needed.
  • the present application provides benefits beyond storage efficiency and security.
  • the present application can implement use of relatively short programming code, such as by distributing JavaScript that, when executed by a client device, provides for access to the content directly from respective data servers.
  • JavaScript executing in a client device can request respective data packages 408 from respective data centers.
  • the content of original data 106 (which may be multimedia content) can be reassembled and provided to the client device extremely quickly.
  • Such an architecture provides an improvement over streaming content via a specific geographic area, such as a city and respective network; it operates more akin to BitTorrent and eliminates the need for a single source of data.
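The client-side retrieval pattern can be sketched as follows; the application describes this logic as distributed JavaScript, so this Python version with hypothetical URLs merely illustrates the same parallel, multi-source retrieval idea:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch_packages(package_urls: list) -> list:
    # Retrieve data packages 408 from their respective data centers in
    # parallel, so no single server streams the whole content.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda url: urlopen(url).read(), package_urls))

# Hypothetical package URLs; a real client would resolve them from metadata.
# packages = fetch_packages([
#     "https://node-us-east.example/pkg-408a",
#     "https://node-eu-west.example/pkg-408b",
# ])
```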
  • API: application programming interface
  • the present application provides for high performance with ultra-high data resilience. Unlike known systems, in which the erasure coding that increases data resilience often comes at a cost of latency due to CPU or network bottlenecks, the present application provides for intelligent digital fragments that solve challenges typically faced in connection with speed and scalability.
  • the present application effectively moves from the hardware level and the software level to a data level, comprised in encoded chunks and packages 408 that take advantage of erasure coding and distribution.
  • Relatively small files can be aggregated into one single object to reduce the number of objects to be transmitted to storage nodes, and to reduce the amount of metadata, as shown in the sketch below.
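A minimal sketch of such small-file aggregation follows; the 4 MiB target object size and the (object, offset, length) index layout are assumptions:

```python
def aggregate(files: dict, target_size: int = 4 * 2**20):
    # Pack many small files into objects of roughly target_size bytes
    # before segmentation, reducing both the number of objects sent to
    # storage nodes and the amount of per-object metadata.
    objects, index, buf = [], {}, bytearray()
    for name, blob in files.items():
        if buf and len(buf) + len(blob) > target_size:
            objects.append(bytes(buf))
            buf = bytearray()
        index[name] = (len(objects), len(buf), len(blob))  # (object, offset, length)
        buf.extend(blob)
    if buf:
        objects.append(bytes(buf))
    return objects, index
```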
  • Objects can be partitioned into segments, and each segment can be further encoded. Thus, a number of encoded chunks are produced from each segment, and the chunks can be encapsulated with corresponding metadata in packages, which are transferred to storage nodes.
  • a distributed storage system includes system devices that are configured to process, distribute and/or access client data securely, quickly, efficiently over a set of storage nodes.
  • processing system devices can include one or several server clusters, in which each server cluster is configured with or as a file system server and a number of processing servers.
  • a specially designed object-based file system can be included and deployed within each server cluster.
  • File system servers of the server clusters can operate to maintain identical instances of the object-based file system.
  • a frequently used part of an object-based file system may be maintained within the processing system, while an entire object-based file system can be packed in a plurality of encoded chunks, encapsulated into packages and, thereafter, distributed over a set of storage nodes.
  • Object search speed is, accordingly, enhanced as a result of selection of an appropriate tree data structure or a directed graph.
  • An example object-based file system of the present application operates over large data blocks, referred to as compound blocks. Compound blocks significantly reduce the amount of metadata, the number of operations performed by the object-based file system and the number of objects transmitted to storage nodes.
  • a merging of NAS technology and object storage is provided, wherein files are also configured as objects, each having a unique ID.
  • This provides the ability for files to be accessed from any application, from any geographic location and from any public or private storage provider, with simple HTTPS protocols, regardless of the same object being filed in a subfolder on the NAS file system.
  • This further provides enterprise applications with a multi-vendor storage solution that has all the benefits of object storage.
  • implementations of the present application allow for mixing of storage nodes from multiple vendors, and provide functionality for users to select any respective ones of storage providers, including on-site and off-site, and to switch between storage providers at will.
  • block and file system storage is configured to meet the needs of an increasingly distributed and cloud-enabled computing ecosystem.
  • In block-based storage, blocks on disks are accessed via low-level storage protocols, such as SCSI commands, with little overhead and/or no additional abstraction layers. This provides an extremely fast way to access data on disks, and various high-level tasks, such as multi-user access, sharing, locking and security, can be deferred to operating systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Secure distributed storage and transmission of electronic content are provided over at least one communication network. At least one data file is received and parsed into a plurality of segments, each of the segments having a respective size. Each of the plurality of segments is then divided into a plurality of slices, each of the slices having a respective size. A plurality of data chunks are encoded, each data chunk comprising a portion of at least two of the slices, with no portion comprising an entire slice. The data chunks are packaged with at least metadata, and each of the packages is assigned to respective remote storage nodes. Each of the packages is transmitted to the respectively assigned remote storage node.
PCT/US2019/023689 2018-03-22 2019-03-22 Data management and security in a distributed storage system Ceased WO2019183547A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
IL277520A IL277520A (en) 2018-03-22 2020-09-22 Information management and security in a distributed hosting system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862646396P 2018-03-22 2018-03-22
US62/646,396 2018-03-22

Publications (1)

Publication Number Publication Date
WO2019183547A1 true WO2019183547A1 (fr) 2019-09-26

Family

ID=67987569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/023689 Ceased WO2019183547A1 (fr) Data management and security in a distributed storage system

Country Status (2)

Country Link
IL (1) IL277520A (fr)
WO (1) WO2019183547A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111064774A (zh) * 2019-11-29 2020-04-24 苏州浪潮智能科技有限公司 Distributed data storage method and apparatus
CN114745410A (zh) * 2022-03-04 2022-07-12 电子科技大学 Remote heap management method and remote heap management system
CN115408478A (zh) * 2022-09-02 2022-11-29 西湖大学 Data storage and management system and method for shared laboratory instruments

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250497A1 (en) * 2007-01-05 2010-09-30 Redlich Ron M Electromagnetic pulse (EMP) hardened information infrastructure with extractor, cloud dispersal, secure storage, content analysis and classification and method therefor
US20170272209A1 (en) * 2016-03-15 2017-09-21 Cloud Crowding Corp. Distributed Storage System Data Management And Security

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250497A1 (en) * 2007-01-05 2010-09-30 Redlich Ron M Electromagnetic pulse (EMP) hardened information infrastructure with extractor, cloud dispersal, secure storage, content analysis and classification and method therefor
US20170272209A1 (en) * 2016-03-15 2017-09-21 Cloud Crowding Corp. Distributed Storage System Data Management And Security

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111064774A (zh) * 2019-11-29 2020-04-24 苏州浪潮智能科技有限公司 Distributed data storage method and apparatus
CN111064774B (zh) * 2019-11-29 2022-12-27 苏州浪潮智能科技有限公司 Distributed data storage method and apparatus
CN114745410A (zh) * 2022-03-04 2022-07-12 电子科技大学 Remote heap management method and remote heap management system
CN114745410B (zh) * 2022-03-04 2023-03-21 电子科技大学 Remote heap management method and remote heap management system
CN115408478A (zh) * 2022-09-02 2022-11-29 西湖大学 Data storage and management system and method for shared laboratory instruments
CN115408478B (zh) * 2022-09-02 2023-03-21 西湖大学 Data storage and management system and method for shared laboratory instruments

Also Published As

Publication number Publication date
IL277520A (en) 2020-11-30

Similar Documents

Publication Publication Date Title
US11777646B2 (en) Distributed storage system data management and security
US20220368457A1 (en) Distributed Storage System Data Management And Security
US10956601B2 (en) Fully managed account level blob data encryption in a distributed storage environment
CN106164899B (zh) Efficient data reads from distributed storage systems
US9811405B2 (en) Cache for file-based dispersed storage
JP6522008B2 (ja) Reading multi-generational stored data in a dispersed storage network
US8185614B2 (en) Systems, methods, and apparatus for identifying accessible dispersed digital storage vaults utilizing a centralized registry
US9661075B2 (en) Defragmenting slices in dispersed storage network memory
US8965956B2 (en) Integrated client for use with a dispersed data storage network
US20030188153A1 (en) System and method for mirroring data using a server
US20190007208A1 (en) Encrypting existing live unencrypted data using age-based garbage collection
US20250094094A1 (en) Using Metadata Servers in a Distributed Storage System
US10067831B2 (en) Slice migration in a dispersed storage network
US12164379B2 (en) Recovering a data segment using locally decodable code segments
WO2019183547A1 (fr) Gestion et sécurité de données dans un système de stockage distribué
Sengupta et al. Planning for optimal multi-site data distribution for disaster recovery
Woitaszek Tornado codes for archival storage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19770418

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 27/01/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19770418

Country of ref document: EP

Kind code of ref document: A1