
WO2004021223A1 - Techniques for balancing capacity utilization in a storage environment - Google Patents

Techniques for balancing capacity utilization in a storage environment

Info

Publication number
WO2004021223A1
WO2004021223A1 (PCT/US2003/027039)
Authority
WO
WIPO (PCT)
Prior art keywords
storage unit
file
storage
storage units
units
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2003/027039
Other languages
English (en)
Inventor
Albert Leung
Giovanni Paliska
Bruce Greenblatt
Claudia Chandra
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arkivio Inc
Original Assignee
Arkivio Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arkivio Inc filed Critical Arkivio Inc
Priority to AU2003260124A priority Critical patent/AU2003260124A1/en
Publication of WO2004021223A1 publication Critical patent/WO2004021223A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0685Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647Migration mechanisms

Definitions

  • the present invention relates generally to management of storage environments and more particularly to techniques for automatically balancing storage capacity utilization in a storage environment.
  • an administrator administering the environment has to perform several tasks to ensure availability and efficient accessibility of data.
  • an administrator has to ensure that there are no outages due to lack of availability of storage space on any server, especially servers running critical applications.
  • the administrator thus has to monitor space utilization on the various servers. Presently, this is done either manually or using software tools that generate alarms/alerts when certain capacity thresholds associated with the storage units are reached or exceeded.
  • Hierarchical Storage Management (HSM) applications are used to migrate data among a hierarchy of storage devices.
  • files may be migrated from online storage to near-online storage, and from near-online storage to offline storage to manage storage utilization.
  • a stub file or tag file is left in place of the migrated file at the original storage location.
  • the stub file occupies less storage space than the migrated file and may comprise metadata related to the migrated file.
  • the stub file may also comprise information that can be used to determine the target location of the migrated file.
  • a migrated file may be remigrated to another destination storage location.
  • In an HSM application, an administrator can set up rules/policies for migrating files from expensive forms of storage to less expensive forms of storage. While HSM applications eliminate some of the manual tasks that were previously performed by the administrator, the administrator still has to specifically identify the data (e.g., the file(s)) to be migrated, the storage unit from which to migrate the files (referred to as the "source storage unit"), and the storage unit to which the files are to be migrated (referred to as the "target storage unit"). As a result, the task of defining HSM policies can become quite complex and cumbersome in storage environments comprising a large number of storage units. The problem is further aggravated in storage environments in which storage units are continually being added or removed.
  • a first group of storage units from the plurality of storage units is monitored.
  • a first signal is received indicative of a condition.
  • Responsive to the first signal, a first storage unit from which data is to be moved is determined from the first group of storage units. Data from the first storage unit is moved to one or more other storage units in the first group of storage units until the condition is resolved.
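The monitor-and-balance cycle described above can be illustrated with a short sketch. The volume names and the 80% threshold below are illustrative assumptions for this example only, and moving a fixed 1% chunk of capacity per step stands in for moving an individual file.

```python
def balance_group(volumes, over_threshold=80):
    """Move data off an over-capacity volume until the condition clears.

    `volumes` maps a volume name to its used-capacity percentage.
    Moving 1% of capacity per step stands in for moving one file.
    """
    moves = []
    # Condition: some volume in the managed group exceeds the threshold.
    while max(volumes.values()) > over_threshold:
        # The fullest volume is determined to be the source...
        source = max(volumes, key=volumes.get)
        # ...and the least-full volume serves as a simple target choice.
        target = min(volumes, key=volumes.get)
        if volumes[source] - volumes[target] < 1:
            break  # nothing useful left to move
        volumes[source] -= 1
        volumes[target] += 1
        moves.append((source, target))
    return moves
```

For example, a two-volume group at 85% and 60% used capacity rebalances to 80% and 65% after five moves.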
  • FIG. 1 is a simplified block diagram of a storage environment that may incorporate an embodiment of the present invention;
  • Fig. 2 is a simplified block diagram of a storage management system (SMS) according to an embodiment of the present invention;
  • FIG. 3 depicts three managed groups according to an embodiment of the present invention;
  • FIG. 4 is a simplified high-level flowchart depicting a method of balancing storage capacity utilization for a managed group of storage units according to an embodiment of the present invention;
  • FIG. 5 is a simplified flowchart depicting a method of selecting a file for a move operation according to an embodiment of the present invention;
  • FIG. 6 is a simplified flowchart depicting a method of selecting a file for a move operation according to an embodiment of the present invention wherein multiple placement rules are configured;
  • Fig. 7 is a simplified flowchart depicting a method of selecting a target volume from a managed group of volumes according to an embodiment of the present invention;
  • Fig. 8 is a simplified block diagram showing modules that may be used to implement an embodiment of the present invention; and
  • Fig. 9 depicts examples of placement rules according to an embodiment of the present invention.
  • migration of a file involves moving the file (or a data portion of the file) from its original storage location on a source storage unit to a target storage unit.
  • a stub or tag file may be stored on the source storage unit in place of the migrated file.
  • the stub file occupies less storage space than the migrated file and generally comprises metadata related to the migrated file.
  • the stub file may also comprise information that can be used to determine the target storage location of the migrated file.
  • remigration of a file involves moving a previously migrated file from its present storage location to another storage location.
  • the stub file information or information stored in a database corresponding to the remigrated file may be updated to reflect the storage location to which the file is remigrated.
  • moving a file from a source storage unit to a target storage unit is intended to include migrating the file from the source storage unit to the target storage unit, or remigrating a file from the source storage unit to the target storage unit, or simply changing the location of a file from one storage location to another storage location.
  • Movement of a file may have varying levels of impact on the end user. For example, in the case of migration and remigration operations, the movement of a file is transparent to the end user. The use of techniques such as symbolic links in UNIX or Windows shortcuts may make the move somewhat transparent to the end user. Movement of a file may also be accomplished without leaving any stub, tag file, links, shortcuts, etc.
  • FIG. 1 is a simplified block diagram of a storage environment 100 that may incorporate an embodiment of the present invention.
  • Storage environment 100 depicted in Fig. 1 is merely illustrative of an embodiment incorporating the present invention and does not limit the scope of the invention as recited in the claims.
  • One of ordinary skill in the art would recognize other variations, modifications, and alternatives.
  • storage environment 100 comprises a plurality of physical storage devices 102 for storing data.
  • Physical storage devices 102 may include disk drives, tapes, hard drives, optical disks, RAID storage structures, solid state storage devices, SAN storage devices, NAS storage devices, and other types of devices and storage media capable of storing data.
  • the term "physical storage unit" is intended to refer to any physical device, system, etc. that is capable of storing information or data.
  • Physical storage units 102 may be organized into one or more logical storage units/devices 104 that provide a logical view of underlying disks provided by physical storage units 102.
  • Each logical storage unit (e.g., a volume) is identified by a unique identifier (e.g., a number, name, etc.).
  • a single physical storage unit may be divided into several separately identifiable logical storage units.
  • a single logical storage unit may span storage space provided by multiple physical storage units 102.
  • a logical storage unit may reside on non-contiguous physical partitions.
  • Various communication protocols may be used to facilitate communication of information via the communication links, including TCP/IP, HTTP protocols, extensible markup language (XML), wireless application protocol (WAP), Fiber Channel protocols, protocols under development by industry standard organizations, vendor-specific protocols, customized protocols, and others.
  • SMS 110 may also monitor the file system in order to collect information about the files such as file size information, access time information, file type information, etc.
  • the monitoring may also be performed using agents installed on the various servers 106 for monitoring the storage units assigned to the servers and the file system.
  • the information collected by the agents may be forwarded to SMS 110 for processing according to the teachings ofthe present invention.
  • the information collected by SMS 110 may be stored in a memory or disk location accessible to SMS 110.
  • the information may be stored in a database 112 accessible to SMS 110.
  • the information stored in database 112 may include information 114 related to storage policies and rules configured for the storage environment, information 116 related to the various monitored storage units, information 118 related to the files stored in the storage environment, and other types of information 120.
  • Various formats may be used for storing the information.
  • Database 112 provides a repository for storing the information and may be a relational database, directory services, etc. As described below, the stored information may be used to perform capacity utilization balancing according to an embodiment of the present invention.
  • SMS 110 includes a processor 202 that communicates with a number of peripheral devices via a bus subsystem 204.
  • peripheral devices may include a storage subsystem 206, comprising a memory subsystem 208 and a file storage subsystem 210, user interface input devices 212, user interface output devices 214, and a network interface subsystem 216.
  • the input and output devices allow a user, such as the administrator, to interact with SMS 110.
  • Network interface subsystem 216 provides an interface to other computer systems, networks, servers, and storage units.
  • Network interface subsystem 216 serves as an interface for receiving data from other sources and for transmitting data to other sources from SMS 110.
  • Embodiments of network interface subsystem 216 include an Ethernet card, a modem (telephone, satellite, cable, ISDN, etc.), (asynchronous) digital subscriber line (DSL) units, and the like.
  • User interface input devices 212 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a barcode scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and other types of input devices.
  • pointing devices such as a mouse, trackball, touchpad, or graphics tablet
  • audio input devices such as voice recognition systems, microphones, and other types of input devices.
  • use of the term "input device" is intended to include all possible types of devices and mechanisms for inputting information to SMS 110.
  • User interface output devices 214 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices, etc.
  • the display subsystem may be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), or a projection device.
  • use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from SMS 110.
  • Storage subsystem 206 may be configured to store the basic programming and data constructs that provide the functionality of the present invention.
  • software code modules implementing the functionality of the present invention may be stored in storage subsystem 206. These software modules may be executed by processor(s) 202.
  • Storage subsystem 206 may also provide a repository for storing data used in accordance with the present invention. For example, the information gathered by SMS 110 may be stored in storage subsystem 206.
  • Storage subsystem 206 may also be used as a migration repository to store data that is moved from a storage unit.
  • Storage subsystem 206 may also be used to store data that is moved from another storage unit.
  • Storage subsystem 206 may comprise memory subsystem 208 and file/disk storage subsystem 210.
  • Memory subsystem 208 may include a number of memories including a main random access memory (RAM) 218 for storage of instructions and data during program execution and a read only memory (ROM) 220 in which fixed instructions are stored.
  • File storage subsystem 210 provides persistent (non-volatile) storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a Compact Disk Read Only Memory (CD-ROM) drive, an optical drive, removable media cartridges, and other like storage media.
  • Bus subsystem 204 provides a mechanism for letting the various components and subsystems of SMS 110 communicate with each other as intended. Although bus subsystem 204 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple busses.
  • SMS 110 can be of various types including a personal computer, a portable computer, a workstation, a network computer, a mainframe, a kiosk, or any other data processing system. Due to the ever-changing nature of computers and networks, the description of SMS 110 depicted in Fig. 2 is intended only as a specific example for purposes of illustrating the preferred embodiment of the computer system. Many other configurations having more or fewer components than the system depicted in Fig. 2 are possible.
  • the administrator may only specify a group of storage units to be managed (referred to as the "managed group"). For a specified group of storage units to be managed, embodiments of the present invention automatically determine when capacity utilization balancing is to be performed. Embodiments of the present invention also automatically identify the source storage unit, the file(s) to be moved, and the one or more target storage units to which the selected file(s) are to be moved.
  • each managed group may include one or more storage units.
  • the storage units in a managed group may be assigned or coupled to one server or to multiple servers.
  • a particular storage unit can be a part of multiple managed groups.
  • Multiple managed groups may be defined for a storage environment.
  • Fig. 3 depicts three managed groups according to an embodiment of the present invention.
  • the first managed group 301 includes four volumes, namely, V1, V2, V3, and V4. Volumes V1 and V2 are assigned to server S1, and volumes V3 and V4 are assigned to server S2. Accordingly, managed group 301 comprises volumes assigned to multiple servers.
  • the second managed group 302 includes three volumes, namely, V4 and V5 assigned to server S2, and V6 assigned to server S3. Volume V4 is part of managed groups 301 and 302.
  • Managed group 303 includes volumes V7 and V8 assigned to server S4. Various other managed groups may also be specified.
  • embodiments of the present invention automatically form managed groups based upon the servers or hosts that manage the storage units.
  • all storage units that are allocated to a server or host and/or volumes allocated to a NAS host may be grouped into one managed group.
  • all volumes coupled to a server or host are grouped into one managed group.
  • the managed group may also include SAN volumes that are managed by the server or host.
  • an administrator may define volume groups by selecting storage units to be included in a group. For example, a user interface may be displayed on SMS 110 that displays a list of storage units in the storage environment that are available for selection. A user may then form managed groups by selecting one or more of the displayed storage units.
  • managed groups may be automatically formed based upon criteria specified by the administrator. According to this technique, an administrator may define criteria for a managed group and a storage unit is included in a managed group if it satisfies the criteria specified for that managed group. The criteria generally relate to attributes ofthe storage units.
  • criteria for specifying a group of volumes may include a criterion related to volume capacity, a criterion related to cost of storage, a criterion related to the manufacturer of the storage device, a criterion related to device type, a criterion related to the performance characteristics of the storage device, and the like.
  • the administrator may specify the volume capacity criterion by specifying an upper bound and/or a lower bound. For example, in order to configure a "large" volumes managed group, the administrator may set a lower bound condition of 500 GB and an upper bound condition of 2 TB. Only those volumes that fall within the range identified by the lower bound and the upper bound are included in the "large" volumes managed group. The administrator may set up a managed volume group for "expensive" volumes by specifying a lower bound of $2 per GB and an upper bound of $5 per GB. Only those volumes that fall within the specified cost range are then included in the "expensive" volumes managed group.
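The bound-based membership test in the preceding example can be sketched as follows. The volume records and field names (`capacity_gb`, `cost_per_gb`) are hypothetical; the bounds mirror the 500 GB–2 TB and $2–$5 per GB examples above.

```python
volumes = [
    {"name": "V1", "capacity_gb": 750, "cost_per_gb": 1.5},
    {"name": "V2", "capacity_gb": 300, "cost_per_gb": 3.0},
    {"name": "V3", "capacity_gb": 1800, "cost_per_gb": 4.5},
]

def in_managed_group(volume, key, lower=None, upper=None):
    """Return True if the volume attribute falls within the
    administrator-specified bounds (a missing bound is unbounded)."""
    value = volume[key]
    if lower is not None and value < lower:
        return False
    if upper is not None and value > upper:
        return False
    return True

# "Large" volumes group: 500 GB lower bound, 2 TB (2000 GB) upper bound.
large = [v["name"] for v in volumes
         if in_managed_group(v, "capacity_gb", 500, 2000)]
# "Expensive" volumes group: $2 per GB lower bound, $5 per GB upper bound.
expensive = [v["name"] for v in volumes
             if in_managed_group(v, "cost_per_gb", 2.0, 5.0)]
```

Here V1 and V3 qualify as "large", while V2 and V3 qualify as "expensive"; a volume such as V3 may belong to multiple managed groups.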
  • the administrator may set up a managed group by specifying that storage units manufactured by a particular manufacturer or storage units having a particular model number are to be included in the managed group.
  • the administrator may also specify a device type for forming a managed group.
  • the device type may be selectable from a list of device types including SCSI, Fibre Channel, IDE, NAS, etc.
  • a storage unit is then included in a managed group if its device type matches the administrator-specified device type(s).
  • Device-based groups may also be configured in which all volumes allocated from the same device, regardless of whether those volumes are assigned to a single server or multiple servers in a network, are grouped into one group.
  • Fig. 4 is a simplified high-level flowchart 400 depicting a method of balancing storage capacity utilization for a managed group of storage units according to an embodiment of the present invention. The method depicted in Fig. 4 may be performed by software modules executed by a processor, hardware modules, or combinations thereof.
  • Flowchart 400 depicted in Fig. 4 is merely illustrative of an embodiment of the present invention and is not intended to limit the scope of the present invention. Other variations, modifications, and alternatives are also within the scope of the present invention.
  • the processing depicted in Fig. 4 assumes that the storage units are in the form of volumes. It should be apparent that the processing can also be applied to other types of storage units.
  • embodiments of the present invention continuously or periodically monitor and gather information on the capacity usage of the storage units in a storage environment.
  • the gathered information may be used to detect the over-capacity condition.
  • the over-capacity condition may also be detected using other techniques known to those skilled in the art.
  • the used storage capacity of the most full volume in the managed group of volumes is 82% (i.e., the volume is experiencing an over-capacity condition) and the used storage capacity of the least full volume in the managed group of volumes is 71%.
  • the managed group of volumes is considered balanced since (82 − 71) < 12.
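The band-threshold test in this example reduces to a one-line check; the threshold value of 12 is taken from the example above and is an administrator-configurable assumption.

```python
def is_balanced(used_pcts, band_threshold=12):
    """A managed group is balanced when the spread between the most-full
    and least-full volumes stays within the band threshold."""
    return max(used_pcts) - min(used_pcts) < band_threshold
```

With used capacities of 82% and 71%, the spread is 11, so the group is considered balanced; a spread of 12 or more would trigger rebalancing.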
  • a volume from which data is to be moved (i.e., the source volume) is determined from the managed group of volumes.
  • the identity of the source volume depends on the type of condition detected in step 402. For example, if an over-capacity condition was detected in step 402, then the volume that is experiencing the over-capacity condition is determined to be the source volume in step 406. If the condition in step 402 was triggered because the difference in used capacity of any two volumes (e.g., the least full volume and the most full volume) in the managed group of volumes exceeds the band threshold value, then the fullest volume is determined to be the source volume in step 406. Other techniques may also be used to determine the source volume from the managed group of volumes.
  • Various techniques may be used for selecting the file to be moved from the source volume. According to one technique, the largest file stored on the source volume is selected. According to another technique, the least recently accessed file may be selected to be moved. Other file attributes such as age of the file, type of the file, etc. may also be used to select a file to be moved.
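The selection techniques above (largest file, least recently accessed) can be sketched as follows; the file records and field names are illustrative assumptions, not part of the described method.

```python
files = [
    {"name": "a.dat", "size": 500, "last_access": 200},
    {"name": "b.dat", "size": 900, "last_access": 300},
    {"name": "c.dat", "size": 100, "last_access": 100},
]

def select_file(files, strategy="largest"):
    """Pick the next file to move from the source volume using one of
    the attribute-based techniques described above."""
    if strategy == "largest":
        return max(files, key=lambda f: f["size"])
    if strategy == "least_recently_accessed":
        return min(files, key=lambda f: f["last_access"])
    raise ValueError(f"unknown strategy: {strategy}")
```

Other attributes (file age, file type) would slot in as additional strategies in the same way.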
  • Data value scores (DVSs) are then calculated for the files stored on the source volume selected in step 406 of Fig. 4 (step 504).
  • the file with the highest DVS is then selected for the move operation (step 506).
  • the processing depicted in Fig. 5 is performed the first time that a file is to be selected. During this first pass, the files may be ranked based upon their DVSs calculated in step 504. The ranked list of files is then available for subsequent selections of the files during subsequent passes of the flowchart depicted in Fig. 4. The highest ranked and previously unselected file is then selected during each pass.
  • migrated files are moved before original files.
  • two separate ranked lists are created based upon the DVSs associated with the files (or based upon file size): one list comprising migrated files ranked based upon their DVSs, and the other comprising original files ranked based upon their DVSs.
  • files from the ranked migrated files list are selected before selection of files from the ranked original files list (i.e., files from the original files list are not selected until the files on the migrated files list have been selected and moved).
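The two-list ordering described above — migrated files drained first, each list ranked by descending DVS — can be sketched as a single sort-and-concatenate step. The `migrated` and `dvs` field names are assumptions for this example.

```python
def move_order(files):
    """Rank migrated files ahead of original files; within each list,
    order by descending data value score (DVS)."""
    migrated = sorted((f for f in files if f["migrated"]),
                      key=lambda f: f["dvs"], reverse=True)
    original = sorted((f for f in files if not f["migrated"]),
                      key=lambda f: f["dvs"], reverse=True)
    return migrated + original
```

Files are then selected for the move operation in this order, so no original file is selected until every migrated file has been moved.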
  • file groups may be configured for the storage environment.
  • a file is included in a file group if the file satisfies criteria specified for the file group.
  • the file group criteria may be specified by the administrator. For example, an administrator may create file groups based upon a business value associated with the files. The administrator may group files that are deemed important or critical for the business into one file group (a "more important" file group) and the other files may be grouped into a second group (a "less important" file group). Other criteria may also be used for defining file groups including file size, file type, file owner or group of owners, last modified time of the file, last access time of a file, etc.
  • the file groups may be created by the administrator or automatically by a storage policy engine.
  • the file groups may also be prioritized relative to each other depending upon the files included in the file groups. Based upon the priorities associated with the file groups, files from a certain file group may be selected for the move operation in step 506 before files from another group.
  • the move operation may be configured such that files from the "less important" file group are moved before files from the "more important" file group. Accordingly, in step 506, files from the "less important" file group are selected for the move operation before files from the "more important" file group.
  • the DVSs associated with the files may determine the order in which the files are selected for the move operation.
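The group-priority selection described above can be sketched as follows. The priority encoding (a lower value means the group is drained first, i.e., the "less important" group) and the field names are assumptions for this example.

```python
def select_from_groups(file_groups):
    """file_groups: list of (priority, files) pairs; the group with the
    lowest priority value (e.g. 'less important') is drained first.
    Within a group, the file with the highest DVS is selected next."""
    for _, files in sorted(file_groups, key=lambda g: g[0]):
        pending = [f for f in files if not f.get("moved")]
        if pending:
            return max(pending, key=lambda f: f["dvs"])
    return None  # nothing left to move
```

Only when the lower-priority group is exhausted does selection fall through to the next group.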
  • FIG. 6 is a simplified flowchart 600 depicting a method of selecting a file for a move operation according to an embodiment of the present invention wherein multiple placement rules are configured.
  • the processing depicted in Fig. 6 is performed in step 408 of the flowchart depicted in Fig. 4.
  • the processing in Fig. 6 may be performed by software modules executed by a processor, hardware modules, or combinations thereof.
  • the processing is performed by a policy management engine (PME) executing on SMS 110.
  • the processing depicted in Fig. 6 is performed the first time that a file is to be selected during the first pass of the flowchart depicted in Fig. 4.
  • the files may be ranked based upon their DVSs in step 610.
  • the ranked list of files is then available for subsequent selections of the files during subsequent passes of the flowchart depicted in Fig. 4.
  • the highest ranked and previously unselected file is then selected during each subsequent pass.
  • files that contain migrated data are selected for the move operation before files that contain original data (i.e., files that have not been migrated).
  • a migrated file comprises data that has been migrated (or remigrated) from its original storage location by applications such as HSM applications.
  • a stub or tag file is left in the original storage location of the migrated file identifying the migrated location of the file.
  • An original file represents a file that has not been migrated or remigrated.
  • Fig. 7 is a simplified flowchart 700 depicting a method of selecting a target volume from a managed group of volumes according to an embodiment ofthe present invention.
  • the processing depicted in Fig. 7 is performed in step 410 of the flowchart depicted in Fig. 4.
  • the processing in Fig. 7 may be performed by software modules executed by a processor, hardware modules, or combinations thereof.
  • the processing is performed by a policy management engine (PME) executing on SMS 110.
  • Flowchart 700 depicted in Fig. 7 is merely illustrative of an embodiment of the present invention and is not intended to limit the scope of the present invention. Other variations, modifications, and alternatives are also within the scope of the present invention.
  • a placement rule to be used for determining a target volume from the managed group of target volumes is determined (step 702).
  • if only a single placement rule is configured, that placement rule is selected in step 702.
  • the placement rule selected in step 702 corresponds to the placement rule that generated the DVS associated with the selected file.
  • a storage value score (SVS) (or "relative storage value score," RSVS) is generated for each volume in the managed group of volumes (step 704).
  • the SVS for a volume indicates the degree of suitability of storing the selected file on that volume.
  • the SVS may not be calculated for the source volume in step 704.
  • Various techniques may be used for calculating the SVSs. According to an embodiment of the present invention, the SVSs may be calculated using techniques described in U.S. Patent Application 10/232,875 filed August 30, 2002 (Attorney Docket No. 21154-000210US), and described below.
  • the SVSs are referred to as relative storage value scores (RSVSs) in U.S. Patent Application 10/232,875.
  • the volume with the highest SVS is then selected as the target volume (step 706).
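Given per-volume SVSs (however they are calculated), target selection reduces to picking the best-scoring volume other than the source. A minimal sketch, with a dictionary of scores standing in for the computation of step 704:

```python
def select_target(svs_by_volume, source):
    """Choose the non-source volume with the highest storage value
    score (SVS) as the target for the move operation (step 706)."""
    candidates = {v: s for v, s in svs_by_volume.items() if v != source}
    return max(candidates, key=candidates.get)
```

Excluding the source volume mirrors the note above that an SVS need not be calculated for the source volume in step 704.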
  • Fig. 8 is a simplified block diagram showing modules that may be used to implement an embodiment of the present invention.
  • the modules depicted in Fig. 8 may be implemented in software, hardware, or combinations thereof.
  • the modules include a user interface module 802, a policy management engine (PME) module 804, a storage monitor module 806, and a file I/O driver module 808.
  • User interface module 802 allows a user (e.g., an administrator) to interact with the storage management system.
  • An administrator may provide rules/policy information for managing storage environment 812, information identifying the managed groups of storage units, thresholds information, selection criteria, etc., via user interface module 802.
  • the information provided by the user may be stored in memory and/or disk storage 810.
  • Information related to storage environment 812 may be output to the user via user interface module 802.
  • the information related to the storage environment that is output may include status information about the capacity of the various storage units in the storage environment, the status of utilized-capacity balancing operations, error conditions, and other information related to the storage system.
  • User interface module 802 may also provide interfaces that allow a user to define the managed groups of storage units using one or more techniques described above.
  • File I/O driver module 808 is configured to intercept file system calls received from consumers of data stored by storage environment 812. For example, file I/O driver module 808 is configured to intercept any file open call (which can take different forms in different operating systems) received from an application, user, or any data consumer. When file I/O driver module 808 determines that a requested file has been migrated from its original location to a different location, it may suspend the file open call and perform the following operations: (1) File I/O driver 808 may determine the actual location of the requested data file in storage environment 812. This can be done by looking up the file header or stub file that is stored in the original location. Alternatively, if the file location information is stored in a persistent storage location (e.g., a database managed by PME module 804), file I/O driver 808 may determine the actual remote location of the file from that persistent location;
  • File I/O driver 808 then resumes the file open call so that the application can resume with the restored data.
  • File I/O driver 808 may also create stub or tag files.
  • the "location constraint information" for a particular placement rule specifies one or more constraints associated with storing information on a storage unit based upon the particular placement rule.
  • Location constraint information generally specifies parameters associated with a storage unit that need to be satisfied for storing information on the storage unit.
  • the location constraint information may be left empty or may be set to NULL to indicate that no constraints are applicable for the placement rule. For example, no constraints have been specified for placement rule 908-3 depicted in Fig. 9.
  • the constraint information may be set to LOCAL (e.g., location constraint information for placement rules 908-1 and 908-6). This indicates that the file is to be stored on a local storage unit that is local to the device used to create the file and is not to be moved or migrated to another storage unit.
  • a placement rule is not eligible for selection if the constraint information is set to LOCAL, and a DVS of 0 (zero) is assigned for that specific placement rule.
  • a specific storage unit group, or a specific device may be specified in the location constraint information for storing the data file.
  • constraints or requirements may also be specified (e.g., constraints related to file size, availability, etc.).
  • the constraints specified by the location constraint information are generally hard constraints implying that a file cannot be stored on a storage unit that does not satisfy the location constraints.
  • a numerical score (referred to as the Data Value Score or DVS) can be generated for a file for each placement rule.
  • the DVS generated for the file and the placement rule indicates the level of suitability or applicability ofthe placement rule for that file.
  • the value of the DVS calculated for a particular file using a particular placement rule is based upon the characteristics of the particular file. For example, according to an embodiment of the present invention, for a particular file, higher scores are generated for placement rules that are deemed more suitable or relevant to the particular file.
  • the file_selection_score (also referred to as the "data characteristics score") for a placement rule is calculated based upon the file selection criteria information ofthe placement rule and the data_usage_score for the placement rule is calculated based upon the data usage criteria information specified for the placement rule.
  • the file selection criteria information and the data usage criteria information specified for the placement rule may comprise one or more clauses or conditions involving one or more parameters connected by Boolean connectors (see Fig. 9). Accordingly, calculation of the file_selection_score involves calculating numerical values for the individual clauses that make up the file selection criteria information for the placement rule and then combining the individual clause scores to calculate the file_selection_score for the placement rule. Likewise, calculation of the data_usage_score involves calculating numerical values for the individual clauses specified for the data usage criteria information for the placement rule and then combining the individual clause scores to calculate the data_usage_score for the placement rule.
  • the following rules are used to combine scores generated for the individual clauses to calculate a file_selection_score or data_usage_score:
  • Rule 1 For an N-way AND operation (i.e., for N clauses connected by an AND connector), the resultant value is the sum of all the individual values calculated for the individual clauses divided by N.
  • Rule 2 For an N-way OR operation (i.e., for N clauses connected by an OR connector), the resultant value is the largest value calculated for the N clauses.
  • Rule 3 According to an embodiment of the present invention, the file_selection_score and the data_usage_score are between 0 and 1 (both inclusive).
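A minimal sketch of combination Rules 1 and 2, assuming the individual clause scores have already been computed as values between 0 and 1 (so that, per Rule 3, the combined scores also stay between 0 and 1):

```python
def combine_and(clause_scores):
    """Rule 1: for an N-way AND, the result is the sum of the
    individual clause scores divided by N."""
    return sum(clause_scores) / len(clause_scores)

def combine_or(clause_scores):
    """Rule 2: for an N-way OR, the result is the largest of the
    individual clause scores."""
    return max(clause_scores)
```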
  • the value for each individual clause specified in the file selection criteria is calculated using the following guidelines:
  • a score of 1 is assigned if the parameter criteria are met, else a score of 0 is assigned.
  • For example, for placement rule 908-4 depicted in Fig. 9, if the file for which the DVS is calculated is of type "Email Files", then a score of 1 is assigned for the clause.
  • the file_selection_score for placement rule 908-4 is also set to 1 since it comprises only one clause. However, if the file is not an email file, then a score of 0 is assigned for the clause and accordingly the file_selection_score is also set to 0.
  • the Score is reset to 0 if it is negative.
  • the Score is set to 1 if the parameter inequality is satisfied.
  • the Score is reset to 0 if it is negative.
  • the file_selection_score is then calculated based on the individual scores for the clauses in the file selection criteria information using Rules 1, 2, and 3, as described above.
  • the file_selection_score represents the degree of matching (or suitability) between the file selection criteria information for a particular placement rule and the file for which the score is calculated. It should be evident that various other techniques may also be used to calculate the file_selection_score in alternative embodiments of the present invention.
  • the score for each clause specified in the data usage criteria information for a placement rule is calculated using the following guidelines:
  • the score for the clause is set to 1 if the parameter condition ofthe clause is met.
  • Date_Data: relevant date information for the data file.
  • Date_Rule: relevant date information in the rule.
  • the Score is reset to 0 if it is negative.
  • the data_usage_score represents the degree of matching (or suitability) between the data usage criteria information for a particular placement rule and the file for which the score is calculated.
  • the DVS is then calculated based upon the file_selection_score and data_usage_score.
  • the DVS for a placement rule thus quantifies the degree of matching (or suitability) between the conditions specified in the file selection criteria information and the data usage criteria information for the placement rule and the characteristics of the file for which the score is calculated. According to an embodiment of the present invention, higher scores are generated for placement rules that are deemed more suitable (or are more relevant) for the file.
  • the rules are initially ranked based upon DVSs calculated for the placement rules. According to an embodiment of the present invention, if two or more placement rules have the same DVS value, then the following tie-breaking rules may be used: (a) The placement rules are ranked based upon priorities assigned to the placement rules by a user (e.g., system administrator) of the storage environment.
  • (c) If neither (a) nor (b) is able to break the tie between placement rules, some other criteria may be used to break the tie. For example, according to an embodiment of the present invention, the order in which the placement rules are encountered may be used to break the tie. In this embodiment, a placement rule that is encountered earlier is ranked higher than a subsequent placement rule. Various other criteria may also be used to break ties. It should be evident that various other techniques may also be used to rank the placement rules in alternative embodiments of the present invention.
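The ranking with tie-breakers can be sketched as below; tie-breaker (b) is not reproduced in this excerpt and is therefore omitted, and Python's stable sort supplies the encounter-order fallback (c). The rule fields are hypothetical:

```python
def rank_placement_rules(rules):
    """Rank placement rules by descending DVS; ties are broken by the
    user-assigned priority (tie-breaker (a), higher priority first).
    sorted() is stable, so rules that still tie keep their original
    encounter order, which implements tie-breaker (c).

    rules: list of dicts with "dvs" and "priority" keys.
    """
    return sorted(rules, key=lambda r: (-r["dvs"], -r["priority"]))
```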
  • All files that meet all the selection criteria for movement are assigned a DVS of 1, as calculated from the above steps.
  • the files are then ranked again by recalculating the DVS using another equation.
  • the new DVS score equation is defined as:
  • DVS = file_size / last_access_time, where file_size is the size of the file and last_access_time is the last time that the file was accessed.
  • this DVS calculation ranks the files based on their impact on the overall system when they are moved from the source volume, with a higher score representing a lower impact.
  • Moving a larger file is more effective for balancing capacity utilization, and moving a file that has not been accessed recently reduces the chances that the file will be recalled.
  • various other techniques may also be used to rank files that have a DVS of 1 in alternative embodiments ofthe present invention.
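The re-ranking of files that all scored a DVS of 1 can be sketched as follows; the tuple layout is hypothetical, and last_access_time is taken literally as the divisor in the equation above (with a positive epoch timestamp, an older access time yields a smaller denominator and therefore a higher score, i.e., a lower impact when the file is moved):

```python
def rank_tied_files(files):
    """Re-rank files that each received a DVS of 1 using
    DVS = file_size / last_access_time.  Higher scores (larger files
    and/or less recently accessed files) are ranked first, since
    moving them has a lower impact on the overall system.

    files: list of (name, file_size, last_access_time) tuples, with
    last_access_time > 0.
    """
    scored = [(size / atime, name) for name, size, atime in files]
    scored.sort(reverse=True)  # highest score (lowest impact) first
    return [name for _, name in scored]
```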
  • a SVS for a storage unit is calculated using the following steps:
  • a "Bandwidth_factor" variable is set to zero (0) if the bandwidth supported by the storage unit for which the score is calculated is less than the bandwidth requirement, if any, specified in the location constraints criteria specified for the placement rule for which the score is calculated.
  • the location constraint criteria for placement rule 908-2 depicted in Fig. 9 specifies that the bandwidth of the storage unit should be greater than 40 MB. Accordingly, if the bandwidth supported by the storage unit is less than 40 MB, then the "Bandwidth_factor" variable is set to 0.
  • Bandwidth_factor = ((Bandwidth supported by the storage unit) - (Bandwidth required by the location constraint of the selected placement rule)) + K, where K is set to some constant integer. According to an embodiment of the present invention, K is set to 1. Accordingly, the value of Bandwidth_factor is set to a non-negative value.
  • the desired_threshold_% for a storage device is usually set by a system administrator.
  • the current_usage_% value is monitored by embodiments ofthe present invention.
  • the "cost" value may be set by the system administrator.
  • the formula for calculating SVS shown above is representative of one embodiment of the present invention and is not meant to reduce the scope of the present invention.
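The SVS formula itself is not reproduced in this excerpt. The sketch below is one form that is consistent with the properties described in the remainder of this section (zero when Bandwidth_factor is zero or when usage equals the desired threshold, positive when free capacity remains, negative when the unit is over its threshold, and inversely proportional to cost); it is offered as an illustration, not as the claimed formula:

```python
def compute_svs(bandwidth_factor, desired_threshold_pct, current_usage_pct, cost):
    """Illustrative storage value score.  Zero when the bandwidth
    requirement is unmet (bandwidth_factor == 0) or when the unit is
    exactly at its desired threshold; positive when usage is below the
    threshold; negative when usage exceeds it; and inversely
    proportional to the cost of storing data on the unit.
    """
    if bandwidth_factor == 0:
        return 0.0
    return bandwidth_factor * (desired_threshold_pct - current_usage_pct) / cost
```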
  • the availability of a storage unit may also be used to determine the SVS for the device.
  • availability of a storage unit indicates the amount of time that the storage unit is available during those time periods when it is expected to be available.
  • the value of SVS for a storage unit is directly proportional to the availability ofthe storage unit.
  • STEP 3 Various adjustments may be made to the SVS calculated according to the above steps. For example, in some storage environments, the administrator may want to group "similar" files together in one storage unit. In other environments, the administrator may want to distribute files among different storage units.
  • the SVS may be adjusted to accommodate the policy adopted by the administrator. Performance characteristics associated with a network that is used to transfer data from the storage devices may also be used to adjust the SVSs for the storage units. For example, the access time (i.e., the time required to provide data stored on a storage unit to a user) of a storage unit may be used to adjust the SVS for the storage unit.
  • the throughput of a storage unit may also be used to adjust the SVS value for the storage unit.
  • the SVS value is calculated such that it is directly proportional to the desirability ofthe storage unit for storing the file.
  • a higher SVS value represents a more desirable storage unit for storing a file.
  • the SVS value is directly proportional to the available capacity percentage. Accordingly, a storage unit with higher available capacity is more desirable for storing a file.
  • the SVS value is inversely proportional to the cost of storing data on the storage unit. Accordingly, a storage unit with lower storage costs is more desirable for storing a file.
  • the SVS value is directly proportional to the bandwidth requirement. Accordingly, a storage unit supporting a higher bandwidth is more desirable for storing the file. SVS is zero if the bandwidth requirements are not satisfied.
  • the SVS formula for a particular storage unit combines the various storage unit characteristics to generate a score that represents the degree of desirability of storing data on the particular storage unit.
  • SVS is zero (0) if the value of Bandwidth_factor is zero.
  • Bandwidth_factor is set to zero if the bandwidth supported by the storage unit is less than the bandwidth requirement, if any, specified in the location constraints criteria information specified for the selected placement rule. Accordingly, if the value of SVS for a particular storage unit is zero (0), it implies that the bandwidth supported by the storage unit is less than the bandwidth required by the placement rule, or that the storage unit is already at or exceeds the desired capacity threshold.
  • SVS is zero (0) if the desired_threshold_% is equal to the current_usage_%.
  • if the SVS for a storage unit is positive, it indicates that the storage unit meets both the bandwidth requirements (i.e., Bandwidth_factor is non-zero) and also has enough capacity for storing the file (i.e., desired_threshold_% is greater than the current_usage_%).
  • the higher the SVS value the more suitable (or desirable) the storage unit is for storing a file.
  • the storage unit with the highest positive RSVS is the most desirable candidate for storing the file.
  • the SVS for a particular storage unit thus provides a measure for determining the degree of desirability of storing data on the particular storage unit relative to other storage units for a particular placement rule being processed. Accordingly, the SVS is also referred to as the relative storage value score (RSVS).
  • the SVS, in conjunction with the placement rules and their rankings, is used to determine an optimal storage location for storing the data to be moved.
  • the SVS for a particular storage unit may be negative if the storage unit meets the bandwidth requirements but the storage unit's usage is above the intended threshold (i.e., current_usage_% is greater than the desired_threshold_%).
  • the relative magnitude ofthe negative value indicates the degree of over-capacity ofthe storage unit.
  • the closer the SVS of such a storage unit is to zero (0), the more desirable the storage unit is for storing the data file.
  • for example, the over-capacity of a first storage unit having an SVS of -0.9 is greater than the over-capacity of a second storage unit having an SVS of -0.1. Accordingly, the second storage unit is a more attractive candidate for storing the data file as compared to the first storage unit. Accordingly, the SVS, even if negative, can be used in ranking the storage units relative to each other for purposes of storing data.
  • the SVS for a particular storage unit thus serves as a measure for determining the degree of desirability or suitability ofthe particular storage unit for storing data relative to other storage devices.
  • a storage unit having a positive SVS value is a better candidate for storing the data file than a storage unit with a negative SVS value, since a positive value indicates that the storage unit meets the bandwidth requirements for the data file and also possesses sufficient capacity for storing the data file.
  • a storage unit with a higher positive SVS is a more desirable candidate for storing the data file than a storage unit with a lower SVS value, i.e., the storage unit having the highest positive SVS value is the most desirable storage unit for storing the data file.
  • if a storage unit with a positive SVS value is not available, then storage units with negative SVS values are more desirable than storage units with an SVS value of zero (0).
  • the rationale here is that it is better to select a storage unit that satisfies the bandwidth requirements (even though the storage unit is over capacity) than a storage unit that does not meet the bandwidth requirements (i.e., has an SVS of zero).
  • the storage unit with the highest SVS value is the most desirable candidate for storing the data file.
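The preference order described above (highest positive SVS first, then negative SVSs closest to zero, then zero-SVS units last) can be sketched as a sort key; the (name, svs) pair layout is hypothetical:

```python
def svs_sort_key(svs):
    """Map an SVS value to a sortable tuple (smaller tuples sort first):
    tier 0: positive SVS, larger values first (bandwidth and capacity OK);
    tier 1: negative SVS, values closest to zero first (over capacity);
    tier 2: zero SVS last (bandwidth requirement not met).
    """
    if svs > 0:
        return (0, -svs)
    if svs < 0:
        return (1, -svs)  # -svs > 0; closer to zero sorts first
    return (2, 0.0)

def rank_storage_units(units):
    """units: list of (unit_name, svs) pairs; most desirable first."""
    return sorted(units, key=lambda pair: svs_sort_key(pair[1]))
```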

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention makes it possible to automatically determine when utilized-capacity balancing is to be performed in a storage environment for a group of storage units. A source storage unit, from which data is to be moved in order to balance capacity utilization, is determined from the group of storage units. Utilized-capacity balancing is performed by moving data files from the source storage unit to one or more target storage units in the group of storage units. The storage units of a group may be assigned to one or more servers. The first managed group (301) comprises four volumes (V1, V2, V3, and V4). Volumes (V1) and (V2) are assigned to server (S1), and volumes (V3) and (V4) are assigned to server (S2). The second managed group (302) comprises three volumes: (V4) and (V5), assigned to server (S2), and (V6), assigned to server (S3). Managed group (303) comprises (V7) and (V8), assigned to server (S4).
PCT/US2003/027039 2002-08-30 2003-08-27 Techniques d'equilibrage d'utilisation de capacite dans un environnement de stockage Ceased WO2004021223A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003260124A AU2003260124A1 (en) 2002-08-30 2003-08-27 Techniques for balancing capacity utilization in a storage environment

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US40758702P 2002-08-30 2002-08-30
US40745002P 2002-08-30 2002-08-30
US60/407,450 2002-08-30
US60/407,587 2002-08-30

Publications (1)

Publication Number Publication Date
WO2004021223A1 true WO2004021223A1 (fr) 2004-03-11

Family

ID=31981511

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2003/027039 Ceased WO2004021223A1 (fr) 2002-08-30 2003-08-27 Techniques d'equilibrage d'utilisation de capacite dans un environnement de stockage
PCT/US2003/027040 Ceased WO2004021224A1 (fr) 2002-08-30 2003-08-27 Optimisation de capacite de stockage reposant sur des couts de stockage de donnees

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/US2003/027040 Ceased WO2004021224A1 (fr) 2002-08-30 2003-08-27 Optimisation de capacite de stockage reposant sur des couts de stockage de donnees

Country Status (2)

Country Link
AU (2) AU2003260124A1 (fr)
WO (2) WO2004021223A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012035574A1 (fr) * 2010-09-14 2012-03-22 Hitachi, Ltd. Appareil serveur et son procédé de commande pour la migration de fichier sur la base d'un quota d'utilisateurs et d'un quota de fichiers
US12141455B2 (en) 2022-03-10 2024-11-12 Google Llc Soft capacity constraints for storage assignment in a distributed environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5276867A (en) * 1989-12-19 1994-01-04 Epoch Systems, Inc. Digital data storage system with improved data migration
US5333315A (en) * 1991-06-27 1994-07-26 Digital Equipment Corporation System of device independent file directories using a tag between the directories and file descriptors that migrate with the files
US5367698A (en) * 1991-10-31 1994-11-22 Epoch Systems, Inc. Network file migration system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7092977B2 (en) 2001-08-31 2006-08-15 Arkivio, Inc. Techniques for storing data based upon storage policies
US7509316B2 (en) 2001-08-31 2009-03-24 Rocket Software, Inc. Techniques for performing policy automated operations
CN111752489A (zh) * 2020-06-30 2020-10-09 重庆紫光华山智安科技有限公司 Kubernetes中PVC模块的扩容方法及相关装置
CN111752489B (zh) * 2020-06-30 2022-06-17 重庆紫光华山智安科技有限公司 Kubernetes中PVC模块的扩容方法及相关装置

Also Published As

Publication number Publication date
AU2003262964A1 (en) 2004-03-19
WO2004021224A1 (fr) 2004-03-11
AU2003260124A1 (en) 2004-03-19

Similar Documents

Publication Publication Date Title
US20040054656A1 (en) Techniques for balancing capacity utilization in a storage environment
US20040039891A1 (en) Optimizing storage capacity utilization based upon data storage costs
US7509316B2 (en) Techniques for performing policy automated operations
US11287974B2 (en) Systems and methods for storage modeling and costing
US7092977B2 (en) Techniques for storing data based upon storage policies
US7454446B2 (en) Techniques for storing data based upon storage policies
CA2458908A1 (fr) Techniques de stockage de donnees fondees sur les modalites de stockage
US7203711B2 (en) Systems and methods for distributed content storage and management
US9542415B2 (en) Modifying information lifecycle management rules in a distributed system
JP5265023B2 (ja) ストレージシステム及びその制御方法
US20030115204A1 (en) Structure of policy information for storage, network and data management applications
US20030110263A1 (en) Managing storage resources attached to a data network
US20100125715A1 (en) Storage System and Operation Method Thereof
US20130124734A1 (en) System and method for allocation of organizational resources
US7702962B2 (en) Storage system and a method for dissolving fault of a storage system
US20100299489A1 (en) Managing Data Storage Systems
US20060010289A1 (en) Volume management system and method
WO2004021123A2 (fr) Procedes destines a restreindre les rappels dans des applications de gestion memoire
WO2004109556A1 (fr) Fonctionnement sur des fichiers transferes sans rappeler des donnees
AU2008206570A1 (en) Systems and methods for analyzing information technology systems using collaborative intelligence
WO2004021223A1 (fr) Techniques d'equilibrage d'utilisation de capacite dans un environnement de stockage
US7353358B1 (en) System and methods for reporting storage utilization

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP