
WO2018057881A1 - Different hierarchies of resource data objects for managing system resources - Google Patents

Different hierarchies of resource data objects for managing system resources

Info

Publication number
WO2018057881A1
Authority
WO
WIPO (PCT)
Prior art keywords
policy
resource
request
hierarchy
policies
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2017/052943
Other languages
French (fr)
Inventor
Brian Collins
Zachary Mohamed Shalla
Marvin Michael Theimer
John Petry
Michael Hart
Serge Hairanian
Anders Samuelsson
Salvador Salazar SEPULVEDA
Ji Luo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amazon Technologies Inc
Original Assignee
Amazon Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/275,219 external-priority patent/US11675774B2/en
Priority claimed from US15/276,708 external-priority patent/US10489424B2/en
Priority claimed from US15/276,714 external-priority patent/US10545950B2/en
Priority claimed from US15/276,711 external-priority patent/US10454786B2/en
Application filed by Amazon Technologies Inc filed Critical Amazon Technologies Inc
Publication of WO2018057881A1 publication Critical patent/WO2018057881A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources

Definitions

  • Access privileges may be defined for one or multiple users with respect to certain system components in a resource management system so that, when access requests from the users directed to those system components are received, the resource management system may indicate to the system components which requests may or may not be performed based on the defined access privileges.
  • resource management systems reduce the costs associated with modifying or enforcing actions or behaviors of system components by reducing the number of changes that have to be implemented directly at system components.
  • the ability of resource management systems to cope with growing numbers of system components, in order to define and apply appropriate actions or behaviors for those system components, may become less efficient without further capabilities to optimally manage system components.
  • FIG. 1 is a logical block diagram illustrating different hierarchies of resource data objects for managing system resources, according to some embodiments.
  • FIG. 2 is a logical block diagram illustrating a provider network that implements a resource management service that provides different hierarchies of resource data objects for managing provider network resources, according to some embodiments.
  • FIG. 3 is a logical block diagram illustrating a resource management service and a hierarchical data store, according to some embodiments.
  • FIG. 4 is a logical block diagram illustrating interactions between clients and a resource management service and between a resource management service and other services, according to some embodiments.
  • FIGS. 5A and 5B are logical illustrations of directory structures that may store resource data objects, hierarchies of resource data objects, access locks for hierarchies, and draft copies of bulk edits to hierarchies of resource data objects in a hierarchical data store, according to some embodiments.
  • FIG. 6 illustrates interactions to manage hierarchies at a resource management service, according to some embodiments.
  • FIG. 7 illustrates interactions to manage policies within hierarchies at a resource management service, according to some embodiments.
  • FIG. 8 is a high-level flowchart illustrating methods and techniques to implement maintaining different hierarchies of resource data objects for managing system resources, according to some embodiments.
  • FIG. 9 is a high-level flowchart illustrating methods and techniques to handle a policy lookup request for a resource data object, according to some embodiments.
  • FIG. 10 is a high-level flowchart illustrating methods and techniques to handle a request to add a resource data object, according to some embodiments.
  • FIG. 11 is a logical block diagram illustrating atomic application of multiple updates to a hierarchical data structure, according to some embodiments.
  • FIG. 12 illustrates interactions to perform a bulk edit at a storage engine that atomically applies multiple changes to a hierarchical data structure, according to some embodiments.
  • FIG. 13 is a high-level flowchart illustrating methods and techniques to perform atomic application of multiple updates to a hierarchical data structure, according to some embodiments.
  • FIG. 14 is a logical block diagram illustrating multi-party updates to a distributed system, according to some embodiments.
  • FIG. 15 illustrates interactions to submit agreement requests for updates to organizations, according to some embodiments.
  • FIG. 16 illustrates a state diagram for agreement requests, according to some embodiments.
  • FIG. 17 is a high-level flowchart illustrating methods and techniques to implement multi-party updates to a distributed system, according to some embodiments.
  • FIG. 18 is a logical block diagram illustrating remote policy validation for managing distributed system resources, according to some embodiments.
  • FIG. 19 is a logical block diagram illustrating a policy manager for resource management service policies applicable to provider network resources, according to some embodiments.
  • FIG. 20 illustrates interactions to manage policy types and policies in resource management service, according to some embodiments.
  • FIG. 21 illustrates interactions to attach policies to resource data objects, according to some embodiments.
  • FIG. 22 illustrates an example graphical user interface for creating and editing policies, according to some embodiments.
  • FIG. 23 is a high-level flowchart illustrating methods and techniques to implement remote policy validation for managing distributed system resources, according to some embodiments.
  • FIG. 24 is a high-level flowchart illustrating methods and techniques to implement policy validation at a remote validation agent, according to some embodiments.
  • FIG. 25 is an example computer system, according to various embodiments.
  • Various components may be described as “configured to” perform a task or tasks.
  • “configured to” is a broad recitation generally meaning “having structure that” performs the task or tasks during operation.
  • the component can be configured to perform the task even when the component is not currently performing that task (e.g., a computer system may be configured to perform operations even when the operations are not currently being performed).
  • “configured to” may be a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation.
  • the component can be configured to perform the task even when the component is not currently on.
  • the circuitry that forms the structure corresponding to “configured to” may include hardware circuits.
  • The term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination; that is, a determination may be solely based on those factors or based, at least in part, on those factors.
  • Managing system resources often involves defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources. For example, security policies, such as access rights or permitted actions for system resources, may be defined and enforced for users of the system resources.
  • data describing the resources of a system may be maintained that also describes these permitted behaviors.
  • data objects describing system resources may be maintained to identify policies that indicate the permitted behaviors for the system resources.
  • a hierarchy or structure of the resource data objects may be implemented.
  • a tree structure may be implemented that arranges the resource data objects in groups, directories, or other sets of resource data objects, which apply those policies inherited along the path of the tree structure from the resource data object to the root of the tree structure.
  • policies applied to parent nodes (e.g., the groups, directories, or other sets of resource data objects) may be inherited by child nodes (e.g., the resource data objects in the groups, directories, or sets).
  • a structure of resource data objects that arranges the resource data objects into groups based on resource type (e.g., servers, network routers, storage devices, user accounts, etc.) may provide an optimal structure for applying policies common to one resource type, but make it difficult to apply policies to the various different resource types that are utilized as part of a single department, function, or business unit within an organization (e.g., as the department may have some servers, some network routers, some storage devices, and some user accounts which would have to be individually identified within the larger groups of servers, network routers, storage devices, and user accounts in order to apply the same policy).
  • multiple hierarchies of the same resource data objects may be maintained so that policies may be optimally applied to different arrangements of the same resource data objects.
  • a different hierarchy of resource data objects that groups resource data objects by department may allow a policy applied to the department node in the hierarchy to be inherited, and thus applied, by each of the resource data objects in the same department (and not applied to those resource data objects not in the department).
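  • To make the idea above concrete, the following minimal Python sketch (all class, policy, and account names are hypothetical illustrations, not taken from this disclosure) arranges the same resource data object under two different hierarchies and shows that a policy attached to a group is inherited only by that group's members in that hierarchy:

    # Minimal sketch: the same resource data object arranged in two hierarchies.
    class Node:
        def __init__(self, name, policies=None):
            self.name = name
            self.policies = list(policies or [])
            self.parent = None
            self.children = []

        def add_child(self, child):
            child.parent = self
            self.children.append(child)
            return child

    def effective_policies(node):
        """Collect policies inherited along the path from the node to the root."""
        collected = []
        while node is not None:
            collected.extend(node.policies)
            node = node.parent
        return collected

    # The same user account is represented by a node in each hierarchy.
    title_root = Node("by-title")
    directors = title_root.add_child(Node("directors", policies=["broad-data-access"]))
    account_by_title = directors.add_child(Node("account-1234"))

    dept_root = Node("by-department")
    finance = dept_root.add_child(Node("finance", policies=["finance-cost-allocation"]))
    account_by_dept = finance.add_child(Node("account-1234"))

    print(effective_policies(account_by_title))  # ['broad-data-access']
    print(effective_policies(account_by_dept))   # ['finance-cost-allocation']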
  • FIG. 1 is a logical block diagram illustrating different hierarchies of resource data objects for managing system resources, according to some embodiments.
  • Data store 110 may store a collection of resource data objects 112.
  • Resource data objects 112 may describe resources 142 implemented as part of system 140.
  • resource data objects 112 may be files, data structures, records, or other data that describe physical system resources, such as computing devices (e.g., servers), networking devices, or storage devices, or virtual system resources, such as user accounts, user data (e.g., data objects such as database tables, data volumes, data files, etc.), user resource allocations (e.g., allocated resource bandwidth, capacity, performance, or other usage of system resources as determined by credits or budgets), virtual computing, networking, and storage resources (e.g., compute instances, clusters, or nodes), or any other component, function or process operating in system 140.
  • Various controls, actions, configurations, operations, or other definitions of the behavior of resources 142 may be managed by applying policies 150 to one or more of the resource data objects so that when various operations are performed by or on behalf of resources 142 in system 140, a lookup operation may be performed to determine which policies are applied to the resource data object corresponding to a given resource.
  • management of resources 142 may be described and maintained separately from resources 142, allowing the behaviors of resources 142 applied by policies to be easily applied, configured, changed, or enforced with respect to individual resources 142, without having to modify the resources 142 directly to enforce policies.
  • FIG. 1 illustrates that different hierarchies 120 of resource data objects 112 may be maintained.
  • hierarchy of resource data objects 120a is configured to maintain the groupings of resource data objects 112 differently than in hierarchy of resource data objects 120b.
  • hierarchy of resources 120a arranges user accounts represented by the resource data objects 112 by user title in an organization (e.g., senior vice-president, vice-president, director, manager, team lead, etc.).
  • Hierarchy 120a may be accessed and configured to apply different data access policies to user accounts based on user title (e.g., granting user accounts with higher titles greater data access and user accounts with lower titles lesser data access) by applying the different data access policies to different groups within the hierarchy (e.g., by applying a particular data access policy to a group with a particular user title, all user accounts that are members of the group as maintained in hierarchy 120a would inherit the application of that data access policy).
  • hierarchy of resources 120b may arrange user accounts by business unit or function (e.g., product category A, product category B, engineering, finance, legal, etc.) so that by applying a cost allocation policy to the different business units or functions, the costs incurred by user accounts grouped in the same department (e.g., vice-president, director, manager, team lead, etc. in product category B) may be deducted or obtained from a specific budget or monetary account.
  • hierarchy 120b may be easily updated to apply particular cost allocation policies to different business units or functions (e.g., by applying a particular cost allocation policy to a group with a particular business unit or function, all user accounts that are members of the group as maintained in hierarchy 120b would inherit the application of that cost allocation policy).
  • Maintaining different hierarchies 120 allows the application of policies 150 to be optimized. In large scale systems, such as provider network 200 discussed below with regard to FIG. 2, hundreds of thousands or millions of resources may be managed.
  • Optimized arrangement of the different resources in different hierarchies allows for more efficient application of policies to the resources described by the resource data objects in the different hierarchies, as noted in the example scenario given above.
  • policy lookup mechanisms for the resources may be automated so that changes or updates to policies may be applied to the hierarchies of the resource data objects, and enforced upon demand for resources when lookup operations for the resources are performed.
  • Hierarchies may also allow for the management of resources to be more easily distributed to different users. For example, access to hierarchies may be limited to specific users, so that users that manage system resources using one hierarchy may not have to understand other arrangements of resource data objects or other policies applied in other hierarchies, effectively providing isolation between hierarchies. In this way, modifications to hierarchies (e.g., such as changes to the arrangement of resource data objects or application of policies) may be made concurrently without interfering with other resource management changes. For instance, security changes may be made to a security hierarchy while changes to a cost allocation hierarchy are made without encountering conflicts (e.g., read or write locks on resource data objects that prevent changes from being performed).
  • access to sensitive management information may be limited by restricting the users able to view or change a hierarchy, so that users without access permission for a hierarchy may not view or make changes to the hierarchy.
  • client 130a may present identification credentials that grant permission to access hierarchy 120a
  • client 130b may present identification credentials that grant permission to access hierarchy 120b.
  • client 130b may be denied access to hierarchy 120a as the presented identification credentials may not be granted access to hierarchy 120a.
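  • The per-hierarchy access restriction described above might be sketched as a simple permission check of the presented identification credentials against a per-hierarchy permission set; the names and data layout below are hypothetical:

    # Hypothetical check of identification credentials against per-hierarchy permissions.
    HIERARCHY_PERMISSIONS = {
        "hierarchy-120a": {"client-130a"},
        "hierarchy-120b": {"client-130b"},
    }

    def authorize(credentials, hierarchy):
        if credentials not in HIERARCHY_PERMISSIONS.get(hierarchy, set()):
            raise PermissionError(f"{credentials} is not granted access to {hierarchy}")

    authorize("client-130a", "hierarchy-120a")       # permitted
    try:
        authorize("client-130b", "hierarchy-120a")   # denied, as in the example above
    except PermissionError as error:
        print(error)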
  • the application of policies or arrangement of resource data objects may be limited by the type or creator of the hierarchy. For instance, security policies may only be applied to a hierarchy created by a user with the appropriate credentials for managing resource security policies. In some embodiments, certain policies or types of policies may be subject to application limitations. For instance, only one instance of a cost allocation policy may be applied at one out of multiple hierarchies (so that other hierarchies may not have conflicting cost allocation policies applied).
  • resource data objects may be assigned to, or be members of, different groups, which are also nodes in a hierarchy. Different arrangements of groups, containers, or other collections of resource data objects may be implemented for each hierarchy. In some embodiments, not all resource data objects may be present in every hierarchy.
  • the specification first describes an example of a provider network implementing multiple different resources as part of offering different services to clients of the provider network.
  • the provider network may also implement a resource management service that maintains different hierarchies of resource data objects for managing provider network resources corresponding to the resource data objects, according to various embodiments. Included in the description of the example resource management service are various aspects of the example resource management service along with the various interactions between the resource management service, other services in the provider network, and clients of the provider network.
  • the specification then describes a flowchart of various embodiments of methods for maintaining different hierarchies of resource data objects for managing provider network resources.
  • the specification describes an example system that may implement the disclosed techniques. Various examples are provided throughout the specification.
  • FIG. 2 is a logical block diagram illustrating a provider network that implements a resource management service that provides different hierarchies of resource data objects for managing provider network resources, according to some embodiments.
  • Provider network 200 may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients 270.
  • Provider network 200 may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system 2500 described below with regard to FIG. 25), needed to implement and distribute the infrastructure and services offered by the provider network 200.
  • provider network 200 may implement computing service(s) 210, networking service(s) 220, storage service(s) 230, resource management service 240 (which is discussed in detail below with regard to FIGS. 3 - 7), and/or any other type of network based services 250 (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services as well as services for operating the services offered by provider network 200, including deployment service 252, billing service 254, access management service 256, and resource tag service 258).
  • Clients 270 may access these various services offered by provider network 200 via network 260.
  • network-based services may themselves communicate and/or make use of one another to provide different services.
  • various ones of computing service(s) 210, networking service(s) 220, storage service(s) 230, and/or other service(s) 250 may lookup policies applied to resource data objects in different hierarchies maintained as part of resource management service 240 describing resources in the services in order to enforce behaviors, actions, configurations, or controls indicated in the policies.
  • the components illustrated in FIG. 2 may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques.
  • the components of FIG. 2 may be implemented by a system that includes a number of computing nodes (or simply, nodes), each of which may be similar to the computer system embodiment illustrated in FIG. 25 and described below.
  • the functionality of a given service system component (e.g., a component of the resource management service or a component of the computing service) may be implemented by a particular node or may be distributed across several nodes.
  • a given node may implement the functionality of more than one service system component (e.g., more than one storage service system component).
  • Computing service(s) 210 may provide computing resources to client(s) 270 of provider network 200. These computing resources may in some embodiments be offered to clients in units called "instances," such as virtual or physical compute instances or storage instances.
  • a virtual compute instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor) or machine image.
  • a number of different types of computing devices may be used singly or in combination to implement compute instances, in different embodiments, including general purpose or special purpose computer servers, storage devices, network devices and the like.
  • clients 270 or any other user may be configured (and/or authorized) to direct network traffic to a compute instance.
  • Compute instances may operate or implement a variety of different platforms, such as application server instances, Java™ virtual machines (JVMs), general purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like, or high-performance computing platforms, suitable for performing client 270 applications, without, for example, requiring the client 270 to access an instance.
  • compute instances have different types or configurations based on expected uptime ratios.
  • the uptime ratio of a particular compute instance may be defined as the ratio of the amount of time the instance is activated, to the total amount of time for which the instance is reserved. Uptime ratios may also be referred to as utilizations in some implementations.
  • a client may decide to reserve the instance as a Low Uptime Ratio instance, and pay a discounted hourly usage fee in accordance with the associated pricing policy. If the client expects to have a steady-state workload that requires an instance to be up most of the time, the client may reserve a High Uptime Ratio instance and potentially pay an even lower hourly usage fee, although in some embodiments the hourly fee may be charged for the entire duration of the reservation, regardless of the actual number of hours of use, in accordance with pricing policy.
  • An option for Medium Uptime Ratio instances, with a corresponding pricing policy may be supported in some embodiments as well, where the upfront costs and the per-hour costs fall between the corresponding High Uptime Ratio and Low Uptime Ratio costs.
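  • As a small worked example of the uptime ratio defined above (the hours and the reservation thresholds below are illustrative assumptions, not values from this disclosure): an instance reserved for 30 days that is active for 9 days has an uptime ratio of 9/30 = 0.3, which a client might reserve as a Low Uptime Ratio instance.

    # Illustrative uptime-ratio calculation; the thresholds are hypothetical.
    def uptime_ratio(hours_active, hours_reserved):
        return hours_active / hours_reserved

    def suggested_reservation(ratio):
        if ratio < 0.4:
            return "Low Uptime Ratio"
        if ratio < 0.7:
            return "Medium Uptime Ratio"
        return "High Uptime Ratio"

    ratio = uptime_ratio(hours_active=9 * 24, hours_reserved=30 * 24)
    print(ratio, suggested_reservation(ratio))  # 0.3 Low Uptime Ratio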
  • Compute instance configurations may also include compute instances with a general or specific purpose, such as computational workloads for compute intensive applications (e.g., high-traffic web applications, ad serving, batch processing, video encoding, distributed analytics, high-energy physics, genome analysis, and computational fluid dynamics), graphics intensive workloads (e.g., game streaming, 3D application streaming, server-side graphics workloads, rendering, financial modeling, and engineering design), memory intensive workloads (e.g., high performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis), and storage optimized workloads (e.g., data warehousing and cluster file systems). Configurations may also include the size of compute instances (such as a particular number of virtual CPU cores, memory, cache, and storage, as well as any other performance characteristic), their location (in a particular data center, availability zone, or geographic location), and (in the case of reserved compute instances) reservation term length.
  • Networking service(s) 220 may implement various networking resources to configure or provide virtual networks, such as virtual private networks (VPNs), among other resources implemented in provider network 200 (e.g., instances of computing service(s) 210 or data stored as part of storage service(s) 230) as well as control access with external systems or devices.
  • networking service(s) 220 may be configured to implement security groups for compute instances in a virtual network. Security groups may enforce one or more network traffic policies for network traffic at members of the security group. Membership in a security group may not be related to physical location or implementation of a compute instance. The number of members or associations with a particular security group may vary and may be configured.
  • Networking service(s) 220 may manage or configure the internal network for provider network 200 (and thus may be configured for implementing various resources for a client 270).
  • an internal network may utilize IP tunneling technology to provide a mapping and encapsulating system for creating an overlay network on the network and may provide a separate namespace for the overlay layer and the internal network layer.
  • the IP tunneling technology provides a virtual network topology; the interfaces that are presented to clients 270 may be attached to the overlay network so that when a client 270 provides an IP address that they want to send packets to, the IP address is run in virtual space by communicating with a mapping service (or other component or service not illustrated) that knows where the IP overlay addresses are.
  • Storage service(s) 230 may be one or more different types of services that implement various storage resources to provide different types of storage.
  • storage service(s) 230 may be an object or key-value data store that provides highly durable storage for large amounts of data organized as data objects.
  • storage service(s) 230 may include an archive long-term storage solution that is highly-durable, yet not easily accessible, in order to provide low-cost storage.
  • storage service(s) 230 may provide virtual block storage for other computing devices, such as compute instances implemented as part of virtual computing service 210.
  • a virtual block-based storage service may provide block level storage for storing one or more data volumes mapped to particular clients, providing virtual block-based storage (e.g., hard disk storage or other persistent storage) as a contiguous set of logical blocks.
  • Storage service(s) 230 may replicate stored data across multiple different locations, fault tolerant or availability zones, or nodes in order to provide redundancy for durability and availability for access.
  • storage service(s) 230 may include resources implementing many different types of databases and/or database schemas. Relational and non-relational databases may be implemented to store data, as well as row-oriented or column-oriented databases.
  • a database service that stores data according to a data model in which each table maintained on behalf of a client contains one or more items, and each item includes a collection of attributes, such as a key value data store.
  • the attributes of an item may be a collection of name-value pairs, in any order, and each attribute in an item may have a name, a type, and a value.
  • Some attributes may be single valued, such that the attribute name is mapped to a single value, while others may be multi-value, such that the attribute name is mapped to two or more values.
  • storage service(s) 230 may implement a hierarchical data storage service, such as hierarchical data store 350 in FIG. 3 discussed below.
  • a hierarchical data storage service may store, manage, and maintain hierarchical data structures, such as a directory structure discussed below with regard to FIG. 5A.
  • Clients of a hierarchical data storage service may operate on any subset or portion of a hierarchical data structure maintained in the data storage service with transactional semantics and/or may perform path-based traversals of hierarchical data structures. Such features allow clients to access hierarchical data structures in many ways.
  • clients may utilize transactional access requests to perform multiple operations concurrently, affecting different portions (e.g., nodes) of the hierarchical data structure (e.g., reading parts of the hierarchical data structure, adding a node, and indexing some of the node's attributes, while imposing the requirement that the resulting updates of the operations within the transaction are isolated, consistent, atomic and durably stored).
  • the hierarchical data stored in a hierarchical data storage service may be multiple hierarchies of resource data objects on behalf of resource management service 240.
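  • The transactional access pattern described above can be sketched as a batch of operations that either all take effect or none do; the request shape and operation names below are hypothetical and not the actual interface of a hierarchical data storage service:

    # Hypothetical sketch of applying a transactional batch to a hierarchical data store.
    import copy

    class HierarchicalStore:
        def __init__(self):
            self.nodes = {}        # node id -> attribute dict
            self.children = {}     # parent id -> set of child ids

        def apply_transaction(self, operations):
            snapshot = (copy.deepcopy(self.nodes), copy.deepcopy(self.children))
            try:
                for op in operations:
                    if op["type"] == "create_node":
                        self.nodes[op["id"]] = dict(op.get("attributes", {}))
                        self.children.setdefault(op["id"], set())
                    elif op["type"] == "link_child":
                        self.children[op["parent"]].add(op["child"])
                    elif op["type"] == "set_attribute":
                        self.nodes[op["id"]][op["key"]] = op["value"]
                    else:
                        raise ValueError("unknown operation")
            except Exception:
                self.nodes, self.children = snapshot   # roll back: nothing is applied
                raise

    store = HierarchicalStore()
    store.apply_transaction([
        {"type": "create_node", "id": "group-552"},
        {"type": "create_node", "id": "resource-526", "attributes": {"kind": "account"}},
        {"type": "link_child", "parent": "group-552", "child": "resource-526"},
    ])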
  • provider network 200 may implement various other service(s) 250, including deployment service 252.
  • Deployment service 252 may include resources to instantiate, deploy, and scale other resources (from other network-based service, such as computing service(s) 210, networking service(s) 220, and/or storage service(s) 230) to implement a variety of different services, applications, or systems.
  • deployment service 252 may execute pre-defined deployment schemes which may be configured based, at least in part, on policies applied to resources launched by the deployment service 252 (e.g., a policy that describes the hardware and software configuration of a virtual compute instance launched on behalf of a particular user account).
  • Provider network 200 may also implement billing service 254 which may implement components to coordinate the metering and accounting of client usage of network-based services, such as by tracking the identities of requesting clients, the number and/or frequency of client requests, the size of data stored or retrieved on behalf of clients, overall resource bandwidth used by clients, class/type/number of resources requested by clients, or any other measurable client usage parameter.
  • Billing service 254 may maintain a database of usage data that may be queried and processed by external systems for reporting and billing of client usage activity. Similar to deployment service 252, policies applied to resource data objects in hierarchies managed by resource management service 240 may indicate payment accounts, budgets, or responsible parties for which the usage data is to be reported and/or billed.
  • Provider network 200 may also implement access management service 256, which may implement user authentication and access control procedures defined for different resources (e.g., instances, user accounts, data volumes, etc.) as described by policies applied to resource data objects in hierarchies at resource management service 240.
  • provider network 200 may implement components configured to ascertain whether the client associated with the access is authorized to configure or perform the requested task.
  • Authorization may be determined by, for example, evaluating an identity, password, or other credential against credentials associated with the resources, or by evaluating the requested access to the provider network 200 resource against an access control list for the particular resource. For example, if a client does not have sufficient credentials to access the resource, the request may be rejected, for example by returning a response to the requesting client indicating an error condition.
  • Provider network 200 may also implement resource tag service 258, which may manage resource attributes for resources of other services (e.g., computing service(s) 210, networking service(s) 220, and/or storage service(s) 230).
  • Resource attributes may be a tag, label, set of metadata, or any other descriptor or information corresponding to a provider network resource, implemented at one of various network-based services of the provider network. Attributes may be represented in various ways, such as a key-value pair, multiple values, or any other arrangement of information descriptive of the resource.
  • Resource attributes for a resource may be maintained as part of resource metadata for the resources at network-based services.
  • Network-based services may create resource metadata and/or attributes when a resource is created by a client.
  • Resource tag service 258 may lookup policies for different resources to determine which resource attributes are to be maintained for the different resources, in some embodiments.
  • clients 270 may encompass any type of client configurable to submit network-based services requests to provider network 200 via network 260, including requests for directory services (e.g., a request to create or modify a hierarchical data structure to be stored in directory storage service 220, etc.).
  • a given client 270 may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser.
  • a client 270 may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of persistent storage resources to store and/or access one or more hierarchical data structures to perform techniques like organization management, identity management, or rights/authorization management.
  • an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client 270 may be an application configured to interact directly with network-based services platform 200.
  • client 270 may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network- based services architecture.
  • a client 270 may be configured to provide access to network- based services to other applications in a manner that is transparent to those applications.
  • client 270 may be configured to integrate with an operating system or file system to provide storage in accordance with a suitable variant of the storage models described herein.
  • the operating system or file system may present a different storage interface to applications, such as a conventional file system hierarchy of files, directories and/or folders.
  • applications may not need to be modified to make use of the storage system service model.
  • the details of interfacing to provider network 200 may be coordinated by client 270 and the operating system or file system on behalf of applications executing within the operating system environment.
  • Clients 270 may convey network-based services requests (e.g., access requests directed to hierarchies in resource management service 240) to and receive responses from network-based services platform 200 via network 260.
  • network 260 may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients 270 and platform 200.
  • network 260 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet.
  • Network 260 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks.
  • both a given client 270 and network-based services platform 200 may be respectively provisioned within enterprises having their own internal networks.
  • network 260 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client 270 and the Internet as well as between the Internet and network-based services platform 200. It is noted that in some embodiments, clients 270 may communicate with network-based services platform 200 using a private network rather than the public Internet.
  • FIG. 3 is a logical block diagram illustrating a resource management service and a hierarchical data store, according to some embodiments.
  • Resource management service 240 may manage the application of policies to resource data objects for resources in provider network 200.
  • Because provider network 200 may offer services to a variety of different customers, a collection or set of resource data objects that are managed together may be identified as an organization (although various other terms, including entity, domain, or any other identifier for the collection of resource data objects, may also be used).
  • Resource management service 240 may provide various capabilities to clients of resource management service 240 to create and manage respective organizations which includes the resource data objects describing the resources of provider network 200 which are associated with one or more customers of the provider network, including managing which resource data objects (and thus their corresponding resources) are members of an organization.
  • Resource management service 240 may allow for the creation and management of multiple different hierarchies of the resources in an organization. These resources may be further subdivided and assigned into groups (which also may be subdomains, directories, sub-entities, sets, etc.). Groups may consist of any resource that can have a policy applied to it. Resource management service 240 may allow clients to author policies and apply them to the organization, to different groups, or directly to resource data objects.
  • Resource management service 240 may implement interface 310, which may provide a programmatic and/or graphical user interface for clients to request the performance of various operations for managing system resources via an organization.
  • the various requests described below with regard to FIGS. 6 and 7 may be formatted according to an Application Programming Interface (API) and submitted via a command line interface or a network-based site interface (e.g., website interface).
  • Other requests that may be submitted via interface 310 may include requests to create an organization or update an organization (e.g., by adding other resources or inviting other user accounts to join the organization).
  • an organization may be treated as a resource owned or controlled by the user account that created it, and that account by default may have access permissions to the organization. The user account could then delegate permissions to other user accounts or users using cross-account access, or transfer ownership of the organization in cases where control needs to move to a delegated group or the owner needs to leave the organization.
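  • A minimal sketch of how such requests might look when formatted against an API and submitted via interface 310; the action names and parameters below are hypothetical illustrations, not the actual API of resource management service 240:

    # Hypothetical request payloads for interface 310 (names and fields are illustrative).
    import json

    def make_request(action, **params):
        return json.dumps({"Action": action, "Parameters": params})

    requests = [
        make_request("CreateOrganization", name="example-org"),
        make_request("CreateHierarchy", organization="example-org", name="by-department"),
        make_request("InviteAccount", organization="example-org", account_id="123456789012"),
        make_request("AttachPolicy", hierarchy="by-department",
                     target="group/finance", policy="cost-allocation-policy"),
    ]
    for request in requests:
        print(request)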
  • Resource management service 240 may implement organization management 320, which may handle the creation of organizations, the updates to or modifications of organizations, the delegation of access permissions to organizations, as well as the arrangement of resource data objects within hierarchies maintained for the organization. For example, upon creation an organization may include a single hierarchy providing an arrangement of resource data objects (e.g., as members of various groups and/or groups within groups, etc.). Organization management 320 may handle the various requests to create additional hierarchies, update hierarchies, or delete hierarchies, as discussed below with regard to FIG. 6. Organization management 320 may also handle requests to add resource data objects to an organization, as discussed below with regard to FIG. 10.
  • organization management 320 may identify which hierarchies a new resource data object should be added to and the location within the hierarchy that the resource data object should be added.
  • organization management 320 may coordinate organization changes between multiple parties, such as adding user accounts to or removing user accounts from an organization, and may implement multiparty agreement mechanisms to approve the change to the organization, implemented as multi-account agreement management 322.
  • multi-account agreement management 322 may facilitate an authenticated 2-way handshake mechanism to confirm or deny a potential change to an organization.
  • Multi-account agreement management 322 may expose different mechanisms for multiparty agreements, as discussed below with regard to FIGS. 6 - 8, including emailed invitations, single use tokens, and shared secrets (domains/passwords).
  • Multi-account agreement management 322 may maintain state information and other tracking information to track the progress and approval or disapproval of proposed updates via agreement requests, as discussed below with regard to FIGS. 15-16.
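  • The handshake and state tracking described above might be sketched as follows; the states and transitions are a simplified, hypothetical rendering rather than the exact state diagram of FIG. 16:

    # Simplified, hypothetical agreement-request tracker for multi-party updates.
    class AgreementRequest:
        def __init__(self, proposed_change, required_parties):
            self.proposed_change = proposed_change
            self.pending = set(required_parties)
            self.state = "OPEN"

        def confirm(self, party):
            if self.state != "OPEN":
                raise RuntimeError("request is no longer open")
            self.pending.discard(party)
            if not self.pending:
                self.state = "APPROVED"   # every required party has agreed

        def deny(self, party):
            if self.state == "OPEN":
                self.state = "DENIED"     # any denial cancels the proposed change

    request = AgreementRequest("add account 1234 to organization",
                               {"organization-owner", "account-1234"})
    request.confirm("organization-owner")
    request.confirm("account-1234")
    print(request.state)  # APPROVED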
  • policies may be authored or defined and then applied to various resource data objects, groups, or an entire hierarchy of an organization.
  • Resource management service 240 may implement policy management 330 to handle the authoring of policies as well as the application of policies.
  • Many different types of policies may be applied in order to define different types of behaviors.
  • Some policy types, for instance, may be related to specific behaviors, resources, or actors.
  • Billing-related policies, for instance, may include one or various types of billing policies.
  • Resource configuration policy types may configure the operational configuration of resources (e.g., when deployed by deployment service 252).
  • Some policy types can define access controls to resources.
  • Policy management 330 may handle various requests to create a policy of one of many policy types, define policy types by authoring a policy schema, and the application of policies to resource data objects, groups, or entire hierarchies within an organization, such as those requests discussed below with regard to FIG. 7.
  • Policy management 330 may also handle lookup requests for resource data objects, groups, or organizations and perform policy application and conflict resolutions, as discussed below with regard to FIGS. 4 and 9.
  • policies can also be inherited in a chain from the organization down to a group, group of groups, or individual resource data object. If a policy is applied to a parent node in the hierarchy, then the child node (group, group of groups, or individual resource data object) may inherit the policy of the parent node. In this way, the policy applied to the parent node becomes the "default" policy, in the absence of any other policy applications.
  • different policies may have different inheritance semantics, which may have to be resolved.
  • access policies may follow the semantics of a set union, where ordering does not matter (e.g., everything is allowed unless explicitly excluded).
  • Billing policies, in another scenario, may implement a "child wins/parent appends" inheritance model where a child policy may be executed, followed by a parent policy. In such scenarios, ordering of policies matters.
  • policy management 330 may be configured to resolve conflicting policies according to the appropriate inheritance semantics for the policy.
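  • The two inheritance models mentioned above can be contrasted with a short sketch; which model applies to a real policy type is defined by that policy type, so the functions and policy contents below are illustrative assumptions only:

    # Two illustrative inheritance semantics for policies collected along a path.
    def resolve_union(policies_along_path):
        """Access-style semantics: ordering does not matter, take the set union."""
        allowed = set()
        for policy in policies_along_path:
            allowed |= set(policy.get("allow", []))
        return allowed

    def resolve_child_wins_parent_appends(policies_along_path):
        """Billing-style semantics: the child policy runs first, then each parent's."""
        ordered = []
        for policy in reversed(policies_along_path):   # child first, root last
            ordered.extend(policy.get("steps", []))
        return ordered

    path = [                                            # root -> group -> resource
        {"allow": ["read"], "steps": ["bill to organization account"]},
        {"allow": ["write"], "steps": ["bill to department budget"]},
        {"allow": [], "steps": ["apply per-account discount"]},
    ]
    print(resolve_union(path))                      # {'read', 'write'}
    print(resolve_child_wins_parent_appends(path))  # child steps first, parents appended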
  • policy management 330 may implement policy validation (although in alternative embodiments validation may be delegated in part or in total to other components).
  • Validation of policies may include syntax validation. Syntax validation checks policy instances of policy types that are authored to determine whether the policy instance is syntactically correct, so that the policy can be parsed and evaluated by backend systems that look up the policy. Syntactic validation may be performed, in some embodiments, when a policy is authored. In addition to syntactic validation, some policies may undergo semantic validation. Semantic validation may be performed to ensure that a resource or other information specified in a policy results in a policy that can be enforced.
  • semantic validation could determine whether an AccountId specified in a payer policy is an account in the organization that has a valid payment instrument.
  • policy management 330 may validate policy applications and organization changes in order to ensure that the changes do not invalidate policies that are applied within the organization; for example, changes may be validated to ensure that a payer for an organization does not leave the organization.
  • as each policy may have different semantic validation logic, each policy may have a separately configurable semantic validator.
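  • Because each policy type may need its own semantic checks, validation might be organized around pluggable validators, as in the hypothetical sketch below (the payer-policy check mirrors the AccountId example above; none of the names are the service's actual interfaces):

    # Hypothetical per-policy-type validation: syntax first, then type-specific semantics.
    import json

    def validate_syntax(policy_document):
        json.loads(policy_document)      # the policy must parse before it can be evaluated

    def validate_payer_policy(policy, organization):
        account = policy.get("AccountId")
        member = organization.get(account)
        if member is None or not member.get("valid_payment_instrument"):
            raise ValueError("payer must be an organization account "
                             "with a valid payment instrument")

    SEMANTIC_VALIDATORS = {"payer": validate_payer_policy}   # one validator per policy type

    def validate(policy_type, policy_document, organization):
        validate_syntax(policy_document)
        policy = json.loads(policy_document)
        validator = SEMANTIC_VALIDATORS.get(policy_type)
        if validator is not None:
            validator(policy, organization)

    validate("payer", '{"AccountId": "111122223333"}',
             {"111122223333": {"valid_payment_instrument": True}})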
  • Resource management service 240 may implement historical versioning of hierarchies in organizations, in some embodiments. Some services, such as billing service 254, may require the ability to query for historically versioned data, such as which account was the payer of the organization at the end of the previous month (as the current payer may be different due to a change to a hierarchy).
  • historical versioning 340 may store prior versions or track or record changes to hierarchies. These prior versions or changes may be associated with particular points in time (e.g., by assigning timestamps).
  • Historical versioning 340 may handle requests for policy lookups across particular ranges of time or at particular points in time. Historical versioning 340 may access the versioned data and return the appropriate policies for the specified time(s).
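  • A minimal sketch of how timestamped hierarchy versions might answer such point-in-time queries (the storage layout and dates below are hypothetical):

    # Hypothetical timestamped versions of a policy binding, queried at a point in time.
    import bisect

    class VersionedBinding:
        def __init__(self):
            self.timestamps = []   # sorted commit times
            self.values = []       # value effective as of the matching timestamp

        def record(self, timestamp, value):
            self.timestamps.append(timestamp)   # assumes versions recorded in time order
            self.values.append(value)

        def value_at(self, timestamp):
            index = bisect.bisect_right(self.timestamps, timestamp) - 1
            return self.values[index] if index >= 0 else None

    payer = VersionedBinding()
    payer.record(20230101, "account-A")
    payer.record(20230215, "account-B")
    print(payer.value_at(20230131))   # account-A was the payer at the end of January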
  • Hierarchy versions may be stored as part of organization data objects 352 in hierarchical data store 350.
  • Hierarchical data store 350 may provide data storage for organization data objects 352, including the resource data objects, policy data objects, and any other data describing the organization, including the multiple hierarchies of the resource data objects, as discussed below with regard to FIGS. 5A-5B.
  • the organization data objects 352 may be maintained within a single hierarchical data structure, though different hierarchies of resource data objects within the single hierarchical data structure may be provided for managing resource data objects, as discussed below with regard to FIG. 5A.
  • FIG. 4 is a logical block diagram illustrating interactions between clients and a resource management service and between a resource management service and other services, according to some embodiments.
  • clients may interact with resource management service 240 to manage resources.
  • client(s) 410 may submit various organization/policy management requests 412 (e.g., to modify a hierarchy by arranging resource data objects or applying/removing policies).
  • resource management service 240 may identify the appropriate updates to organization data to be made or to be read, and send organization data updates/reads 422 to hierarchical data storage 350.
  • Hierarchical data storage 350 may execute the received requests to change hierarchical data structures storing the organization data objects in accordance with the update request or retrieve the appropriate data read from the organization data objects according to the hierarchies, and return update acknowledgements/read data 424 to resource management service 240.
  • resource management service 240 may return the appropriate acknowledgments (e.g., indicating success or failure of the requests).
  • Service(s) 400 may perform policy lookups 402 with respect to resource data objects corresponding to resources under the control or responsibility of service(s) 400, in various embodiments.
  • For example, an access control service (such as access management service 256) may perform policy lookups 402 against resource data objects maintained at resource management service 240. When launching new resources, network configuration information may be maintained in a policy that is applicable to the launched resource and may be retrieved by a policy lookup 402 from a service 400.
  • Policy lookups 402 may be requested via resource management service 240 or, in some embodiments, may be requested directly from the service to the hierarchical data store 350.
  • Latency sensitive services for instance, may implement local libraries, agents, or interpreters for the organization data maintained at hierarchical data store 350 in order to reduce the number of requests that have to be sent in order to perform a policy lookup.
  • FIG. 5A is a logical illustration of directory structures that may store resource data objects and hierarchies of resource data objects in a hierarchical data store, according to some embodiments.
  • Organization data objects (including policy data objects, resource data objects, groups or groups of groups of data objects) may be maintained in one or multiple directory structures, in various embodiments.
  • organization 500 may utilize directory structure 502 to store the resources and policies that are part of the organization.
  • Index node 510 may provide information for performing a lookup to determine the location of a resource data object or policy data object.
  • Resources node 520 may group resources into various resources types 522 and 524 (e.g., user accounts, virtual compute instances, storage volumes, VPNs, load balancers, etc.) and within the resource types 522 and 524 may be found resource data objects 526 and 528 describing individual resources in the provider network.
  • policies node 530 may include different policy types 532 and 534 (which may be created by clients as discussed above). Individual instances of the policy types 536 and 538 may be policy instances applied to resource data objects, groups, groups of groups, or hierarchies.
  • Hierarchies node 540 may be the group of hierarchies maintained for organization 500, including hierarchy 550 and hierarchy 560. Within each hierarchy, groups 552 and 554, or groups of groups, and/or any arrangement of resources included in the group of resources 520 may be linked (as illustrated by the dotted lines) to indicate membership in the group. Similarly, policies, such as policies 536 and 538, may be linked to hierarchies, groups or groups of groups, or individual resource data objects within the hierarchies.
  • Hierarchical data structures, such as directory structures 502 and 504, may be stored, managed, and/or represented in order to maintain organization 500.
  • nodes in a hierarchy (e.g., the circle or square shapes) may be represented by a globally unique identifier (GUID), attributes (key, value pairs), and zero or more links to other nodes.
  • a group or directory may be one type of node which has zero or more child links to other nodes, either groups/directories or resource data objects/policy data objects.
  • Group nodes may have zero or one parent directory node, implying that directory nodes and links define a tree structure, in some embodiments, as depicted in FIG. 5A.
  • Index 510, hierarchies 540, resources 520, policies 530, hierarchy 550 and 560, resource type 522 and 524, policy type 532 and 534, and group 552 and 554 may be group/directory nodes.
  • Node 500, the organization node, may be a root node that is the logical root of multiple directory structures and may not be visible to clients of the resource management service (which may access individual hierarchies).
  • Resource and policy nodes (represented by squares, such as resource node 526) may be leaf nodes in a directory structure.
  • Leaf nodes may have a unique external Id (e.g., client specified) and client-defined attributes.
  • Leaf nodes can have more than one parent node so that resource data objects and policy data objects can be linked to multiple hierarchies. In some embodiments, all resource data objects are linked to all hierarchies (though in different arrangements as defined by a user), whereas in other embodiments, resource data objects may be linked to only some hierarchies.
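  • The node and link model described above might be sketched as follows; the field names are hypothetical, but the structure (GUIDs, attributes, named child links that define the tree, attachment links, and leaf nodes with multiple parents) follows the description:

    # Hypothetical node model for a directory structure like the one in FIG. 5A.
    import uuid

    class DirectoryNode:
        """Group/directory node: forms the tree via named child links."""
        def __init__(self, name):
            self.guid = uuid.uuid4().hex
            self.name = name
            self.attributes = {}
            self.child_links = {}    # link name -> node; defines paths in the tree
            self.attachments = []    # resource or policy leaf nodes applied to this node

        def link_child(self, name, node):
            self.child_links[name] = node
            return node

        def attach(self, leaf):
            self.attachments.append(leaf)

    class LeafNode:
        """Resource or policy data object: may be linked under multiple parents."""
        def __init__(self, external_id, **attributes):
            self.guid = uuid.uuid4().hex
            self.external_id = external_id
            self.attributes = attributes

    resources_520 = DirectoryNode("resources520")
    group_552 = DirectoryNode("group552")
    resource_526 = LeafNode("resource526", kind="user-account")
    resources_520.link_child("resource526", resource_526)  # lives under the resources directory
    group_552.attach(resource_526)                          # also attached within a hierarchy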
  • a link may be a directed edge between two nodes defining a relationship between the two nodes.
  • There may be many types of links, such as client-visible link types and another link type for internal hierarchical data store operation.
  • a child link type may create a parent-child relationship between the nodes it connects. For example, a child link can connect resource type node 522 to resource 526.
  • Child links may define the structure of directories (e.g., resources 520, policies 530, hierarchies 540). Child links may be named in order to define the path of the node that the link points to.
  • Another type of client visible link may be an attachment link.
  • An attachment link may apply a resource data object or policy data object to another node (e.g., group 552, hierarchy 550, etc.) as depicted by the dotted lines.
  • Nodes can have multiple attachments.
  • some attachment restrictions may be enforced, such as a restriction that not more than one policy node (e.g., policy 536) of policy type 532 can be attached to a same node.
  • A non-visible or implied link type, a reverse link, may also be implemented in some embodiments. Reverse links may be used for optimizing traversal of directory structures for common operations like look-ups (e.g., policy lookups).
  • data objects or nodes in organization 500 can be identified and found by the pathnames that describe how to reach the node starting from the logical root node 500, starting with the link labeled "/" and following the child links separated by path separator "/" until reaching the desired node.
  • Resource 526 can be identified using the path: "/index510/resources520/resource526".
  • multiple paths may identify the node.
  • The following path can also be used to identify resource 526: "/hierarchies540/hierarchy550/group552".
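  • A minimal sketch of path-based identification under the conventions above, assuming a hypothetical nested-dictionary representation of the directory structure (the resolve_path helper and the example layout are illustrative, not part of the original):

```python
from typing import Any, Dict, Optional

# Hypothetical nested-dict layout: each node maps child-link names to child nodes.
organization = {
    "index510": {"resources520": {"resource526": {"type": "resource"}}},
    "hierarchies540": {"hierarchy550": {"group552": {}}},
}

def resolve_path(root: Dict[str, Any], path: str) -> Optional[Dict[str, Any]]:
    """Follow named child links, separated by "/", starting from the logical root."""
    node = root
    for segment in path.strip("/").split("/"):
        if not isinstance(node, dict) or segment not in node:
            return None   # no child link with that name
        node = node[segment]
    return node

print(resolve_path(organization, "/index510/resources520/resource526"))  # {'type': 'resource'}
```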
  • FIG. 5A provides many examples of the possible ways in which policy data objects or lease data objects may be linked. As noted earlier, not all policies may be attached to all hierarchies or all resource data objects to all hierarchies, and thus the illustrated links are not intended to be limiting. Similarly, directory structures may be differently arranged so that a single directory structure or a greater number of directory structures are utilized.
  • FIG. 5B is a logical illustration of directory structures that store access locks for hierarchies, and draft copies of bulk edits to hierarchies of resource data objects in a hierarchical data store, according to some embodiments.
  • directory structure 506 may maintain locks node 570.
  • Locks node 570 may have child nodes corresponding to each hierarchy in hierarchies 540, such as hierarchy 550 and hierarchy 560. If a hierarchy node is linked to locks 570, then a lookup upon the hierarchy node will be able to traverse the locks 570 structure, indicating that the hierarchy is available for read and write access. If, however, a node is not found when traversing locks 570, then it may be determined that the hierarchy is not available for write access.
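  • A minimal sketch of the lock-node convention described above, assuming a hypothetical HierarchyLocks helper (not part of the original) that models which hierarchies are currently linked under the locks node:

```python
class HierarchyLocks:
    """Hierarchies linked under the locks node are available for read and write;
    removing a hierarchy's lock node blocks writes (e.g., during a bulk edit)."""

    def __init__(self, hierarchy_ids):
        self._linked = set(hierarchy_ids)     # hierarchies linked to the locks node

    def lock_for_bulk_edit(self, hierarchy_id: str) -> None:
        self._linked.discard(hierarchy_id)    # remove lock node: writes blocked

    def unlock(self, hierarchy_id: str) -> None:
        self._linked.add(hierarchy_id)        # re-add lock node: writes allowed again

    def write_allowed(self, hierarchy_id: str) -> bool:
        return hierarchy_id in self._linked   # lookup traverses the locks structure

locks = HierarchyLocks(["hierarchy550", "hierarchy560"])
locks.lock_for_bulk_edit("hierarchy550")
print(locks.write_allowed("hierarchy550"))    # False: node not found under locks
print(locks.write_allowed("hierarchy560"))    # True: node linked under locks
```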
  • Drafts node 580 may also logically point to or be associated with a directory structure 508 that separately maintains different drafts for bulk edit requests. For instance, although the draft directory structure 508 is logically linked with organization 500, a path-based traversal technique that identifies data by traversing paths from leaf nodes would not view the logical link as part of a path, so that any path-based traversal would logically separate the drafts from other data stored in organization 500. Each draft node, such as draft 582 and draft 584, may link to a copied hierarchy (e.g., 586 and 588) upon which modifications are performed as part of a bulk edit.
  • To commit a bulk edit, the link from copied hierarchy 586 may be changed from draft 582 to the hierarchies node 540, and the link from the original hierarchy (e.g., hierarchy 550 or 560) to hierarchies node 540 may be removed.
  • The old versions of hierarchies may remain unlinked until storage space for the old versions is reclaimed (e.g., as part of a background garbage collection process).
  • FIG. 6 illustrates interactions to manage hierarchies at a resource management service, according to some embodiments.
  • Clients may submit a create hierarchy request 612 via interface 310.
  • The creation request 612 may include a membership policy which provides for a default arrangement of resource data objects that are automatically added to the hierarchy (e.g., as a result of adding the resource to the organization), or the membership policy may be included as part of a separate request.
  • Resource management service 240 may create a hierarchy directory 614 in hierarchical data store 350, and then send requests to add resources to the hierarchy directory 616 (e.g., by adding links between resource data objects and the new hierarchy directory). Resource management service 240 may then acknowledge the hierarchy creation 618 to client 610.
  • Client 610 may submit a request to update the hierarchy 622.
  • Hierarchy update requests may include various requests to add a group, remove a group, add resources to a group, remove resource(s) from a group, add a group to a group, remove a group from a group, or any other arrangement modification to the hierarchy.
  • resource management service 240 may send an update hierarchy directory request 624 to perform one or more corresponding actions, such as requests to create group sub-directories, remove group sub-directories, add resource data object link(s), or remove resource data object link(s).
  • resource management service 240 may acknowledge the hierarchy update 626 (which may indicate success or failure).
  • Client 610 may submit a request to delete a hierarchy 632 to resource management service 240.
  • Resource management service 240 may send a request to delete the hierarchy directory (which may delete any group(s), or group(s) of groups within the hierarchy but not resource data objects or policy data objects which may only be linked to the deleted directory). Instead, the links may be removed (e.g., by hierarchical data store 360 when one of the linked nodes is deleted).
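  • A minimal sketch of how a hierarchy-update request of the kind described above might be translated into directory operations; the request and operation field names are hypothetical and only illustrate the mapping, not an actual service API:

```python
from typing import Dict, List

def plan_directory_operations(update_request: Dict) -> List[Dict]:
    """Map a hierarchy-update request onto directory operations such as creating
    or removing group sub-directories and adding or removing resource links."""
    ops: List[Dict] = []
    action = update_request["action"]
    if action == "add_group":
        ops.append({"op": "create_subdirectory", "path": update_request["group_path"]})
    elif action == "remove_group":
        ops.append({"op": "remove_subdirectory", "path": update_request["group_path"]})
    elif action in ("add_resources", "remove_resources"):
        link_op = "add_link" if action == "add_resources" else "remove_link"
        for resource_id in update_request["resource_ids"]:
            ops.append({"op": link_op,
                        "parent": update_request["group_path"],
                        "target": resource_id})
    return ops

print(plan_directory_operations(
    {"action": "add_resources",
     "group_path": "/hierarchies540/hierarchy550/group552",
     "resource_ids": ["resource526"]}))
```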
  • FIG. 7 illustrates interactions to manage policies within hierarchies at a resource management service, according to some embodiments.
  • Client 710 may submit a request to create a policy 712.
  • the creation request may include a policy definition or content, including an indication of policy type, so that validation of the policy can be performed, as discussed above.
  • Resource management service 240 may add a policy data object 714 representing the policy to hierarchical data store 350 (e.g., storing the policy data object as a new policy data object in the policy directory for the organization).
  • An acknowledgment 716 indicating policy creation success or failure may be returned from resource management service 240 to client 710.
  • Client 710 may send a request to apply a policy to one or more resource data objects, groups, or hierarchies 722.
  • resource management service 240 may send a request to link the policy data object to the hierarchy directory(ies), group(s), or resource data object(s) 724. Resource management service 240 may then acknowledge the application 726 to client 710.
  • Client 710 may send a request to remove the policy from one or more resource data objects, groups, or hierarchies. Resource management service 240 may then send a request to remove the link from the policy data object to the requested hierarchy directory(ies), group(s), or resource data object(s).
  • Client 710 may send a request 742 to delete a policy.
  • Resource management service 240 may send a request to delete the policy data object 744 and acknowledge the policy deletion 746 to client 710.
  • Although FIGS. 2 - 7 have been described and illustrated in the context of a provider network implementing a resource management service for resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 2 - 7 may be easily applied to other resource management systems, components, or devices.
  • private systems and networks implementing multiple system resources may maintain multiple hierarchies of resource data objects for managing the behavior of the system resources.
  • FIGS. 2 - 7 are not intended to be limiting as to other embodiments of a system that may implement a resource management system for system resources.
  • FIG. 8 is a high-level flowchart illustrating methods and techniques to implement maintaining different hierarchies of resource data objects for managing system resources, according to some embodiments.
  • Various different systems and devices may implement the various methods and techniques described below, either singly or working together.
  • a resource management service such as described above with regard to FIGS. 2 - 7 may be configured to implement the various methods.
  • A combination of different systems and devices may implement these methods. Therefore, the above examples, and/or any other systems or devices referenced as performing the illustrated method, are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
  • the resource data objects may identify policies applicable to the behavior of resources corresponding to the resource data objects in a system.
  • resource data objects may be maintained to describe system resources (e.g., unique identifier, capabilities, roles, availability, etc.).
  • the resource data objects may be separately arranged in different hierarchies so that policies applied to the resource data objects according to the hierarchies (e.g., by inheritance rules or direct application) may enforce the controls, actions, configurations, operations, or other definitions of the behavior of the corresponding resources.
  • the hierarchies may be maintained in a hierarchical data storage system, as described above with regard to FIG. 3. However, other types of data stores may be implemented to maintain the hierarchies (e.g., by maintaining the data objects, and relationships between the data objects that define the hierarchy so that the hierarchy can be determined).
  • a modification to a hierarchy may include a modification to the arrangement of a hierarchy. For example, resource data objects may be reassigned from one group to another, new groups or groups of groups may be created, groups or groups of groups may be deleted, or any other change to the relationships of resource data objects among the hierarchy may be performed, such as the requests discussed above with regard to FIG. 6.
  • a modification to a hierarchy may also include a change to the application of a policy within the hierarchy, by applying a new policy, removing a policy, changing the application of an existing policy, or changing the definition of a policy, such as the requests discussed above with regard to FIG. 7.
  • A check or determination may be made as to whether the modification is valid for the hierarchy. For example, limitations on policy application may be checked. If the policy may only be applied once to a resource data object, group, or hierarchy, then it may be determined whether an instance of the policy has already been applied to the resource data object, group, or hierarchy. If so, then, as indicated by the negative exit from 830, the request may be denied, as indicated at 850. Some modifications may not be permitted in certain hierarchies. For example, a security policy may not be applied in a hierarchy associated with human resources or finance, but only in a hierarchy associated with security.
  • Certain organization modifications may not be allowed (e.g., adding a resource data object to more than one group in a hierarchy, although this may be allowed in other embodiments, or deleting resource data objects).
  • Authentication may be implemented in some embodiments, to determine the identity of a user account associated with a client submitting a modification. If the user account is not permitted to perform the modification to the hierarchy, then the modification may be invalid.
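  • A minimal sketch of the validation checks described above (single policy instance per type, hierarchy-specific restrictions, and authorization of the requesting account); the function and field names are hypothetical:

```python
def validate_modification(request, attached_policies, permitted_policy_types,
                          authorized_accounts):
    """Return (valid, reason) for a proposed hierarchy modification.

    attached_policies: target node -> set of policy types already attached
    permitted_policy_types: policy types allowed in the target hierarchy
    authorized_accounts: user accounts permitted to modify the hierarchy
    """
    if request["account"] not in authorized_accounts:
        return False, "account not permitted to modify this hierarchy"
    if request["action"] == "apply_policy":
        policy_type = request["policy_type"]
        if policy_type not in permitted_policy_types:
            return False, "policy type not permitted in this hierarchy"
        if policy_type in attached_policies.get(request["target"], set()):
            return False, "an instance of this policy type is already attached"
    return True, "ok"

print(validate_modification(
    {"account": "admin", "action": "apply_policy",
     "policy_type": "security", "target": "group552"},
    attached_policies={"group552": {"security"}},
    permitted_policy_types={"security", "billing"},
    authorized_accounts={"admin"}))
# (False, 'an instance of this policy type is already attached')
```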
  • The modification to the hierarchy may be performed in accordance with the request, as indicated at 840.
  • Modifying hierarchies may change the application of policies applied within the hierarchy. If, for instance, resource data objects are moved or reassigned, then different policies may be inherited by those moved resource data objects based on the different group assignments. If new policies are applied as a result of the modification, then the new policy may be applied along with existing policies; or, in the scenario where the modification is a policy removal, the removed policy is no longer included in those policies applied by the hierarchy.
  • modifications to one hierarchy may be isolated to that hierarchy, and thus may be made without modifying the application of policies identified in another hierarchy. In at least some embodiments, other modifications may be made to multiple, if not all, hierarchies, such as adding a resource data object to multiple different hierarchies as discussed below with regard to FIG. 10.
  • FIG. 9 is a high-level flowchart illustrating methods and techniques to handle a policy lookup request for a resource data object, according to some embodiments.
  • a policy lookup request may be received for a resource data object.
  • the request may include an identifier that uniquely identifies the resource and thus the resource data object for which the lookup operation is being performed.
  • those hierarch(ies) linked to the resource data object may be identified. For instance, in some scenarios, all hierarchies may be linked to the resource data object, whereas in other scenarios, only one or more hierarchies may include the specified resource data object.
  • the polic(ies) attached to the resource data object or inherited by the resource data object may be determined, as indicated at 930. For example, the path(s) from the resource data object to the root node of each of the identified hierarchies may be traversed, and all attached policies in the path may be identified.
  • conflict(s) can occur between policies determined for a resource data object.
  • Different hierarchies may apply different policies describing different access rights for the resource data object or different nodes in the path of the resource data object within a hierarchy may include the different policies describing different access rights, for example. If the policy types of any of the determined policies match for a resource data object, then a conflict may exist, as indicated by the positive exit from 940. Detected conflict(s) may be resolved between determined policies, as indicated at 960.
  • one of the conflicting policies may be elected over the other according to a precedence or inheritance model for policy applications (e.g., policies applied to child nodes in the hierarchy may supersede policies applied to parent nodes, or vice versa).
  • a knowledge base or other rules-based resolution technique may be implemented to evaluate the conflicting policies with respect to precedence or inheritance rules (including rules that modify conflicting policies) and may be configured to apply different inheritance or precedence rules for a policy type when the policy type is defined.
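  • A minimal sketch of a policy lookup with conflict resolution, assuming (as one possible precedence model mentioned above) that a policy attached closer to the resource data object supersedes a policy of the same type attached to an ancestor; the function and data layout are hypothetical:

```python
def lookup_policies(paths_to_root, attachments):
    """Collect policies along each path from a resource data object to the root of
    every hierarchy it is linked to, keeping one policy per policy type.

    paths_to_root: list of node-id lists, each ordered from the resource to a root
    attachments:   node id -> list of (policy_type, policy_id) attached there
    """
    effective = {}   # policy_type -> (policy_id, depth at which it was found)
    for path in paths_to_root:
        for depth, node in enumerate(path):
            for policy_type, policy_id in attachments.get(node, []):
                current = effective.get(policy_type)
                # child supersedes parent: keep the policy closest to the resource
                if current is None or depth < current[1]:
                    effective[policy_type] = (policy_id, depth)
    return {ptype: pid for ptype, (pid, _) in effective.items()}

paths = [["resource526", "group552", "hierarchy550"],
         ["resource526", "hierarchy560"]]
attachments = {"hierarchy550": [("access", "policy536")],
               "group552": [("access", "policy538")]}
print(lookup_policies(paths, attachments))   # {'access': 'policy538'}
```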
  • FIG. 10 is a high-level flowchart illustrating methods and techniques to handle a request to add a resource data object, according to some embodiments.
  • a request to add a resource data object to other resource data objects stored in the data store may be received, in various embodiments.
  • the request may specify a unique identifier, and other information descriptive of a corresponding resource in a system that includes the resources corresponding to the other resource data objects.
  • the resource data object may be added to the data store. For example, as the model in FIG. 5 A illustrates, the resource data object may be added to the resources directory in the appropriate resource type sub-directory.
  • the hierarch(ies) of the other resource data objects may be determined to include the additional resource data object. For example, not all hierarchies may maintain all resource data objects.
  • a membership policy for each hierarchy may specify which resource data objects are maintained in the hierarchy, and which are not.
  • locations in the hierarch(ies) may be determined for the resource data object, as indicated at 1040.
  • In some embodiments, a default location (e.g., directly linked to the hierarchy root node) may be used. In other embodiments, the membership policy may specify a location based on an evaluation of the resource data object. For instance, if the resource type of the resource data object is a computing resource, the resource data object may be placed in group A, or if the resource data object is a user account, the resource data object may be placed in group B.
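  • A minimal sketch of evaluating a membership policy to place a new resource data object, using hypothetical rule and field names; a real membership policy format could differ:

```python
def place_resource(resource, membership_policy):
    """Return the group (or default location) where a new resource data object
    should be linked in a hierarchy, or None if the hierarchy excludes it."""
    if not membership_policy.get("include", True):
        return None   # this hierarchy does not maintain this resource data object
    for rule in membership_policy.get("rules", []):
        if rule["resource_type"] == resource["type"]:
            return rule["group"]
    # fall back to a default location, e.g. directly under the hierarchy root
    return membership_policy.get("default_group", "/")

policy = {"rules": [{"resource_type": "compute", "group": "groupA"},
                    {"resource_type": "user_account", "group": "groupB"}],
          "default_group": "/"}
print(place_resource({"id": "resource999", "type": "user_account"}, policy))  # groupB
```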
  • Hierarchical data structures provide an optimal way to organize data for a variety of different applications.
  • hierarchical data structures may be a tree, graph, or other hierarchy-based data structure maintained in data storage.
  • Directory storage may utilize hierarchical data structures to maintain different locations or paths to data that can be traced back to a single starting destination (e.g., a root directory).
  • Other systems may leverage the relationship information provided by hierarchical data structures to reason over data. For example, managing system resources often involves defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources.
  • Security policies such as access rights or permitted actions for system resources, may be defined and enforced for users of the system resources, for instance.
  • Data describing the resources of a system may be maintained that also describes these permitted behaviors.
  • data objects describing system resources may be maintained to identify policies that indicate the permitted behaviors for the system resources.
  • a hierarchical data structure may be created for the resource data objects and policies. For instance, a tree structure may be implemented that arranges the resource data objects in groups, directories, or other sets of resource data objects which apply those policies inherited along the path of the tree structure from the resource data object to the root of the tree structure. In this way, policies applied to parent nodes (e.g., the groups, directories, or other set of resource data objects) may be inherited and applied to child nodes (e.g., the resource data objects in the groups, directories, or sets).
  • While some changes or updates to a hierarchical data structure may involve small numbers of discrete operations, in some scenarios large numbers or sets of updates may need to be performed together to effect changes. If the hierarchical data structure were to be accessed while the set of updates were not all complete, then broken links, incomplete information, contradictory information, or other errors may result. Atomic application of modifications to the hierarchical data structure may be implemented in various embodiments so that incomplete sets of updates are not visible when accessing the hierarchical data structure, preventing errors or other erroneous information that would result. Moreover, the hierarchical data structure may remain available for access during atomic application of the modifications so that utilization of the hierarchical data structure is not blocked for long running sets of modifications.
  • FIG. 11 is a logical block diagram illustrating atomic application of multiple updates to a hierarchical data structure, according to some embodiments.
  • hierarchical data structure 1110 is available for access 1120 so that information stored in the hierarchical data structure, as well as information determined by reasoning over the hierarchical data structure (e.g., following paths from child to parent nodes and applying inheritance rules), may be available.
  • a request may be made to perform a set of modifications atomically to a portion 1112 (as illustrated in FIG. 11) or the entire data structure 1110.
  • a separate copy 1114 of the identified portion of the hierarchical data structure may be created 1130. While the copy is created, hierarchical data structure 1110, including portion 1112, may still remain available for access. In some embodiments, the remaining access may be read access, and in other embodiments both read and write access may remain.
  • operations 1140 to modify the copied portion 1114 may be performed.
  • the changes made to portion 1114 are not visible when hierarchical data structure 1110 is accessed (including portion 1112).
  • Modification operations may occur over a period of time, from a request to initiate the atomic application of a set of updates to the request to commit the set of updates, which may allow for human timescale interactions (e.g., allowing a user to start editing the portion, stop, redo, receive confirmation of updates, and other time variables).
  • the set of updates may be committed to hierarchical data structure 1110 by atomically replacing 1150 portion 1112 with the modified copy 1114.
  • Atomic replacement does not allow for only partial copying of modifications, but instead ensures that the entire set of changes in the copy are inserted into hierarchical data structure 1110 (or as a result of a failure or error, none of copy 1114 is inserted).
  • portion 1114 of the hierarchical data structure may be made available for read and write access, in those embodiments where write access was restricted upon initiating the application of the set of modifications.
  • FIG. 11 is provided as a logical illustration of atomic application of multiple updates to a hierarchical data structure, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices implementing a data store, system, or clients, or the number, type, or arrangements of hierarchies, performance of updates, copy operations, or atomic replacements.
  • a portion of a hierarchical data structure that is identified for modification may include multiple, unconnected nodes or subtrees that are all modified as part of the same atomic update.
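  • A minimal sketch of the copy-modify-replace flow illustrated in FIG. 11, using an in-memory dictionary in place of a hierarchical data store; a real implementation would rely on the store's own transaction mechanism for the final swap:

```python
import copy

def bulk_edit(structure, portion_key, modifications):
    """Copy the identified portion, apply every modification to the copy while the
    original stays readable, then swap the copy in as a single step (all or nothing)."""
    draft = copy.deepcopy(structure[portion_key])   # separate, non-visible copy
    try:
        for modify in modifications:                # long-running edits hit the draft only
            modify(draft)
    except Exception:
        return False                                # nothing from the draft becomes visible
    structure[portion_key] = draft                  # single swap commits the whole set
    return True

org = {"hierarchy550": {"group552": ["resource526"]}}
ok = bulk_edit(org, "hierarchy550",
               [lambda h: h["group552"].append("resource527"),
                lambda h: h.setdefault("group554", [])])
print(ok, org)
```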
  • FIG. 12 illustrates interactions to perform a bulk edit at a storage engine that atomically applies multiple changes to a hierarchical data structure, according to some embodiments.
  • Client 1210 may be a client of resource management service 240 or any other client that utilizes a storage engine to atomically apply multiple updates to a hierarchical data structure.
  • Client 1210 may submit a request via interface 1202 to request a bulk edit for hierarch(ies) or portion of hierarch(ies) 1212.
  • a bulk edit request may be a request to atomically apply a set of modifications to a hierarchical data structure as discussed above with regard to FIG. 11 and below with regard to FIG. 13.
  • storage engine 1200 may send a request to remove a lock node for the hierarch(ies) 1214 in order to lock the hierarch(ies), blocking write requests to the hierarch(ies).
  • Storage engine 1200 may also send a request 1216 (or multiple requests) to create a copy of the hierarch(ies) at hierarchical data store 350 that is separate from the hierarch(ies).
  • Client 1210 may then submit various modification request(s) 1220 over a period of time (which may or may not be subject to a time limit).
  • Modification request(s) 1220 may correspond to various requests to update the hierarch(ies) as discussed above (e.g., various requests to add a group, remove a group, add resources to a group, remove resource(s) from a group, add a group to a group, remove a group from a group, or any other arrangement modification to the hierarch(ies), as well as requests to add policies or remove policies from resources or groups, or any other modification to the resource data objects, groups, or other nodes in the hierarch(ies)).
  • Modification requests 1220 may correspond to API requests or other modifications permitted by interface 1202 (which may be like interface 310 in FIG. 3).
  • storage engine 1200 may send corresponding requests 1222 to perform operation(s) applying the modifications to the copy of the hierarch(ies).
  • Client 1210 may send a request 1230 to commit the bulk edit via interface 1202.
  • storage engine 1200 may perform various conflict checks (in some embodiments as discussed below with regard to FIG. 13).
  • Storage engine 1200 may submit a transaction that links the copy of the hierarch(ies) with the hierarchies node and removes the link from the hierarch(ies) to the hierarchies node.
  • Acknowledgment or failure of the transaction may be provided 1234 and in turn storage engine 1200 may indicate acknowledgment or failure of the commit 1238 to client 1210.
  • Storage engine 1200 may also add a lock node back to the hierarch(ies) to unlock the hierarch(ies) 1236 in order to allow write access to the hierarch(ies).
  • storage engine 1200 may be applicable to a storage engine managing any hierarchical data structure and may not be limited to a hierarchy of resource data objects. Not all interactions have been illustrated. For example, various acknowledgment indications may be provided for different requests that have not been depicted in FIG. 12.
  • Although FIGS. 11-12 have been described and illustrated in the context of a provider network implementing hierarchical data structures as part of a resource management service for managing resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 11-12 may be easily applied to other storage engines or data managers that manage hierarchical data structures.
  • a file directory system accessible to multiple users may allow for atomic application of multiple updates to a portion of a hierarchical data structure that represents a file directory structure.
  • FIGS. 11-12 are not intended to be limiting as to other embodiments of a system that may implement atomic application of multiple updates to a hierarchical data structure.
  • FIG. 13 is a high-level flowchart illustrating methods and techniques to perform atomic application of multiple updates to a hierarchical data structure, according to some embodiments.
  • Various different systems and devices may implement the various methods and techniques described below, either singly or working together.
  • a resource management service such as described above with regard to FIGS. 11-12 may be configured to implement the various methods with respect to hierarchical data structures like different hierarchies.
  • A combination of different systems and devices may implement these methods. Therefore, the above examples, and/or any other systems or devices referenced as performing the illustrated method, are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
  • a request may be received to perform modifications to a portion (or entirety) of a hierarchical data structure.
  • a hierarchical data structure may be a tree, graph, or other hierarchy-based data structure maintained in data storage, such as a hierarchical data store which stores data natively in a hierarchical data format.
  • Hierarchical data structures may be implemented for a variety of different systems and techniques. For example, hierarchical data structures may be implemented to provide a directory structure for file or other data management systems, or a classification structure or other representation of data that is interpreted based on the hierarchical relationships within the hierarchical data structure. In one such example, such as discussed above with regard to the previous FIGS., the resource data objects may identify policies applicable to the behavior of resources corresponding to the resource data objects in a system.
  • resource data objects may be maintained to describe system resources (e.g., unique identifier, capabilities, roles, availability, etc.).
  • the resource data objects may be separately arranged in different hierarchies so that policies applied to the resource data objects according to the hierarchies (e.g., by inheritance rules or direct application) may enforce the controls, actions, configurations, operations, or other definitions of the behavior of the corresponding resources.
  • the hierarchies may be maintained in a hierarchical data storage system, as described above with regard to FIG. 3. However, other types of data stores may be implemented to maintain the hierarchies (e.g., by maintaining the data objects, and relationships between the data objects that define the hierarchy so that the hierarchy can be determined).
  • the request may specify the portion of the hierarchical data structure by including an identifier (e.g., node name, path, or identification number) in the request.
  • the request may specify a particular node by a node identification number indicating that the node and all children of the node in the hierarchical data structure are included in the portion for which modifications are to be performed.
  • multiple paths, nodes, or subdirectories of a hierarchical data structure may be identified as part of one portion for performing updates even though the multiple paths, nodes, or sub-directories of the hierarchical data structure may be unconnected except via a root node for the hierarchical data structure.
  • the request to perform modifications may be a request that merely initiates the start of allowing atomic application of multiple updates, as discussed with regard to other elements below. However, in some embodiments the request may also describe, indicate, or propose the various changes to be atomically applied to the specified portion of the hierarchical data structure. Individual modification requests (e.g., formatted according to an API for the performance of different modifications) may be included as part of a request payload, for instance. The request may be received via a programmatic interface, such as an API, and may be initiated by a command line interface or graphical user interface.
  • a copy of the portion of the hierarchical data structure may be created that is separate from the hierarchical data structure, as indicated at 1320.
  • the nodes in the portion of the hierarchical data structure, along with the relationships defined between the nodes, may be read from the hierarchical data structure in the data store and written to a different location in the data store that does not link the copy to the hierarchical data structure from which it was obtained.
  • any lookup or analysis performed upon the hierarchical data structure does not discover, read, or obtain any information from the copy of the portion of the data structure, allowing for modifications to be performed on the copy without being accessible.
  • a pointer, address, or other location of the copy of the portion of the hierarchical data structure may be maintained in order to direct operations to modify the portion of the hierarchical data structure to the copy.
  • the original portion of the hierarchical data structure may remain available for read access.
  • other clients may wish to access the hierarchical data structure to perform lookup operations (e.g., policy lookups as described above with regard to FIG. 4).
  • Providing read access to the portion of the hierarchical data structure may allow for utilization of the hierarchical data structure to continue during the application of multiple modifications as the modifications are separately performed upon the copy.
  • write operations may be restricted or blocked entirely.
  • a locking mechanism as discussed above with regard to FIG. 5B may be implemented to identify a portion of a hierarchical data structure undergoing modification so that intervening or conflicting updates may not be performed.
  • write access may also be allowed for the original portion of the hierarchical data structure.
  • the write requests may be performed and then later merged as part of a conflict resolution scheme, as discussed below with regard to element 1350.
  • write requests received while modifications to the copy are ongoing may be replicated to the copy of the portion of the hierarchical data structure.
  • the writes may make changes to the copy which may be reflected in a graphical display of the copy by refreshing the display periodically to include the changes.
  • the writes may be replicated after receiving a request to commit, but before the modifications made as part of the modifications request are committed.
  • Conflict resolution between replicated writes and the hierarchy may be made as the replicated writes are received in some embodiments. For example, a user could approve or deny displayed conflicts from replicated writes as they are displayed via the GUI or may receive a conflict report upon a request to commit and approve or deny writes in response.
  • operation(s) to apply the modifications to the copy of the portion of the hierarchical data structure may be performed.
  • the modifications may be specified in the request received at element 1310.
  • In some embodiments, additional, separately received requests that are identified or associated with the request received at element 1310 (e.g., that identify a bulk edit or other identifier associated with the atomic application of modifications) may identify, describe, or instruct the performance of the modifications (e.g., according to various API commands to perform different hierarchical data structure modifications, such as change the structure, change nodes, add nodes, remove nodes, or update data or attributes associated with nodes).
  • the requests may be received according to the same interface as the request to perform modifications at element 1310.
  • Requests for modifications may continue to be processed and applied to the copy of the hierarchical data structure until a request to commit the modifications to the portion of the hierarchical data structure is received, as indicated at 1340.
  • the commit request may include a token, identifier, or other mechanism that corresponds to the initial request for modification (e.g., a bulk edit identifier) so that the commit request is matched with the appropriate copy of the portion of the hierarchical data structure (e.g., in the event that multiple requests for modification are being concurrently processed for the portion of the hierarchical data structure).
  • the portion of the hierarchical data structure may be atomically replaced with the copy of the portion of the hierarchical data structure that includes the modifications, as indicated at 1370.
  • Atomically replacing the original portion with the copy may be processed as a single transaction that either performs or fails (e.g., due to errors or conflicts).
  • the data store maintaining the copy and the original portion may have a transaction mechanism (e.g., a transaction API) that allows for operations to effect the replacement to occur (e.g., submitting a transaction that reads all of the data from the copy and overwrites the data of the original portion).
  • the transaction may include actions to link the copy of the portion to a parent node of the original portion and remove a link between the original portion and the same parent node, so that the copy of the portion is grafted or inserted into the hierarchical data structure without having to read and re-write the entire copy over the original portion.
  • operations to allow write access to the updated hierarchical data structure that includes the copy may be performed as part of atomically replacing the portion with the copy. For instance, the portion of the hierarchical data structure may be unlocked for write access.
  • write access may remain for the original portion of the hierarchical data structure, in some embodiments.
  • conflicts between the copy and the original portion can occur due to subsequent writes to the original portion.
  • a check for conflict between the copy and writes performed at the original portion of the hierarchical data structure may be performed.
  • A conflict may be detected in various ways. For example, when writes to the original portion contradict a modification made to the copy, a conflict may be detected; for instance, a write that changes the relationship between two nodes in the original portion (e.g., changing a node's parent node to another parent node in the portion) may contradict a modification to the same relationship in the copy.
  • a write to the original portion may change an attribute for a node to have one value (e.g., value1), while a modification changes the same attribute to have a different value (e.g., value2).
  • Alternatively, a write to the original may add a new node to a group of nodes with a same parent (e.g., add a resource data object to a group), while the modifications add other nodes to the same parent, modify the parent's values, or remove other nodes from the same parent; in this case, the write may not be considered a conflict.
  • the writes may be replicated to the copy of the hierarchy and commitment of the modifications to the copy may proceed.
  • If a conflict is detected, then in some embodiments the commitment request may be denied, as indicated at 1380.
  • Alternatively, the conflicting write may be rolled back and failed (e.g., by holding acknowledgement and/or performance of the write request until confirming that the write does not conflict with the copy).
  • Other conflict detection and resolution schemes may be implemented, and thus the previous examples are not intended to be limiting.
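  • A minimal sketch of one possible conflict check between writes applied to the original portion and modifications applied to the copy, following the examples above (contradictory attribute values conflict; adding different children under the same parent does not); the change format is hypothetical:

```python
def detect_conflicts(original_writes, copy_modifications):
    """Return pairs of (write, modification) that set the same attribute on the
    same node to different values. Each change is a dict such as
    {"node": ..., "kind": "set_attr" | "add_child", "key": ..., "value": ...}."""
    conflicts = []
    for write in original_writes:
        for mod in copy_modifications:
            if (write["kind"] == mod["kind"] == "set_attr"
                    and write["node"] == mod["node"]
                    and write["key"] == mod["key"]
                    and write["value"] != mod["value"]):
                conflicts.append((write, mod))
    return conflicts

writes = [{"node": "resource526", "kind": "set_attr", "key": "owner", "value": "value1"},
          {"node": "group552", "kind": "add_child", "key": None, "value": "resource900"}]
mods = [{"node": "resource526", "kind": "set_attr", "key": "owner", "value": "value2"},
        {"node": "group552", "kind": "add_child", "key": None, "value": "resource901"}]
print(len(detect_conflicts(writes, mods)))   # 1: only the contradictory attribute write
```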
  • multiple requests to perform atomic sets of modifications on a same portion of the hierarchical data structure may be received.
  • conflict between the commitment request and the other requests for atomic modifications to the same portion may be detected.
  • multiple requests to perform atomic application of modifications to the same portion of the hierarchical data structure may be allowed to initiate.
  • the first one of these sets of modifications that is successfully committed may prevent the remaining sets of modifications for other requests from committing.
  • each of these requests may obtain a timestamp or version number for the portion of the hierarchical data structure.
  • When a set of modifications is committed, the version number or timestamp for the portion of the hierarchical data structure may change (e.g., by changing a version number or timestamp at a root node for the portion). If a commit request is received for a set of modifications to the portion, a check may be made prior to committing the modifications to see if the version number is the same as was first obtained. If the version number is not the same, then another set of modifications may have committed first, creating a conflict. As indicated by the negative exit from 1360, if a conflict exists (e.g., another set of modifications for the portion has already committed), then the commitment request may be denied, as indicated at 1380.
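  • A minimal sketch of the version-number check described above, where each bulk edit records the version it observed at the start and only a commit against an unchanged version succeeds; the class name is hypothetical:

```python
class PortionVersion:
    """Optimistic concurrency for concurrent bulk edits of the same portion."""

    def __init__(self):
        self.version = 0          # e.g., stored at the root node of the portion

    def begin_edit(self) -> int:
        return self.version       # version observed when the edit starts

    def try_commit(self, observed_version: int) -> bool:
        if observed_version != self.version:
            return False           # another set of modifications committed first
        self.version += 1          # bump the version as part of committing
        return True

portion = PortionVersion()
edit_a = portion.begin_edit()
edit_b = portion.begin_edit()
print(portion.try_commit(edit_a))   # True: the first commit wins
print(portion.try_commit(edit_b))   # False: the conflicting commit is denied
```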
  • Distributed systems include multiple different resources (e.g., both physical and virtual) that provide various different services, capabilities, and/or functionalities, typically on behalf of multiple different entities.
  • When changes are made to a distributed system, the changes may affect the way in which the distributed system operates for several of the entities that utilize the distributed system.
  • approval may be beneficial (or required) so that changes to the distributed system are not made without some notification of the changes to other entities that may be affected.
  • Management decisions regarding various resources in a distributed system often involve defining and enforcing the permitted actions, configurations, controls, or any other definition of behaviors for the system resources.
  • Security policies such as access rights or permitted actions for system resources, for instance, may be defined and enforced for users of the system resources.
  • approval from more than the proposing entity may be desirable.
  • While manual and/or informal approval mechanisms to effect changes to a distributed system can be implemented, these approval mechanisms are unable to scale for large distributed systems.
  • large scale distributed systems implementing thousands or hundreds of thousands of resources on behalf of thousands or hundreds of thousands of users, clients, or entities may make it difficult to discover, track, and obtain the approval of changes that may need to be made to a distributed system.
  • Implementing multi-party updates for a distributed system as discussed below, however, may coordinate the proposal, approval, and performance of updates to a distributed system in a scalable, traceable, and automated fashion.
  • FIG. 14 is a logical block diagram illustrating multi-party updates to a distributed system, according to some embodiments.
  • Proposer 1410 may submit proposed updates 1412 to agreement manager 1420.
  • the proposed updates may include any updates or changes to distributed system resources 1440 (e.g., hardware resources, such as various processing, storage, and/or networking hardware) or virtual resources (e.g., instances, volumes, user accounts, or control policies).
  • the proposed updates 1412 may be included in a request to agreement manager 1420 as executable instructions (e.g., API requests or executable scripts, code, or other executable data objects).
  • Agreement manager 1420 may determine an authorization scheme (e.g., a handshake mechanism) for approving the proposed updates.
  • An authorization scheme may be defined to include one or multiple satisfaction criteria for determining whether the proposed updates 1412 are approved.
  • the authorization scheme may also include the identity of approvers (e.g., by identifying user account names or other user account identifiers), or a mechanism for determining approvers (e.g., by identifying user account types or groups that include user accounts any of which may act as an approver).
  • proposer 1410 may submit an authorization scheme as part of proposal 1412 that identifies specific approvers 1430 (e.g., user accounts or other identities of stakeholders) to approve the proposed update(s) 1412.
  • Agreement manager 1420 may send proposal notification(s) 1422 to the identified approver(s) 1430. In turn, approvers 1430 may send a response indicating approval(s) or disapproval(s) 1432 to agreement manager. Agreement manager 1420 may evaluate the responses with respect to the authorization scheme. For example, if the authorization scheme requires that 4 of 6 approver(s) 1430 send an approval response, then agreement manager 1420 may determine whether 4 approval responses were received. If not, then agreement manager 1420 may send a rejection of the proposed amendments (not illustrated). If, however the authorization scheme for the proposed update(s) 1412 is satisfied, then agreement manager 1420 may direct the approved update(s) 1442 with respect to distributed system resources 1440. For example, agreement manager 1420 may send the API requests corresponding to the described updates (e.g., specified by a user in proposed updates 1412) to initiate performance of the updates, or execute a script or executable data object to perform the updates.
  • FIG. 14 is provided as a logical illustration of multi-party updates to a distributed system, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices, implementing a distributed system, proposer 1410, agreement manager 1420, or approvers 1430.
  • FIG. 15 illustrates interactions to submit agreement requests for updates to organizations, according to some embodiments.
  • Proposal client 1510 may be one of clients 270 in FIG. 2 above that allows a user to interact with a resource management system that implements multi-account agreement management 322.
  • Interface 310 may be a command line or graphical interface that formats requests according to a programmatic interface, such as an API, for multi-account agreement management 322.
  • Client 1510 may submit a draft proposed agreement for organization updates request 1522 via interface 310 to multi-account agreement management 322.
  • Draft agreement 1522 may include proposed updates that are user specified (e.g., updates by API commands, executable scripts, code or other executable instructions) or draft agreement 1522 may be a request to propose a pre-defined set of updates (e.g., defined by resource management service 240, such as apply a policy, invite a user account to join an organization, launch a new provider network resource, etc.).
  • Draft agreement 1522 may include an authorization scheme that specifies approvers or a discovery mechanism for approvers (e.g., approver types, groups of possible user accounts that can approve, etc.). Changes can be made to the draft agreement request without triggering notifications to approvers.
  • agreement requests may be locked or otherwise unchangeable after submission 1532.
  • Client 1510 may submit a proposed agreement for approval 1532 to multi-account agreement management 322.
  • submission request 1532 may include an identifier for the draft proposed agreement request created above at 1522.
  • submission request 1532 may be the initial and only submission to multi-account agreement management 322 (e.g., without first creating a draft agreement request) and thus may identify update(s) (and an authorization scheme) in some instances.
  • Multi-account agreement management 322 may send notifications for the proposed agreement 1534 via interface 310 to approval client(s) 1520 (which may be clients 270 associated with user accounts identified as approvers).
  • Approval client(s) 1520 may send approval/disapproval responses for the proposed agreement 1536, which multi-account agreement management 322 may evaluate for approval of the proposed agreement according to the authorization scheme for the agreement request, and send a response indicating acceptance or rejection of the proposed agreement 1538.
  • client 1510 may submit a modification to the proposed agreement 1542.
  • the modification may be a modification to the authorization scheme or the updates to be performed.
  • notifications of the proposed modification to the agreement 1544 may be sent to approval client(s) 1520.
  • approval client(s) 1520 may send approval/disapproval response for the modified agreement 1546.
  • proposal client 1510 may cancel the proposed agreement 1552.
  • multi-account agreement management 322 may send notifications of cancellation 1552 to approval client(s) 1520 and/or may ignore responses received from approval client(s) 1520 for the cancelled agreement request.
  • multi-account agreement management 322 may track the state of pending or outstanding agreement requests as well as previously performed or rejected agreement requests.
  • FIG. 16 illustrates a state diagram for agreement requests, according to some embodiments.
  • An agreement request may initially enter a draft state 1610. Draft state 1610 may indicate that a proposing user account can add, change, or modify the agreement request. As illustrated in FIG. 16, a draft agreement request can be cancelled, moving the agreement request to cancelled state 1630.
  • the agreement request may enter proposed state 1620. From proposed state 1620, an agreement request can enter rejected state 1640 as a result of failing to satisfy the authorization scheme.
  • The agreement request may enter expired state 1650 as a result of failing to be approved before expiration conditions are satisfied (e.g., within an expiration time limit).
  • While in proposed state 1620, notifications for the agreement request may be provided, and responses received and evaluated. If the authorization scheme for the agreement request is satisfied, then as illustrated in FIG. 16, the agreement request may enter the approved state 1660. In some embodiments, once an agreement request is approved, the proposed updates may be automatically directed, initiated, or otherwise performed. However, in some embodiments, as illustrated in FIG. 16, approved agreement requests may still enter declined state 1680. For example, if the agreement request is an invitation to add a new user account to an organization, then the invited user account may decline the invitation to join the organization. In some embodiments, the proposer may abort the approved agreement request if, for instance, another change to the distributed system renders the proposed changes undesirable, as indicated by the change from approved state 1660 to cancelled state 1630.
  • a time period for execution of the proposed changes may be monitored and if the updates are not performed prior to the expiration of the time period, the agreement request may move from approved state 1660 to expired state 1650. If, however, the proposed changes are performed and/or successfully completed, then the performed state 1670 may be entered.
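  • A minimal sketch of the state transitions in FIG. 16 as a lookup table; the transition names simply reuse the state names and are not an API of the original:

```python
# Allowed transitions for an agreement request, following the states in FIG. 16.
TRANSITIONS = {
    "draft":    {"proposed", "cancelled"},
    "proposed": {"approved", "rejected", "expired"},
    "approved": {"performed", "declined", "cancelled", "expired"},
}
TERMINAL = {"cancelled", "rejected", "expired", "performed", "declined"}

def advance(state: str, next_state: str) -> str:
    """Move an agreement request to a new state, rejecting invalid transitions."""
    if state in TERMINAL or next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {next_state!r}")
    return next_state

state = "draft"
for next_state in ("proposed", "approved", "performed"):
    state = advance(state, next_state)
print(state)   # performed
```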
  • Although FIGS. 14 - 16 have been described and illustrated in the context of a provider network implementing a resource management service for resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 14 - 16 may be easily applied to other resource management systems, components, or devices for distributed systems, such as control planes for data storage services, configuration management systems for applying changes to systems, or other managers or controllers for distributed systems. As such, FIGS. 14 - 16 are not intended to be limiting as to other embodiments of a system that may implement a resource management system for system resources.
  • FIG. 17 is a high-level flowchart illustrating methods and techniques to implement multi-party updates to a distributed system, according to some embodiments.
  • Various different systems and devices may implement the various methods and techniques described below, either singly or working together.
  • a resource management service such as described above with regard to FIGS. 14 - 16 may be configured to implement the various methods.
  • A combination of different systems and devices may implement these methods. Therefore, the above examples, and/or any other systems or devices referenced as performing the illustrated method, are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
  • an agreement request proposing one or more updates to a distributed system may be received.
  • the agreement request may be specified according to an interface, such as an API, and may include various other executable instructions, such as API requests indicating the proposed updates to the distributed system.
  • the agreement request may include requests to add a resource data object (e.g., user account or resource) to a group in a hierarchy by including an AddToGroup request in the agreement request.
  • executable instructions such as code, scripts, or other executable data objects may describe the updates to perform with respect to the distributed system.
  • Updates to a distributed system may include any changes to the number, arrangement, configuration, execution, operation, access, management, or any other modification to the distributed system.
  • updates may be updates to a hierarchy of resource data objects, such as an organization discussed above with regard to FIG. 5 A that manage the resources of a distributed system, such as updates to invite user accounts of a provider network to join the organization (e.g., by adding a corresponding resource data object to the organization including information describing the user account and applying policies to the user account dependent upon the location, such as the group assignment, of the user account in the organization) or to apply or attach policies to groups or data objects.
  • updates may describe different types of updates, such as updates to the organization and updates to add, launch, modify, halt, or create a new resource (e.g., a virtual compute instance or data storage volume) in the provider network in the same agreement request.
  • updates may be a request to execute a function, operation, task, workflow, or action defined and/or executed by a different resource in the distributed system than the resource (e.g., the agreement manager) determining whether agreement is reached to perform the update.
  • a network-based service implemented as part of a provider network may execute user-specified functions upon invocation by an API call to the service, which would allow an update to describe the API call to the service which in turn invokes execution of a function.
  • An authorization scheme for the received agreement request may be determined, in various embodiments.
  • the agreement request may specify, identify, or otherwise comprise the authorization scheme.
  • various available authorization schemes may be implemented by an agreement manager which may be selected for processing an agreement request that identifies one of the multiple authorization schemes.
  • the authorization scheme may be defined or specified in the agreement request.
  • the authorization scheme may be defined to include one or multiple satisfaction criteria for determining whether the proposed updates in the agreement request are approved.
  • the authorization scheme may also include the identity of approvers (e.g., by identifying user account names or other user account identifiers), or a mechanism for determining approvers (e.g., by identifying user account types or groups that include user accounts any of which may act as an approver).
  • Authorization schemes may be implemented in different ways. For example, a nuclear key authorization scheme may be implemented that identifies an exact number of entities (e.g., user accounts) as well as the identity of specific entities (e.g., specific user accounts) that may approve the proposed changes.
  • For example, suppose the authorization scheme includes a requirement that 3 user accounts must each approve the proposed updates (e.g., user account A and user account B and user account C). If only two of the three user accounts (e.g., B and C) approve of the proposed updates, then the agreement request cannot satisfy the requirement, even if another user account, user account D, were to approve the proposed updates.
  • quorum-based approval techniques may be implemented as an authorization scheme so that a minimum number of approvers approve of the proposed updates (even if all approvers do not approve of the proposed amendments).
  • a quorum-based requirement for an authorization policy may require that 3 of 5 identified approvers provide approval for the proposed updates.
  • Another type of authorization requirement may be a veto-based requirement that allows for authorization of the proposed updates as long as none of the identified approvers (or a quorum of identified approvers) do not veto or otherwise reject the proposed updates within a certain time period (e.g., 24 hours).
  • Authorization schemes may include multiple requirements, in some embodiments.
  • an authorization scheme may include a requirement that a particular approver must approve the proposed updates and that at least one approver from multiple different groups of other approvers approve the proposed updates.
  • An authorization scheme could specify that a user account of a particular organization leader (e.g., manager, director, vice- president, etc.) approve of the updates and that 1 user account from a human resources (HR) group and 1 user account from a security group approve of the updates (combining quorum requirements with a specific approver requirement).
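  • A minimal sketch of evaluating a composite authorization scheme of the kind described above (a specific approver plus a quorum from each of several groups); the requirement format is hypothetical:

```python
def scheme_satisfied(requirements, approvals):
    """Return True when every requirement is met by the set of approving accounts.
    A requirement is either a specific approver or a minimum count drawn from a group."""
    for req in requirements:
        if req["kind"] == "specific" and req["account"] not in approvals:
            return False
        if req["kind"] == "quorum" and len(approvals & set(req["group"])) < req["minimum"]:
            return False
    return True

requirements = [
    {"kind": "specific", "account": "director"},
    {"kind": "quorum", "group": ["hr1", "hr2", "hr3"], "minimum": 1},
    {"kind": "quorum", "group": ["sec1", "sec2"], "minimum": 1},
]
print(scheme_satisfied(requirements, {"director", "hr2"}))           # False: no security approver yet
print(scheme_satisfied(requirements, {"director", "hr2", "sec1"}))   # True: all requirements met
```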
  • Agreement requests may be limited by a throttling scheme imposed upon agreement requests submitted by a single user account, or upon a total number of agreement requests that may be outstanding (e.g., not yet approved or rejected) in a given time period. If a request from a user account exceeds a limit or threshold on the number of agreement requests that can be outstanding or submitted for a user account in a time period, then, as indicated by the negative exit from 1720, the agreement request may be rejected. Agreement requests may also not be allowed to proceed if they would result in duplicate updates. For example, data describing outstanding or completed updates to a distributed system may be maintained and compared with the proposed updates.
  • approver(s) for the agreement request may be identified according to the authorization scheme for the request, in various embodiments. If the authorization scheme identifies specific approvers (e.g., specific user account ids or user names), then the identity of the approvers may be determined by accessing the authorization scheme. In some embodiments, the authorization scheme may provide a discovery mechanism to determine the approvers.
  • the authorization scheme may provide an attribute, condition, or other signature that can be compared with possible users to determine which users may be approvers.
  • the authorization request may specify that any user account associated with the team, organization or department may be an approver for the agreement request.
  • the requested updates may identify one, some, or all of the approvers. For instance, if the update is an update to the user account itself (e.g., changing group membership or joining an organization), then the approver may be the user account identified by the update.
  • notifications of the proposed update(s) may be sent to the identified approver(s), in some embodiments.
  • notifications may include plain text descriptions of proposed updates (e.g., plain text descriptions of included API calls, scripts, or executable data objects that are not human readable).
  • Notifications may also identify other approvers, expiration times (e.g., an approval deadline), the user account proposing the updates, and/or any other information that an approver may need to determine whether or not to approve the proposed updates.
  • Notifications may be sent via network communications to a client that is associated with the user account of the approver (e.g., send an approval email to a computer providing access to an email address associated with the user account, a message or communication portal, window, or display provided to the user account when the user account logs onto a network-based site, such as a user control panel provided as part of a service or provider network interface).
  • Responses of approval or disapproval may be sent back via the same communication or notification channel (e.g., via the same interface) or via a different communication channel.
  • an email or text notification sent via mail protocol or messaging protocol may include a link to a web interface, which can display approval or disapproval response controls so that the response is sent via network communication via the web interface.
  • notifications of proposed update(s) may not be sent to approvers. Instead, approvers may periodically poll for (or randomly request) a list of proposed updates for which the approver has been identified from an agreement manager, such as multi-account agreement management 322.
  • Agreement requests may be asynchronously processed, in various embodiments. Once notifications are sent to approvers, approval (or disapproval) responses may be processed as received until the proposed changes are approved according to the authorization scheme, disapproved, or expired. As indicated by the positive exit from 1750, when response(s) are received from the approver(s), a determination may be made as to whether the authorization scheme is satisfied, as indicated at 1752.
  • Response data, such as the responding approver and the answer (e.g., approve or disapprove), may be maintained as responses for agreement requests arrive at different times, along with data indicating which authorization requirements are satisfied and which remain outstanding, so that an evaluation of the authorization scheme may be performed as responses are received.
  • quorum requirements may provide more notifications to approvers than may be required to satisfy the quorum; therefore, once a quorum requirement is satisfied, it may be marked or stored as satisfied so that responses received from additional approvers in the quorum can be ignored for authorization scheme evaluation purposes.
  • an agreement request may be subject to a default time expiration threshold (or an expiration threshold or condition defined by the authorization scheme).
  • the agreement request may be expired, as indicated by the positive exit from 1760, and the agreement request rejected, as indicated at 1780. For example, a 24 hour approval expiration date may deny agreement requests not approved within 24 hours of submission. If, however, the agreement request is not yet expired, then as indicated by the negative exit from 1760, the agreement request may remain outstanding or pending, waiting for approval or disapproval.
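  • The expiration check described above might look like the following minimal sketch; the 24 hour default mirrors the example, while the function and variable names are hypothetical.

```python
# Minimal sketch of expiring pending agreement requests after a default
# threshold (24 hours here, as in the example above). Names are illustrative.

from datetime import datetime, timedelta, timezone

DEFAULT_EXPIRATION = timedelta(hours=24)


def is_expired(submitted_at: datetime, expiration: timedelta = DEFAULT_EXPIRATION) -> bool:
    # A pending request is expired once the threshold has elapsed since submission.
    return datetime.now(timezone.utc) - submitted_at >= expiration


# A pending request submitted 25 hours ago would be rejected as expired.
print(is_expired(datetime.now(timezone.utc) - timedelta(hours=25)))  # True
```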
  • the proposed updates of an agreement request that is approved according to the authorization scheme may be performed with respect to the distributed system.
  • the described API requests may be sent, the included script parsed and executed, or the executable data executed.
  • changes to the authorization scheme including changes to approvers can be made after submitting the agreement request.
  • a user may wish to add an additional approver (e.g., so that the additional approver is aware of the change).
  • a notification may be sent to the additional approver.
  • responses received from the removed approvers may be ignored for determining whether the authorization scheme is satisfied.
  • changes to the proposed updates may be made, in some embodiments. For example, update(s) may be added, removed, or modified for the agreement request.
  • updated notifications may be sent to approvers so that the approvers can approve the changed proposed updates.
  • Managing system resources often involves defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources that are described in respective policies applied to the system resources. For example, security policies, such as access rights or permitted actions for system resources, may be defined and enforced for users of the system resources.
  • data describing the resources of a system may be maintained that also describes these permitted behaviors.
  • data objects describing system resources may be maintained to identify policies that indicate the permitted behaviors for the system resources.
  • a hierarchy or structure of the resource data objects may be implemented.
  • a tree structure may be implemented that arranges the resource data objects in groups, directories, or other sets of resource data objects which apply those policies inherited along the path of the tree structure from the resource data object to the root of the tree structure.
  • policies applied to parent nodes (e.g., the groups, directories, or other sets of resource data objects) may be inherited by child nodes (e.g., the resource data objects in the groups, directories, or sets).
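  • One way such inheritance along a path could be realized is sketched below: policies are collected by walking from a resource data object up to the root, so parent policies act as defaults for children; the node layout and policy names are hypothetical.

```python
# Minimal sketch of collecting the policies that apply to a resource data
# object by walking from the object up to the root of the tree, so that
# policies attached to parent groups are inherited by children.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    name: str
    policies: List[str] = field(default_factory=list)
    parent: Optional["Node"] = None


def applicable_policies(node: Node) -> List[str]:
    # Policies are collected root-first so parent (default) policies come
    # before more specific child policies.
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    policies = []
    for n in reversed(chain):
        policies.extend(n.policies)
    return policies


root = Node("organization", ["org-billing-policy"])
group = Node("dev-group", ["dev-access-policy"], parent=root)
resource = Node("compute-instance-1", [], parent=group)

print(applicable_policies(resource))
# ['org-billing-policy', 'dev-access-policy']
```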
  • policies may be implemented in many different scenarios to manage system resources. Given both the variety of types of management actions, configurations, controls or other definition of behaviors for resources as well as the many different types of resources that may be managed, determining whether created or applied policies are valid can quickly become unmanageable.
  • resource management systems provide a limited set of pre-defined policies which may be applied.
  • a limited set of pre-defined policies may be unable to adapt to new or changing conditions, resources, or scenarios where policies could be applied to manage resources.
  • large scale distributed systems like provider network 200 discussed above with regard to FIG. 2, may offer hundreds of services and run thousands of resources on behalf of users, which may be configured and/or operated in large number of combinations.
  • remote policy validation may allow for a resource manager to host, apply, and manage all kinds of policies without any pre-defined policy sets or limitations. Instead, users of the resource manager may craft custom policies particular to the individual needs or desires of the system resources to be managed, as validation of the policies may be remotely performed by remote validation agents implemented, configured, controlled, or directed by the users.
  • FIG. 18 is a logical block diagram illustrating remote policy validation for managing distributed system resources, according to some embodiments.
  • Distributed system resources 1840 may be physical system resources, such as computing devices (e.g., servers), networking devices, or storage devices, or virtual system resources, such as user accounts, user data (e.g., data objects such as database tables, data volumes, data files, etc.), user resource allocations (e.g., allocated resource bandwidth, capacity, performance, or other usage of system resources as determined by credits or budgets), virtual computing, networking, and storage resources (e.g., compute instances, clusters, or nodes), or any other component, function or process operating in a distributed system.
  • these resources 1840 may be represented as resource data objects to which policies are applied (e.g., mapping, link, or otherwise associating).
  • a lookup operation may be performed, as discussed above with regard to FIG. 4, in order to determine which policies are associated with a given resource data object (e.g., by traversing a path that includes the resource data object).
  • Different policies may be created by a client of the distributed system, such as client 1810.
  • client 1810 may send one or more requests 1812 to resource manager 1820 to create and apply policies to distributed system resources.
  • resource manager 1820 may ensure that the created/applied policies are valid, in various embodiments.
  • Validating policies may include evaluating policies for syntactic errors and semantic errors. Syntactic errors may be errors that indicate the format or composition of a policy is incorrect when compared with a schema or other set of syntax rules for the policy. For example, syntactic errors may be identified when a policy fails to include a data field, modifier, or other term that signals the location of a policy attribute (e.g., resource identifier).
  • Semantic errors may be errors that indicate the content of a policy is not meaningful, and thus not enforceable. For example, a semantic error may occur when the policy identifies an operation to modify a resource that does not exist. The non-existent resource has no meaning, and therefore is a semantic error in the policy. Semantic validation may include validating based on business or operational logic or rules and thus may be specific to the policy type being validated.
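  • The distinction between syntactic and semantic validation might be illustrated with the following sketch for a hypothetical "Payer" policy; the required fields and the set of accounts with valid payment instruments are assumptions, not part of any actual schema.

```python
# Minimal sketch distinguishing syntactic from semantic validation for a
# hypothetical "Payer" policy. The schema fields and the set of known
# accounts are assumptions for illustration only.

REQUIRED_FIELDS = {"policy_type", "payer_account"}      # assumed schema
KNOWN_ACCOUNTS_WITH_PAYMENT = {"acct-123", "acct-456"}  # assumed system state


def syntactic_errors(policy: dict) -> list:
    # Structure/format problems: missing fields, wrong layout, etc.
    return [f"missing field: {f}" for f in REQUIRED_FIELDS - policy.keys()]


def semantic_errors(policy: dict) -> list:
    # Content problems: the policy parses but cannot be enforced.
    errors = []
    if policy.get("payer_account") not in KNOWN_ACCOUNTS_WITH_PAYMENT:
        errors.append("payer_account does not exist or lacks a valid payment instrument")
    return errors


policy = {"policy_type": "Payer", "payer_account": "acct-999"}
print(syntactic_errors(policy))  # []  (well-formed)
print(semantic_errors(policy))   # ['payer_account does not exist or lacks a valid payment instrument']
```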
  • remote policy validation may allow for a remote validation agent specifically configured to validate a specific policy or policies, such as remote validation agent(s) 1830, to perform syntactic or semantic validation.
  • Resource manager 1820 may identify remote validation agent(s) 1830 according to the policy or policy type of the policy. For instance, a policy type for the policy may be determined so that a remote validation agent 1830 that is associated with the policy type is identified.
  • remote validation agent(s) 1830 may be implemented as part of a resource that consumes the policy (e.g.
  • the policy or policy type may specifically identify remote validation agent(s) 1830 by including a network address or endpoint to which a validation request may be directed (e.g., without any particular formatting or information for the policy) or may send the validation request to a pre-registered remote validation agent for the policy via an interface formatted to request and obtain certain information about the policy and whether the policy is valid.
  • resource manager 1820 may send a validation request 1822 including information for the policy to remote validation agent(s) 1830 to initiate validation.
  • the validation request 1822 may include a copy of the policy, or portions of the policy, which remote validation agent(s) 1830 may compare with a policy type schema for the policy.
  • remote validation agent(s) 1830 may only receive an identification of a policy as part of validation information and remote validation agent(s) 1830 may request further information (e.g., further validation content, such as data field values or a policy type schema) from resource manager 1820 or other source(s) (not illustrated).
  • the invalid response may, in some embodiments, indicate the validation errors detected for the policy, which may be provided to client 1810 (or other associated client, not illustrated) for correction.
  • Validated policies may be applied 1842 to distributed system resources.
  • the policy may be attached, associated, or otherwise linked to one or multiple resources so that when certain resource actions are initiated, the policy directs or controls the actions.
  • FIG. 18 is provided as a logical illustration of remote validation for managing distributed system resources, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices, implementing a resource manager, remote validation agents, client or clients or the number, type, or arrangements of distributed system resources.
  • FIG. 19 is a logical block diagram illustrating a policy manager for resource management service policies applicable to provider network resources, according to some embodiments.
  • policy management 330 may handle policy creation, application, lookups, and validation for policies that are applied to resource data objects in an organization.
  • Policy management 330 may implement policy/creation handling 1910, in various embodiments, to process policy type and policy creation requests.
  • a client wants to introduce a new policy to allow various users of an organization in resource management service 240 to establish a payer account that identifies a user account that is financially responsible for service charges incurred in provider network 200.
  • the client may submit a request to create a new policy type named "Payer" and a new policy schema for the newly created policy type.
  • the policy schema may be specified in various formats both human readable and/or machine readable, such as JSON or XML.
  • Policy creation/application handling 1910 may store the "Payer" policy type (e.g., by storing the associated schema and metadata, including version information for the schema) along with other policy types 1942 in policy store 1940.
  • the stored schema may identify a remote policy validation agent and whether the validation is performed synchronously or asynchronously with respect to a validation request.
  • Policy store 1940 may persistently maintain policy types 1942 by persistently maintaining the policy schemas and corresponding metadata for the policy schemas.
  • Policy store 1940 may be implemented as a database or otherwise searchable/query-able storage system to provide access to other components of policy management 330 or resource management 240. In some embodiments, policy store may be separately implemented from policy management 330 or resource management 240 (e.g., as part of a storage service 230). Because policy store 1940 maintains metadata for policy types 1942, policy creation/application handling 1910 may allow users to create new versions of policy schemas, identifying prior versions by schema version numbers.
  • the policy schema may be updated or replaced and the version number changed to indicate a later version (e.g., version 2.0).
  • multiple versions associated with different version numbers may be considered valid for policies, while clients may mark or indicate that some versions of a policy schema are obsolete (and should not be used).
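  • A minimal sketch of versioned policy type schemas with obsolete versions marked, assuming a simple in-memory layout rather than any particular policy store implementation:

```python
# Minimal sketch of keeping multiple versions of a policy type schema, with
# some versions marked obsolete. The storage layout and field names are
# hypothetical.

policy_types = {
    "Payer": {
        "1.0": {"schema": {"required": ["payer_account"]}, "obsolete": True},
        "2.0": {"schema": {"required": ["payer_account", "billing_contact"]}, "obsolete": False},
    }
}


def valid_schema_versions(policy_type: str) -> list:
    # Return the versions of a policy type that have not been marked obsolete.
    versions = policy_types.get(policy_type, {})
    return [v for v, meta in versions.items() if not meta["obsolete"]]


print(valid_schema_versions("Payer"))  # ['2.0']
```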
  • Policy creation/application handling 1910 may also handle requests to create an instance of a policy type, a policy. For example, another client may create a new Payer policy based on the Payer policy type. The other client can submit the appropriate policy content to populate the new policy (as discussed below with regard to FIGS. 20 and 22).
  • policy validation handling 1930 may direct the performance of syntactic validation, either via a remote validation agent or through a validation agent implemented as part of policy management 330. If valid, the newly created policy may then be written as a new policy resource data object into the organization, as discussed above with regard to FIGS. 4 and 5A.
  • Policy management 330 may implement policy lookup handling 1920, in various embodiments, to handle lookup requests for policies (as discussed above with regard to FIG. 4).
  • policies can also be inherited in a chain from the organization down to a group, group of groups, or individual resource data object. If a policy is applied to a parent node in the hierarchy, then the child node (group, group of groups, or individual resource data object) may inherit the policy of the parent node. In this way, the policy applied to the parent node becomes the "default" policy, in the absence of any other policy applications.
  • different policies may have different inheritance semantics, which may have to be resolved.
  • access policies may follow the semantics of a set union, where ordering does not matter (e.g., everything is allowed unless explicitly excluded).
  • Billing policies in another scenario, may implement a "child wins/parent appends" inheritance model where a child policy may be executed, followed by a parent policy. In such scenarios, ordering of policies matters.
  • policy lookup handling 1920 may be configured to resolve conflicting policies according to the appropriate inheritance semantics for the policy.
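  • The two inheritance semantics described above might be resolved roughly as in the following sketch, where policies are collected from child to root and merged either as an unordered union or in child-first order; the inputs and names are illustrative.

```python
# Minimal sketch of resolving inherited policies under two different
# inheritance semantics: set-union (ordering irrelevant, e.g. access
# policies) versus "child wins/parent appends" (child executed first,
# then parent, so ordering matters, e.g. billing policies).
# Inputs are lists of policies collected from the child up to the root.

def resolve_union(policies_child_to_root):
    # Ordering does not matter; duplicates collapse.
    merged = set()
    for policies in policies_child_to_root:
        merged |= set(policies)
    return merged


def resolve_child_wins_parent_appends(policies_child_to_root):
    # Child policies run first, parent policies are appended after.
    ordered = []
    for policies in policies_child_to_root:
        ordered.extend(policies)
    return ordered


chain = [["child-billing"], ["group-billing"], ["org-billing"]]
print(resolve_union(chain))                       # unordered set of the three policies
print(resolve_child_wins_parent_appends(chain))   # ['child-billing', 'group-billing', 'org-billing']
```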
  • policy management 330 may implement policy validation handling 1930 to direct syntactic and semantic validation of policies via remote validation agents.
  • validation of policies may include syntax validation. Syntax validation may evaluate whether a policy is syntactically correct with respect to the policy schema of the policy type for the policy so that the policy can be parsed and evaluated by backend systems that lookup the policy. Syntactic validation may be performed, in some embodiments, when authored, as discussed below with regard to FIG. 20. In addition to syntactic validation, some policies may undergo semantic validation. As noted above, semantic validation may be performed to ensure that policy content is meaningful, so that a resource or other information specified in the policy results in a policy that can be enforced.
  • semantic validation could determine whether a user account identifier specified in the "Payer" policy example discussed above is an account in the organization that has a valid payment instrument (e.g., a valid source of funds to pay for incurred expenses).
  • Policy validation handling 1930 may direct validation upon policy applications and resource or organization changes/modifications, in order to ensure that the changes do not invalidate policies that are applied within the organization. For example, a modification to a resource (e.g., a payer account leaving an organization or group) may be validated to ensure the modification does not invalidate the policy (e.g., that the payer account does not leave the organization or group without a valid payment instrument).
  • because each policy may have different semantic validation logic, each policy may have a separately configurable remote validation agent.
  • policy validation handling 1930 may direct synchronous or asynchronous validation of policies.
  • a policy (or policy schema) may specify that validation for the policy is performed synchronously, so that the client that initiated the validation request (e.g., a client attempting to attach a policy or enforce a policy) waits for the validation result before continuing to operate.
  • the client may not wait for the validation result to continue operating.
  • policy validation handling 1930 may track the state of validation for a policy (e.g., "Validation Request Submitted,” “Validation Ongoing,” “Validation Success,” or “Validation Error”) and may provide the state of the validation for the policy to the client in response to requests (e.g., the client may periodically poll for the state of the validation).
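  • Tracking and polling validation state for asynchronous validation might look like the following sketch; the state names follow the examples above, while the identifiers and storage are hypothetical.

```python
# Minimal sketch of tracking asynchronous validation state so a client can
# poll for progress. State names follow the examples above; the storage and
# identifiers are hypothetical.

import uuid

validation_state = {}  # validation id -> current state string


def start_validation() -> str:
    validation_id = str(uuid.uuid4())
    validation_state[validation_id] = "Validation Request Submitted"
    return validation_id


def record_result(validation_id: str, ok: bool) -> None:
    # Called when the remote validation agent returns its result.
    validation_state[validation_id] = "Validation Success" if ok else "Validation Error"


def poll_validation(validation_id: str) -> str:
    # Clients that chose asynchronous validation periodically poll this.
    return validation_state.get(validation_id, "Unknown")


vid = start_validation()
print(poll_validation(vid))   # Validation Request Submitted
record_result(vid, ok=True)
print(poll_validation(vid))   # Validation Success
```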
  • policy validation handling 1930 may provide a recommendation to a policy creator (e.g., to a user account that created the policy) to change the policy validation behavior to synchronous or asynchronous depending on previous performance of validating the policy.
  • a change in validation behavior may offer better performance (e.g., switching to asynchronous behavior may avoid tying up resources waiting for a long running policy validation, while switching to synchronous behavior may avoid releasing and polling for validation state when the validation completes quickly).
  • FIG. 20 illustrates interactions to manage policy types and policies in resource management service, according to some embodiments.
  • Client 2010, which may be a client 270 of provider network 200 as discussed above, may submit requests to resource management service 240 via interface 310.
  • Interface 310 may provide an API for requests from client 2010, which may be formatted and sent according to the API via a command line interface or graphical user interface, such as discussed below with regard to FIG. 20.
  • client 2010 may send a request to create, modify, or delete a policy type maintained at resource management service 240 (e.g., in policy store 1940).
  • a policy schema or changes to a policy schema may be specified (e.g., by including a schema data object or file).
  • Resource management service 240 may update the policy type in policy store 1940 in accordance with request (e.g., creating a new policy type and storing the policy schema and related metadata, updating the policy schema and metadata, or deleting the policy schema and metadata). Resource management service 240 may then acknowledge the completion of the request 2014.
  • Client 2010 may send a request to create or modify a policy 2022 to resource management service 240.
  • the created or modified policy may be an instance of a policy type. Multiple policies may be created for a single policy type so that policies may be configured differently for application to resources in different circumstances.
  • the creation request or update request 2022 may include policy content that defines actions taken (or not taken) in certain conditions. For instance, the creation request may specify a new policy for a resource launch policy type to describe the actions to be taken when a compute resource is launched in the provider network (e.g., a condition describing the type of compute instance to be launched, configuration action(s) to take for the compute instance to be launched).
  • resource management service 240 may request syntactic validation 2024 from a remote validation agent 2020.
  • Remote validation agent 2020 may be another resource implemented in the provider network, such as a virtual compute instance or server resource configured to handle validation requests from resource management service 240 for the policy type.
  • the request for syntactic validation may include the policy schema for the policy (or remote validation agent 2020 may maintain or separately request the policy schema from resource management service 240) and the policy content to be validated (or an identifier so that the policy content may be retrieved from resource management service 240).
  • Remote validation agent 2020 may perform a syntactic validation by comparing the policy content of the created/updated policy with the policy schema for the policy type to determine whether the policy content violates the allowed structure (e.g., ordering of data fields) or content (e.g., data types or resource types, such as allowing a storage resource to be specified when the policy schema describes a computing resource). Remote validation agent 2020 may then provide syntactic validation results 2026 to resource management service 240 (e.g., indicating that the policy is valid or that error(s) are detected, possibly including the detected error(s)). If validation fails, then failure indication 2028 may be provided to client 2010.
  • resource management service 240 may store the new policy object or update the existing policy object 2032 in hierarchical data store 350. Upon acknowledgment 2034 of successfully storing/updating the policy object in hierarchical data store 350, resource management service 240 may then acknowledge the success of the creation or modification request 2036.
  • Client 2010 may send a request to delete a policy 2042 to resource management service 240.
  • Resource management service 240 may send a corresponding request 2044 to delete the policy object from hierarchical data store 350.
  • Upon acknowledgement 2046 of successful deletion of the policy data object, resource management service 240 may send an acknowledgment of the policy deletion 2048 to client 2010.
  • FIG. 21 illustrates interactions to attach policies to resource data objects, according to some embodiments.
  • Client 2110, which may be the same as or different from client 2010, may send a request to apply a policy 2112 (that has been created, as discussed above in FIG. 20) to a resource data object (e.g., to a group resource data object or an individual resource data object) to resource management service 240 via interface 310.
  • Resource management service 240 may then identify a remote validation agent for the policy (e.g., as may be identified in the policy or policy schema) and send a validation request 2122 to remote validation agent 2120.
  • the policy or policy schema for the type of policy may include a network endpoint (e.g., a network address, such as an Internet Protocol (IP) address) to which the validation request is sent.
  • remote validation agent 2120 may be preregistered with resource management service 240 so that every time a policy of a policy type associated with remote validation agent 2120 is received, the validation request may be sent to remote validation agent 2120.
  • the semantic validation request 2122 and/or response may be formatted according to an API for validation requests and responses or may be an event or trigger indication configured by the policy or policy schema (e.g., an API request formatted according to an interface for remote validation agent 2120).
  • Remote validation agent 2120, similar to remote validation agent 2020 discussed above in FIG. 20, may be another resource implemented in the provider network, such as a virtual compute instance or server resource configured to handle validation requests from resource management service 240 for the policy type.
  • Remote validation agent 2120 may perform semantic validation with respect to the policy and the attached resource data object. For example, if attaching the policy to the resource gives a resource, such as a compute instance, access to a storage resource, semantic validation may determine whether the identified resource data object is a compute instance, and whether or not an instance of that type is allowed to have access to the storage resource. Semantic validation may validate the content of the policy to determine whether any of the actions or conditions defined in the policy violates any business or operational logic or rules or is otherwise unenforceable. Remote validation agent 2120 may send semantic validation results 2124 back to resource management service 240. If the semantic validation fails, then resource management service 240 may provide an indication of validation failure for the policy and reject the request to attach the policy to the resource data object. The indication 2114 may include validation error information so that corrections to the policy may be made, in some embodiments.
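  • A rough sketch of the attachment-time semantic check described above, assuming hypothetical resource types and access rules rather than the business logic of any actual service:

```python
# Minimal sketch of an attachment-time semantic check: verify the target
# resource data object is of the expected type and that that type is allowed
# the access the policy grants. Types and rules are assumptions.

ALLOWED_ACCESS = {
    # resource type -> resource types it may be granted access to
    "compute-instance": {"storage-volume", "object-store"},
}


def semantically_valid_attachment(policy: dict, resource: dict) -> list:
    errors = []
    if resource["type"] != policy["applies_to_type"]:
        errors.append(
            f"policy applies to {policy['applies_to_type']}, "
            f"but resource is a {resource['type']}"
        )
    elif policy["grants_access_to"] not in ALLOWED_ACCESS.get(resource["type"], set()):
        errors.append(
            f"{resource['type']} may not be granted access to {policy['grants_access_to']}"
        )
    return errors


policy = {"applies_to_type": "compute-instance", "grants_access_to": "storage-volume"}
print(semantically_valid_attachment(policy, {"type": "compute-instance"}))  # [] (valid)
print(semantically_valid_attachment(policy, {"type": "database"}))          # type mismatch error
```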
  • resource management service 240 may send a request to update a hierarchy in hierarchical data store 350 to link the policy to the data object 2132.
  • Hierarchical data store 350 may write the link to the stored hierarchy and return an update acknowledgement 2134.
  • resource management 240 may return an acknowledgement 2116 of the policy attachment to client 2110.
  • policy types and policies may be user authored or specified.
  • custom policies may be created, managed, and applied to resources in a distributed system, such as provider network 200, as well as validated according to a custom policy schema for the policy and custom semantic validation rules (e.g., business logic specific to a resource or service implementing a resource to which the policy is applied).
  • FIG. 22 illustrates an example graphical user interface for creating and editing policies, according to some embodiments.
  • policy creation interface 2200 may be a graphical user interface hosted or provided by a network-based site (e.g., provider network website) or be a local GUI implemented at a client of provider network 200 (e.g., built on top of various APIs of provider network 200).
  • Policy creation interface 2200 may implement a policy selection area 2210 to display various options for triggering the creation or modification of policies or policy types.
  • select policy type 2222 may be a drop down list, search interface, or any other kind of selection user interface component that allows a user to identify an existing policy type.
  • users may also select an element to upload a policy type (e.g., create a new policy schema) which can then be selected to create a new policy of that policy type.
  • policy editor 2250 may display in edit interface 2254 the policy schema for editing (not illustrated). Selection of the policy type may populate one or more possible policy templates 2232 which may be examples of policies that can be configured or filled by a user via edit interface 2254.
  • a search interface for existing policy templates (or policy types) may be implemented so that users can identify a policy type or policy template that suits specified resource management needs (e.g., security, storage resources, deployment, networking, payment configuration, etc.).
  • Upload policy/template element 2234 may allow users to select a policy template or policy for upload (which may then be edited in edit interface 2254).
  • select existing policy element 2242 may allow users to select a previously created policy and make changes to the policy.
  • Policy editor 2250 may be implemented to provide various policy content editing features, such as a text editor like edit interface 2254. To apply changes, including the creation of a new policy, user interface element 2260 may be selected. Note, however, that in at least some embodiments, policy type creation, policy creation, policy update, or policy type updates may be performed via a series of user interface elements or windows (e.g., a policy type selection wizard, a policy type creation wizard, a policy type update wizard, a policy template selection wizard, a policy creation wizard, a policy edit wizard, etc.), or some other form or combination of graphical user interface elements and thus FIG. 22 is not intended to be limiting.
  • Although FIGS. 2 - 22 have been described and illustrated in the context of a provider network implementing a resource management service for resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 2 - 22 may be easily applied to other resource management systems, components, or devices.
  • private systems and networks implementing multiple system resources may maintain remote policy validation for managing the behavior of the system resources.
  • FIGS. 2 - 22 are not intended to be limiting as to other embodiments of a system that may implement resource management system for system resources.
  • FIG. 23 is a high-level flowchart illustrating methods and techniques to implement remote policy validation for managing distributed system resources, according to some embodiments.
  • Various different systems and devices may implement the various methods and techniques described below, either singly or working together.
  • a resource management service such as described above with regard to FIGS. 2 - 22 may be configured to implement the various methods.
  • a combination of different systems and devices may implement these methods. Therefore, the above examples, and/or any other systems or devices referenced as performing the illustrated method, are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
  • policies applicable to manage resource(s) in a distributed system may be maintained.
  • a hierarchical data store such as discussed above with regard to FIGS. 3 - 22, may be implemented to maintain resource data objects and policies for managing resources corresponding to the data objects.
  • the maintained policies may be made applicable to a resource by associating the policies with a resource (e.g., creating a link in the hierarchy between the policy and resource data object) so that when a policy consumer (e.g., a system, service, or control that manages the resource) checks to see whether policies are enforced against the resource, the associated policy is identified as applied to the resource.
  • a table indexed by resource id may be maintained that stores all policies applied to a resource in a row with the resource, so that when the policies associated with the resource need to be determined, the resource id of the resource may be looked up and applied policies read from the row.
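  • That alternative lookup structure might be sketched as follows, with the table contents and identifiers purely illustrative:

```python
# Minimal sketch of the alternative lookup structure described above: a table
# keyed by resource id whose rows list every policy applied to that resource.

applied_policies = {
    "resource-1": ["access-policy-a", "billing-policy-b"],
    "resource-2": ["access-policy-a"],
}


def policies_for(resource_id: str) -> list:
    # One keyed read instead of a hierarchy traversal.
    return applied_policies.get(resource_id, [])


print(policies_for("resource-1"))  # ['access-policy-a', 'billing-policy-b']
```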
  • a validation event may be detected for one of the policies, as indicated at 2320, in some embodiments.
  • a validation event may be triggered by a policy action (e.g., creation, application, or enforcement) of a policy that results in a validation of the policy. For example, as illustrated in FIG. 20, a validation event may occur when a policy is created. Similarly, as illustrated in FIG. 21, a validation event may occur when a policy is applied (e.g., attached) or enforced (e.g., by a policy consumer, such as another network service that implements the actions specified in the policy when the conditions specified in the policy are satisfied, as discussed above with regard to FIG. 4).
  • a validation event may also be triggered by a policy action resulting from a modification to (or an attempt to modify) a resource (e.g., adding or removing resources from a group or hierarchy).
  • a remote validation agent may be identified according to the policy, in some embodiments, as indicated at 2330.
  • a remote validation agent may be a remote validation agent implemented remotely (e.g., separated via a network communication) from a resource manager or other system, component, or device that maintains the policies for managing resources in a distributed system.
  • remote validation agents may be pre-registered to associate the remote validation agent with handling certain types of validation (e.g., syntactic and/or semantic), so that the remote validation agent implements a common interface (e.g., API) format for receiving a validation request and sending validation results.
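  • Pre-registration with a common validation interface might be sketched as below; the registry, agent class, and endpoint values are hypothetical stand-ins for a network call to a remote agent.

```python
# Minimal sketch of a registry of pre-registered remote validation agents,
# keyed by policy type, all exposing a common validate() interface.

from dataclasses import dataclass


@dataclass
class RegisteredAgent:
    policy_type: str
    endpoint: str  # where a real system would send the network request

    def validate(self, validation_info: dict) -> dict:
        # Stand-in for an HTTP/API call to the remote agent; a real agent
        # would evaluate the policy and return a structured result.
        return {"policy_type": self.policy_type, "valid": True, "errors": []}


agent_registry = {
    "Payer": RegisteredAgent("Payer", "https://validators.example.com/payer"),
}


def dispatch_validation(policy_type: str, validation_info: dict) -> dict:
    # Look up the agent pre-registered for this policy type and call it.
    agent = agent_registry.get(policy_type)
    if agent is None:
        raise LookupError(f"no validation agent registered for {policy_type}")
    return agent.validate(validation_info)


print(dispatch_validation("Payer", {"policy_id": "policy-42"}))
```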
  • a remote validation agent may only be specified by a network endpoint (e.g., in a policy or policy schema for the policy).
  • Validation information for the policy may be sent to the validation agent to initiate validation of the policy, as indicated at 2340.
  • Validation information may include policy content, a policy schema, information about the action triggering the validation event (e.g., if a request to apply the policy to a particular resource, validation information may include the identity of and/or information about the particular resource), or any other data for performing a validation.
  • validation information may include a request for a specific type of validation (e.g., semantic or syntactic) if both may be performed by the remote validation agent.
  • validation information may include an identifier of the policy, as discussed below with regard to FIG. 24, which the remote validation agent may then be used to obtain appropriate validation information (either from a resource manager or other source).
  • a validation result may be received from the remote validation agent, in some embodiments. If the validation result indicates that the policy is not valid, as indicated by the negative exit from 2360, then the policy action triggering the validation may be denied, as indicated at 2380. A denial or other failure indication may be provided to a requesting client to block, stop, or disallow the policy action. If the validation result indicates that the policy is valid, as indicated by the positive exit from 2360, then the policy action triggering the validation event with respect to resource(s) in the distributed system may be allowed, as indicated 2370. For example, the requested policy creation, application, or enforcement may proceed.
  • FIG. 24 is a high-level flowchart illustrating methods and techniques to implement policy validation at a remote validation agent, according to some embodiments.
  • a validation request for a policy may be received at a validation agent from a resource manager for a distributed system, in some embodiments.
  • the validation request may include validation information, which as noted above, may include a variety of information, such as policy content, a policy schema, information about the action triggering the validation event (e.g., if a request to apply the policy to a particular resource, validation information may include the identity of and/or information about the particular resource), validation type (e.g., syntactic or semantic) or any other data for performing a validation.
  • the validation information may not include all the information needed to perform the validation, as indicated at 2420 (e.g., if the validation request includes a policy identifier but no policy content). If not, then the remote validation agent may request additional information from one or more sources (e.g., policy content from the resource manager, information about resources identified in the policy from other network services, such as whether a specified resource id is valid or allowed to perform an action specified by the policy), as indicated at 2430.
  • the policy content may be evaluated to determine whether the policy is valid. For example, syntactic validation may evaluate whether a policy is syntactically correct with respect to a policy schema of a policy type for the policy so that the policy can be parsed and evaluated by backend systems that lookup the policy, whereas semantic validation may be performed to ensure that policy content is meaningful, and thus enforceable, so that a resource or other information specified in the policy results in a policy that can be enforced.
  • because the remote validation agent may be customized to perform validation based on knowledge that the resource manager does not have (e.g., whether identifiers included in a policy exist, whether the resources identified in the policy can be configured in a particular way, whether a user account can be authorized to access certain information, etc.), the remote validation agent may also access or obtain the other information that the resource manager does not have (or understand) (some of which may be obtained as indicated at element 2430 discussed above) in order to perform the validation.
  • a validation result may be sent to the resource manager, as indicated at 2450. The result may identify errors in the event that the policy is determined to be invalid.
  • the methods described herein may in various embodiments be implemented by any combination of hardware and software.
  • the methods may be implemented by a computer system (e.g., a computer system as in FIG. 25) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors.
  • the program instructions may be configured to implement the functionality described herein (e.g., the functionality of various servers and other components that implement the directory storage service and/or storage services/systems described herein).
  • the various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
  • FIG. 25 is a block diagram illustrating a computer system configured to implement different hierarchies of resource data objects for managing system resources, according to various embodiments, as well as various other systems, components, services or devices described above.
  • computer system 2500 may be configured to implement various components of a resource management service, hierarchical data store, or other provider network services, in different embodiments.
  • Computer system 2500 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device.
  • Computer system 2500 includes one or more processors 2510 (any of which may include multiple cores, which may be single or multi-threaded) coupled to a system memory 2520 via an input/output (I/O) interface 2530.
  • Computer system 2500 further includes a network interface 2540 coupled to I/O interface 2530.
  • computer system 2500 may be a uniprocessor system including one processor 2510, or a multiprocessor system including several processors 2510 (e.g., two, four, eight, or another suitable number).
  • Processors 2510 may be any suitable processors capable of executing instructions.
  • processors 2510 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2510 may commonly, but not necessarily, implement the same ISA.
  • the computer system 2500 also includes one or more network communication devices (e.g., network interface 2540) for communicating with other systems and/or components over a communications network (e.g. Internet, LAN, etc.).
  • a client application executing on system 2500 may use network interface 2540 to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the resource management or other systems implementing multiple hierarchies for managing system resources described herein.
  • a server application executing on computer system 2500 may use network interface 2540 to communicate with other instances of the server application (or another server application) that may be implemented on other computer systems (e.g., computer systems 2590).
  • computer system 2500 also includes one or more persistent storage devices 2560 and/or one or more I/O devices 2580.
  • persistent storage devices 2560 may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device.
  • Computer system 2500 (or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices 2560, as desired, and may retrieve the stored instruction and/or data as needed.
  • computer system 2500 may host a storage system server node, and persistent storage 2560 may include the SSDs attached to that server node.
  • Computer system 2500 includes one or more system memories 2520 that are configured to store instructions and data accessible by processor(s) 2510.
  • system memories 2520 may be implemented using any suitable memory technology, (e.g., one or more of cache, static random access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR 10 RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory).
  • System memory 2520 may contain program instructions 2525 that are executable by processor(s) 2510 to implement the methods and techniques described herein.
  • program instructions 2525 may be encoded in platform native binary, any interpreted language such as JavaTM byte-code, or in any other language such as C/C++, JavaTM, etc., or in any combination thereof.
  • program instructions 2525 include program instructions executable to implement the functionality of hierarchy storage nodes that maintain versions of hierarchical data structures or components of a transaction log store that maintain transaction logs for hierarchical data structures, in different embodiments.
  • program instructions 2525 may implement multiple separate clients, server nodes, and/or other components.
  • program instructions 2525 may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, SolarisTM, MacOSTM, WindowsTM, etc. Any or all of program instructions 2525 may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments.
  • a non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
  • a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system 2500 via I/O interface 2530.
  • a non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system 2500 as system memory 2520 or another type of memory.
  • program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2540.
  • a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2540.
  • system memory 2520 may include data store 2545, which may be configured as described herein.
  • the information described herein as being stored by the hierarchy storage nodes or transaction log store described herein may be stored in data store 2545 or in another portion of system memory 2520 on one or more nodes, in persistent storage 2560, and/or on one or more remote storage devices 2570, at different times and in various embodiments.
  • system memory 2520 (e.g., data store 2545 within system memory 2520), persistent storage 2560, and/or remote storage 2570 may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, configuration information, and/or any other information usable in implementing the methods and techniques described herein.
  • I/O interface 2530 may be configured to coordinate I/O traffic between processor 2510, system memory 2520 and any peripheral devices in the system, including through network interface 2540 or other peripheral interfaces.
  • I/O interface 2530 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2520) into a format suitable for use by another component (e.g., processor 2510).
  • I/O interface 2530 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
  • I/O interface 2530 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface 2530, such as an interface to system memory 2520, may be incorporated directly into processor 2510.
  • Network interface 2540 may be configured to allow data to be exchanged between computer system 2500 and other devices attached to a network, such as other computer systems 2590 (which may implement embodiments described herein), for example.
  • network interface 2540 may be configured to allow communication between computer system 2500 and various I/O devices 2550 and/or remote storage 2570.
  • Input/output devices 2550 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 2500. Multiple input/output devices 2550 may be present in computer system 2500 or may be distributed on various nodes of a distributed system that includes computer system 2500.
  • similar input/output devices may be separate from computer system 2500 and may interact with one or more nodes of a distributed system that includes computer system 2500 through a wired or wireless connection, such as over network interface 2540.
  • Network interface 2540 may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface 2540 may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface 2540 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
  • computer system 2500 may include more, fewer, or different components than those illustrated in FIG. 25 (e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.)
  • a network-based service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network.
  • a network-based service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL).
  • Other systems may interact with the network- based service in a manner prescribed by the description of the network-based service's interface.
  • the network-based service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations.
  • a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network- based services request.
  • a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP).
  • a network-based services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the network-based service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP).
  • network-based services may be implemented using Representational State Transfer (“RESTful”) techniques rather than message-based techniques.
  • a network-based service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message.
  • a system comprising:
  • a hierarchical data store that stores different hierarchies of a plurality of resource data objects, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in the system;
  • Clause 3 The system as recited in clause 1, wherein the modification to the one hierarchy is performed in response to a determination that the request is received from a client authorized to access the one hierarchy.
  • Clause 4 The system as recited in clause 1, wherein the system is a provider network that implements a plurality of different network-based services, wherein the resources are implemented as part of the different network-based services, and wherein the policy lookup requests are received from different ones of the network-based services.
  • a method comprising:
  • the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resources identified by other ones of the hierarchies.
  • modifying the one hierarchy according to the request further comprises determining that adding the new policy is permitted in the hierarchy, wherein adding the new policy is performed in response to determining that adding the new policy is permitted in the hierarchy.
  • Clause 13 The method as recited in clause 5, wherein the data store is a separate hierarchical data store, wherein modifying the one hierarchy according to the request comprises sending one or more requests to the hierarchical data store to modify the hierarchy in the separate hierarchical data store.
  • a non-transitory, computer-readable storage medium storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
  • the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resource data objects identified by other ones of the hierarchies when processing subsequent policy lookup requests with respect to the different hierarchies.
  • Clause 15 The non-transitory, computer-readable storage medium as recited in clause 14, wherein, in modifying the one hierarchy according to the request, the program instructions cause the one or more computing devices to implement:
  • the program instructions cause the one or more computing devices to implement changing the arrangement of at least one of the resource data objects within the hierarchy.
  • Clause 18 The non-transitory, computer-readable storage medium as recited in clause 14, wherein the program instructions cause the one or more computing devices to further implement maintaining a historical version of the hierarchy prior to the modification as part of a plurality of respective historical versions maintained for the hierarchies.
  • Clause 19 The non-transitory, computer-readable storage medium as recited in clause 18, wherein the program instructions cause the one or more computing devices to further implement processing one or more other policy lookup requests for different resource data objects with respect to the hierarchies at one or more different points in time based, at least in part, on the historical versions maintained for the hierarchies.
  • Clause 20 The non-transitory, computer-readable storage medium as recited in clause 14, wherein the resources are implemented as part of the different network-based services, wherein the one or more computing devices implement a resource management service as part of the provider network, and wherein the policy lookup requests are received from different ones of the network-based services.
  • a system comprising:
  • a hierarchical data store that stores a hierarchical data structure
  • Clause 24 The system as recited in clause 21, wherein the storage engine is implemented as part of a resource management service for a provider network, wherein the provider network implements a plurality of different network-based services, wherein the hierarchical data structure comprises resource data objects that identify policies applicable to the behavior of resources corresponding to the resource data objects, and wherein the resources are implemented as part of the different network-based services.
  • a method comprising:
  • the hierarchical data structure comprises resource data objects that identify policies applicable to the behavior of a plurality of resources in a system that correspond to the resource data objects;
  • blocking write access to the portion of the hierarchical data structure comprises removing a lock indication for the portion of the data structure from a lock structure in the hierarchical data structure;
  • allowing write access to the copy of the portion in the hierarchical data structure comprises adding the lock indication for the portion of the data structure back to the lock structure in the hierarchical data structure.
  • Clause 28 The method as recited in clause 25, wherein the portion of the hierarchical data structure remains available for write access, and wherein the method further comprises: prior to committing the modifications to the portion of the hierarchical data structure, replicating one or more writes received for the portion of the hierarchical data structure to the copy of the hierarchical data structure.
  • Clause 29 The method as recited in clause 25, wherein the request to perform the modifications to the portion of the hierarchical data structure is one of a plurality of received requests to perform respective modifications to the same portion of the hierarchical data structure.
  • Clause 30 The method as recited in clause 29, wherein atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure is further performed in response to determining that the modifications to commit do not conflict with the plurality of received requests for the same portion of the hierarchical data structure.
  • atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure comprises removing, from the parent node, a link to the portion of the hierarchical data structure; and subsequent to atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, reclaiming storage space in one or more storage devices maintaining the unlinked portion of the hierarchical data structure.
  • Clause 32 The method as recited in clause 25, wherein the request to perform the modifications to the portion of the hierarchical data structure and the request to commit the modifications are received via a programmatic interface, and wherein the method further comprises:
  • Clause 33 The method as recited in clause 32, wherein the one or more computing devices implement a system resource manager for the system.
  • a non-transitory, computer-readable storage medium storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
  • the hierarchical data structure comprises resource data objects that identify policies applicable to the behavior of a plurality of resources in a system that correspond to the resource data objects; creating a copy of the portion of the hierarchical data structure that is separate from the hierarchical data structure, wherein the portion of the hierarchical data structure remains available for read access;
  • the program instructions cause the one or more computing devices to further implement: prior to committing the modifications to the portion of the hierarchical data structure, replicating one or more writes received for the portion of the hierarchical data structure to the copy of the hierarchical data structure.
  • program instructions cause the one or more computing devices to further implement:
  • Clause 37 The non-transitory, computer-readable storage medium as recited in clause 34, wherein the program instructions cause the one or more computing devices to further implement:
  • Clause 38 The non-transitory, computer-readable storage medium as recited in clause 34, wherein the request to perform the modifications to the portion of the hierarchical data structure, the one or more requests identifying the modifications to perform to the portion of the hierarchical data structure, and the request to commit the modifications are received via a graphical user interface.
  • Clause 39 The non-transitory, computer-readable storage medium as recited in clause 34, wherein, in atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, the program instructions cause the one or more computing devices to implement:
  • a system comprising:
  • a plurality of compute nodes comprising at least one processor and a memory that implement a distributed system, wherein the distributed system is operated on behalf of a plurality of user accounts;
  • the agreement manager configured to:
  • Clause 42 The system as recited in clause 41, wherein the agreement request identifies the authorization scheme for the agreement request, and wherein to determine the authorization scheme, the agreement manager is configured to parse the agreement request to discover the identified authorization scheme.
  • Clause 43 The system as recited in clause 41, wherein the authorization scheme comprises a requirement that the at least one user account approve of the proposed updates.
  • Clause 44 The system as recited in clause 41, wherein the distributed system is a provider network, wherein the updates describe updates to a hierarchical data structure maintained for the provider network comprising a plurality of resource data objects that identify policies applicable to the behavior of resources implemented at one or more network-based services in the provider network corresponding to the resource data objects.
  • Clause 47 The method as recited in clause 45, wherein the authorization scheme comprises a requirement that the at least one approver approve of the proposed updates.
  • Clause 48 The method as recited in clause 45, wherein the authorization scheme comprises one or more quorum requirements for the identified approvers, and wherein evaluating the one or more responses received from the at least one user account identified for approval comprises verifying that the responses indicate approval of a respective minimum number of approvers identified for the one or more quorum requirements.
  • Clause 50 The method as recited in clause 45, further comprising: receiving another agreement request proposing one or more other updates to the hierarchical data structure;
  • Clause 51 The method as recited in clause 45, further comprising:
  • Clause 53 The method as recited in clause 45, wherein the distributed system is a provider network, wherein the resources are implemented as part of one or more network-based services in the provider network, and wherein the agreement request and the responses are received via an interface of the provider network.
  • Clause 54 A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
  • Clause 55 The non-transitory, computer-readable storage medium as recited in clause 54, wherein the agreement request comprises one or more instructions to perform the one or more updates to the distributed system and wherein directing performance of the one or more updates to the distributed system comprises executing the one or more instructions in the agreement request.
  • Clause 56 The non-transitory, computer-readable storage medium as recited in clause 54, wherein the authorization scheme comprises one or more quorum requirements for the identified approvers, and wherein, in evaluating the one or more responses received from the at least one user account identified for approval, the program instructions cause the one or more computing devices to implement verifying that the responses indicate approval of a respective minimum number of approvers identified for the one or more quorum requirements.
  • Clause 57 The non-transitory, computer-readable storage medium as recited in clause 54, wherein the hierarchical data structure identifies different groups of user accounts for the plurality of user accounts, and wherein the one or more quorum requirements correspond to different ones of the groups of user accounts.
  • Clause 58 The non-transitory, computer-readable storage medium as recited in clause 54, wherein the program instructions cause the one or more computing devices to further implement:
  • Clause 60 The non-transitory, computer-readable storage medium as recited in clause 54, wherein the distributed system is a provider network, wherein the updates describe updates to a hierarchical data structure maintained for the provider network comprising a plurality of resource data objects that identify policies applicable to the behavior of resources implemented at one or more network-based services in the provider network corresponding to the resource data objects, and wherein the agreement request and the responses are received via an interface of the provider network.
  • a system comprising:
  • a data store that maintains a hierarchy of resource data objects, wherein the hierarchy of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in the system;
  • Clause 62 The system as recited in clause 61, wherein the validation of the policy performed at the remote validation agent is semantic validation, and wherein the system resource manager is further configured to:
  • Clause 63 The system as recited in clause 61, wherein the data store is a hierarchical data store, and wherein to apply the policy to the one resource data object, the system resource manager is configured to link a policy data object for the policy in the hierarchical data store to the one resource data object.
  • Clause 64 The system as recited in clause 61, wherein the system is a provider network that implements a plurality of different network-based services, wherein the resources are implemented as part of the different network-based services, and wherein the system resource manager is implemented as another one of the network-based services.
  • Clause 66 The method as recited in clause 65, wherein the validation of the policy initiated at the remote validation agent is a semantic policy evaluation that evaluates content of the policy to determine whether the policy is enforceable.
  • the one or more computing devices implement a resource manager for the distributed system
  • Clause 69 The method as recited in clause 68, wherein at least one of the one or more sources is the resource manager.
  • Clause 70 The method as recited in clause 65, wherein the policy validation event is triggered in response to an attempt to modify one of the resources, and wherein the policy action allows or denies the modification to the resource.
  • Clause 71 The method as recited in clause 65, wherein the policy indicates one of synchronous or asynchronous processing behavior for the validation of the policy.
  • Clause 72 The method as recited in clause 65, wherein the policy is associated with a network endpoint that identifies the remote validation agent, wherein the validation information is sent to the network endpoint to initiate the validation at the remote validation agent.
  • Clause 73 The method as recited in clause 65, wherein the validation result indicates that the policy is valid, and wherein allowing or denying a policy action that triggered the policy validation event according to the received validation result comprises:
  • a non-transitory, computer-readable storage medium storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
  • detecting a policy validation event for a policy applicable to manage one or more resources in a distributed system wherein respective resource data objects corresponding to a plurality of resources in the distributed system including the one or more resources are maintained in a hierarchical data structure in a hierarchical data store, wherein the respective resource data objects identify policies including the policy applicable to the resources in the distributed system; identifying a remote validation agent according to the policy;
  • Clause 75 The non-transitory, computer-readable storage medium as recited in clause 74, wherein the policy is one of a plurality of policy types, and wherein the validation of the policy initiated at the remote validation agent is a syntactic policy evaluation that evaluates the policy with respect to a policy schema for the one policy type to determine whether the policy conforms to the policy schema.
  • Clause 76 The non-transitory, computer-readable storage medium as recited in clause 74, wherein the validation of the policy initiated at the remote validation agent is a semantic policy evaluation that evaluates content of the policy to determine whether the policy is enforceable.
  • Clause 77 The non-transitory, computer-readable storage medium as recited in clause 74, wherein the policy action is an action to create the policy, wherein the validation result indicates that the policy is valid, and wherein, in allowing or denying a policy action that triggered the policy validation event according to the received validation result, the program instructions cause the one or more computing devices to implement:
  • Clause 78 The non-transitory, computer-readable storage medium as recited in clause 74, wherein the detecting a policy validation event, the identifying the remote validation agent, the receiving the validation result, and the allowing or denying the policy action are performed by a resource manager for the distributed system, and wherein the program instructions cause the one or more computing devices to further implement:
  • evaluating, by the remote validation agent, the policy based, at least in part, on the validation information to determine whether the policy is valid; and sending, by the remote validation agent, the validation result to the resource manager indicating whether the policy is valid.
  • Clause 79 The non-transitory, computer-readable storage medium as recited in clause 74, wherein the policy action is an action to enforce the policy, wherein the validation result indicates that the policy is valid, and wherein, in allowing or denying a policy action that triggered the policy validation event according to the received validation result, the program instructions cause the one or more computing devices to implement:
  • Clause 80 The non-transitory, computer-readable storage medium as recited in clause 74, wherein the distributed system is a provider network that implements a plurality of different network-based services, wherein the one or more resources are implemented as part of the different network-based services, and wherein the detecting a policy validation event, the identifying the remote validation agent, the receiving the validation result, and the allowing or denying the policy action are performed by a resource manager for the distributed system implemented as another one of the network-based services.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Resource data objects describing resources in a system may be maintained in multiple different hierarchies for applying policies to manage the resources. Lookup requests may access the different hierarchies to determine which policies are applicable to a given resource based on the policies identified in each of the hierarchies. Modifications to hierarchies may be performed in isolation so that the application of policies in other hierarchies is unchanged by modifications to a different hierarchy. Access restrictions may be enforced with respect to hierarchies so that different users may be permitted access to different hierarchies for system resource management.

Description

DIFFERENT HIERARCHIES OF RESOURCE DATA OBJECTS FOR MANAGING
SYSTEM RESOURCES
BACKGROUND
[0001] Large systems with many users often require complex management schemes in order to ensure that both users and system components are appropriately utilized for performing operations. Instead of reconfiguring or redesigning system components each time changes in the appropriate actions or behaviors taken by system components on behalf of users are to be implemented, resource management systems have been developed to allow for the separate management of actions and behaviors that may be performed by system components. Access privileges, for instance, may be defined for one or multiple users with respect to certain system components in a resource management system so that when access requests from the users directed to the certain system components are received, the resource management system may indicate to the system components which requests may or may not be performed based on the defined access privileges. In this way, resource management systems reduce the costs associated with modifying or enforcing actions or behaviors of system components by reducing the number of changes that have to be implemented directly at system components. However, as the size of systems continues to increase, the ability of resource management systems to cope with growing numbers of system components in order to define and apply appropriate actions or behaviors for the system components may become less efficient without further capabilities to optimally manage system components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] FIG. 1 is a logical block diagram illustrating different hierarchies of resource data objects for managing system resources, according to some embodiments.
[0003] FIG. 2 is a logical block diagram illustrating a provider network that implements a resource management service that provides different hierarchies of resource data objects for managing provider network resources, according to some embodiments.
[0004] FIG. 3 is a logical block diagram illustrating a resource management service and a hierarchical data store, according to some embodiments.
[0005] FIG. 4 is a logical block diagram illustrating interactions between clients and a resource management service and between a resource management service and other services, according to some embodiments.
[0006] FIGS. 5 A and 5B are logical illustrations of directory structures that may store resource data objects, hierarchies of resource data objects, access locks for hierarchies, and draft copies of bulk edits to hierarchies of resource data objects in a hierarchical data store, according to some embodiments.
[0007] FIG. 6 illustrates interactions to manage hierarchies at a resource management service, according to some embodiments.
[0008] FIG. 7 illustrates interactions to manage policies within hierarchies at a resource management service, according to some embodiments.
[0009] FIG. 8 is a high-level flowchart illustrating methods and techniques to implement maintaining different hierarchies of resource data objects for managing system resources, according to some embodiments.
[0010] FIG. 9 is a high-level flowchart illustrating methods and techniques to handle a policy lookup request for a resource data object, according to some embodiments.
[0011] FIG. 10 is a high-level flowchart illustrating methods and techniques to handle a request to add a resource data object, according to some embodiments.
[0012] FIG. 11 is a logical block diagram illustrating atomic application of multiple updates to a hierarchical data structure, according to some embodiments.
[0013] FIG. 12 illustrates interactions to perform a bulk edit at a storage engine that atomically applies multiple changes to a hierarchical data structure, according to some embodiments.
[0014] FIG. 13 is a high-level flowchart illustrating methods and techniques to perform atomic application of multiple updates to a hierarchical data structure, according to some embodiments.
[0015] FIG. 14 is a logical block diagram illustrating multi-party updates to a distributed system, according to some embodiments.
[0016] FIG. 15 illustrates interactions to submit agreement requests for updates to organizations, according to some embodiments.
[0017] FIG. 16 illustrates a state diagram for agreement requests, according to some embodiments.
[0018] FIG. 17 is a high-level flowchart illustrating methods and techniques to implement multi-party updates to a distributed system, according to some embodiments.
[0019] FIG. 18 is a logical block diagram illustrating remote policy validation for managing distributed system resources, according to some embodiments.
[0020] FIG. 19 is a logical block diagram illustrating a policy manager for resource management service policies applicable to provider network resources, according to some embodiments.
[0021] FIG. 20 illustrates interactions to manage policy types and policies in resource management service, according to some embodiments.
[0022] FIG. 21 illustrates interactions to attach policies to resource data objects, according to some embodiments.
[0023] FIG. 22 illustrates an example graphical user interface for creating and editing policies, according to some embodiments.
[0024] FIG. 23 is a high-level flowchart illustrating methods and techniques to implement remote policy validation for managing distributed system resources, according to some embodiments.
[0025] FIG. 24 is a high-level flowchart illustrating methods and techniques to implement policy validation at a remote validation agent, according to some embodiments.
[0026] FIG. 25 is an example computer system, according to various embodiments.
[0027] While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words "include," "including," and "includes" indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words "have," "having," and "has" also indicate open-ended relationships, and thus mean having, but not limited to. The terms "first," "second," "third," and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated.
[0028] Various components may be described as "configured to" perform a task or tasks. In such contexts, "configured to" is a broad recitation generally meaning "having structure that" performs the task or tasks during operation. As such, the component can be configured to perform the task even when the component is not currently performing that task (e.g., a computer system may be configured to perform operations even when the operations are not currently being performed). In some contexts, "configured to" may be a broad recitation of structure generally meaning "having circuitry that" performs the task or tasks during operation. As such, the component can be configured to perform the task even when the component is not currently on. In general, the circuitry that forms the structure corresponding to "configured to" may include hardware circuits.
[0029] Various components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase "configured to." Reciting a component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) interpretation for that component.
[0030] "Based On." As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase "determine A based on B." While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
[0031] The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
DETAILED DESCRIPTION
[0032] Various embodiments of different hierarchies of resource data objects for managing system resources are described herein. Managing system resources often involves defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources. For example, security policies, such as access rights or permitted actions for system resources, may be defined and enforced for users of the system resources. In various embodiments, data describing the resources of a system may be maintained that also describes these permitted behaviors. For example, data objects describing system resources may be maintained to identify policies that indicate the permitted behaviors for the system resources. In order to apply the same policies to multiple resource data objects, a hierarchy or structure of the resource data objects may be implemented. For example, a tree structure may be implemented that arranges the resource data objects in groups, directories, or other sets of resource data objects which apply those policies inherited along the path of the tree structure from the resource data object to the root of the tree structure. In this way, policies applied to parent nodes (e.g., the groups, directories, or other set of resource data objects) may be inherited and applied to child nodes (e.g., the resource data objects in the groups, directories, or sets).
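By way of illustration only, and not as part of the disclosed embodiments, the following Python sketch shows the kind of policy inheritance described above: a tree of resource data objects in which a lookup collects the policies applied along the path from a node to the root. All names (e.g., ResourceNode, applicable_policies) and example policies are hypothetical.

    # Hypothetical sketch of policy inheritance along a path to the root.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ResourceNode:
        name: str
        policies: list = field(default_factory=list)   # policies applied directly to this node
        parent: Optional["ResourceNode"] = None

    def applicable_policies(node: ResourceNode) -> list:
        """Walk from the node to the root, accumulating inherited policies."""
        found = []
        while node is not None:
            found.extend(node.policies)
            node = node.parent
        return found

    root = ResourceNode("root", policies=["audit-logging"])
    servers = ResourceNode("servers", policies=["patch-weekly"], parent=root)
    server_42 = ResourceNode("server-42", parent=servers)
    print(applicable_policies(server_42))   # ['patch-weekly', 'audit-logging']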
[0033] Applying policies to a single structure of resource data objects, however, limits the effectiveness of the single structure to apply policies to resource data objects that could be grouped differently than the groups, directories, or sets defined by the single structure. For example, a structure of resource data objects that arranges the resource data objects into groups based on resource type (e.g., servers, network routers, storage devices, user accounts, etc.) may provide an optimal structure for applying policies common to one resource type, but make it difficult to apply policies to the various different resource types that are utilized as part of a single department, function, or business unit within an organization (e.g., as the department may have some servers, some network routers, some storage devices, and some user accounts which would have to be individually identified within the larger groups of servers, network routers, storage devices, and user accounts in order to apply the same policy). In various embodiments, multiple hierarchies of the same resource data objects may be maintained so that policies may be optimally applied to different arrangements of the same resource data objects. Consider the example given above: instead of individually identifying the servers, network routers, storage devices, and user accounts in order to apply the same policy in the larger groups of servers, network routers, storage devices, and user accounts, a different hierarchy of resource data objects that groups resource data objects by department may allow for a policy applied to the department node in the hierarchy to have the policy inherited and thus applied for each of those resource data objects in the same department (and not apply the policy to those resource data objects not in the department).
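For illustration only (not part of the disclosed embodiments), the sketch below arranges the same resource data objects in two hypothetical hierarchies, one by resource type and one by department, so that a policy attached to a department group applies to every member of that group without touching the type-based hierarchy. All identifiers and policy names are assumptions.

    # Hypothetical sketch: one set of resource data objects, two hierarchies.
    resources = {"srv-1": {}, "srv-2": {}, "rtr-1": {}, "acct-1": {}}

    by_type = {"servers": ["srv-1", "srv-2"], "routers": ["rtr-1"], "accounts": ["acct-1"]}
    by_department = {"engineering": ["srv-1", "rtr-1", "acct-1"], "finance": ["srv-2"]}

    # Policies attached to groups, recorded per hierarchy.
    group_policies = {
        "by_type": {"servers": ["patch-weekly"]},
        "by_department": {"engineering": ["engineering-cost-center"]},
    }

    def policies_for(resource_id, hierarchy_name, hierarchy):
        attached = group_policies.get(hierarchy_name, {})
        return [p for group, members in hierarchy.items()
                if resource_id in members for p in attached.get(group, [])]

    print(policies_for("srv-1", "by_type", by_type))               # ['patch-weekly']
    print(policies_for("srv-1", "by_department", by_department))   # ['engineering-cost-center']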
[0034] FIG. 1 is a logical block diagram illustrating different hierarchies of resource data objects for managing system resources, according to some embodiments. Data store 110 may store a collection of resource data objects 112. Resource data objects 112 may describe resources 142 implemented as part of system 140. For example, resource data objects 112 may be files, data structures, records, or other data that describe physical system resources, such as computing devices (e.g., servers), networking devices, or storage devices, or virtual system resources, such as user accounts, user data (e.g., data objects such as database tables, data volumes, data files, etc.), user resource allocations (e.g., allocated resource bandwidth, capacity, performance, or other usage of system resources as determined by credits or budgets), virtual computing, networking, and storage resources (e.g., compute instances, clusters, or nodes), or any other component, function or process operating in system 140. Various controls, actions, configurations, operations, or other definitions of the behavior of resources 142 may be managed by applying policies 150 to one or more of the resource data objects so that when various operations are performed by or on behalf of resources 142 in system 140, a lookup operation may be performed to determine which policies are applied to the resource data object corresponding to a given resource. In this way, management of resources 142 may be separately described and maintained for resources 142, allowing for the behaviors of resources 142 applied by policies to be easily applied, configured, changed, or enforced with respect to individual resources 142, without having to modify the resources 142 directly to enforce policies.
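As an illustrative sketch only (not the claimed implementation), the lookup just described might be consulted when an operation is attempted: the policies applicable to a resource are gathered from each hierarchy and the operation proceeds only if no applicable policy denies it. The policy shape and the lookup callables here are hypothetical.

    # Hypothetical sketch: enforcement by policy lookup across hierarchies.
    def lookup_across_hierarchies(resource_id, hierarchies):
        """hierarchies: iterable of callables that return policies for resource_id."""
        applicable = []
        for lookup in hierarchies:
            applicable.extend(lookup(resource_id))
        return applicable

    def is_allowed(operation, applicable_policies):
        # Assumed policy shape: {"denies": {"terminate", ...}}
        return not any(operation in policy.get("denies", set()) for policy in applicable_policies)

    security_lookup = lambda rid: [{"denies": {"terminate"}}] if rid == "srv-1" else []
    cost_lookup = lambda rid: [{"denies": set()}]

    policies = lookup_across_hierarchies("srv-1", [security_lookup, cost_lookup])
    print(is_allowed("terminate", policies))   # False
    print(is_allowed("reboot", policies))      # True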
[0035] FIG. 1 illustrates that different hierarchies 120 of resource data objects 112 may be maintained. For example, hierarchy of resource data objects 120a is configured to maintain the groupings of resource data objects 112 differently than in hierarchy of resource data objects 120b. Consider a scenario where hierarchy of resources 120a arranges user accounts represented by the resource data objects 112 by user title in an organization (e.g., senior vice-president, vice-president, director, manager, team lead, etc.). Hierarchy 120a may be accessed and configured to apply different data access policies to user accounts based on user title (e.g., granting user accounts with higher titles greater data access and user accounts with lower titles lesser data access) by applying the different data access policies to different groups within the hierarchy (e.g., by applying a particular data access policy to a group with a particular user title, all user accounts that are members of the group as maintained in hierarchy 120a would inherit the application of that data access policy). However, in order to apply cost allocation policies (e.g., policies that define the budgets, monetary accounts or funds, or other resources expended to perform various operations), hierarchy of resources 120b may arrange user accounts by business unit or function (e.g., product category A, product category B, engineering, finance, legal, etc.) so that by applying a cost allocation policy to the different business units or functions, the costs incurred by user accounts grouped in the same department (e.g., vice-president, director, manager, team lead, etc. in product category B) may be deducted or obtained from a specific budget or monetary account. Because user accounts in hierarchy 120b may be arranged by business unit, hierarchy 120b may be easily updated to apply particular cost allocation policies to different business units or functions (e.g., by applying a particular cost allocation policy to a group for a particular business unit or function, all user accounts that are members of the group as maintained in hierarchy 120b would inherit the application of that cost allocation policy).
[0036] Maintaining different hierarchies 120 allows for the application of policies 150 to be more efficiently optimized. In large scale systems, such as provider network 200 discussed below with regard to FIG. 2, hundreds of thousands or millions of resources may be managed. Optimized arrangement of the different resources in different hierarchies allows for more efficient application of policies to the resources described by the resource data objects in the different hierarchies, as noted in the example scenario given above. In turn, policy lookup mechanisms for the resources may be automated so that changes or updates to policies may be applied to the hierarchies of the resource data objects, and enforced upon demand for resources when lookup operations for the resources are performed.
[0037] Hierarchies may also allow for the management of resources to be more easily distributed to different users. For example, access to hierarchies may be limited to specific users, so that users that manage system resources using one hierarchy may not have to understand other arrangements of resource data objects or other policies applied in other hierarchies, effectively providing isolation between hierarchies. In this way, modifications to hierarchies (e.g., such as changes to the arrangement of resource data objects or application of policies) may be made concurrently without interfering with other resource management changes. For instance, security changes may be made to a security hierarchy while changes to a cost allocation hierarchy are made without encountering conflicts (e.g., read or write locks on resource data objects that prevent changes from being performed). Moreover, access to sensitive management information (e.g., security policies or cost allocation information) may be limited by restricting the users able to view or change a hierarchy, so that users without access permission for a hierarchy may not view or make changes to the hierarchy. For example, client 130a may present identification credentials that grant permission to access hierarchy 120a, while client 130b may present identification credentials that grant permission to access hierarchy 120b. However, if client 130b were to present the same identification credentials to access hierarchy 120a, client 130b may be denied access to hierarchy 120a as the presented identification credentials may not be granted access to hierarchy 120a.
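The following is a minimal sketch, offered only for illustration, of the kind of per-hierarchy access restriction described above: a client's credential must be permitted for a hierarchy before a modification to that hierarchy is accepted. The permission table, credential strings, and function names are assumptions.

    # Hypothetical sketch: restricting hierarchy access to permitted credentials.
    hierarchy_permissions = {
        "security": {"security-admin-key"},
        "cost-allocation": {"finance-admin-key"},
    }

    class AccessDenied(Exception):
        pass

    def modify_hierarchy(hierarchy_name, credential, change):
        if credential not in hierarchy_permissions.get(hierarchy_name, set()):
            raise AccessDenied(f"credential not permitted for hierarchy {hierarchy_name!r}")
        print(f"applying {change!r} to {hierarchy_name}")   # placeholder for the real update

    modify_hierarchy("cost-allocation", "finance-admin-key", "attach budget policy")
    try:
        modify_hierarchy("security", "finance-admin-key", "attach access policy")
    except AccessDenied as err:
        print(err)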
[0038] In some embodiments, the application of policies or arrangement of resource data objects may be limited by the type or creator of the hierarchy. For instance, security policies may only be applied to a hierarchy created by a user with the appropriate credentials for managing resource security policies. In some embodiments, certain policies or types of policies may be subject to application limitations. For instance, only one instance of a cost allocation policy may be applied at one out of multiple hierarchies (so that other hierarchies may not have conflicting cost allocation policies applied).
[0039] Please note, FIG. 1 is provided as a logical illustration of maintaining different hierarchies of resource data objects, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices implementing a data store, system, or clients, or the number, type, or arrangements of hierarchies or resource data objects. For example, in some embodiments, resource data objects may be assigned to or members of different groups, which are also nodes in a hierarchy. Different arrangements of groups, containers, or other collections of resource data objects may be implemented for each hierarchy. In some embodiments, not all resource data objects may be present in every hierarchy.
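Purely as an illustrative sketch under assumed structures (not the claimed validation logic), a check like the one below could decide whether a policy of a restricted type, such as a cost allocation policy, may be attached when a policy of that type is already applied in another hierarchy.

    # Hypothetical sketch: limiting some policy types to a single hierarchy.
    attached = {   # hierarchy name -> (policy_type, policy_id) pairs already applied
        "cost-allocation": [("cost_allocation", "budget-2017")],
        "security": [("access_control", "deny-public")],
    }

    SINGLE_HIERARCHY_TYPES = {"cost_allocation"}

    def may_attach(policy_type, target_hierarchy):
        if policy_type not in SINGLE_HIERARCHY_TYPES:
            return True
        # Permit only if no other hierarchy already applies a policy of this type.
        return all(h == target_hierarchy or
                   all(ptype != policy_type for ptype, _ in policies)
                   for h, policies in attached.items())

    print(may_attach("cost_allocation", "security"))          # False: already applied elsewhere
    print(may_attach("cost_allocation", "cost-allocation"))   # True
    print(may_attach("access_control", "cost-allocation"))    # True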
[0040] The specification first describes an example of a provider network implementing multiple different resources as part of offering different services to clients of the provider network. The provider network may also implement a resource management service that maintains different hierarchies of resource data objects for managing provider network resources corresponding to the resource data objects, according to various embodiments. Included in the description of the example resource management service are various aspects of the example resource management service along with the various interactions between the resource management service, other services in the provider network, and clients of the provider network. The specification then describes a flowchart of various embodiments of methods for maintaining different hierarchies of resource data objects for managing provider network resources. Next, the specification describes an example system that may implement the disclosed techniques. Various examples are provided throughout the specification.
[0041] FIG. 2 is a logical block diagram illustrating a provider network that implements a resource management service that provides different hierarchies of resource data objects for managing provider network resources, according to some embodiments. Provider network 200 may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients 270. Provider network 200 may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system 2500 described below with regard to FIG. 25), needed to implement and distribute the infrastructure and services offered by the provider network 200. In some embodiments, provider network 200 may implement computing service(s) 210, networking service(s) 220, storage service(s) 230, resource management service 240 (which is discussed in detail below with regard to FIGS. 3 - 7), and/or any other type of network-based services 250 (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services as well as services for operating the services offered by provider network 200, including deployment service 252, billing service 254, access management service 256, and resource tag service 258). Clients 270 may access these various services offered by provider network 200 via network 260. Likewise, network-based services may themselves communicate and/or make use of one another to provide different services. For example, various ones of computing service(s) 210, networking service(s) 220, storage service(s) 230, and/or other service(s) 250 may look up policies applied to resource data objects in different hierarchies maintained as part of resource management service 240 describing resources in the services in order to enforce behaviors, actions, configurations, or controls indicated in the policies.
[0042] In various embodiments, the components illustrated in FIG. 2 may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components of FIG. 2 may be implemented by a system that includes a number of computing nodes (or simply, nodes), each of which may be similar to the computer system embodiment illustrated in FIG. 25 and described below. In various embodiments, the functionality of a given service system component (e.g., a component of the resource management service or a component of the computing service) may be implemented by a particular node or may be distributed across several nodes. In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one storage service system component).
[0043] Computing service(s) 210 may provide computing resources to client(s) 270 of provider network 200. These computing resources may in some embodiments be offered to clients in units called "instances," such as virtual or physical compute instances or storage instances. A virtual compute instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor) or machine image. A number of different types of computing devices may be used singly or in combination to implement compute instances, in different embodiments, including general purpose or special purpose computer servers, storage devices, network devices and the like. In some embodiments, clients 270 or any other user may be configured (and/or authorized) to direct network traffic to a compute instance.
[0044] Compute instances may operate or implement a variety of different platforms, such as application server instances, Java™ virtual machines (JVMs), general purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like, or high-performance computing platforms suitable for performing client 270 applications, without for example requiring the client 270 to access an instance. In some embodiments, compute instances have different types or configurations based on expected uptime ratios. The uptime ratio of a particular compute instance may be defined as the ratio of the amount of time the instance is activated to the total amount of time for which the instance is reserved. Uptime ratios may also be referred to as utilizations in some implementations. If a client expects to use a compute instance for a relatively small fraction of the time for which the instance is reserved (e.g., 30% - 35% of a year-long reservation), the client may decide to reserve the instance as a Low Uptime Ratio instance, and pay a discounted hourly usage fee in accordance with the associated pricing policy. If the client expects to have a steady-state workload that requires an instance to be up most of the time, the client may reserve a High Uptime Ratio instance and potentially pay an even lower hourly usage fee, although in some embodiments the hourly fee may be charged for the entire duration of the reservation, regardless of the actual number of hours of use, in accordance with pricing policy. An option for Medium Uptime Ratio instances, with a corresponding pricing policy, may be supported in some embodiments as well, where the upfront costs and the per-hour costs fall between the corresponding High Uptime Ratio and Low Uptime Ratio costs.
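As a worked arithmetic illustration only, the sketch below computes an uptime ratio as defined above and picks an uptime-ratio tier; the tier boundaries shown are assumptions for illustration and are not pricing terms of any actual service.

    # Hypothetical uptime-ratio computation and tier selection.
    HOURS_IN_YEAR = 365 * 24

    def uptime_ratio(hours_active, hours_reserved):
        return hours_active / hours_reserved

    ratio = uptime_ratio(hours_active=0.32 * HOURS_IN_YEAR, hours_reserved=HOURS_IN_YEAR)

    def tier(ratio):
        if ratio < 0.40:        # assumed boundary, for illustration only
            return "Low Uptime Ratio"
        if ratio < 0.75:        # assumed boundary, for illustration only
            return "Medium Uptime Ratio"
        return "High Uptime Ratio"

    print(f"{ratio:.0%} -> {tier(ratio)}")   # 32% -> Low Uptime Ratio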
[0045] Compute instance configurations may also include compute instances with a general or specific purpose, such as computational workloads for compute intensive applications (e.g., high-traffic web applications, ad serving, batch processing, video encoding, distributed analytics, high-energy physics, genome analysis, and computational fluid dynamics), graphics intensive workloads (e.g., game streaming, 3D application streaming, server-side graphics workloads, rendering, financial modeling, and engineering design), memory intensive workloads (e.g., high performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis), and storage optimized workloads (e.g., data warehousing and cluster file systems). Compute instance configurations may also differ in size, such as a particular number of virtual CPU cores, memory, cache, and storage, as well as any other performance characteristic. Configurations of compute instances may also include their location in a particular data center, availability zone, or geographic location, etc., and (in the case of reserved compute instances) reservation term length.
[0046] Networking service(s) 220 may implement various networking resources to configure or provide virtual networks, such as virtual private networks (VPNs), among other resources implemented in provider network 200 (e.g., instances of computing service(s) 210 or data stored as part of storage service(s) 230) as well as control access with external systems or devices. For example, networking service(s) 220 may be configured to implement security groups for compute instances in a virtual network. Security groups may enforce one or more network traffic policies for network traffic at members of the security group. Membership in a security group may not be related to physical location or implementation of a compute instance. The number of members or associations with a particular security group may vary and may be configured.
[0047] Networking service(s) 220 may manage or configure the internal network for provider network 200 (and thus may be configured for implementing various resources for a client 270). For example, an internal network may utilize IP tunneling technology to provide a mapping and encapsulating system for creating an overlay network on the network and may provide a separate namespace for the overlay layer and the internal network layer. Thus, in this example, the IP tunneling technology provides a virtual network topology; the interfaces that are presented to clients 270 may be attached to the overlay network so that when a client 270 provides an IP address that they want to send packets to, the IP address is run in virtual space by communicating with a mapping service (or other component or service not illustrated) that knows where the IP overlay addresses are.
[0048] Storage service(s) 230 may be one or more different types of services that implement various storage resources to provide different types of storage. For example, storage service(s) 230 may be an object or key-value data store that provides highly durable storage for large amounts of data organized as data objects. In some embodiments, storage service(s) 230 may include an archive long-term storage solution that is highly-durable, yet not easily accessible, in order to provide low-cost storage. In some embodiments, storage service(s) 230 may provide virtual block storage for other computing devices, such as compute instances implemented as part of virtual computing service 210. For example, a virtual block-based storage service may provide block level storage for storing one or more data volumes mapped to particular clients, providing virtual block-based storage (e.g., hard disk storage or other persistent storage) as a contiguous set of logical blocks. Storage service(s) 230 may replicate stored data across multiple different locations, fault tolerant or availability zones, or nodes in order to provide redundancy for durability and availability for access.
[0049] In some embodiments, storage service(s) 230 may include resources implementing many different types of databases and/or database schemas. Relational and non-relational databases may be implemented to store data, as well as row-oriented or column-oriented databases. For example, a database service, such as a key-value data store, may store data according to a data model in which each table maintained on behalf of a client contains one or more items, and each item includes a collection of attributes. In such a database, the attributes of an item may be a collection of name-value pairs, in any order, and each attribute in an item may have a name, a type, and a value. Some attributes may be single valued, such that the attribute name is mapped to a single value, while others may be multi-value, such that the attribute name is mapped to two or more values.
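For illustration only, the short sketch below models an item in such a key-value data model as a collection of name-value attributes, some single valued and some multi-valued; the item contents and helper name are assumptions.

    # Hypothetical sketch of a key-value item with single- and multi-valued attributes.
    item = {
        "ResourceId": "srv-1",                 # single-valued attribute
        "Owner": "team-alpha",                 # single-valued attribute
        "Tags": ["production", "web-tier"],    # multi-valued attribute
    }

    def attribute_values(item, name):
        value = item.get(name)
        return value if isinstance(value, list) else [value]

    print(attribute_values(item, "Tags"))    # ['production', 'web-tier']
    print(attribute_values(item, "Owner"))   # ['team-alpha']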
[0050] In some embodiments, storage service(s) 230 may implement a hierarchical data storage service, such as hierarchical data store 350 in FIG. 3 discussed below. A hierarchical data storage service may store, manage, and maintain hierarchical data structures, such as a directory structure discussed below with regard to FIG. 5A. Clients of a hierarchical data storage service may operate on any subset or portion of a hierarchical data structure maintained in the data storage service with transactional semantics and/or may perform path-based traversals of hierarchical data structures. Such features allow clients to access hierarchical data structures in many ways. For instance, clients may utilize transactional access requests to perform multiple operations concurrently, affecting different portions (e.g., nodes) of the hierarchical data structure (e.g., reading parts of the hierarchical data structure, adding a node, and indexing some of the node's attributes, while imposing the requirement that the resulting updates of the operations within the transaction are isolated, consistent, atomic and durably stored). As discussed below, in at least some embodiments, the hierarchical data stored in a hierarchical data storage service may be multiple hierarchies of resource data objects on behalf of resource management service 240.
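The sketch below is only an illustrative stand-in for the transactional and path-based access just described: several operations on a hierarchical structure are grouped into one transaction that is committed atomically only if every operation succeeds, and nodes are addressed by path. The in-memory dictionary and helper names are assumptions, not the hierarchical data store itself.

    # Hypothetical sketch: path-based traversal and an atomic batch of updates.
    import copy

    tree = {"root": {"departments": {"engineering": {"policies": ["audit"]}}}}

    def resolve(structure, path):
        node = structure
        for part in path.strip("/").split("/"):
            node = node[part]
        return node

    def apply_transaction(structure, operations):
        """Apply all operations to a copy; commit only if every one succeeds."""
        working = copy.deepcopy(structure)
        for op in operations:
            op(working)            # any exception aborts the whole transaction
        structure.clear()
        structure.update(working)

    apply_transaction(tree, [
        lambda s: resolve(s, "/root/departments").update({"finance": {"policies": []}}),
        lambda s: resolve(s, "/root/departments/engineering/policies").append("patch-weekly"),
    ])
    print(resolve(tree, "/root/departments/engineering/policies"))  # ['audit', 'patch-weekly']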
[0051] In various embodiments, provider network 200 may implement various other service(s) 250, including deployment service 252. Deployment service 252 may include resources to instantiate, deploy, and scale other resources (from other network-based services, such as computing service(s) 210, networking service(s) 220, and/or storage service(s) 230) to implement a variety of different services, applications, or systems. For example, deployment service 252 may execute pre-defined deployment schemes which may be configured based, at least in part, on policies applied to resources launched by the deployment service 252 (e.g., a policy that describes the hardware and software configuration of a virtual compute instance launched on behalf of a particular user account).
[0052] Provider network 200 may also implement billing service 254 which may implement components to coordinate the metering and accounting of client usage of network-based services, such as by tracking the identities of requesting clients, the number and/or frequency of client requests, the size of data stored or retrieved on behalf of clients, overall resource bandwidth used by clients, class/type/number of resources requested by clients, or any other measurable client usage parameter. Billing service 254 may maintain a database of usage data that may be queried and processed by external systems for reporting and billing of client usage activity. Similar to deployment service 252, policies applied to resource data objects in hierarchies managed by resource management service 240 may indicate payment accounts, budgets, or responsible parties for which the usage data is to be reported and/or billed.
[0053] Provider network 200 may also implement access management service 256, which may implement user authentication and access control procedures defined for different resources (e.g., instances, user accounts, data volumes, etc.) as described by policies applied to resource data objects in hierarchies at resource management service 240. For example, for a given network-based services request to access a particular compute instance, provider network 200 may implement components configured to ascertain whether the client associated with the access is authorized to configure or perform the requested task. Authorization may be determined, for example, by evaluating an identity, password or other credential against credentials associated with the resources, or evaluating the requested access to the provider network 200 resource against an access control list for the particular resource. For example, if a client does not have sufficient credentials to access the resource, the request may be rejected, for example by returning a response to the requesting client indicating an error condition.
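As a minimal sketch for illustration only, an authorization check of the kind described above might evaluate a request's credential against an access control list for the target resource and return an error response when the check fails; the resource identifiers, credentials, and response shape shown are assumptions.

    # Hypothetical sketch: evaluating a request against an access control list.
    access_control_lists = {
        "instance-123": {"alice-key": {"describe", "reboot"}, "bob-key": {"describe"}},
    }

    def authorize(resource_id, credential, action):
        allowed_actions = access_control_lists.get(resource_id, {}).get(credential, set())
        if action in allowed_actions:
            return {"status": 200, "body": f"{action} permitted on {resource_id}"}
        return {"status": 403, "body": "AccessDenied: insufficient credentials"}

    print(authorize("instance-123", "bob-key", "describe"))   # permitted
    print(authorize("instance-123", "bob-key", "reboot"))     # 403 AccessDenied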
[0054] Provider network 200 may also implement resource tag service 258, which may manage resource attributes for resources of other services (e.g., computing service(s) 210, networking service(s) 220, and/or storage service(s) 230). Resource attributes may be a tag, label, set of metadata, or any other descriptor or information corresponding to a provider network resource, implemented at one of various network-based services of the provider network. Attributes may be represented in various ways, such as a key-value pair, multiple values, or any other arrangement of information descriptive of the resource. Resource attributes for a resource may be maintained as part of resource metadata for the resources at network-based services. Network-based services may create resource metadata and/or attributes when a resource is created by a client. However, a client may wish to modify, remove, and/or add new resource attributes to the resource metadata in order to provide greater flexibility for automating various interactions with the resources utilizing resource metadata. Resource tag service 258 may look up policies for different resources to determine which resource attributes are to be maintained for the different resources, in some embodiments.
[0055] Generally speaking, clients 270 may encompass any type of client configurable to submit network-based services requests to provider network 200 via network 260, including requests for directory services (e.g., a request to create or modify a hierarchical data structure maintained at resource management service 240, etc.). For example, a given client 270 may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client 270 may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of persistent storage resources to store and/or access one or more hierarchical data structures to perform techniques like organization management, identity management, or rights/authorization management. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client 270 may be an application configured to interact directly with network-based services platform 200. In some embodiments, client 270 may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture.
[0056] In some embodiments, a client 270 may be configured to provide access to network- based services to other applications in a manner that is transparent to those applications. For example, client 270 may be configured to integrate with an operating system or file system to provide storage in accordance with a suitable variant of the storage models described herein. However, the operating system or file system may present a different storage interface to applications, such as a conventional file system hierarchy of files, directories and/or folders. In such an embodiment, applications may not need to be modified to make use of the storage system service model. Instead, the details of interfacing to provider network 200 may be coordinated by client 270 and the operating system or file system on behalf of applications executing within the operating system environment.
[0057] Clients 270 may convey network-based services requests (e.g., access requests directed to hierarchies in resource management service 240) to and receive responses from network-based services platform 200 via network 260. In various embodiments, network 260 may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients 270 and platform 200. For example, network 260 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. Network 260 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client 270 and network-based services platform 200 may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network 260 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between a given client 270 and the Internet as well as between the Internet and network-based services platform 200. It is noted that in some embodiments, clients 270 may communicate with network-based services platform 200 using a private network rather than the public Internet.
[0058] FIG. 3 is a logical block diagram illustrating a resource management service and a hierarchical data store, according to some embodiments. Resource management service 240 may manage the application of policies to resource data objects for resources in provider network 200. As provider network 200 may offer services to a variety of different customers, a collection or set of resource data objects that are managed together may be identified as an organization (although various other terms, including entity, domain, or any other identifier for the collection of resource data objects, may also be used). Resource management service 240 may provide various capabilities to clients of resource management service 240 to create and manage respective organizations which include the resource data objects describing the resources of provider network 200 which are associated with one or more customers of the provider network, including managing which resource data objects (and thus their corresponding resources) are members of an organization. Resource management service 240 may allow for the creation and management of multiple different hierarchies of the resources in an organization. These resources may be further subdivided and assigned into groups (which also may be subdomains, directories, sub-entities, sets, etc.). Groups may consist of any resource that can have a policy applied to it. Resource management service 240 may allow clients to author policies and apply them to the organization, to different groups, or directly to resource data objects.
[0059] Resource management service 240 may implement interface 310, which may provide a programmatic and/or graphical user interface for clients to request the performance of various operations for managing system resources via an organization. For example, the various requests described below with regard to FIGS. 6 and 7 may be formatted according to an Application Programming Interface (API) and submitted via a command line interface or a network-based site interface (e.g., website interface). Other requests that may be submitted via interface 310 may include requests to create an organization or to update an organization (e.g., by adding other resources or inviting other user accounts to join the organization). In some embodiments, an organization may be treated as a resource owned or controlled by the user account that created it, and that account by default may have access permissions to the organization. The user account could then delegate permissions to other user accounts or users using cross-account access or transfer ownership of the organization, in cases where control needs to move to a delegated group or the owner needs to leave the organization.
[0060] Resource management service 240 may implement organization management 320, which may handle the creation of organizations, the updates to or modifications of organizations, the delegation of access permissions to organizations, as well as the arrangement of resource data objects within hierarchies maintained for the organization. For example, upon creation an organization may include a single hierarchy providing an arrangement of resource data objects (e.g., as members of various groups and/or groups within groups, etc.). Organization management 320 may handle the various requests to create additional hierarchies, update hierarchies, or delete hierarchies, as discussed below with regard to FIG. 6. Organization management 320 may also handle requests to add resource data objects to an organization, as discussed below with regard to FIG. 10. For example, organization management 320 may identify which hierarchies a new resource data object should be added to and the location within the hierarchy that the resource data object should be added. In at least some embodiments, organization management 320 may coordinate organization changes between multiple parties, such as adding user accounts to or removing user accounts from an organization, and may implement multi-account agreement management 322 to provide multiparty agreement mechanisms for approving changes to the organization. For example, multi-account agreement management 322 may facilitate an authenticated 2-way handshake mechanism to confirm or deny a potential change to an organization. Multi-account agreement management 322 may expose different mechanisms for multiparty agreements, as discussed below with regard to FIGS. 6 - 8, including emailed invitations, single use tokens, and shared secrets (domains/passwords). When agreement is confirmed, organization management 320 may then perform the agreed upon changes to the organization. Multi-account agreement management 322 may maintain state information and other tracking information to track the progress and approval or disapproval of proposed updates via agreement requests, as discussed below with regard to FIGS. 15-16.
[0061] As noted above, policies may be authored or defined and then applied to various resource data objects, groups, or an entire hierarchy of an organization. Resource management service 240 may implement policy management 330 to handle the authoring of policies as well as the application of policies. Many different types of policies may be applied in order to define different types of behaviors. Some policy types, for instance, may be related to specific behaviors, resources, or actors. Billing, for instance, may have one or various types of billing policies. Resource configuration policy types may configure the operational configuration of resources (e.g., when deployed by deployment service 252). Some policy types can define access controls to resources. Policy management 330 may handle various requests to create a policy of one of many policy types, to define policy types by authoring a policy schema, and to apply policies to resource data objects, groups, or entire hierarchies within an organization, such as those requests discussed below with regard to FIG. 7.
[0062] Policy management 330 may also handle lookup requests for resource data objects, groups, or organizations and perform policy application and conflict resolutions, as discussed below with regard to FIGS. 4 and 9. For example, policies can also be inherited in a chain from the organization down to a group, group of groups, or individual resource data object. If a policy is applied to a parent node in the hierarchy, then the child node (group, group of groups, or individual resource data object) may inherit the policy of the parent node. In this way, the policy applied to the parent node becomes the "default" policy, in the absence of any other policy applications. When there are multiple policies in the inheritance path, for example there is a policy applied at both the hierarchy and group level, then different policies may have different inheritance semantics, which may have to be resolved. In one scenario, access policies may follow the semantics of a set union, where ordering does not matter (e.g., everything is allowed unless explicitly excluded). Billing policies, in another scenario, may implement a "child wins/parent appends" inheritance model where a child policy may be executed, followed by a parent policy. In such scenarios, ordering of policies matters. Thus, policy management 330 may be configured to resolve conflicting policies according to the appropriate inheritance semantics for the policy.
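As one non-limiting illustration of the inheritance semantics discussed above, the following Python sketch resolves the policies found along an inheritance path from the hierarchy root down to a resource data object. The function name resolve_effective_policies, the policy type labels "access" and "billing", and the path encoding are assumptions made for the example and are not taken from the figures; access policies are combined with set-union semantics, while billing policies are ordered so that a child policy executes before its parent ("child wins/parent appends").

```python
# Hypothetical sketch of per-policy-type inheritance semantics; names and the
# path encoding are illustrative only.

def resolve_effective_policies(inheritance_path):
    """Resolve policies found along a path ordered root -> ... -> resource.

    Each element of inheritance_path is a list of (policy_type, policy)
    tuples attached at that node.
    """
    effective = {}
    for node_policies in inheritance_path:
        for policy_type, policy in node_policies:
            if policy_type == "access":
                # Set-union semantics: ordering does not matter.
                effective.setdefault(policy_type, set()).update(policy)
            elif policy_type == "billing":
                # "Child wins / parent appends": policies nearer the resource
                # are placed earlier so they execute first.
                effective.setdefault(policy_type, []).insert(0, policy)
            else:
                # Default: the policy nearest the resource supersedes others.
                effective[policy_type] = policy
    return effective


if __name__ == "__main__":
    path = [
        [("access", {"read"}), ("billing", "organization-payer")],  # hierarchy root
        [("access", {"write"})],                                    # group
        [("billing", "project-budget")],                            # resource data object
    ]
    print(resolve_effective_policies(path))
    # e.g. {'access': {'read', 'write'}, 'billing': ['project-budget', 'organization-payer']}
```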
[0063] In at least some embodiments, policy management 330 may implement policy validation (although in alternative embodiments validation may be delegated in part or in total to other components). Validation of policies may include syntax validation. Syntax validation checks authored policy instances of policy types to determine whether the policy instance is syntactically correct so that the policy can be parsed and evaluated by backend systems that look up the policy. Syntactic validation may be performed, in some embodiments, when a policy is authored. In addition to syntactic validation, some policies may undergo semantic validation. Semantic validation may be performed to ensure that a resource or other information specified in a policy results in a policy that can be enforced. For example, semantic validation could determine whether an AccountId specified in a payer policy is an account in the organization that has a valid payment instrument. In addition to semantically validating the policies themselves, policy management 330 may validate policy applications and organization changes, in order to ensure that the changes do not invalidate policies that are applied within the organization. For example, changes may be validated to ensure that the payer for an organization does not leave the organization. As each policy may have different semantic validation logic, each policy may have a separately configurable semantic validator.
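The validation described above may be sketched, purely for illustration, as a syntactic check followed by a per-policy-type semantic validator; the JSON policy encoding, the validator registry, and the has_valid_payment_instrument flag below are assumptions of the example rather than details of policy management 330.

```python
# Illustrative sketch of syntactic and semantic policy validation.
import json

def validate_syntax(policy_document):
    """Syntactic validation: the policy must parse (JSON is assumed here)
    before backend systems that look it up can evaluate it."""
    try:
        return json.loads(policy_document)
    except json.JSONDecodeError as error:
        raise ValueError(f"policy is not syntactically valid: {error}")

def validate_payer_policy(policy, organization_accounts):
    """Semantic validation for a hypothetical payer policy type: the
    referenced account must be in the organization and able to pay."""
    account = organization_accounts.get(policy.get("AccountId"))
    if account is None or not account.get("has_valid_payment_instrument"):
        raise ValueError("payer policy references an account that cannot pay")

# Each policy type may register its own, separately configurable validator.
SEMANTIC_VALIDATORS = {"payer": validate_payer_policy}

def validate_policy(policy_type, policy_document, organization_accounts):
    policy = validate_syntax(policy_document)
    validator = SEMANTIC_VALIDATORS.get(policy_type)
    if validator is not None:
        validator(policy, organization_accounts)
    return policy

accounts = {"111122223333": {"has_valid_payment_instrument": True}}
print(validate_policy("payer", '{"AccountId": "111122223333"}', accounts))
```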
[0064] Resource management service 240 may implement historical versioning of hierarchies in organizations, in some embodiments. Some services, such as billing service 254, may require the ability to query for historically versioned data, such as which account was the payer of the organization at the end of the previous month (as the current payer may be different due to a change to a hierarchy). In order to provide historical versions of hierarchies (including the policies applied and resource data objects arranged), historical versioning 340 may store prior versions or track or record changes to hierarchies. These prior versions or changes may be associated with particular points in time (e.g., by assigning timestamps). Historical versioning 340 may handle requests for policy lookups across particular ranges of time or at particular points in time. Historical versioning 340 may access the versioned data and return the appropriate policies for the specified time(s). Hierarchy versions may be stored as part of organization data objects 352 in hierarchical data store 350, in some embodiments.
[0065] Hierarchical data store 350 may provide data storage for organization data objects 352, including the resource data objects, policy data objects, and any other data describing the organization, including the multiple hierarchies of the resource data objects, as discussed below with regard to FIGS. 5A-5B. The organization data objects 352 may be maintained within a single hierarchical data structure, though different hierarchies of resource data objects within the single hierarchical data structure may be provided for managing resource data objects, as discussed below with regard to FIG. 5A.
[0066] FIG. 4 is a logical block diagram illustrating interactions between clients and a resource management service and between a resource management service and other services, according to some embodiments. As noted above, clients may interact with resource management service 240 to manage resources. For example, client(s) 410 may submit various organization/policy management requests 412 (e.g., to modify a hierarchy by arranging resource data objects or applying/removing policies). In turn, resource management service 240 may identify the appropriate updates to organization data to be made or to be read, and send organization data updates/reads 422 to hierarchical data storage 350. Hierarchical data storage 350 may execute the received requests to change hierarchical data structures storing the organization data objects in accordance with the update request or retrieve the appropriate data read from the organization data objects according to the hierarchies, and return update acknowledgements/read data 424 to resource management service 240. In turn, resource management service 240 may return the appropriate acknowledgments (e.g., indicating success or failure of the requests) to client(s) 410.
[0067] Service(s) 400 may perform policy lookups 402 with respect to resource data objects corresponding to resources under the control or responsibility of service(s) 400, in various embodiments. For example, an access control service, such as access management service 256, may lookup the access policies for a particular resource (e.g., compute instance or user account) in order to permit or deny an access request. When launching new resources, network configuration information may be maintained in a policy that is applicable to the launched resource and may be retrieved by a policy lookup 402 from a service 400. Policy lookups 402 may be requested via resource management service 240 or, in some embodiments, may be requested directly from the service to the hierarchical data store 350. Latency sensitive services, for instance, may implement local libraries, agents, or interpreters for the organization data maintained at hierarchical data store 350 in order to reduce the number of requests that have to be sent in order to perform a policy lookup.
[0068] FIG. 5A is a logical illustration of directory structures that may store resource data objects and hierarchies of resource data objects in a hierarchical data store, according to some embodiments. Organization data objects (including policy data objects, resource data objects, and groups or groups of groups of data objects) may be maintained in one or multiple directory structures, in various embodiments. For example, organization 500 may utilize directory structure 502 to store the resources and policies that are part of the organization. Index node 510 may provide information for performing a lookup to determine the location of a resource data object or policy data object. Resources node 520 may group resources into various resource types 522 and 524 (e.g., user accounts, virtual compute instances, storage volumes, VPNs, load balancers, etc.) and within the resource types 522 and 524 may be found resource data objects 526 and 528 describing individual resources in the provider network. Similarly, policies node 530 may include different policy types 532 and 534 (which may be created by clients as discussed above). Individual instances of the policy types 536 and 538 may be policy instances applied to resource data objects, groups, groups of groups, or hierarchies.
[0069] Organization 500 may also utilize directory structure 504 to maintain different hierarchies of resource data objects and policy data objects. Hierarchies node 540 may be the group of hierarchies maintained for organization 500, including hierarchy 550 and hierarchy 560. Within each hierarchy, groups 552 and 554 (or groups of groups), and/or any arrangement of the resources included in the group of resources 520, may be linked (as illustrated by the dotted lines) to indicate membership in the group. Similarly, policies, such as policies 536 and 538, may be linked to hierarchies, groups or groups of groups, or individual resource data objects within the hierarchies.
[0070] Different types of hierarchical data structures, such as directory structures 502 and 504, may be stored, managed, and/or represented in order to maintain organization 500. For example, nodes in a hierarchy (e.g., the circle or square shapes) may have a globally unique identifier (GUID), zero or more attributes (key, value pairs), and zero or more links to other nodes. In some embodiments, a group or directory may be one type of node which has zero or more child links to other nodes, either groups/directories or resource data objects/policy data objects. Group nodes may have zero or one parent directory node, implying that directory nodes and links define a tree structure, in some embodiments, as depicted in FIG. 5A. Index 510, hierarchies 540, resources 520, policies 530, hierarchies 550 and 560, resource types 522 and 524, policy types 532 and 534, and groups 552 and 554 may be group/directory nodes. Organization node 500 may be a root node that is the logical root of multiple directory structures and may not be visible to clients of resource management service 240 (which may access individual hierarchies). Resource and policy nodes (represented by squares, such as resource node 526) may be leaf nodes in a directory structure. Leaf nodes may have a unique external Id (e.g., client specified) and client-defined attributes. Leaf nodes can have more than one parent node so that resource data objects and policy data objects can be linked to multiple hierarchies. In some embodiments, all resource data objects are linked to all hierarchies (though in different arrangements as defined by a user), whereas in other embodiments, resource data objects may be linked to only some hierarchies.
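A minimal in-memory sketch of the node and link model described in this paragraph and the next is shown below, assuming Python data classes; the class and field names are hypothetical, and the attach_policy helper illustrates the attachment restriction, discussed in the following paragraph, that no more than one policy of a given type may be attached to the same node.

```python
# Illustrative sketch of nodes with GUIDs, attributes, child links, and
# attachments; not an implementation of any particular hierarchical data store.
import uuid
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node with a globally unique identifier, zero or more attributes,
    and zero or more links to other nodes."""
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))
    attributes: dict = field(default_factory=dict)
    child_links: dict = field(default_factory=dict)   # link name -> child Node
    attachments: list = field(default_factory=list)   # applied policy nodes
    parents: list = field(default_factory=list)       # leaf nodes may have several

def add_child(parent: Node, name: str, child: Node) -> None:
    """Named child links define the tree structure of a directory."""
    parent.child_links[name] = child
    child.parents.append(parent)

def attach_policy(node: Node, policy: Node, policy_type: str) -> None:
    """Attachment links apply a policy to a node; at most one policy of a
    given type may be attached to the same node."""
    if any(p.attributes.get("policy_type") == policy_type for p in node.attachments):
        raise ValueError(f"a {policy_type} policy is already attached")
    policy.attributes["policy_type"] = policy_type
    node.attachments.append(policy)

group = Node(attributes={"name": "example-group"})
policy = Node(attributes={"name": "example-policy"})
attach_policy(group, policy, policy_type="access")
```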
[0071] In some embodiments, a link may be a directed edge between two nodes defining a relationship between the two nodes. There may be many types of links, such as client-visible link types and another link type for internal hierarchical data store operation. In some embodiments, a child link type may create a parent-child relationship between the nodes it connects. For example, a child link can connect resource type node 522 to resource 526. Child links may define the structure of directories (e.g., resources 520, policies 530, hierarchies 540). Child links may be named in order to define the path of the node that the link points to. Another type of client-visible link may be an attachment link. An attachment link may apply a resource data object or policy data object to another node (e.g., group 552, hierarchy 550, etc.) as depicted by the dotted lines. Nodes can have multiple attachments. In some embodiments, some attachment restrictions may be enforced, such as a restriction that not more than one policy node (e.g., policy 536) of policy type 532 can be attached to a same node. A non-visible or implied link type, a reverse link, may also be implemented in some embodiments. Reverse links may be used for optimizing traversal of directory structures for common operations like look-ups (e.g., policy lookups).
[0072] In various embodiments, data objects or nodes in organization 500 can be identified and found by the pathnames that describe how to reach the node starting from the logical root node 500, starting with the link labeled "/" and following the child links separated by path separator "/" until reaching the desired node. For example, resource 526 can be identified using the path: "/index510/resources520/resource526". As some nodes may be children of multiple directory nodes, multiple paths may identify the node. For example, the following path can also be used to identify resource 526: "/hierarchies540/hierarchy550/group552/resource526". Please note that the illustration in FIG. 5A provides many examples of the possible ways in which policy data objects or resource data objects may be linked. As noted earlier, not all policies may be attached to all hierarchies, nor all resource data objects to all hierarchies, and thus the illustrated links are not intended to be limiting. Similarly, directory structures may be differently arranged so that a single directory structure or a greater number of directory structures are utilized.
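The pathname convention described above may be illustrated with a small sketch; the nested-dictionary layout and the generic names (resources, hierarchy-a, group-1, resource-1) are hypothetical stand-ins for the directory structures of FIG. 5A rather than a required representation.

```python
# Illustrative sketch of path-based lookup over named child links.

def resolve_path(root, path):
    """Walk child links from the logical root, one path component at a time."""
    node = root
    for component in path.strip("/").split("/"):
        node = node["children"][component]   # KeyError if the path is invalid
    return node

resource = {"external_id": "resource-1", "children": {}}
directory = {"children": {
    "resources": {"children": {"resource-1": resource}},
    "hierarchies": {"children": {
        "hierarchy-a": {"children": {
            "group-1": {"children": {"resource-1": resource}},
        }},
    }},
}}

# Because a leaf node can have more than one parent, multiple paths identify it.
assert resolve_path(directory, "/resources/resource-1") is resource
assert resolve_path(directory, "/hierarchies/hierarchy-a/group-1/resource-1") is resource
```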
[0073] FIG. 5B is a logical illustration of directory structures that store access locks for hierarchies, and draft copies of bulk edits to hierarchies of resource data objects in a hierarchical data store, according to some embodiments. For example, directory structure 506 may maintain locks node 570. Locks node 570 may have child nodes corresponding to each hierarchy in hierarchies 540, such as hierarchy 550 and hierarchy 560. If a hierarchy node is linked to locks 570, then a lookup upon the hierarchy node will be able to traverse the locks 570 structure, indicating that the hierarchy is available for read and write access. If, however, a node is not found when traversing locks 570, then it may be determined that the hierarchy is not available for write access.
[0074] Drafts node 580 may also logically point to or associate a directory structure 508 that separately maintains different drafts for bulk edit requests. For instance, although the draft directory structure 508 is logically linked with organization 500, a path-based traversal technique that identifies data by traversing from leaf nodes toward the root would not view the logical link as part of a path, so that any path-based traversal would logically separate the drafts from other data stored in organization 500. Each draft node, such as draft 582 and draft 584, may link to a copied hierarchy (e.g., 586 and 588) upon which modifications are performed as part of a bulk edit. When the bulk edit is committed, the link from copied hierarchy 586 is changed from draft 582 to the hierarchies node 540 and the link from the original hierarchy (e.g., hierarchy 550 or 560) to hierarchies node 540 is removed. In this way, a new version of the hierarchy can be easily inserted into hierarchies node 540 without performing operations to copy or relink each child node in the hierarchy. The old versions of hierarchies may remain unlinked until storage space for the old versions is reclaimed (e.g., as part of a background garbage collection process).
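One possible way to model the lock and draft bookkeeping of FIG. 5B, offered only as an illustrative sketch, is shown below; the store dictionary, the draft identifiers, and the function names are assumptions of the example. Removing the lock entry blocks writes, edits are applied to a deep copy held under the draft, and committing simply relinks the copy in place of the original.

```python
# Illustrative sketch of lock nodes and draft copies for bulk edits.
import copy

store = {
    "hierarchies": {"hierarchy-a": {"group-1": ["resource-1"]}},
    "locks": {"hierarchy-a"},   # presence of a lock node means the hierarchy is writable
    "drafts": {},               # draft id -> (hierarchy name, copied hierarchy under edit)
}

def is_writable(hierarchy_name):
    """A hierarchy is available for write access only while its lock node is present."""
    return hierarchy_name in store["locks"]

def start_bulk_edit(hierarchy_name, draft_id):
    """Remove the lock node to block writes and copy the hierarchy into a draft."""
    store["locks"].discard(hierarchy_name)
    store["drafts"][draft_id] = (hierarchy_name,
                                 copy.deepcopy(store["hierarchies"][hierarchy_name]))
    return store["drafts"][draft_id][1]

def commit_bulk_edit(draft_id):
    """Relink the modified copy in place of the original and restore the lock node;
    the unlinked old version is left for later garbage collection."""
    hierarchy_name, modified_copy = store["drafts"].pop(draft_id)
    store["hierarchies"][hierarchy_name] = modified_copy
    store["locks"].add(hierarchy_name)

draft = start_bulk_edit("hierarchy-a", "draft-1")
draft["group-2"] = []                  # edits touch only the draft copy
commit_bulk_edit("draft-1")
print(store["hierarchies"]["hierarchy-a"], is_writable("hierarchy-a"))
```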
[0075] FIG. 6 illustrates interactions to manage hierarchies at a resource management service, according to some embodiments. Clients may submit a create hierarchy request 612 via interface 310. The creation request 612 may include a membership policy which provides for a default arrangement of resource data objects that are automatically added to the hierarchy (e.g., as a result of adding the resource to the organization), or the membership policy may be included as part of a separate request. Resource management service 240 may create a hierarchy directory 614 in hierarchical data store 350, and then send requests to add resources to the hierarchy directory 616 (e.g., by adding links between resource data objects and the new hierarchy directory). Resource management service 240 may then acknowledge the hierarchy creation 618 to client 610.
[0076] Client 610 may submit a request to update the hierarchy 622. Hierarchy update requests may include various requests to add a group, remove a group, add resources to a group, remove resource(s) from a group, add a group to a group, remove a group from a group, or any other arrangement modification to the hierarchy. In turn, resource management service 240 may send an update hierarchy directory request 624 to perform one or more corresponding actions, such as requests to create group sub-directories, remove group sub-directories, add resource data object link(s), or remove resource data object link(s). Upon completion or failure of the requests, resource management service 240 may acknowledge the hierarchy update 626 (which may indicate success or failure).
[0077] Client 610 may submit a request to delete a hierarchy 632 to resource management service 240. Resource management service 240 may send a request to delete the hierarchy directory (which may delete any group(s), or group(s) of groups, within the hierarchy but not resource data objects or policy data objects, which may only be linked to the deleted directory). Instead, the links may be removed (e.g., by hierarchical data store 350 when one of the linked nodes is deleted).
[0078] FIG. 7 illustrates interactions to manage policies within hierarchies at a resource management service, according to some embodiments. Client 710 may submit a request to create a policy 712. For example, the creation request may include a policy definition or content, including an indication of policy type, so that validation of the policy can be performed, as discussed above. Resource management service 240 may add a policy data object 714 representing the policy to hierarchical data store 350 (e.g., storing the policy data object as a new policy data object in the policy directory for the organization). An acknowledgment 716 indicating policy creation success or failure may be returned from resource management service 240 to client 710.
[0079] Client 710 may send a request to apply a policy to one or more resource data objects, groups, or hierarchies 722. In turn, resource management service 240 may send a request to link the policy data object to the hierarchy directory(ies), group(s), or resource data object(s) 724. Resource management service 240 may then acknowledge the application 726 to client 710. Similarly, client 710 may send a request to remove the policy from one or more resource data objects, groups, or hierarchies. Resource management service 240 may then send a request to remove the link from the policy data object to the requested hierarchy directory(ies), group(s) or resource data object(s). Client 710 may send a request 742 to delete a policy. Resource management service 240 may send a request to delete the policy data object 744 and acknowledge the policy deletion 746 to client 710.
[0080] Although FIGS. 2 - 7 have been described and illustrated in the context of a provider network implementing a resource management service for resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 2 - 7 may be easily applied to other resource management systems, components, or devices. For example, private systems and networks implementing multiple system resources may maintain multiple hierarchies of resource data objects for managing the behavior of the system resources. As such, FIGS. 2 - 7 are not intended to be limiting as to other embodiments of a system that may implement a resource management system for system resources. FIG. 8 is a high-level flowchart illustrating methods and techniques to implement maintaining different hierarchies of resource data objects for managing system resources, according to some embodiments. Various different systems and devices may implement the various methods and techniques described below, either singly or working together. For example, a resource management service such as described above with regard to FIGS. 2 - 7 may be configured to implement the various methods. Alternatively, a combination of different systems and devices may implement these methods. Therefore, the above examples and/or any other systems or devices referenced as performing the illustrated method are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
[0081] As indicated at 810, different hierarchies of resource data objects stored in a data store may be maintained. The resource data objects may identify policies applicable to the behavior of resources corresponding to the resource data objects in a system. For example, resource data objects may be maintained to describe system resources (e.g., unique identifier, capabilities, roles, availability, etc.). The resource data objects may be separately arranged in different hierarchies so that policies applied to the resource data objects according to the hierarchies (e.g., by inheritance rules or direct application) may enforce the controls, actions, configurations, operations, or other definitions of the behavior of the corresponding resources. The hierarchies may be maintained in a hierarchical data storage system, as described above with regard to FIG. 3. However, other types of data stores may be implemented to maintain the hierarchies (e.g., by maintaining the data objects, and relationships between the data objects that define the hierarchy so that the hierarchy can be determined).
[0082] As indicated at 820, a request to modify one of the hierarchies may be received. A modification to a hierarchy may include a modification to the arrangement of a hierarchy. For example, resource data objects may be reassigned from one group to another, new groups or groups of groups may be created, groups or groups of groups may be deleted, or any other change to the relationships of resource data objects among the hierarchy may be performed, such as the requests discussed above with regard to FIG. 6. A modification to a hierarchy may also include a change to the application of a policy within the hierarchy, by applying a new policy, removing a policy, changing the application of an existing policy, or changing the definition of a policy, such as the requests discussed above with regard to FIG. 7.
[0083] As indicated at 830, in at least some embodiments, a check or determination may be made as to whether the modification is valid for the hierarchy. For example, limitations on policy application may be checked. If a policy may only be applied once to a resource data object, group, or hierarchy, then it may be determined whether an instance of the policy has already been applied to that resource data object, group, or hierarchy. If so, then, as indicated by the negative exit from 830, the request may be denied, as indicated at 850. Some modifications may not be permitted in certain hierarchies. For example, a security policy may not be applied in a hierarchy associated with human resources or finance, but only in a hierarchy associated with security. In some scenarios, certain organization modifications may not be allowed (e.g., adding a resource data object to more than one group in a hierarchy, although this may be allowed in other embodiments, or deleting resource data objects). Authentication may be implemented in some embodiments to determine the identity of a user account associated with a client submitting a modification. If the user account is not permitted to perform the modification to the hierarchy, then the modification may be invalid.
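A simplified sketch of such a validity check is shown below; the request format, the single-application and disallowed-policy-type checks, and the permission lookup are illustrative assumptions rather than a required validation procedure.

```python
# Illustrative sketch of validating a requested hierarchy modification.

def validate_modification(request, hierarchy, permissions):
    target = hierarchy["nodes"][request["target"]]
    if request["kind"] == "apply_policy":
        # A policy type that may be applied only once must not already be attached.
        if request["policy_type"] in target.get("attached_policy_types", set()):
            return False
        # Some policy types may be disallowed in certain hierarchies.
        if request["policy_type"] in hierarchy.get("disallowed_policy_types", set()):
            return False
    # The requesting user account must be permitted to modify this hierarchy.
    return hierarchy["name"] in permissions.get(request["user_account"], set())

hierarchy = {
    "name": "finance-view",
    "disallowed_policy_types": {"security"},
    "nodes": {"group-a": {"attached_policy_types": {"billing"}}},
}
permissions = {"account-1": {"finance-view"}}
print(validate_modification(
    {"kind": "apply_policy", "target": "group-a", "policy_type": "billing",
     "user_account": "account-1"},
    hierarchy, permissions))   # False: a billing policy is already attached
```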
[0084] For valid modifications, as indicated by the positive exit from 830, the modification to the hierarchy may be performed in accordance with the request, as indicated at 840. Modifying hierarchies may change the application of policies applied within the hierarchy. If, for instance, resource data objects are moved or reassigned, then different policies may be inherited by those moved resource data objects based on the different group assignments. If new policies are applied as a result of the modification, then the new policy may be applied along with existing policies; in the scenario where the modification is a policy removal, the removed policy is no longer included in those policies applied by the hierarchy. However, modifications to one hierarchy may be isolated to that hierarchy, and thus may be made without modifying the application of policies identified in another hierarchy. In at least some embodiments, other modifications may be made to multiple, if not all, hierarchies, such as adding a resource data object to multiple different hierarchies as discussed below with regard to FIG. 10.
[0085] FIG. 9 is a high-level flowchart illustrating methods and techniques to handle a policy lookup request for a resource data object, according to some embodiments. As indicated at 910, a policy lookup request may be received for a resource data object. For example, the request may include an identifier that uniquely identifies the resource and thus the resource data object for which the lookup operation is being performed. As indicated at 920, those hierarch(ies) linked to the resource data object may be identified. For instance, in some scenarios, all hierarchies may be linked to the resource data object, whereas in other scenarios, only one or more hierarchies may include the specified resource data object. For those identified hierarchies, the polic(ies) attached to the resource data object or inherited by the resource data object may be determined, as indicated at 930. For example, the path(s) from the resource data object to the root node of each of the identified hierarchies may be traversed, and all attached policies in the path may be identified.
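For illustration only, the lookup described above might be sketched as follows, assuming a hypothetical encoding in which each hierarchy maps a node to its parent and the policies attached at that node; the names and data layout are not taken from the figures.

```python
# Illustrative sketch of a policy lookup across the hierarchies that link a
# resource data object.

def lookup_policies(resource_id, hierarchies):
    """hierarchies maps a hierarchy name to {node: (parent, [policies])};
    a parent of None marks that hierarchy's root."""
    determined = []
    for name, nodes in hierarchies.items():
        if resource_id not in nodes:
            continue                       # this hierarchy does not include the resource
        node = resource_id
        while node is not None:            # walk the path up to the hierarchy root
            parent, policies = nodes[node]
            determined.extend(policies)    # attached directly or inherited along the path
            node = parent
    return determined

hierarchies = {
    "billing-view": {
        "root": (None, ["org-payer-policy"]),
        "group-a": ("root", []),
        "resource-1": ("group-a", []),
    },
    "security-view": {
        "root": (None, []),
        "resource-1": ("root", ["deny-public-access"]),
    },
}
print(lookup_policies("resource-1", hierarchies))
# ['org-payer-policy', 'deny-public-access']
```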
[0086] As indicated at 940, conflict(s) can occur between policies determined for a resource data object. For example, different hierarchies may apply different policies describing different access rights for the resource data object, or different nodes in the path of the resource data object within a single hierarchy may include different policies describing different access rights. If the policy types of any of the determined policies match for a resource data object, then a conflict may exist, as indicated by the positive exit from 940. Detected conflict(s) may be resolved between determined policies, as indicated at 960. For example, one of the conflicting policies (or conflicting portions of the policies) may be elected over the other according to a precedence or inheritance model for policy applications (e.g., policies applied to child nodes in the hierarchy may supersede policies applied to parent nodes, or vice versa). In some embodiments, a knowledge base or other rules-based resolution technique may be implemented to evaluate the conflicting policies with respect to precedence or inheritance rules (including rules that modify conflicting policies) and may be configured to apply different inheritance or precedence rules for a policy type when the policy type is defined. Once the resolved version of the determined polic(ies) is determined, then as indicated at 970 the resolved version of the determined polic(ies) may be provided. If no conflict is detected, then the determined polic(ies) may be provided, as indicated at 950.
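A non-limiting sketch of this conflict detection and resolution step is shown below; detecting a conflict by matching policy types and electing the policy applied nearest the resource is merely one possible precedence model, and the tuple encoding is an assumption of the example.

```python
# Illustrative sketch of resolving conflicts between determined policies.

def resolve_conflicts(determined_policies):
    """determined_policies is a list of (policy_type, depth, policy) tuples,
    where depth is the distance from the resource data object (0 = applied
    directly to the resource)."""
    resolved = {}
    for policy_type, depth, policy in determined_policies:
        if policy_type not in resolved:
            resolved[policy_type] = (depth, policy)
            continue
        # Two policies of the same type conflict; here the child (smaller
        # depth) supersedes the parent, one possible precedence model.
        if depth < resolved[policy_type][0]:
            resolved[policy_type] = (depth, policy)
    return {ptype: policy for ptype, (_, policy) in resolved.items()}

print(resolve_conflicts([
    ("access", 2, "allow-team-read"),     # applied at the hierarchy root
    ("access", 0, "deny-external-read"),  # applied directly to the resource
    ("billing", 1, "project-budget"),
]))
# {'access': 'deny-external-read', 'billing': 'project-budget'}
```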
[0087] FIG. 10 is a high-level flowchart illustrating methods and techniques to handle a request to add a resource data object, according to some embodiments. As indicated at 1010, a request to add a resource data object to other resource data objects stored in the data store may be received, in various embodiments. The request may specify a unique identifier, and other information descriptive of a corresponding resource in a system that includes the resources corresponding to the other resource data objects. As indicated at 1020, the resource data object may be added to the data store. For example, as the model in FIG. 5A illustrates, the resource data object may be added to the resources directory in the appropriate resource type sub-directory.
[0088] As indicated at 1030, the hierarch(ies) of the other resource data objects may be determined to include the additional resource data object. For example, not all hierarchies may maintain all resource data objects. A membership policy for each hierarchy, for instance, may specify which resource data objects are maintained in the hierarchy, and which are not. Similarly, locations in the hierarch(ies) may be determined for the resource data object, as indicated at 1040. A default location (e.g., directly linked to the hierarchy root node) may be utilized or the membership policy may specify a location based on an evaluation of the resource data object. For instance, if the resource type of the resource data object is a computing resource, place the resource data object in group A, or if the resource data object is a user account, place the resource data object in group B. Once the location(s) are identified in the determined hierarch(ies), then the hierarch(ies) may be updated to add the resource data object to the location(s) in the hierarch(ies), as indicated at 1050.
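As an illustrative sketch of membership-policy-driven placement, the following assumes a hypothetical rule format that maps resource types to groups with a default location; none of the names below are taken from the figures.

```python
# Illustrative sketch of placing a new resource data object according to
# per-hierarchy membership policies.

def place_resource(resource, membership_policies):
    """Return {hierarchy: group} locations for the new resource data object.
    Each membership policy maps resource types to a group, with a default
    location (the hierarchy root) when no rule matches."""
    locations = {}
    for hierarchy, policy in membership_policies.items():
        if not policy.get("include", True):
            continue                              # hierarchy does not maintain this resource
        rules = policy.get("by_resource_type", {})
        locations[hierarchy] = rules.get(resource["type"], policy.get("default", "root"))
    return locations

membership_policies = {
    "operations-view": {"by_resource_type": {"compute": "group-a", "user-account": "group-b"}},
    "billing-view": {"default": "unassigned"},
    "security-view": {"include": False},
}
print(place_resource({"id": "resource-9", "type": "compute"}, membership_policies))
# {'operations-view': 'group-a', 'billing-view': 'unassigned'}
```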
[0089] Various embodiments of atomic application of multiple updates to a hierarchical data structure are described herein. Hierarchical data structures provide an optimal way to organize data for a variety of different applications. For example, hierarchical data structures may be a tree, graph, or other hierarchy-based data structure maintained in data storage. Directory storage, for instance, may utilize hierarchical data structures to maintain different locations or paths to data that can be traced back to a single starting destination (e.g., a root directory). Other systems may leverage the relationship information provided by hierarchical data structures to reason over data. For example, managing system resources often involves defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources. Security policies, such as access rights or permitted actions for system resources, may be defined and enforced for users of the system resources, for instance. Data describing the resources of a system may be maintained that also describes these permitted behaviors. For example, data objects describing system resources may be maintained to identify policies that indicate the permitted behaviors for the system resources. In order to apply the same policies to multiple resource data objects, a hierarchical data structure may be created for the resource data objects and policies. For instance, a tree structure may be implemented that arranges the resource data objects in groups, directories, or other sets of resource data objects which apply those policies inherited along the path of the tree structure from the resource data object to the root of the tree structure. In this way, policies applied to parent nodes (e.g., the groups, directories, or other sets of resource data objects) may be inherited and applied to child nodes (e.g., the resource data objects in the groups, directories, or sets).
[0090] While some changes or updates to a hierarchical data structure may involve small numbers of discrete operations, in some scenarios large numbers or sets of updates may need to be performed together to effect changes. If the hierarchical data structure were to be accessed while the set of updates were not all complete, then broken links, incomplete information, contradictory information, or other errors may result. Atomic application of modifications to the hierarchical data structure may be implemented in various embodiments so that incomplete sets of updates are not visible when accessing the hierarchical data structure, preventing errors or other erroneous information that would result. Moreover, the hierarchical data structure may remain available for access during atomic application of the modifications so that utilization of the hierarchical data structure is not blocked for long running sets of modifications.
[0091] FIG. 11 is a logical block diagram illustrating atomic application of multiple updates to a hierarchical data structure, according to some embodiments. As illustrated in scene 1102, hierarchical data structure 1110 is available for access 1120 so that information stored in the hierarchical data structure, as well as information determined by reasoning over the hierarchical data structure (e.g., following paths from child to parent nodes and applying inheritance rules), may be available. A request may be made to perform a set of modifications atomically to a portion 1112 (as illustrated in FIG. 11) or the entire data structure 1110. As illustrated in scene 1104, a separate copy 1114 of the identified portion of the hierarchical data structure may be created 1130. While the copy is created, hierarchical data structure 1110 including portion 1112 may still remain available for access. In some embodiments, remaining access may be read access and in some other embodiments both read and write access may remain.
[0092] As illustrated in scene 1106, operations 1140 to modify the copied portion 1114 may be performed. In this way, the changes made to portion 1114 are not visible when hierarchical data structure 1110 is accessed (including portion 1112). Modification operations may occur over a period of time from a request to initiate the atomic application of a set of updates to the request to commit the set of updates. In this way, human timescale interactions (e.g., allowing a user to start editing the portion, stop, redo, receive confirmation of updates, and other time variables) may be accounted for without blocking access or imposing strict time limits upon performing atomic application of a set of modifications. As illustrated in scene 1108, the set of updates may be committed to hierarchical data structure 1110 by atomically replacing 1150 portion 1112 with the modified copy 1114. Atomic replacement does not allow for only partial copying of modifications, but instead ensures that the entire set of changes in the copy is inserted into hierarchical data structure 1110 (or, as a result of a failure or error, none of copy 1114 is inserted). Once the set of modifications is successfully committed, then portion 1114 of the hierarchical data structure may be made available for read and write access, in those embodiments where write access was restricted upon initiating the application of the set of modifications.
[0093] Please note, FIG. 11 is provided as a logical illustration of atomic application of multiple updates to a hierarchical data structure, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices implementing a data store, system, or clients, or the number, type, or arrangement of hierarchies, performance of updates, copy operations, or atomic replacements. For example, a portion of a hierarchical data structure that is identified for modification may include multiple, unconnected nodes or subtrees that are all modified as part of the same atomic update.
[0094] FIG. 12 illustrates interactions to perform a bulk edit at a storage engine that atomically applies multiple changes to a hierarchical data structure, according to some embodiments. Client 1210 may be a client of resource management service 240 or any other client that utilizes a storage engine to atomically apply multiple updates to a hierarchical data structure. Client 1210 may submit a request via interface 1202 to request a bulk edit for hierarch(ies) or a portion of hierarch(ies) 1212. A bulk edit request may be a request to atomically apply a set of modifications to a hierarchical data structure as discussed above with regard to FIG. 11 and below with regard to FIG. 13. In response to receiving the request, storage engine 1200 may send a request to remove a lock node for the hierarch(ies) 1214 in order to lock the hierarch(ies), blocking write requests to the hierarch(ies). Storage engine 1200 may also send a request 1216 or multiple requests to create a copy of the hierarch(ies) at hierarchical data store 350 that is separate from the hierarch(ies).
[0095] Client 1210 may then submit various modification request(s) 1220 over a period of time (which may or may not be subject to a time limit). Modification request(s) 1220 may correspond to various requests to update the hierarch(ies) as discussed above (e.g., various requests to add a group, remove a group, add resources to a group, remove resource(s) from a group, add a group to a group, remove a group from a group, or any other arrangement modification to the hierarch(ies), requests to add policies to or remove policies from resources or groups, or any other modification to the resource data objects, groups, or other nodes in the hierarch(ies)). Modification requests 1220 may correspond to API requests or other modifications permitted by interface 1202 (which may be like interface 310 in FIG. 3). In response to modification requests 1220, storage engine 1200 may send corresponding requests 1222 to perform operation(s) applying the modifications to the copy of the hierarch(ies).
[0096] Client 1210 may send a request 1230 to commit the bulk edit via interface 1202. In response to the request, storage engine 1200 may perform various conflict checks (in some embodiments, as discussed below with regard to FIG. 13). Storage engine 1200 may submit a transaction that links the copy of the hierarch(ies) with the hierarchies node and removes the link from the original hierarch(ies) to the hierarchies node. Acknowledgment or failure of the transaction may be provided 1234 and in turn storage engine 1200 may indicate acknowledgment or failure of the commit 1238 to client 1210. Storage engine 1200 may also add a lock node back for the hierarch(ies) to unlock the hierarch(ies) 1236 in order to allow write access to the hierarch(ies).
[0097] Please note that the techniques described above with respect to storage engine 1200 may be applicable to a storage engine managing any hierarchical data structure and may not be limited to a hierarchy of resource data objects. Not all interactions have been illustrated. For example, various acknowledgment indications may be provided for different requests that have not been depicted in FIG. 12.
[0098] Although FIGS. 11-12 have been described and illustrated in the context of a provider network implementing hierarchical data structures as part of a resource management service for managing resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 11-12 may be easily applied to other storage engines or data managers that manage hierarchical data structures. For example, a file directory system accessible to multiple users may allow for atomic application of multiple updates to a portion of a hierarchical data structure that represents a file directory structure. As such, FIGS. 11-12 are not intended to be limiting as to other embodiments of a system that may implement atomic application of multiple updates to a hierarchical data structure. FIG. 13 is a high-level flowchart illustrating methods and techniques to perform atomic application of multiple updates to a hierarchical data structure, according to some embodiments. Various different systems and devices may implement the various methods and techniques described below, either singly or working together. For example, a resource management service such as described above with regard to FIGS. 11-12 may be configured to implement the various methods with respect to hierarchical data structures like the different hierarchies described above. Alternatively, a combination of different systems and devices may implement these methods. Therefore, the above examples and/or any other systems or devices referenced as performing the illustrated method are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
[0099] As indicated at 1310, a request may be received to perform modifications to a portion (or entirety) of a hierarchical data structure. A hierarchical data structure, as noted above, may be a tree, graph, or other hierarchy -based data structure maintained in data storage, such as a hierarchical data store which stores data natively in a hierarchical data format. Hierarchical data structures may be implemented for a variety of different systems and techniques. For example, hierarchical data structures may be implemented to provide a directory structure for file or other data management systems, a classification structure or other representation of data that is interpreted based on the hierarchical relationships within the hierarchical data structure. In one such example, such as discussed above with regard to FIGS. 2 - 6, different hierarchies of resource data objects stored in a data store may be maintained. The resource data objects may identify policies applicable to the behavior of resources corresponding to the resource data objects in a system. For example, resource data objects may be maintained to describe system resources (e.g., unique identifier, capabilities, roles, availability, etc.). The resource data objects may be separately arranged in different hierarchies so that policies applied to the resource data objects according to the hierarchies (e.g., by inheritance rules or direct application) may enforce the controls, actions, configurations, operations, or other definitions of the behavior of the corresponding resources. The hierarchies may be maintained in a hierarchical data storage system, as described above with regard to FIG. 3. However, other types of data stores may be implemented to maintain the hierarchies (e.g., by maintaining the data objects, and relationships between the data objects that define the hierarchy so that the hierarchy can be determined).
[00100] The request may specify the portion of the hierarchical data structure by including an identifier (e.g., node name, path, or identification number) in the request. For example, the request may specify a particular node by a node identification number indicating that the node and all children of the node in the hierarchical data structure are included in the portion for which modifications are to be performed. In some embodiments multiple paths, nodes, or subdirectories of a hierarchical data structure may be identified as part of one portion for performing updates even though the multiple paths, nodes, or sub-directories of the hierarchical data structure may be unconnected except via a root node for the hierarchical data structure. The request to perform modifications may be a request that merely initiates the start of allowing atomic application of multiple updates, as discussed with regard to other elements below. However, in some embodiments the request may also describe, indicate, or propose the various changes to be atomically applied to the specified portion of the hierarchical data structure. Individual modification requests (e.g., formatted according to an API for the performance of different modifications) may be included as part of a request payload, for instance. The request may be received via a programmatic interface such as API, and may be initiated by a command line interface or graphical user interface.
[00101] In response to receiving the request to perform the modifications to the portion of the hierarchical data structure, a copy of the portion of the hierarchical data structure may be created that is separate from the hierarchical data structure, as indicated at 1320. For example, the nodes in the hierarchical data structure along with the relationships defined between the nodes may be read from the hierarchical data structure in the data store and written to a different location in the data store that does not link the copy to the hierarchical data structure from which it was obtained. In this way, any lookup or analysis performed upon the hierarchical data structure does not discover, read, or obtain any information from the copy of the portion of the data structure, allowing for modifications to be performed on the copy without being accessible. A pointer, address, or other location of the copy of the portion of the hierarchical data structure may be maintained in order to direct operations to modify the portion of the hierarchical data structure to the copy.
[00102] In various embodiments, the original portion of the hierarchical data structure (in the hierarchical data structure) may remain available for read access. For example, other clients may wish to access the hierarchical data structure to perform lookup operations (e.g., policy lookups as described above with regard to FIG. 4). Providing read access to the portion of the hierarchical data structure may allow for utilization of the hierarchical data structure to continue during the application of multiple modifications as the modifications are separately performed upon the copy.
[00103] In at least some embodiments, write operations may be restricted or blocked entirely. For example a locking mechanism, as discussed above with regard to FIG. 5B may be implemented to identify a portion of a hierarchical data structure undergoing modification so that intervening or conflicting updates may not be performed. In some other embodiments, write access may also be allowed for the original portion of the hierarchical data structure. The write requests may be performed and then later merged as part of a conflict resolution scheme, as discussed below with regard to element 1350. In some embodiments, write requests received while modifications to the copy are ongoing (but have not yet been committed) may be replicated to the copy of the portion of the hierarchical data structure. For example, the writes may make changes to the copy which may be reflected in a graphical display of the copy by refreshing the display periodically to include the changes. In some embodiments, the writes may be replicated after receiving a request to commit, but before the modifications made as part of the modifications request are committed. Conflict resolution between replicated writes and the hierarchy may be made as the replicated writes are received in some embodiments. For example, a user could approve or deny displayed conflicts from replicated writes as they are displayed via the GUI or may receive a conflict report upon a request to commit and approve or deny writes in response.
[00104] As indicated at 1330, operation(s) to apply the modifications to the copy of the portion of the hierarchical data structure may be performed. For instance, the modifications may be specified in the request received at element 1310. Alternatively, additional, separately received requests that are identified or associated with the request at element 1310 (e.g., that identify a bulk edit or other identifier associated with the atomic application of modifications) may identify, describe, or instruct the performance of the modifications (e.g., according to various API commands to perform different hierarchical data structure modifications, change the structure, change nodes, add nodes, remove nodes, update data or attributes associated with nodes, etc.). The requests may be received according to the same interface as the request to perform modifications at element 1310.
[00105] Requests for modifications may continue to be processed and applied to the copy of the hierarchical data structure until a request to commit the modifications to the portion of the hierarchical data structure is received, as indicated at 1340. The commit request may include a token, identifier, or other mechanism that corresponds to the initial request for modification (e.g., a bulk edit identifier) so that the commit request is matched with the appropriate copy of the portion of the hierarchical data structure (e.g., in the event that multiple requests for modification are being concurrently processed for the portion of the hierarchical data structure). In response to the commit request, the portion of the hierarchical data structure may be atomically replaced with the copy of the portion of the hierarchical data structure that includes the modifications, as indicated at 1370. Atomically replacing the original portion with the copy may be processed as a single transaction that either performs or fails (e.g., due to errors or conflicts). The data store maintaining the copy and the original portion, for instance, may have a transaction mechanism (e.g., a transaction API) that allows for the operations that effect the replacement to occur (e.g., submitting a transaction that reads all of the data from the copy and overwrites the data of the original portion). In some embodiments, the transaction may include actions to link the copy of the portion to a parent node of the original portion and remove a link between the original portion and the same parent node, so that the copy of the portion is grafted or inserted into the hierarchical data structure without having to read and re-write the entire copy over the original portion.
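As an illustrative aid only, the following Python sketch shows one way the copy-and-relink replacement described above might be expressed; the Node class and its methods are hypothetical stand-ins for whatever node representation and transaction mechanism the underlying data store provides.

```python
class Node:
    """Hypothetical in-memory stand-in for a node in the hierarchical data store."""
    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = dict(attributes or {})
        self.children = {}  # child name -> Node

    def deep_copy(self):
        copy = Node(self.name, self.attributes)
        copy.children = {name: child.deep_copy() for name, child in self.children.items()}
        return copy


def begin_bulk_edit(parent, portion_name):
    """Create an unlinked copy of a portion; the original remains readable in place."""
    return parent.children[portion_name].deep_copy()


def commit_bulk_edit(parent, portion_name, modified_copy):
    """Graft the modified copy in place of the original portion by swapping a single
    parent link. In a real data store this swap would run as one transaction so that
    the unlink of the original and the link of the copy either both occur or neither does."""
    parent.children[portion_name] = modified_copy
```

A caller would apply the requested modifications to the object returned by begin_bulk_edit and then invoke commit_bulk_edit, so the entire set of changes becomes visible at once rather than node by node.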
[00106] In those embodiments where write access to the original portion of the hierarchical data structure is blocked during the performance of modifications to the portion of the hierarchical data structure, operations to allow write access to the updated hierarchical data structure that includes the copy may be performed as part of atomically replacing the portion with the copy. For instance, the portion of the hierarchical data structure may be unlocked for write access.
[00107] As noted above, write access may remain for the original portion of the hierarchical data structure, in some embodiments. As a result, conflicts between the copy and the original portion can occur due to subsequent writes to the original portion. Thus, as indicated at 1350, a check for conflict between the copy and writes performed at the original portion of the hierarchical data structure may be performed. A conflict may be detected in various ways. For example, when writes to the original portion contradict a modification made to the copy, a conflict may be detected. Consider a scenario where a write occurs that changes the relationship between two nodes in the original portion (e.g., changing a node's parent node to another parent node in the portion). This change may contradict a modification made to the copy (e.g., that changes the node's parent to a different node than the change made by the write request). In another example scenario, a write to the original portion may change an attribute for a node to have one value (e.g., value1), while a modification changes the same attribute to have a different value (e.g., value2). Note that not all differences may be considered contradictions. For example, a write to the original may add a new node to a group of nodes with a same parent (e.g., add a resource data object to a group). The modifications may add other nodes to the same parent, modify the parent values or remove other nodes from the same parent. These modifications, however, may not be contradicted by adding the new node, so the write may not be considered a conflict. For those writes that do not cause conflicts, the writes may be replicated to the copy of the hierarchy and commitment of the modifications to the copy may proceed. However, if a conflict is detected, then in some embodiments, the commitment request may be denied, as indicated at 1380. Alternatively, if a conflict is detected, the write may be rolled back and failed (e.g., by holding acknowledgement and/or performance of the write request until confirming that the write does not conflict with the copy). Various other conflict detection and resolution schemes may be implemented and thus the previous examples are not intended to be limiting.
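Purely as an illustration of the contradiction test described above, the following sketch compares a write against the modifications already applied to the copy; the dictionary encoding of writes and modifications is an assumption made for the example, since the disclosure leaves the exact conflict rules open.

```python
def is_conflicting(write, copy_modifications):
    """Return True if a write to the original portion contradicts a modification
    already applied to the copy.

    Each write/modification is represented here as a dict such as
    {"node": "accountA", "field": "parent", "value": "groupB"}; this encoding is
    hypothetical and used only to illustrate the comparison.
    """
    for mod in copy_modifications:
        same_target = (write["node"] == mod["node"]) and (write["field"] == mod["field"])
        if same_target and write["value"] != mod["value"]:
            return True  # e.g., the write sets value1 while the copy was set to value2
    return False  # additive or non-overlapping changes are not treated as conflicts
```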
[00108] In some embodiments, multiple requests to perform atomic sets of modifications on a same portion of the hierarchical data structure may be received. As indicated at 1360, conflict between the commitment request and the other requests for atomic modifications to the same portion may be detected. For example, multiple requests to perform atomic application of modifications to the same portion of the hierarchical data structure may be allowed to initiate. However, the first one of these sets of modifications that is successfully committed may prevent the remaining sets of modifications for other requests from committing. Consider a scenario where each of these requests obtains a timestamp or version number for the portion of the hierarchical data structure. When a set of modifications is successfully committed, the version number or timestamp for the portion of the hierarchical data structure may change (e.g., by changing a version number or timestamp at a root node for the portion). If a commit request is received for a set of modifications to the portion, a check may be made prior to committing the modifications to see if the version number is the same as was first obtained. If the version number is not the same, then another set of modifications may have committed first, creating a conflict. As indicated by the negative exit from 1360, if a conflict exists (e.g., another set of modifications for the portion has already committed), then the commitment request may be denied, as indicated at 1380.
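The version-number check described in this paragraph could be sketched as follows; the store interface (get_version, replace_portion, set_version) is assumed for illustration and is not an interface defined by the disclosure.

```python
class CommitConflict(Exception):
    """Raised when another set of modifications committed first."""


def commit_with_version_check(store, portion_id, observed_version, modified_copy):
    """Optimistic concurrency control for concurrent bulk edits of the same portion.

    observed_version is the version number read when the bulk edit began; if another
    set of modifications has committed since then, the stored version will have
    advanced and this commit request is denied.
    """
    current_version = store.get_version(portion_id)
    if current_version != observed_version:
        raise CommitConflict(
            f"portion {portion_id} changed (v{observed_version} -> v{current_version})"
        )
    store.replace_portion(portion_id, modified_copy)
    store.set_version(portion_id, current_version + 1)
```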
[00109] Various embodiments of multi-party updates to a distributed system are described herein. Distributed systems include multiple different resources (e.g., both physical and virtual) that provide various different services, capabilities, and/or functionalities, typically on behalf of multiple different entities. When changes to the distributed system are desired, it is likely that the changes may affect the way in which the distributed system operates for several of the entities that utilize the distributed system. In order to make the desired changes, approval may be beneficial (or required) so that changes to the distributed system are not made without some notification of the changes to other entities that may be affected. For example, management decisions regarding various resources in a distributed system often involve defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources. Security policies, such as access rights or permitted actions for system resources, for instance, may be defined and enforced for users of the system resources. When making decisions to change the permitted actions, configurations, controls or any other definition of behaviors for the system resources in a distributed system, approval from more than the proposing entity may be desirable. [00110] While manual and/or informal approval mechanisms to effect changes to a distributed system can be implemented, these approval mechanisms are unable to scale for large distributed systems. For example, large scale distributed systems implementing thousands or hundreds of thousands of resources on behalf of thousands or hundreds of thousands of users, clients, or entities may make it difficult to discover, track, and obtain the approval of changes that may need to be made to a distributed system. Implementing multi-party updates for a distributed system as discussed below, however, may coordinate the proposal, approval, and performance of updates to a distributed system in a scalable, traceable, and automated fashion.
[00111] FIG. 14 is a logical block diagram illustrating multi-party updates to a distributed system, according to some embodiments. Proposer 1410 may submit proposed updates 1412 to agreement manager 1420. The proposed updates may include any updates or changes to distributed system resources 1440 (e.g., hardware resources, such as various processing, storage, and/or networking hardware) or virtual resources (e.g., instances, volumes, user accounts, or control policies). The proposed updates 1412 may be included in a request to agreement manager 1420 as executable instructions (e.g., API requests or executable scripts, code, or other executable data objects). Agreement manager 1420 may determine an authorization scheme (e.g., a handshake mechanism) for approving the proposed updates. An authorization scheme may be defined to include one or multiple satisfaction criteria for determining whether the proposed updates 1412 are approved. The authorization scheme may also include the identity of approvers (e.g., by identifying user account names or other user account identifiers), or a mechanism for determining approvers (e.g., by identifying user account types or groups that include user accounts any of which may act as an approver). For example, proposer 1410 may submit an authorization scheme as part of proposal 1412 that identifies specific approvers 1430 (e.g., user accounts or other identities of stakeholders) to approve the proposed update(s) 1412.
[00112] Agreement manager 1420 may send proposal notification(s) 1422 to the identified approver(s) 1430. In turn, approvers 1430 may send a response indicating approval(s) or disapproval(s) 1432 to agreement manager 1420. Agreement manager 1420 may evaluate the responses with respect to the authorization scheme. For example, if the authorization scheme requires that 4 of 6 approver(s) 1430 send an approval response, then agreement manager 1420 may determine whether 4 approval responses were received. If not, then agreement manager 1420 may send a rejection of the proposed amendments (not illustrated). If, however, the authorization scheme for the proposed update(s) 1412 is satisfied, then agreement manager 1420 may direct the approved update(s) 1442 with respect to distributed system resources 1440. For example, agreement manager 1420 may send the API requests corresponding to the described updates (e.g., specified by a user in proposed updates 1412) to initiate performance of the updates, or execute a script or executable data object to perform the updates.
[00113] Please note, FIG. 14 is provided as a logical illustration of multi-party updates to a distributed system, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices, implementing a distributed system, proposer 1410, agreement manager 1420, or approvers 1430.
[00114] As discussed above, some changes to organizations, including hierarchies, policies, and resource data objects may need to be approved and/or coordinated amongst multiple stakeholders. The multi-account agreement manager 322 discussed above may interact with clients to facilitate agreement requests that coordinate approval among multiple user accounts. FIG. 15 illustrates interactions to submit agreement requests for updates to organizations, according to some embodiments. Proposal client 1510 may be one of clients 270 in FIG. 2 above that allows a user to interact with a resource management system that implements multi-account agreement management 322. Interface 310 may be a command line or graphical interface that formats requests according to a programmatic interface, such as an API, for multi-account agreement management 322. Client 1510 may submit a draft proposed agreement for organization updates request 1522 via interface 310 to multi-account agreement management 322. Draft agreement 1522 may include proposed updates that are user specified (e.g., updates by API commands, executable scripts, code or other executable instructions) or draft agreement 1522 may be a request to propose a pre-defined set of updates (e.g., defined by resource management service 240, such as applying a policy, inviting a user account to join an organization, launching a new provider network resource, etc.). Draft agreement 1522 may include an authorization scheme that specifies approvers or a discovery mechanism for approvers (e.g., approver types, groups of possible user accounts that can approve, etc.). Changes can be made to the draft agreement request without triggering notifications to approvers. In some embodiments, agreement requests may be locked or otherwise unchangeable after submission 1532.
[00115] Client 1510 may submit a proposed agreement for approval 1532 to multi-account agreement management 322. For example, submission request 1532 may include an identifier for the draft proposed agreement request created above at 1522. Note that in some embodiments, submission request 1532 may be the initial and only submission to multi-account agreement management 322 (e.g., without first creating a draft agreement request) and thus may identify update(s) (and an authorization scheme) in some instances. Multi-account agreement management 322 may send notifications for the proposed agreement 1534 via interface 310 to approval client(s) 1520 (which may be clients 270 associated with user accounts identified as approvers). Approval client(s) 1520 may send approval/disapproval responses for the proposed agreement 1536, which multi-account agreement management 322 may evaluate for approval of the proposed agreement according to the authorization scheme for the agreement request and send a response indicating acceptance or rejection of the proposed agreement 1538.
[00116] In at least some embodiments, client 1510 may submit a modification to the proposed agreement 1542. The modification may be a modification to the authorization scheme or the updates to be performed. In some scenarios (e.g., where changes to the updates are made), notifications of the proposed modification to the agreement 1544 may be sent to approval client(s) 1520. As noted above, approval client(s) 1520 may send approval/disapproval response for the modified agreement 1546.
[00117] In at least some embodiments, proposal client 1510 may cancel the proposed agreement 1552. In response, multi-account agreement management 322 may send notifications of cancellation 1552 to approval client(s) 1520 and/or may ignore responses received from approval client(s) 1520 for the cancelled agreement request.
[00118] As noted above, multi-account agreement management 322 may track the state of pending or outstanding agreement requests as well as previously performed or rejected agreement requests. FIG. 16 illustrates a state diagram for agreement requests, according to some embodiments. As shown in FIG. 16, an agreement request may initially enter a draft state 1610. Draft state 1610 may indicate that a proposing user account can add, change, or modify the agreement request. As illustrated in FIG. 16, a draft agreement request can be cancelled, moving the agreement request to cancelled state 1630. Alternatively, if the agreement request is finalized and submitted, then the agreement request may enter proposed state 1620. From proposed state 1620, an agreement request can enter rejected state 1640 as a result of failing to satisfy the authorization scheme. Similarly, the agreement request may enter expired state 1650 as a result of failing to be approved before expiration conditions are satisfied (e.g., within an expiration time limit).
[00119] While in proposed state 1620, notifications for the agreement request may be provided, responses received and evaluated. If the authorization scheme for the agreement request is satisfied, then as illustrated in FIG. 16, the agreement request may enter the approved state 1660. In some embodiments, once an agreement request is approved, then the proposed updates may be automatically directed, initiated, or otherwise performed. However, in some embodiments, as illustrated in FIG. 16, approved agreement requests may still enter decline state 1680. For example, if the agreement request is an invitation to add a new user account to an organization, then the invited user account may decline the invitation to join the organization. In some embodiments, the proposer may abort the approved agreement request if, for instance, another change to the distributed system renders the proposed changes undesirable, as indicated by the change from approved state 1660 to cancelled state 1630. Similarly, a time period for execution of the proposed changes may be monitored and if the updates are not performed prior to the expiration of the time period, the agreement request may move from approved state 1660 to expired state 1650. If, however, the proposed changes are performed and/or successfully completed, then the performed state 1670 may be entered.
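The transitions of FIG. 16 can be summarized in a small table. The following sketch encodes them as data so that a tracker could reject illegal transitions; the encoding itself is illustrative and not part of the disclosed embodiments.

```python
# Allowed agreement request state transitions, following the state diagram of FIG. 16.
TRANSITIONS = {
    "draft":    {"proposed", "cancelled"},
    "proposed": {"approved", "rejected", "expired"},
    "approved": {"performed", "declined", "cancelled", "expired"},
    # cancelled, rejected, expired, declined, and performed are terminal states
}


def advance(current_state, next_state):
    """Move an agreement request to next_state, rejecting transitions FIG. 16 does not allow."""
    if next_state not in TRANSITIONS.get(current_state, set()):
        raise ValueError(f"illegal transition: {current_state} -> {next_state}")
    return next_state
```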
[00120] Although FIGS. 14 - 16 have been described and illustrated in the context of a provider network implementing a resource management service for resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 14 - 16 may be easily applied to other resource management systems, components, or devices for distributed systems. For example, the components may be applied to control planes for data storage services, configuration management systems for applying changes to systems, or other managers or controllers for distributed systems. As such, FIGS. 14 - 16 are not intended to be limiting as to other embodiments of a system that may implement a resource management system for system resources. FIG. 17 is a high-level flowchart illustrating methods and techniques to implement multi-party updates to a distributed system, according to some embodiments. Various different systems and devices may implement the various methods and techniques described below, either singly or working together. For example, a resource management service such as described above with regard to FIGS. 14 - 16 may be configured to implement the various methods. Alternatively, a combination of different systems and devices may implement these methods. Therefore, the above examples and/or any other systems or devices referenced as performing the illustrated method are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
[00121] As indicated at 1710, an agreement request proposing one or more updates to a distributed system may be received. The agreement request may be specified according to an interface, such as an API, and may include various other executable instructions, such as API requests indicating the proposed updates to the distributed system. For example, the agreement request may include requests to add a resource data object (e.g., user account or resource) to a group in a hierarchy by including an AddToGroup request in the agreement request. In some embodiments, however, other representations of updates may be included. For example, executable instructions, such as code, scripts, or other executable data objects may describe the updates to perform with respect to the distributed system. [00122] Updates to a distributed system may include any changes to the number, arrangement, configuration, execution, operation, access, management, or any other modification to the distributed system. In some embodiments, updates may be updates to a hierarchy of resource data objects, such as an organization discussed above with regard to FIG. 5A, that manages the resources of a distributed system, such as updates to invite user accounts of a provider network to join the organization (e.g., by adding a corresponding resource data object to the organization including information describing the user account and applying policies to the user account dependent upon the location, such as the group assignment, of the user account in the organization) or to apply or attach policies to groups or data objects. If multiple updates are indicated in the agreement request, the updates may describe different types of updates, such as updates to the organization and updates to add, launch, modify, halt, or create a new resource (e.g., a virtual compute instance or data storage volume) in the provider network in the same agreement request. In some embodiments, updates may be a request to execute a function, operation, task, workflow, or action defined and/or executed by a different resource in the distributed system than the resource (e.g., agreement manager) determining whether agreement is reached to perform the update. For instance, a network-based service implemented as part of a provider network may execute user-specified functions upon invocation by an API call to the service, which would allow an update to describe the API call to the service which in turn invokes execution of a function.
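As one hedged illustration of what such an agreement request might carry, the sketch below bundles update descriptions with an authorization scheme; the field names and the shape of the AddToGroup entry are assumptions made for the example rather than a format defined by the disclosure.

```python
# Hypothetical agreement request proposing two updates to a distributed system.
agreement_request = {
    "proposer": "account-admin",
    "updates": [
        {"action": "AddToGroup", "group": "engineering", "member": "account-1234"},
        {"action": "AttachPolicy", "policy": "payer-policy-v2", "target": "engineering"},
    ],
    "authorization_scheme": {
        "requirements": [
            {"type": "quorum",
             "approvers": ["account-A", "account-B", "account-C", "account-D", "account-E"],
             "required_approvals": 3},
        ],
        "expires_after_hours": 24,
    },
}
```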
[00123] An authorization scheme for the received agreement request may be determined, in various embodiments. For instance, the agreement request may specify, identify, or otherwise comprise the authorization scheme. For example, an agreement manager may implement multiple available authorization schemes, and the scheme identified by the agreement request may be selected for processing the request. In some embodiments, the authorization scheme may be defined or specified in the agreement request. For example, the authorization scheme may be defined to include one or multiple satisfaction criteria for determining whether the proposed updates in the agreement request are approved. The authorization scheme may also include the identity of approvers (e.g., by identifying user account names or other user account identifiers), or a mechanism for determining approvers (e.g., by identifying user account types or groups that include user accounts, any of which may act as an approver).
[00124] Authorization schemes may be implemented in different ways. For example, a nuclear key authorization scheme may be implemented that identifies an exact number of entities (e.g., user accounts) as well as the identity of specific entities (e.g., specific user accounts) that may approve the proposed changes. Consider a scenario where the authorization scheme includes a requirement that 3 user accounts must each approve the proposed updates (e.g., user account A and user account B and user account C). If only two of the three user accounts (e.g., B and C) approve of the proposed updates, then the agreement request cannot satisfy the requirement, even if another user account, user account D, were to approve the proposed updates. In some embodiments, quorum-based approval techniques may be implemented as an authorization scheme so that a minimum number of approvers approve of the proposed updates (even if all approvers do not approve of the proposed amendments). A quorum-based requirement for an authorization policy, for example, may require that 3 of 5 identified approvers provide approval for the proposed updates. Another type of authorization requirement may be a veto-based requirement that allows for authorization of the proposed updates as long as none of the identified approvers (or a quorum of identified approvers) veto or otherwise reject the proposed updates within a certain time period (e.g., 24 hours).
[00125] Authorization schemes may include multiple requirements, in some embodiments. For example, an authorization scheme may include a requirement that a particular approver must approve the proposed updates and that at least one approver from multiple different groups of other approvers approve the proposed updates. An authorization scheme, for instance, could specify that a user account of a particular organization leader (e.g., manager, director, vice-president, etc.) approve of the updates and that 1 user account from a human resources (HR) group and 1 user account from a security group approve of the updates (combining quorum requirements with a specific approver requirement).
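The requirement types described above (an exact set of named approvers, a quorum, and a veto window), possibly combined in one scheme, could be evaluated roughly as sketched below; responses are assumed to be collected as a mapping from approver to an approve/disapprove answer, which is an illustrative simplification (the veto time window, for example, is not modeled here).

```python
def requirement_satisfied(requirement, responses):
    """responses maps an approver identifier to True (approve) or False (disapprove)."""
    kind = requirement["type"]
    if kind == "all_of":   # "nuclear key": every named approver must approve
        return all(responses.get(a) is True for a in requirement["approvers"])
    if kind == "quorum":   # at least N of the named approvers must approve
        approvals = sum(1 for a in requirement["approvers"] if responses.get(a) is True)
        return approvals >= requirement["required_approvals"]
    if kind == "no_veto":  # approved as long as no named approver rejects
        return not any(responses.get(a) is False for a in requirement["approvers"])
    raise ValueError(f"unknown requirement type: {kind}")


def scheme_satisfied(scheme, responses):
    """A scheme may combine several requirements, all of which must hold."""
    return all(requirement_satisfied(r, responses) for r in scheme["requirements"])
```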
[00126] As indicated at 1720, in response to receiving the request, a determination may be made as to whether the agreement request is to proceed. For example, agreement requests may be limited by a throttling scheme imposed upon agreement requests submitted by a single user account, or a total number of agreement requests that may be outstanding (e.g., not yet approved or rejected) in a given time period. If a request from a user account exceeds a limit or threshold on the number of agreement requests that can be outstanding or submitted for a user account in a time period, then as indicated by the negative exit from 1720, the agreement request may be rejected. Agreement requests may also not be allowed to proceed if they would result in duplicate updates. For example, data describing outstanding or completed updates to a distributed system may be maintained. When the agreement request is received, a comparison of the described updates with the updates of the outstanding and/or past agreement requests may be made. If the proposed updates match one of the outstanding or past agreement requests, then the agreement request may be identified as a duplicate agreement request and rejected, as indicated at 1780. [00127] As indicated at 1730, approver(s) for the agreement request may be identified according to the authorization scheme for the request, in various embodiments. If the authorization scheme identifies specific approvers (e.g., specific user account ids or user names), then the identity of the approvers may be determined by accessing the authorization scheme. In some embodiments, the authorization scheme may provide a discovery mechanism to determine the approvers. For example, the authorization scheme may provide an attribute, condition, or other signature that can be compared with possible users to determine which users may be approvers. Consider the scenario where the authorization scheme describes that the approvers must be user accounts associated with a particular team, organization, or department. The authorization scheme may specify that any user account associated with the team, organization or department may be an approver for the agreement request. In some embodiments, the requested updates may identify one, some, or all of the approvers. For instance, if the update is an update to the user account itself (e.g., changing group membership or joining an organization), then the approver may be the user account identified by the update.
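The admission decision at element 1720 (throttling plus duplicate detection) might be expressed along the following lines; the per-account limit and the update-equality comparison are illustrative assumptions.

```python
def admit_agreement_request(request, outstanding, past, per_account_limit=10):
    """Decide whether a newly received agreement request may proceed.

    outstanding and past are collections of previously received agreement requests,
    each assumed to carry "proposer" and "updates" fields as in the earlier sketch.
    """
    proposer = request["proposer"]
    open_count = sum(1 for r in outstanding if r["proposer"] == proposer)
    if open_count >= per_account_limit:
        return False, "rejected: too many outstanding requests for this account"
    for earlier in list(outstanding) + list(past):
        if earlier["updates"] == request["updates"]:
            return False, "rejected: duplicate of an outstanding or completed request"
    return True, "accepted"
```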
[00128] As indicated at 1740, notifications of the proposed update(s) may be sent to the identified approver(s), in some embodiments. For example, notifications may include plain text descriptions of proposed updates (e.g., plain text descriptions of included API calls, scripts, or executable data objects that are not human readable). Notifications may also identify other approvers, expiration times (e.g., an approval deadline), the user account proposing the updates, and/or any other information that an approver may need to determine whether or not to approve the proposed updates. Notifications may be sent via network communications to a client that is associated with the user account of the approver (e.g., sending an approval email to a computer providing access to an email address associated with the user account, or a message or communication portal, window, or display provided to the user account when the user account logs onto a network-based site, such as a user control panel provided as part of a service or provider network interface). Responses of approval or disapproval may be sent back via the same communication or notification channel (e.g., via the same interface) or via a different communication channel. For example, an email or text notification sent via mail protocol or messaging protocol may include a link to a web interface, which can display approval or disapproval response controls so that the response is sent via network communication via the web interface. Note that in some embodiments, notifications of proposed update(s) may not be sent to approvers. Instead, approvers may periodically poll for (or randomly request) a list of proposed updates for which the approver has been identified from an agreement manager, such as multi-account agreement management 322. [00129] Agreement requests may be asynchronously processed, in various embodiments. Once notifications are sent to approvers, approval (or disapproval) responses may be processed as received until the proposed changes are approved according to the authorization scheme, disapproved, or expired. As indicated by the positive exit from 1750, when response(s) are received from the approver(s), a determination may be made as to whether the authorization scheme is satisfied, as indicated at 1752. Response data, such as the responding approver and the answer (e.g., approve or disapprove), may be maintained as responses for agreement requests arrive at different times, along with data indicating which authorization requirements are satisfied and which remain outstanding, so that an evaluation of the authorization scheme may be performed as responses are received. For example, quorum requirements may provide more notifications to approvers than may be required to satisfy the quorum; therefore, once a quorum requirement is satisfied, the quorum requirement may be marked or stored as satisfied so that responses received from additional approvers in the quorum can be ignored for authorization scheme evaluation purposes.
[00130] The authorization scheme may not be satisfied by the received responses. As indicated at 1762, enough approver(s) may have disapproved the request so that the authorization scheme cannot be satisfied, in which case the agreement request is rejected, as indicated by the positive exit from 1762. If, however, the received responses neither satisfy (and thus approve) the agreement request nor disqualify the agreement request, then as indicated by the negative exit from 1762, processing of the agreement request may continue. In at least some embodiments, agreement requests may be subject to a default time expiration threshold (or an expiration threshold or condition defined by the authorization scheme). If no responses are received, as indicated by the negative exit from 1750, and a sufficient amount of time has passed since the notifications of the agreement request (or since the submission of the agreement request at 1710), then the agreement request may be expired, as indicated by the positive exit from 1760, and the agreement request rejected, as indicated at 1780. For example, a 24 hour approval expiration threshold may deny agreement requests not approved within 24 hours of submission. If, however, the agreement request is not yet expired, then as indicated by the negative exit from 1760, the agreement request may remain outstanding or pending, waiting for approval or disapproval.
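The incremental processing of responses, together with the rejection and expiration paths described above, might be tracked as sketched below for a single quorum requirement; the class and its fields are assumptions made to keep the example self-contained.

```python
import time


class PendingAgreement:
    """Tracks responses for one proposed agreement with a single quorum requirement."""

    def __init__(self, approvers, required_approvals, ttl_seconds=24 * 3600):
        self.approvers = set(approvers)
        self.required = required_approvals
        self.deadline = time.time() + ttl_seconds
        self.responses = {}  # approver -> True (approve) / False (disapprove)

    def record(self, approver, approved, now=None):
        """Fold one response into the tracked state and return the resulting status."""
        now = now if now is not None else time.time()
        if now > self.deadline:
            return "expired"
        if approver in self.approvers:
            self.responses[approver] = approved
        approvals = sum(1 for answer in self.responses.values() if answer)
        still_possible = approvals + len(self.approvers - self.responses.keys())
        if approvals >= self.required:
            return "approved"
        if still_possible < self.required:
            return "rejected"  # enough disapprovals that the quorum can no longer be met
        return "pending"
```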
[00131] As indicated at 1770, the proposed updates of an agreement request that is approved according to the authorization scheme may be performed with respect to the distributed system. For example, the described API requests may be sent, the included script parsed and executed, or the executable data executed. [00132] In at least some embodiments, changes to the authorization scheme, including changes to approvers, can be made after submitting the agreement request. For example, a user may wish to add an additional approver (e.g., so that the additional approver is aware of the change). In response to the change in authorization scheme, a notification may be sent to the additional approver. In the event that authorization changes remove approver(s), responses received from the removed approvers may be ignored for determining whether the authorization scheme is satisfied. In addition to changes to the authorization scheme, changes to the proposed updates may be made, in some embodiments. For example, update(s) may be added, removed, or modified for the agreement request. In response to changes to the proposed updates, updated notifications may be sent to approvers so that the approvers can approve the changed proposed updates.
[00133] Various embodiments of remote policy validation for managing distributed system resources are described herein. Managing system resources often involves defining and enforcing the permitted actions, configurations, controls or any other definition of behaviors for the system resources that are described in respective policies applied to the system resources. For example, security policies, such as access rights or permitted actions for system resources, may be defined and enforced for users of the system resources. In various embodiments, data describing the resources of a system may be maintained that also describes these permitted behaviors. For example, data objects describing system resources may be maintained to identify policies that indicate the permitted behaviors for the system resources. In order to apply the same policies to multiple resource data objects, a hierarchy or structure of the resource data objects may be implemented. A tree structure, for instance, may be implemented that arranges the resource data objects in groups, directories, or other sets of resource data objects which apply those policies inherited along the path of the tree structure from the resource data object to the root of the tree structure. In this way, policies applied to parent nodes (e.g., the groups, directories, or other set of resource data objects) may be inherited and applied to child nodes (e.g., the resource data objects in the groups, directories, or sets).
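A minimal sketch of the inheritance lookup this paragraph describes, walking from a resource data object up to the root and collecting every policy attached along the path, is shown below; the node representation is a placeholder rather than the actual data store format.

```python
class HierarchyNode:
    def __init__(self, name, parent=None, policies=None):
        self.name = name
        self.parent = parent
        self.policies = list(policies or [])


def effective_policies(node):
    """Collect policies applied to a resource data object plus those inherited from
    every ancestor up to the root of the hierarchy (parent policies listed last)."""
    collected = []
    while node is not None:
        collected.extend(node.policies)
        node = node.parent
    return collected


# Example: an account inherits the organization-level policy through its group.
root = HierarchyNode("organization", policies=["org-security-policy"])
group = HierarchyNode("engineering", parent=root)
account = HierarchyNode("account-1234", parent=group, policies=["payer-policy"])
assert effective_policies(account) == ["payer-policy", "org-security-policy"]
```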
[00134] As noted above, policies may be implemented in many different scenarios to manage system resources. Given both the variety of types of management actions, configurations, controls or other definition of behaviors for resources as well as the many different types of resources that may be managed, determining whether created or applied policies are valid can quickly become unmanageable. Typically, resource management systems provide a limited set of pre-defined policies which may be applied. However, a limited set of pre-defined policies may be unable to adapt to new or changing conditions, resources, or scenarios where policies could be applied to manage resources. For instance, large scale distributed systems, like provider network 200 discussed above with regard to FIG. 2, may offer hundreds of services and run thousands of resources on behalf of users, which may be configured and/or operated in large number of combinations. In various embodiments, remote policy validation may allow for a resource manager to host, apply, and manage all kinds of policies without any pre-defined policy sets or limitations. Instead, users of the resource manager may craft custom policies particular to the individual needs or desires of the system resources to be managed, as validation of the policies may be remotely performed by remote validation agents implemented, configured, controlled, or directed by the users.
[00135] FIG. 18 is a logical block diagram illustrating remote policy validation for managing distributed system resources, according to some embodiments. Distributed system resources 1840 may be physical system resources, such as computing devices (e.g., servers), networking devices, or storage devices, or virtual system resources, such as user accounts, user data (e.g., data objects such as database tables, data volumes, data files, etc.), user resource allocations (e.g., allocated resource bandwidth, capacity, performance, or other usage of system resources as determined by credits or budgets), virtual computing, networking, and storage resources (e.g., compute instances, clusters, or nodes), or any other component, function or process operating in a distributed system. As discussed above with regard to FIG. 5A, these resources 1840 may be represented as resource data objects to which policies are applied (e.g., by mapping, linking, or otherwise associating the policies). A lookup operation may be performed, as discussed above with regard to FIG. 4, in order to determine which policies are associated with a given resource data object (e.g., by traversing a path that includes the resource data object).
[00136] Different policies may be created by a client of the distributed system, such as client 1810. To create a policy, client 1810 may send one or more requests 1812 to resource manager 1820 to create and apply policies to distributed system resources. Prior to applying policies, resource manager 1820 may ensure that the created/applied policies are valid, in various embodiments. Validating policies may include evaluating policies for syntactic errors and semantic errors. Syntactic errors may be errors that indicate the format or composition of a policy is incorrect when compared with a schema or other set of syntax rules for the policy. For example, syntactic errors may be identified when a policy fails to include a data field, modifier, or other term that signals the location of a policy attribute (e.g., resource identifier). Semantic errors may be errors that indicate whether the content of a policy is meaningful, and thus enforceable. For example, a semantic error may occur when the policy identifies an operation to modify a resource that does not exist. The non-existent resource has no meaning, and therefore is a semantic error in the policy. Semantic validation may include validating based on business or operational logic or rules and thus may be specific to the policy type being validated.
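To make the distinction concrete, the following sketch shows one way syntactic and semantic checks could differ; the schema format and the known-resource check are illustrative assumptions, since actual validation logic may be specific to each policy type.

```python
def syntactic_errors(policy, schema):
    """Syntactic check: does the policy contain every field its schema requires?
    Both the policy and the schema are plain dicts here, purely for illustration."""
    missing = [field for field in schema["required_fields"] if field not in policy]
    return [f"missing required field '{field}'" for field in missing]


def semantic_errors(policy, known_resources):
    """Semantic check: does the policy refer to something that can actually be enforced?
    A policy naming a resource that does not exist parses fine but has no meaning."""
    errors = []
    target = policy.get("target_resource")
    if target not in known_resources:
        errors.append(f"policy references unknown resource '{target}'")
    return errors
```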
[00137] As syntactic and semantic errors may vary from one policy to another, remote policy validation may allow for a remote validation agent specifically configured to validate a specific policy or policies, such as remote validation agent(s) 1830, to perform syntactic or semantic validation. Resource manager 1820 may identify remote validation agent(s) 1830 according to the policy or policy type of the policy. For instance, a policy type for the policy may be determined so that a remote validation agent 1830 that is associated with the policy type is identified. In some embodiments, remote validation agent(s) 1830 may be implemented as part of a resource that consumes the policy (e.g., enforces the policy at runtime, such as enforcing access restrictions, configuring resource settings, or directing operations) and/or requests a policy lookup for a resource data object. Syntactic remote validation agents and semantic remote validation agents may be implemented separately, in some embodiments. The policy or policy type may specifically identify remote validation agent(s) 1830 by including a network address or endpoint to which a validation request may be directed (e.g., without any particular formatting or information for the policy), or the resource manager may send the validation request to a pre-registered remote validation agent for the policy via an interface formatted to request and obtain certain information about the policy and whether the policy is valid.
[00138] Once identified, resource manager 1820 may send a validation request 1822 including information for the policy to remote validation agent(s) 1830 to initiate validation. For example, the validation request 1822 may include a copy of the policy, or portions of the policy, which remote validation agent(s) 1830 may compare with a policy type schema for the policy. In some embodiments, remote validation agent(s) 1830 may only receive an identification of a policy as part of validation information and remote validation agent(s) 1830 may request further information (e.g., further validation content, such as data field values or a policy type schema) from resource manager 1820 or other source(s) (not illustrated). Once a validation result is reached, remote validation agents 1830 may provide a valid/invalid response to resource manager 1820. The invalid response may, in some embodiments, indicate the validation errors detected for the policy, which may be provided to client 1810 (or other associated client, not illustrated) for correction. Validated policies may be applied 1842 to distributed system resources. For example, the policy may be attached, associated, or otherwise linked to one or multiple resources so that when certain resource actions are initiated, the policy directs or controls the actions. [00139] Please note, FIG. 18 is provided as a logical illustration of remote validation for managing distributed system resources, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices, implementing a resource manager, remote validation agents, client or clients or the number, type, or arrangements of distributed system resources.
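The dispatch of a validation request to a remote validation agent might look roughly like the following sketch, which assumes the agent is reachable over HTTP at an endpoint registered per policy type and that the widely used requests library is available; the request and response shapes are illustrative only.

```python
import requests


def validate_remotely(policy, endpoints_by_policy_type, timeout_seconds=10):
    """Send a validation request to the remote validation agent registered for the
    policy's type and return (is_valid, errors)."""
    endpoint = endpoints_by_policy_type[policy["type"]]
    response = requests.post(endpoint, json={"policy": policy}, timeout=timeout_seconds)
    response.raise_for_status()
    result = response.json()  # assumed shape: {"valid": bool, "errors": [...]}
    return result.get("valid", False), result.get("errors", [])
```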
[00140] FIG. 19 is a logical block diagram illustrating a policy manager for resource management service policies applicable to provider network resources, according to some embodiments. As discussed above, policy management 330 may handle policy creation, application, lookups, and validation for policies that are applied to resource data objects in an organization. Policy management 330 may implement policy creation/application handling 1910, in various embodiments, to process policy type and policy creation requests. Consider a scenario where a client wants to introduce a new policy to allow various users of an organization in resource management service 240 to establish a payer account that identifies a user account that is financially responsible for service charges incurred in provider network 200. The client may submit a request to create a new policy type named "Payer" and a new policy schema for the newly created policy type. The policy schema may be specified in various formats, both human readable and machine readable, such as JSON or XML. Policy creation/application handling 1910 may store the "Payer" policy type (e.g., by storing the associated schema and metadata, including version information for the schema) along with other policy types 1942 in policy store 1940. In some embodiments, the stored schema may identify a remote policy validation agent and whether the validation is performed synchronously or asynchronously with respect to a validation request.
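The "Payer" example might be registered with a schema and metadata roughly like the structure below; the JSON-style layout, field names, and endpoint are invented for illustration and do not reflect an actual schema format.

```python
# Hypothetical registration payload for a new "Payer" policy type.
payer_policy_type = {
    "name": "Payer",
    "schema_version": "1.0",
    "validation": {
        "remote_agent_endpoint": "https://validation.example.internal/payer",  # placeholder
        "mode": "synchronous",  # or "asynchronous"
    },
    "schema": {
        "required_fields": ["payer_account_id", "applies_to"],
        "field_types": {"payer_account_id": "string", "applies_to": "resource_group"},
    },
}
```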
[00141] Policy store 1940 may persistently maintain policy types 1942 by persistently maintaining the policy schemas and corresponding metadata for the policy schemas. Policy store 1940 may be implemented as a database or otherwise searchable/query-able storage system to provide access to other components of policy management 330 or resource management service 240. In some embodiments, policy store 1940 may be separately implemented from policy management 330 or resource management service 240 (e.g., as part of a storage service 230). Because policy store 1940 maintains metadata for policy types 1942, policy creation/application handling 1910 may allow users to create new versions of policy schemas, identifying prior versions by schema version numbers. For instance, if after some testing, the client decides the newly created Payer policy type is not sufficient for their use case and decides to create a new version for it, the policy schema may be updated or replaced and the version number changed to indicate a later version (e.g., version 2.0). In some embodiments, multiple versions (associated with different version numbers) may be considered valid for policies, while clients may mark or indicate that some versions of a policy schema are obsolete (and should not be used).
[00142] Policy creation/application handling 1910 may also handle requests to create an instance of a policy type (i.e., a policy). For example, another client may create a new Payer policy based on the Payer policy type. The other client can submit the appropriate policy content to populate the new policy (as discussed below with regard to FIGS. 20 and 22). In some embodiments, policy validation handling 1930 may direct the performance of syntactic validation, either via a remote validation agent or through a validation agent implemented as part of policy management 330. If valid, the newly created policy may then be written as a new policy resource data object into the organization, as discussed above with regard to FIGS. 4 and 5A.
[00143] Policy management 330 may implement policy lookup handling 1920, in various embodiments, to handle lookup requests for policies (as discussed above with regard to FIG. 4). For example, policies can also be inherited in a chain from the organization down to a group, group of groups, or individual resource data object. If a policy is applied to a parent node in the hierarchy, then the child node (group, group of groups, or individual resource data object) may inherit the policy of the parent node. In this way, the policy applied to the parent node becomes the "default" policy, in the absence of any other policy applications. When there are multiple policies in the inheritance path, for example when a policy is applied at both the hierarchy and group level, different policies may have different inheritance semantics, which may have to be resolved. In one scenario, access policies may follow the semantics of a set union, where ordering does not matter (e.g., everything is allowed unless explicitly excluded). Billing policies, in another scenario, may implement a "child wins/parent appends" inheritance model where a child policy may be executed, followed by a parent policy. In such scenarios, ordering of policies matters. Thus, policy lookup handling 1920 may be configured to resolve conflicting policies according to the appropriate inheritance semantics for the policy.
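The two inheritance models mentioned above (set-union semantics for access policies and child-wins/parent-appends ordering for billing policies) could be resolved along an inheritance path as sketched below; the path representation is an assumption made for the example.

```python
def resolve_policies(path_policies, policy_kind):
    """Resolve policies collected along an inheritance path, ordered child first, root last.

    path_policies is a list of policy lists, one entry per node from the resource data
    object up to the root of the hierarchy.
    """
    if policy_kind == "access":
        # Set-union semantics: ordering does not matter and duplicates collapse.
        merged = set()
        for policies in path_policies:
            merged.update(policies)
        return merged
    if policy_kind == "billing":
        # Child wins / parent appends: child policies execute first, then parent policies.
        ordered = []
        for policies in path_policies:
            ordered.extend(policies)
        return ordered
    raise ValueError(f"unknown policy kind: {policy_kind}")
```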
[00144] In at least some embodiments, policy management 330 may implement policy validation handling 1930 to direct syntactic and semantic validation of policies via remote validation agents. As noted earlier, validation of policies may include syntax validation. Syntax validation may evaluate whether a policy is syntactically correct with respect to the policy schema of the policy type for the policy so that the policy can be parsed and evaluated by backend systems that look up the policy. Syntactic validation may be performed, in some embodiments, when authored, as discussed below with regard to FIG. 20. In addition to syntactic validation, some policies may undergo semantic validation. As noted above, semantic validation may be performed to ensure that policy content is meaningful, so that a resource or other information specified in the policy results in a policy that can be enforced. For example, semantic validation could determine whether a user account identifier specified in the "Payer" policy example discussed above is an account in the organization that has a valid payment instrument (e.g., a valid source of funds to pay for incurred expenses). Policy validation handling 1930 may direct validation upon policy applications and resource or organization changes/modifications, in order to ensure that the changes do not invalidate policies that are applied within the organization. For example, a modification to a resource (e.g., a payer account leaving an organization or group) may be validated to ensure the modification does not invalidate the policy (e.g., that the payer account does not leave the organization or group without a valid payment instrument). As each policy may have different semantic validation logic, each policy may have a separately configurable remote validation agent.
[00145] In at least some embodiments, policy validation handling 1930 may direct synchronous or asynchronous validation of policies. For example, a policy (or policy schema) may specify that validation for the policy is performed synchronously, so that the client that initiated the validation request (e.g., a client attempting to attach a policy or enforce a policy) waits for the validation result before continuing to operate. For a policy (or policy schema) that specifies asynchronous validation, in contrast, the client may continue operating without waiting for the validation result. In cases where asynchronous validation is performed (e.g., long running validations), policy validation handling 1930 may track the state of validation for a policy (e.g., "Validation Request Submitted," "Validation Ongoing," "Validation Success," or "Validation Error") and may provide the state of the validation for the policy to the client in response to requests (e.g., the client may periodically poll for the state of the validation). In some embodiments, policy validation handling 1930 may provide a recommendation to a policy creator (e.g., to a user account that created the policy) to change the policy validation behavior to synchronous or asynchronous depending on previous performance of validating the policy. For example, for validation of policies that are performed repeatedly (e.g., by a service or client that validates and then enforces or consumes the policy), a change in validation behavior may offer better performance (e.g., by not tying up resources with synchronous behavior while waiting for a long-running policy validation, or by not spending time releasing and polling for validation state with asynchronous behavior when the validation completes quickly).
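The validation-state tracking for asynchronous validations might be as simple as the sketch below, using the state names from this paragraph; the tracker class itself is an illustrative assumption.

```python
VALIDATION_STATES = (
    "Validation Request Submitted",
    "Validation Ongoing",
    "Validation Success",
    "Validation Error",
)


class ValidationStateTracker:
    """Tracks asynchronous validation state per policy so that clients can poll for it."""

    def __init__(self):
        self._states = {}  # policy identifier -> state string

    def submit(self, policy_id):
        self._states[policy_id] = "Validation Request Submitted"

    def update(self, policy_id, state):
        if state not in VALIDATION_STATES:
            raise ValueError(f"unknown validation state: {state}")
        self._states[policy_id] = state

    def poll(self, policy_id):
        return self._states.get(policy_id)
```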
[00146] FIG. 20 illustrates interactions to manage policy types and policies in a resource management service, according to some embodiments. Client 2010, which may be a client 270 of provider network 200 as discussed above, may submit requests to resource management service 240 via interface 310. Interface 310 may provide an API for requests from client 2010, which may be formatted and sent according to the API via a command line interface or graphical user interface, such as discussed below with regard to FIG. 22. As indicated at 2012, client 2010 may send a request to create, modify, or delete a policy type maintained at resource management service 240 (e.g., in policy store 1940). To create or modify the policy type, a policy schema or changes to a policy schema may be specified (e.g., by including a schema data object or file). As noted above, the policy schema may be described by a script or language which may define allowable structure and values for a policy type. Resource management service 240 may update the policy type in policy store 1940 in accordance with the request (e.g., creating a new policy type and storing the policy schema and related metadata, updating the policy schema and metadata, or deleting the policy schema and metadata). Resource management service 240 may then acknowledge the completion of the request 2014.
[00147] Client 2010 may send a request to create or modify a policy 2022 to resource management service 240. The created or modified policy may be an instance of a policy type. Multiple policies may be created for a single policy type so that policies may be configured differently for application to resources in different circumstances. The creation request or update request 2022 may include policy content that defines actions taken (or not taken) in certain conditions. For instance, the creation request may specify a new policy for a resource launch policy type to describe the actions to be taken when a compute resource is launched in the provider network (e.g., a condition describing the type of compute instance to be launched, configuration action(s) to take for the compute instance to be launched). In response to receiving a request to create/update a policy, resource management service 240 may request syntactic validation 2024 from a remote validation agent 2020. Remote validation agent 2020 may be another resource implemented in the provider network, such as a virtual compute instance or server resource configured to handle validation requests from remote management service for the policy type. In at least some embodiments, the request for syntactic validation may include the policy schema for the policy (or remote validation agent 2020 may maintain or separately request the policy schema from resource management service 240) and the policy content to be validated (or an identifier so that the policy content may be retrieved from resource management service 240). Remote validation agent 2020 may perform a syntactic validation by comparing the policy content of the created/updated policy with the policy schema for the policy type to determine whether the policy content violates any of the allowed structure (e.g., ordering of data fields) or content (e.g., data types or resource types— such as allowing a storage resource to be specified when the policy schema describes a computing resource). Remote validation agent 2020 may then provide syntactic validation results 2026 to resource management service 240 (e.g., indicating that the policy is valid or that error(s) are detected— and possibly include the detected error(s)). If validation fails, then failure indication 2028 may be provided to client 2010. If the created/updated policy is valid, then resource management service 240 may store the new policy object or update the existing policy object 2032 in hierarchical data store 350. Upon acknowledgment 2034 of successfully storing/updating the policy object in hierarchical data store 350, resource management service 240 may then acknowledge the success of the creation or modification request 2036.
[00148] Client 2010 may send a request to delete a policy 2042 to resource management service 240. Resource management service 240 may send a corresponding request 2044 to delete the policy object from hierarchical data store 350. Upon acknowledgement 2046 of successful deletion of the policy data object, resource management service 240 may send an acknowledgment of the policy deletion 2048 to client 2010.
[00149] FIG. 21 illustrates interactions to attach policies to resource data objects, according to some embodiments. Client 2110, which may be the same or different from client 2010, may send a request to apply a policy 2112 (that has been created, as discussed above in FIG. 20) to a resource data object (e.g., to a group resource data object or an individual resource data object) to resource management service via interface 310. Resource management service 240 may then identify a remote validation agent for the policy (e.g., as may be identified in the policy or policy schema) and send a validation request 2122 to remote validation agent 2120. For example, the policy or policy schema for the type of policy may include a network endpoint (e.g., a network address, such as an Internet Protocol (IP) address) to which the validation request is sent. In some embodiments, remote validation agent 2120 may be preregistered with resource management service 240 so that every time a policy of a policy type associated with remote validation agent 2120 is received, the validation request may be sent to remote validation agent 2120. The semantic validation request 2122 and/or response may be formatted according to an API for validation requests and responses or may be an event or trigger indication configured by the policy or policy schema (e.g., an API request formatted according to an interface for remote validation agent 2120). Remote validation agent 2120, similar to remote validation agent 2020 discussed above in FIG. 20, may be another resource implemented in the provider network, such as a virtual compute instance or server resource configured to handle validation requests from remote management service for the policy type.
[00150] Remote validation agent 2120 may perform semantic validation with respect to the policy and the attached resource data object. For example, if attaching the policy to the resource gives a resource, such as a compute instance, access to a storage resource, semantic validation may determine whether the identified resource data object is a compute instance, and whether or not an instance of that type is allowed to have access to the storage resource. Semantic validation may validate the content of the policy to determine whether or not any of the actions or conditions defined in the policy violates any business or operational logic or rules or is otherwise unenforceable. Remote validation agent 2120 may send semantic validation results 2124 back to resource management service 240. If the semantic validation fails, then resource management service 240 may provide an indication of validation failure for the policy and reject the request to attach the policy to the resource data object. The indication 2114 may include validation error information so that corrections to the policy may be made, in some embodiments.
[00151] If the policy is determined to be valid, then resource management service 240 may send a request to update a hierarchy in hierarchical data store 350 to link the policy to the data object 2132. Hierarchical data store 350 may write the link to the stored hierarchy and return an update acknowledgement 2134. In turn, resource management service 240 may return an acknowledgement 2116 of the policy attachment to client 2110.
[00152] As noted above, policy types and policies may be user authored or specified. In this way, custom policies may be created, managed, and applied to resources in a distributed system, such as provider network 200, and validated according to a custom policy schema for the policy and custom semantic validation rules (e.g., business logic specific to a resource, or to a service implementing a resource, to which the policy is applied). FIG. 22 illustrates an example graphical user interface for creating and editing policies, according to some embodiments.
[00153] As illustrated in FIG. 22, policy creation interface 2200 may be a graphical user interface hosted or provided by a network-based site (e.g., provider network website) or be a local GUI implemented at a client of provider network 200 (e.g., built on top of various APIs of provider network 200). Policy creation interface 2200 may implement a policy selection area 2210 to display various options for triggering the creation or modification of policies or policy types. For example, select policy type 2222 may be a drop-down list, search interface, or any other kind of selection user interface component that allows a user to identify an existing policy type. As indicated at 2224, users may also select an element to upload a policy type (e.g., create a new policy schema) which can then be selected to create a new policy of that policy type. In some embodiments, policy editor 2250 may display in edit interface 2254 the policy schema for editing (not illustrated). Selection of the policy type may populate one or more possible policy templates 2232, which may be examples of policies that can be configured or filled in by a user via edit interface 2254. In at least some embodiments, a search interface for existing policy templates (or policy types) may be implemented so that users can identify a policy type or policy template that suits specified resource management needs (e.g., security, storage resources, deployment, networking, payment configuration, etc.). Upload policy/template element 2234 may allow users to select a policy template or policy for upload (which may then be edited in edit interface 2254). Similarly, select existing policy element 2242 may allow users to select a previously created policy and make changes to the policy.
[00154] Policy editor 2250 may be implemented to provide various policy content editing features, such as a text editor like edit interface 2254. To apply changes, including the creation of a new policy, user interface element 2260 may be selected. Note, however, that in at least some embodiments, policy type creation, policy creation, policy update, or policy type update may be performed via a series of user interface elements or windows (e.g., a policy type selection wizard, a policy type creation wizard, a policy type update wizard, a policy template selection wizard, a policy creation wizard, a policy edit wizard, etc.), or some other form or combination of graphical user interface elements, and thus FIG. 22 is not intended to be limiting.
[00155] Although FIGS. 2 - 22 have been described and illustrated in the context of a provider network implementing a resource management service for resources of multiple different services in the provider network, the various components illustrated and described in FIGS. 2 - 22 may be easily applied to other resource management systems, components, or devices. For example, private systems and networks implementing multiple system resources may implement remote policy validation for managing the behavior of the system resources. As such, FIGS. 2 - 22 are not intended to be limiting as to other embodiments of a system that may implement a resource management system for system resources. FIG. 23 is a high-level flowchart illustrating methods and techniques to implement remote policy validation for managing distributed system resources, according to some embodiments. Various different systems and devices may implement the various methods and techniques described below, either singly or working together. For example, a resource management service such as described above with regard to FIGS. 2 - 22 may be configured to implement the various methods. Alternatively, a combination of different systems and devices may implement these methods. Therefore, the above examples and/or any other systems or devices referenced as performing the illustrated method are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices.
[00156] As indicated at 2310, policies applicable to manage resource(s) in a distributed system may be maintained. For example, a hierarchical data store, such as discussed above with regard to FIGS. 3 - 22, may be implemented to maintain resource data objects and policies for managing resources corresponding to the data objects. The maintained policies may be made applicable to a resource by associating the policies with a resource (e.g., creating a link in the hierarchy between the policy and resource data object) so that when a policy consumer (e.g., a system, service, or control that manages the resource) checks to see whether policies are enforced against the resource, the associated policy is identified as applied to the resource. While FIGS. 2 - 22 discuss utilizing a hierarchical data store to associate policies with resources, various other data structures and/or data stores may be implemented. For example, a table indexed by resource id may be maintained that stores all policies applied to a resource in a row with the resource, so that when the policies associated with the resource need to be determined, the resource id of the resource may be looked up and applied policies read from the row.
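A minimal in-memory sketch of the resource-id-indexed table described above; the identifiers are hypothetical, and a real implementation would keep the table in a durable data store rather than process memory.

```python
from collections import defaultdict

# Table keyed by resource id; each entry holds the policies applied to that resource.
applied_policies = defaultdict(list)

def attach_policy(resource_id: str, policy_id: str) -> None:
    applied_policies[resource_id].append(policy_id)      # associate the policy with the resource

def lookup_policies(resource_id: str) -> list:
    return list(applied_policies[resource_id])           # policies a policy consumer must enforce

attach_policy("instance-1234", "launch-policy-A")
attach_policy("instance-1234", "backup-policy-B")
print(lookup_policies("instance-1234"))                  # ['launch-policy-A', 'backup-policy-B']
```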
[00157] A validation event may be detected for one of the policies, as indicated at 2320, in some embodiments. A validation event may be triggered by a policy action (e.g., creation, application, or enforcement) of a policy that results in a validation of the policy. For example, as illustrated in FIG. 20, a validation event may occur when a policy is created. Similarly, as illustrated in FIG. 21, a validation event may occur when a policy is applied (e.g., attached) or enforced (e.g., by a policy consumer, such as another network service that implements the actions specified in the policy when the conditions specified in the policy are satisfied, as discussed above with regard to FIG. 4). A validation event may also be triggered by a policy action resulting from a modification to (or an attempt to modify) a resource (e.g., adding or removing resources from a group or hierarchy).
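For illustration only, the triggering policy actions described above could be represented along the following lines; the type names are assumptions and not part of the techniques described.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PolicyAction(Enum):
    CREATE = "create"                        # policy creation (FIG. 20)
    APPLY = "apply"                          # policy attachment to a resource (FIG. 21)
    ENFORCE = "enforce"                      # enforcement by a policy consumer
    RESOURCE_MODIFIED = "resource-modified"  # e.g., adding/removing resources from a group

@dataclass
class ValidationEvent:
    policy_id: str
    action: PolicyAction
    resource_id: Optional[str] = None        # set when the action concerns a particular resource
```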
[00158] A remote validation agent may be identified according to the policy, in some embodiments, as indicated at 2330. A remote validation agent may be a validation agent implemented remotely (e.g., separated via network communication) from a resource manager or other system, component, or device that maintains the policies for managing resources in a distributed system. In some embodiments, remote validation agents may be pre-registered to associate the remote validation agent with handling certain types of validation (e.g., syntactic and/or semantic), so that the remote validation agent implements a common interface (e.g., API) format for receiving a validation request and sending validation results. In some embodiments, a remote validation agent may only be specified by a network endpoint (e.g., in a policy or policy schema for the policy). Validation information for the policy may be sent to the validation agent to initiate validation of the policy, as indicated at 2340. Validation information may include policy content, a policy schema, information about the action triggering the validation event (e.g., if a request to apply the policy to a particular resource, validation information may include the identity of and/or information about the particular resource), or any other data for performing a validation. In some embodiments, validation information may include a request for a specific type of validation (e.g., semantic or syntactic) if both may be performed by the remote validation agent. In some embodiments, validation information may include an identifier of the policy, as discussed below with regard to FIG. 24, which the remote validation agent may then use to obtain appropriate validation information (either from a resource manager or other source).
[00159] As indicated at 2350, a validation result may be received from the remote validation agent, in some embodiments. If the validation result indicates that the policy is not valid, as indicated by the negative exit from 2360, then the policy action triggering the validation may be denied, as indicated at 2380. A denial or other failure indication may be provided to a requesting client to block, stop, or disallow the policy action. If the validation result indicates that the policy is valid, as indicated by the positive exit from 2360, then the policy action triggering the validation event with respect to resource(s) in the distributed system may be allowed, as indicated at 2370. For example, the requested policy creation, application, or enforcement may proceed.
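The resource manager side of this flow (elements 2330 through 2380) might be sketched as follows; the three callables are hypothetical hooks standing in for behavior described above, not functions of any actual service.

```python
from typing import Callable

def handle_validation_event(policy: dict,
                            triggering_action: str,
                            resource_id: str,
                            identify_agent: Callable[[dict], str],
                            send_validation: Callable[[str, dict], dict],
                            perform_action: Callable[[], None]) -> None:
    endpoint = identify_agent(policy)                     # 2330: e.g., endpoint named in the policy schema
    validation_info = {                                   # 2340: information the agent needs
        "policy_id": policy["policy_id"],
        "policy_content": policy["content"],
        "triggering_action": triggering_action,           # e.g., "create", "apply", or "enforce"
        "resource_id": resource_id,
    }
    result = send_validation(endpoint, validation_info)   # 2340/2350: remote call and result
    if not result.get("valid", False):                    # 2360: negative exit
        raise PermissionError(f"policy action denied: {result.get('errors')}")   # 2380
    perform_action()                                      # 2370: allow the triggering policy action
```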
[00160] FIG. 24 is a high-level flowchart illustrating methods and techniques to implement policy validation at a remote validation agent, according to some embodiments. As indicated at 2410, a validation request for a policy may be received at a validation agent from a resource manager for a distributed system, in some embodiments. The validation request may include validation information, which as noted above, may include a variety of information, such as policy content, a policy schema, information about the action triggering the validation event (e.g., if a request to apply the policy to a particular resource, validation information may include the identity of and/or information about the particular resource), validation type (e.g., syntactic or semantic), or any other data for performing a validation. In some embodiments, the validation information may not include all the information needed to perform the validation, as indicated at 2420 (e.g., if the validation request includes a policy identifier but no policy content). If not, then the remote validation agent may request additional information from one or more sources (e.g., policy content from the resource manager, information about resources identified in the policy from other network services, such as whether a specified resource id is valid or allowed to perform an action specified by the policy), as indicated at 2430.
[00161] As indicated at 2440, the policy content may be evaluated to determine whether the policy is valid. For example, syntactic validation may evaluate whether a policy is syntactically correct with respect to a policy schema of a policy type for the policy so that the policy can be parsed and evaluated by backend systems that look up the policy, whereas semantic validation may be performed to ensure that policy content is meaningful, and thus enforceable, so that a resource or other information specified in the policy results in a policy that can be enforced. Because the remote validation agent may be customized to perform validation based on knowledge that the resource manager does not have (e.g., whether identifiers included in a policy exist, whether the resources identified in the policy can be configured in a particular way, whether a user account can be authorized to access certain information, etc.), the remote validation agent may also access or obtain other information that the resource manager does not have (or understand), some of which may be obtained as indicated at element 2430 above, in order to perform the validation. Once validation is complete, a validation result may be sent to the resource manager, as indicated at 2450. The result may identify errors in the event that the policy is determined to be invalid.
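Reusing the validate_syntax and validate_semantics sketches above, the agent side of FIG. 24 might be organized along these lines; fetch_policy_content is a hypothetical callback for obtaining information the request omitted.

```python
def handle_validation_request(request: dict, fetch_policy_content) -> dict:
    """Sketch of a remote validation agent's handler (elements 2410-2450)."""
    content = request.get("policy_content")
    if content is None:                                   # 2420/2430: obtain missing information
        content = fetch_policy_content(request["policy_id"])
    if request.get("validation_type") == "syntactic":     # 2440: evaluate the policy content
        errors = validate_syntax(content, request["policy_schema"])
    else:
        errors = validate_semantics(content, request["resource_id"])
    return {"valid": not errors, "errors": errors}        # 2450: result returned to the resource manager
```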
[00162] The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as in FIG. 25) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may be configured to implement the functionality described herein (e.g., the functionality of various servers and other components that implement the directory storage service and/or storage services/systems described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
[00163] FIG. 25 is a block diagram illustrating a computer system configured to implement different hierarchies of resource data objects for managing system resources, according to various embodiments, as well as various other systems, components, services or devices described above. For example, computer system 2500 may be configured to implement various components of a resource management service, hierarchical data store, or other provider network services, in different embodiments. Computer system 2500 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device.
[00164] Computer system 2500 includes one or more processors 2510 (any of which may include multiple cores, which may be single or multi-threaded) coupled to a system memory 2520 via an input/output (I/O) interface 2530. Computer system 2500 further includes a network interface 2540 coupled to I/O interface 2530. In various embodiments, computer system 2500 may be a uniprocessor system including one processor 2510, or a multiprocessor system including several processors 2510 (e.g., two, four, eight, or another suitable number). Processors 2510 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 2510 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2510 may commonly, but not necessarily, implement the same ISA. The computer system 2500 also includes one or more network communication devices (e.g., network interface 2540) for communicating with other systems and/or components over a communications network (e.g., Internet, LAN, etc.). For example, a client application executing on system 2500 may use network interface 2540 to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the resource management or other systems implementing multiple hierarchies for managing system resources described herein. In another example, an instance of a server application executing on computer system 2500 may use network interface 2540 to communicate with other instances of the server application (or another server application) that may be implemented on other computer systems (e.g., computer systems 2590).
[00165] In the illustrated embodiment, computer system 2500 also includes one or more persistent storage devices 2560 and/or one or more I/O devices 2580. In various embodiments, persistent storage devices 2560 may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device. Computer system 2500 (or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices 2560, as desired, and may retrieve the stored instruction and/or data as needed. For example, in some embodiments, computer system 2500 may host a storage system server node, and persistent storage 2560 may include the SSDs attached to that server node.
[00166] Computer system 2500 includes one or more system memories 2520 that are configured to store instructions and data accessible by processor(s) 2510. In various embodiments, system memories 2520 may be implemented using any suitable memory technology (e.g., one or more of cache, static random access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR 10 RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory). System memory 2520 may contain program instructions 2525 that are executable by processor(s) 2510 to implement the methods and techniques described herein. In various embodiments, program instructions 2525 may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or in any other language such as C/C++, Java™, etc., or in any combination thereof. For example, in the illustrated embodiment, program instructions 2525 include program instructions executable to implement the functionality of hierarchy storage nodes that maintain versions of hierarchical data structures or components of a transaction log store that maintain transaction logs for hierarchical data structures, in different embodiments. In some embodiments, program instructions 2525 may implement multiple separate clients, server nodes, and/or other components.
[00167] In some embodiments, program instructions 2525 may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, Solaris™, MacOS™, Windows™, etc. Any or all of program instructions 2525 may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments. A non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Generally speaking, a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system 2500 via I/O interface 2530. A non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system 2500 as system memory 2520 or another type of memory. In other embodiments, program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2540.
[00168] In some embodiments, system memory 2520 may include data store 2545, which may be configured as described herein. For example, the information described herein as being stored by the hierarchy storage nodes or transaction log store described herein may be stored in data store 2545 or in another portion of system memory 2520 on one or more nodes, in persistent storage 2560, and/or on one or more remote storage devices 2570, at different times and in various embodiments. In general, system memory 2520 (e.g., data store 2545 within system memory 2520), persistent storage 2560, and/or remote storage 2570 may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, configuration information, and/or any other information usable in implementing the methods and techniques described herein.
[00169] In one embodiment, I/O interface 2530 may be configured to coordinate I/O traffic between processor 2510, system memory 2520 and any peripheral devices in the system, including through network interface 2540 or other peripheral interfaces. In some embodiments, I/O interface 2530 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2520) into a format suitable for use by another component (e.g., processor 2510). In some embodiments, I/O interface 2530 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 2530 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface 2530, such as an interface to system memory 2520, may be incorporated directly into processor 2510.
[00170] Network interface 2540 may be configured to allow data to be exchanged between computer system 2500 and other devices attached to a network, such as other computer systems 2590 (which may implement embodiments described herein), for example. In addition, network interface 2540 may be configured to allow communication between computer system 2500 and various I/O devices 2550 and/or remote storage 2570. Input/output devices 2550 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 2500. Multiple input/output devices 2550 may be present in computer system 2500 or may be distributed on various nodes of a distributed system that includes computer system 2500. In some embodiments, similar input/output devices may be separate from computer system 2500 and may interact with one or more nodes of a distributed system that includes computer system 2500 through a wired or wireless connection, such as over network interface 2540. Network interface 2540 may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface 2540 may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface 2540 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In various embodiments, computer system 2500 may include more, fewer, or different components than those illustrated in FIG. 25 (e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.)
[00171] It is noted that any of the distributed system embodiments described herein, or any of their components, may be implemented as one or more network-based services. In some embodiments, a network-based service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A network-based service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the network-based service in a manner prescribed by the description of the network-based service's interface. For example, the network-based service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations.
[00172] In various embodiments, a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network-based services request. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). To perform a network-based services request, a network-based services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the network-based service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP).
[00173] In some embodiments, network-based services may be implemented using Representational State Transfer ("RESTful") techniques rather than message-based techniques. For example, a network-based service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message.
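For illustration only, a RESTful invocation of this kind might look like the following sketch; the endpoint URL and resource path are hypothetical.

```python
import json
import urllib.request

# Parameters travel in the URL path and JSON body of an HTTP PUT rather than a SOAP envelope.
body = json.dumps({"description": "updated policy content"}).encode("utf-8")
request = urllib.request.Request(
    "https://service.example.com/policies/launch-policy-A",   # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# with urllib.request.urlopen(request) as response:
#     print(response.status)
```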
[00174] The various methods as illustrated in the figures and described herein represent example embodiments of methods. The methods may be implemented manually, in software, in hardware, or in a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.

[00175] Although the embodiments above have been described in considerable detail, numerous variations and modifications may be made as would become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such modifications and changes and, accordingly, the above description to be regarded in an illustrative rather than a restrictive sense.
[00176] The foregoing may be better understood in view of the following clauses:
Clause 1. A system, comprising:
a hierarchical data store that stores different hierarchies of a plurality of resource data objects, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in the system;
at least one processor and a memory storing program instructions that cause the at least one processor to implement a system resource manager, configured to:
access the hierarchical data store to evaluate policy lookup requests for different resource data objects with respect to the different hierarchies of the resource data objects;
receive a request to modify one of the different hierarchies of the resource data objects; and
modify the one hierarchy in the hierarchical data store according to the request, wherein the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resource data objects identified by other ones of the hierarchies when processing subsequent policy lookup requests with respect to the different hierarchies.
Clause 2. The system as recited in clause 1, wherein the modification to the hierarchy comprises:
the addition of a new policy to be identified as applicable by the hierarchy to at least one of the resources in the system; or
the removal of a policy identified as applicable by the hierarchy to at least one of the resources in the system.
Clause 3. The system as recited in clause 1, wherein the modification to the one hierarchy is performed in response to a determination that the request is received from a client authorized to access the one hierarchy.

Clause 4. The system as recited in clause 1, wherein the system is a provider network that implements a plurality of different network-based services, wherein the resources are implemented as part of the different network-based services, and wherein the policy lookup requests are received from different ones of the network-based services.
Clause 5. A method, comprising:
performing, by one or more computing devices:
maintaining different hierarchies of a plurality of resource data objects that are stored in a data store, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in a system;
receiving a request to modify one of the different hierarchies of the resource data objects; and
modifying the one hierarchy according to the request, wherein the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resources identified by other ones of the hierarchies.
Clause 6. The method as recited in clause 5, wherein modifying the one hierarchy according to the request comprises changing the arrangement of at least one of the resource data objects within the hierarchy.
Clause 7. The method as recited in clause 5, wherein modifying the one hierarchy according to the request comprises adding a new policy to be identified as applicable by the hierarchy to at least one of the resources in the system.
Clause 8. The method as recited in clause 7, wherein modifying the one hierarchy according to the request further comprises determining that adding the new policy is permitted in the hierarchy, wherein adding the new policy is performed in response to determining that adding the new policy is permitted in the hierarchy.
Clause 9. The method as recited in clause 5, further comprising:
receiving a request to add another resource data object to those resource data objects describing resources in the system;
determining one or more of the hierarchies to include the other resource data object; and adding the other resource data object in the determined one or more hierarchies.
Clause 10. The method as recited in clause 5, further comprising:
receiving a policy lookup request for one of the resource data objects; evaluating one or more of the hierarchies to determine one or more policies identified as applicable to the resource corresponding to the resource data object; and providing the determined one or more policies in response to the policy lookup request.
Clause 11. The method as recited in clause 10, further comprising:
detecting one or more conflicts between the one or more determined policies identified as applicable to the resource corresponding to the resource data object; and resolving the one or more conflicts to determine a resolved version of the determined one or more policies, wherein the resolved version of the one or more policies is provided in response to the policy lookup request.
Clause 12. The method as recited in clause 5, further comprising:
receiving a request to create another hierarchy for the resource data objects; and creating the other hierarchy for the resource data objects such that a subsequently received policy lookup request for one of the resource data objects evaluates the other hierarchy in addition to the hierarchies to determine policies applicable to the resource corresponding to the resource data object.
Clause 13. The method as recited in clause 5, wherein the data store is a separate hierarchical data store, wherein modifying the one hierarchy according to the request comprises sending one or more requests to the hierarchical data store to modify the hierarchy in the separate hierarchical data store.
Clause 14. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
processing policy lookup requests for different resource data objects with respect to different hierarchies of a plurality of resource data objects that are stored in a data store, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in a system that are provided in response to the policy lookup requests; receiving a request to modify one of the different hierarchies of the resource data objects; and
modifying the one hierarchy according to the request, wherein the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resource data objects identified by other ones of the hierarchies when processing subsequent policy lookup requests with respect to the different hierarchies.
Clause 15. The non-transitory, computer-readable storage medium as recited in clause 14, wherein, in modifying the one hierarchy according to the request, the program instructions cause the one or more computing devices to implement:
adding a new policy to be identified as applicable by the hierarchy to at least one of the resources in the system; or
removing a policy identified by the hierarchy as applicable to at least one of the resources in the system.
Clause 16. The non-transitory, computer-readable storage medium as recited in clause 14, wherein, in modifying the one hierarchy according to the request, the program instructions cause the one or more computing devices to implement changing the arrangement of at least one of the resource data objects within the hierarchy.
Clause 17. The non-transitory, computer-readable storage medium as recited in clause 14, wherein the program instructions cause the one or more computing devices to further implement:
receiving a request to create another hierarchy for the resource data objects; and creating the other hierarchy for the resource data objects such that a subsequently received policy lookup request for one of the resource data objects evaluates the other hierarchy in addition to the hierarchies to determine policies applicable to the resource corresponding to the resource data object.
Clause 18. The non-transitory, computer-readable storage medium as recited in clause 14, wherein the program instructions cause the one or more computing devices to further implement maintaining a historical version of the hierarchy prior to the modification as part of a plurality of respective historical versions maintained for the hierarchies.
Clause 19. The non-transitory, computer-readable storage medium as recited in clause 18, wherein the program instructions cause the one or more computing devices to further implement processing one or more other policy lookup requests for different resource data objects with respect to the hierarchies at one or more different points in time based, at least in part, on the historical versions maintained for the hierarchies.
Clause 20. The non-transitory, computer-readable storage medium as recited in clause 14, wherein the resources are implemented as part of the different network-based services, wherein the one or more computing devices implement a resource management service as part of the provider network, and wherein the policy lookup requests are received from different ones of the network-based services.
[00177] Additionally, the foregoing may be better understood in view of the following clauses:
Clause 21. A system, comprising:
a hierarchical data store that stores a hierarchical data structure;
at least one processor and a memory storing program instructions that cause the at least one processor to implement a storage engine, configured to:
receive, via an interface, a request to initiate a bulk edit for at least a portion of the hierarchical data structure;
create a copy of the portion of the hierarchical data structure in the hierarchical data store that is separate from the hierarchical data structure, wherein the portion of the hierarchical data structure remains available for read access; receive one or more requests to modify the portion of the hierarchical data structure;
access the hierarchical data store to perform one or more operations corresponding to the modification requests to modify the copy of the portion of the hierarchical data structure;
receive a request to commit the bulk edit for the portion of the hierarchical data structure; and
perform a transaction that atomically replaces the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure that includes the one or more modifications to the hierarchical data structure such that the modified portion of the hierarchical data structure becomes available for read and write access.
Clause 22. The system as recited in clause 21, wherein the storage engine is further configured to:
in response to the receipt of the request to initiate the bulk edit for the portion of the hierarchical data structure, block write access to the portion of the hierarchical data structure; and
upon performance of the transaction to atomically replace the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, allow write access to the copy of the portion in the hierarchical data structure.

Clause 23. The system as recited in clause 21, wherein to perform the transaction to atomically replace the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, the storage engine is configured to:
remove a link from the portion of the hierarchical data structure from the parent node; and
add a link from the copy of the portion of the hierarchical data structure to a parent node for the portion of the hierarchical data structure.
Clause 24. The system as recited in clause 21, wherein the storage engine is implemented as part of a resource management service for a provider network, wherein the provider network implements a plurality of different network-based services, wherein the hierarchical data structure comprises resource data objects that identify policies applicable to the behavior of resources corresponding to the resource data objects, and wherein the resources are implemented as part of the different network-based services.
Clause 25. A method, comprising:
performing, by one or more computing devices:
receiving a request to perform a plurality of modifications to at least a portion of a hierarchical data structure, wherein the hierarchical data structure comprises resource data objects that identify policies applicable to the behavior of a plurality of resources in a system that correspond to the resource data objects;
creating a copy of the portion of the hierarchical data structure that is separate from the hierarchical data structure, wherein the portion of the hierarchical data structure remains available for read access;
performing one or more operations to apply the modifications to the copy of the portion of the hierarchical data structure;
receiving a request to commit the modifications to the portion of the hierarchical data structure; and
atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure that includes the modification to the hierarchical data structure such that the modified portion of the hierarchical data structure becomes available for read and write access.

Clause 26. The method as recited in clause 25, further comprising: in response to receiving the request to perform the modifications to the portion of the hierarchical data structure, blocking write access to the portion of the hierarchical data structure; and
upon atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, allowing write access to the copy of the portion in the hierarchical data structure.
Clause 27. The method as recited in clause 26,
wherein blocking write access to the portion of the hierarchical data structure comprises removing a lock indication for the portion of the data structure from a lock structure in the hierarchical data structure; and
wherein allowing write access to the copy of the portion in the hierarchical data structure comprises adding the lock indication for the portion of the data structure back to the lock structure in the hierarchical data structure.
Clause 28. The method as recited in clause 25, wherein the portion of the hierarchical data structure remains available for write access, and wherein the method further comprises: prior to committing the modifications to the portion of the hierarchical data structure, replicating one or more writes received for the portion of the hierarchical data structure to the copy of the hierarchical data structure.
Clause 29. The method as recited in clause 25, wherein the request to perform the modifications to the portion of the hierarchical data structure is one of a plurality of received requests to perform respective modifications to the same portion of the hierarchical data structure.
Clause 30. The method as recited in clause 29, wherein atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure is further performed in response to determining that the modifications to commit do not conflict with the plurality of received requests for the same portion of the hierarchical data structure.
Clause 31. The method as recited in clause 25, further comprising:
wherein atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure comprises removing a link from the portion of the hierarchical data structure from the parent node; and subsequent to atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, reclaiming storage space in one or more storage devices maintaining the unlinked portion of the hierarchical data structure.
Clause 32. The method as recited in clause 25, wherein the request to perform the modifications to the portion of the hierarchical data structure and the request to commit the modifications are received via a programmatic interface, and wherein the method further comprises:
receiving, via the programmatic interface, one or more requests identifying the modifications to perform to the portion of the hierarchical data structure, wherein the one or more operations to apply the modifications to the copy of the portion of the hierarchical data structure are performed in response to receiving the requests identifying the modifications.
Clause 33. The method as recited in clause 32, wherein the one or more computing devices implement a system resource manager for the system.
Clause 34. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
receiving a request to perform a plurality of modifications to at least a portion of a hierarchical data structure, wherein the hierarchical data structure comprises resource data objects that identify policies applicable to the behavior of a plurality of resources in a system that correspond to the resource data objects; creating a copy of the portion of the hierarchical data structure that is separate from the hierarchical data structure, wherein the portion of the hierarchical data structure remains available for read access;
receiving one or more requests identifying the modifications to perform to the portion of the hierarchical data structure;
performing one or more operations to apply the modifications to the copy of the portion of the hierarchical data structure;
receiving a request to commit the modifications to the portion of the hierarchical data structure; and
atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure that includes the modification to the hierarchical data structure such that the modified portion of the hierarchical data structure becomes available for read and write access.

Clause 35. The non-transitory, computer-readable storage medium as recited in clause 34, wherein the portion of the hierarchical data structure remains available for write access, and wherein the program instructions cause the one or more computing devices to further implement: prior to committing the modifications to the portion of the hierarchical data structure, replicating one or more writes received for the portion of the hierarchical data structure to the copy of the hierarchical data structure.
Clause 36. The non-transitory, computer-readable storage medium as recited in clause 35, wherein the program instructions cause the one or more computing devices to further implement:
prior to committing the modifications to the portion of the hierarchical data structure, determining that the replicated one or more writes do not conflict with the modifications to the portion of the hierarchical data structure.
Clause 37. The non-transitory, computer-readable storage medium as recited in clause 34, wherein the program instructions cause the one or more computing devices to further implement:
in response to receiving the request to perform the modifications to the portion of the hierarchical data structure, blocking write access to the portion of the hierarchical data structure; and
upon atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, allowing write access to the copy of the portion in the hierarchical data structure.
Clause 38. The non-transitory, computer-readable storage medium as recited in clause 34, wherein the request to perform the modifications to the portion of the hierarchical data structure, the one or more requests identifying the modifications to perform to the portion of the hierarchical data structure, and the request to commit the modifications are received via a graphical user interface.
Clause 39. The non-transitory, computer-readable storage medium as recited in clause 34, wherein, in atomically replacing the portion of the hierarchical data structure with the copy of the portion of the hierarchical data structure, the program instructions cause the one or more computing devices to implement:
removing a link from the portion of the hierarchical data structure from the parent node; and
adding a link from the copy of the portion of the hierarchical data structure to a parent node for the portion of the hierarchical data structure.

Clause 40. The non-transitory, computer-readable storage medium as recited in clause 34, wherein the one or more computing devices implement a resource management service for a provider network, wherein the provider network implements a plurality of different network-based services, and wherein the resources are implemented as part of the different network-based services.
[00178] Additionally, the foregoing may be better understood in view of the following clauses:
Clause 41. A system, comprising:
a plurality of compute nodes, comprising at least one processor and a memory that implement a distributed system, wherein the distributed system is operated on behalf of a plurality of user accounts;
one or more of the compute nodes, that implement an agreement manager for performance of updates to the distributed system;
the agreement manager, configured to:
receive, via an interface for the distributed system, an agreement request from one of the user accounts that proposes one or more updates to the distributed system;
determine an authorization scheme for authorization of the proposed updates; identify one or more other ones of the user accounts as approvers for the agreement request according to the authorization scheme;
provide, via the interface, respective notifications of the proposed updates to the identified user accounts for approval;
receive, via the interface, one or more responses from at least one of the user accounts identified for approval;
evaluate the one or more responses to determine that the authorization scheme for the agreement request is satisfied;
determine that the authorization scheme for the agreement request is satisfied; and direct performance of the one or more updates to the distributed system.
Clause 42. The system as recited in clause 41, wherein the agreement request identifies the authorization scheme for the agreement request, and wherein to determine the authorization scheme, the agreement manager is configured to parse the agreement request to discover the identified authorization scheme.
Clause 43. The system as recited in clause 41, wherein the authorization scheme comprises a requirement that the at least one user account approve of the proposed updates.

Clause 44. The system as recited in clause 41, wherein the distributed system is a provider network, wherein the updates describe updates to a hierarchical data structure maintained for the provider network comprising a plurality of resource data objects that identify policies applicable to the behavior of resources implemented at one or more network-based services in the provider network corresponding to the resource data objects.
Clause 45. A method, comprising:
performing, by one or more computing devices:
receiving an agreement request proposing one or more updates to a hierarchical data structure comprising a plurality of resource data objects that identify policies applicable to the behavior of resources corresponding to the resource data objects in the distributed system;
identifying one or more approvers for the agreement request according to an authorization scheme for the agreement request;
evaluating one or more responses received from at least one of the approvers to determine that the authorization scheme for the agreement request is satisfied;
determining that the authorization scheme for the agreement request is satisfied; and
performing the one or more updates to the hierarchical data structure.

Clause 46. The method as recited in clause 45, wherein the agreement request identifies the authorization scheme for the agreement request.
Clause 47. The method as recited in clause 45, wherein the authorization scheme comprises a requirement that the at least one approver approve of the proposed updates.
Clause 48. The method as recited in clause 45, wherein the authorization scheme comprises one or more quorum requirements for the identified approvers, and wherein evaluating the one or more responses received from the at least one user account identified for approval comprises verifying that the responses indicate approval of a respective minimum number of approvers identified for the one or more quorum requirements.
Clause 49. The method as recited in clause 45, further comprising:
prior to evaluating the one or more responses, receiving a request to modify the authorization scheme for the agreement request, wherein the evaluation of the one or more responses determines whether the modified authorization scheme is satisfied.
Clause 50. The method as recited in clause 45, further comprising: receiving another agreement request proposing one or more other updates to the hierarchical data structure;
identifying one or more other approvers for the other agreement request according to a different authorization scheme for the other agreement request;
sending other respective notifications of the other proposed updates to the other identified approvers;
evaluating one or more other responses received from at least one of the other approvers to determine that the different authorization scheme for the other agreement request is not satisfied; and
determining that the different authorization scheme for the other agreement request is not satisfied; and
rejecting the other agreement request.
Clause 51. The method as recited in clause 45, further comprising:
receiving another agreement request proposing one or more other updates to the hierarchical data structure;
identifying one or more other approvers for the other agreement request according to a different authorization scheme for the other agreement request;
sending other respective notifications of the other proposed updates to the other identified approvers;
determining that an expiration time limit to authorize the other agreement request is expired; and
rejecting the other agreement request.
Clause 52. The method as recited in clause 45, further comprising:
receiving another agreement request proposing one or more other updates to the hierarchical data structure;
determining that the other agreement request is a duplicate of a prior agreement request that has been received; and
rejecting the other agreement request.
Clause 53. The method as recited in clause 45, wherein the distributed system is a provider network, wherein the resources are implemented as part of one or more network-based services in the provider network, and wherein the agreement request and the responses are received via an interface of the provider network.

Clause 54. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
receiving an agreement request proposing one or more updates to a distributed system, wherein the distributed system is operated on behalf of a plurality of user accounts, wherein the plurality of user accounts correspond to resource data objects in a hierarchical data structure describing the user accounts for the distributed system, wherein the agreement request is received from one of the user accounts;
identifying one or more other ones of the user accounts as approvers for the agreement request according to an authorization scheme for the agreement request;
providing respective notifications of the proposed updates to the identified approvers; evaluating one or more responses received from at least one of the user accounts identified for approval to determine that the authorization scheme for the agreement request is satisfied;
determining that the authorization scheme for the agreement request is satisfied; and directing performance of the one or more updates to the distributed system.
Clause 55. The non-transitory, computer-readable storage medium as recited in clause 54, wherein the agreement request comprises one or more instructions to perform the one or more updates to the distributed system and wherein directing performance of the one or more updates to the distributed system comprises executing the one or more instructions in the agreement request.
Clause 56. The non-transitory, computer-readable storage medium as recited in clause 54, wherein the authorization scheme comprises one or more quorum requirements for the identified approvers, and wherein, in evaluating the one or more responses received from the at least one user account identified for approval, the program instructions cause the one or more computing devices to implement verifying that the responses indicate approval of a respective minimum number of approvers identified for the one or more quorum requirements.
Clause 57. The non-transitory, computer-readable storage medium as recited in clause 54, wherein the hierarchical data structure identifies different groups of user accounts for the plurality of user accounts, and wherein the one or more quorum requirements correspond to different ones of the groups of user accounts.

Clause 58. The non-transitory, computer-readable storage medium as recited in clause 54, wherein the program instructions cause the one or more computing devices to further implement:
prior to evaluating the one or more responses, receiving a request to modify the identified approvers for the agreement request, wherein the evaluation of the at least one response determines whether the authorization scheme is satisfied based on whether the at least one response is received from one of the modified identified approvers.
Clause 59. The non-transitory, computer-readable storage medium as recited in clause 54, wherein the program instructions cause the one or more computing devices to further implement:
receiving another agreement request proposing one or more other updates to the distributed system;
determining that the other agreement request exceeds an agreement request rate threshold for the one user account; and
rejecting the other agreement request.
Clause 60. The non-transitory, computer-readable storage medium as recited in clause 54, wherein the distributed system is a provider network, wherein the updates describe updates to a hierarchical data structure maintained for the provider network comprising a plurality of resource data objects that identify policies applicable to the behavior of resources implemented at one or more network-based services in the provider network corresponding to the resource data objects, and wherein the agreement request and the responses are received via an interface of the provider network.
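To illustrate the quorum-based authorization scheme recited in Clauses 54-57, the following non-limiting Python sketch shows one way responses from identified approvers could be checked against per-group quorum requirements. All identifiers (QuorumRequirement, AgreementRequest, authorization_scheme_satisfied, the example account names) are assumptions introduced for illustration and do not appear in the disclosure.

```python
# Illustrative sketch only: the class and function names below are assumptions,
# not identifiers from the disclosure.
from dataclasses import dataclass, field


@dataclass
class QuorumRequirement:
    """Minimum number of approvals required from a named group of accounts."""
    group: str
    approvers: set            # user accounts identified as approvers for this group
    minimum_approvals: int


@dataclass
class AgreementRequest:
    proposer: str
    updates: list                                  # proposed updates to the distributed system
    quorums: list = field(default_factory=list)    # one or more QuorumRequirement


def authorization_scheme_satisfied(request: AgreementRequest, responses: dict) -> bool:
    """responses maps a user account to True (approve) or False (reject)."""
    for quorum in request.quorums:
        approvals = sum(
            1 for account, approved in responses.items()
            if approved and account in quorum.approvers
        )
        if approvals < quorum.minimum_approvals:
            return False
    return True


# Example: two groups must each reach quorum before the updates are directed.
request = AgreementRequest(
    proposer="account-a",
    updates=["move account-b under organizational unit ou-1"],
    quorums=[
        QuorumRequirement("admins", {"account-b", "account-c"}, 1),
        QuorumRequirement("security", {"account-d", "account-e"}, 2),
    ],
)
responses = {"account-b": True, "account-d": True, "account-e": True}
if authorization_scheme_satisfied(request, responses):
    print("quorum met; directing performance of the proposed updates")
else:
    print("quorum not met; updates withheld")
```

In this sketch the proposed updates are directed for performance only after every group reaches its minimum number of approvals, mirroring the evaluation step described in Clause 56; any other resolution rule could be substituted.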
[00179] Additionally, the foregoing may be better understood in view of the following clauses:
Clause 61. A system, comprising:
a data store that maintains a hierarchy of resource data objects, wherein the hierarchy of the resource data objects identifies policies applicable to the behavior of resources corresponding to the resource data objects in the system;
at least one processor and a memory storing program instructions that cause the at least one processor to implement a system resource manager, configured to:
receive a request to apply a policy to one of the resource data objects;
identify a remote validation agent according to the policy;
send a request to initiate validation of the policy to the remote validation agent that comprises validation information for the policy;
receive a validation result from the validation agent that indicates that the policy is valid; and
upon receipt of the validation result that indicates that the policy is valid, apply the policy to the one resource data object.
Clause 62. The system as recited in clause 61, wherein the validation of the policy performed at the remote validation agent is semantic validation, and wherein the system resource manager is further configured to:
prior to the receipt of the request to apply the policy:
receive a request to create the policy;
send a request to initiate syntactic validation of the policy to the same remote validation agent or a different remote validation agent;
receive a different validation result from the same remote validation agent or the different remote validation agent that was sent the request to initiate syntactic validation, wherein the different validation result indicates that the policy is syntactically valid; and
create a policy object in the data store that is available for application.
Clause 63. The system as recited in clause 61, wherein the data store is a hierarchical data store, and wherein to apply the policy to the one resource data object, the system resource manager is configured to link a policy data object for the policy in the hierarchical data store to the one resource data object.
Clause 64. The system as recited in clause 61, wherein the system is a provider network that implements a plurality of different network-based services, wherein the resources are implemented as part of the different network-based services, and wherein the system resource manager is implemented as another one of the network-based services.
Clause 65. A method, comprising:
performing, by one or more computing devices:
detecting a policy validation event for a policy applicable to manage one or more resources in a distributed system, wherein respective resource data objects corresponding to a plurality of resources in the distributed system including the one or more resources are maintained in a hierarchical data structure in a hierarchical data store, wherein the respective resource data objects identify policies including the policy applicable to the resources in the distributed system;
sending validation information for the policy to a remote validation agent identified according to the policy to initiate validation of the policy;
receiving a validation result from the remote validation agent; and
allowing or denying a policy action that triggered the policy validation event according to the received validation result.
Clause 66. The method as recited in clause 65, wherein the validation of the policy initiated at the remote validation agent is a semantic policy evaluation that evaluates content of the policy to determine whether the policy is enforceable.
Clause 67. The method as recited in clause 65,
wherein the one or more computing devices implement a resource manager for the distributed system; and
wherein the method further comprises:
performing, by one or more other computing device implementing the remote validation agent:
receiving the validation information for the policy;
evaluating the policy based, at least in part, on the validation information to determine whether the policy is valid; and
sending the validation result to the resource manager indicating whether the policy is valid.
Clause 68. The method as recited in clause 67, further comprising:
prior to evaluating the policy, obtaining, by the remote validation agent, additional information for the policy from one or more sources.
Clause 69. The method as recited in clause 68, wherein at least one of the one or more sources is the resource manager.
Clause 70. The method as recited in clause 65, wherein the policy validation event is triggered in response to an attempt to modify one of the resources, and wherein the policy action allows or denies the modification to the resource.
Clause 71. The method as recited in clause 65, wherein the policy indicates one of synchronous or asynchronous processing behavior for the validation of the policy.
Clause 72. The method as recited in clause 65, wherein the policy is associated with a network endpoint that identifies the remote validation agent, wherein the validation information is sent to the network endpoint to initiate the validation at the remote validation agent.
Clause 73. The method as recited in clause 65, wherein the validation result indicates that the policy is valid, and wherein allowing or denying a policy action that triggered the policy validation event according to the received validation result comprises:
upon determining that the policy is valid, updating the hierarchical data structure to store a policy data object corresponding to the policy or link a policy data object to at least one of the respective resource data objects in the hierarchical data structure.
Clause 74. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
detecting a policy validation event for a policy applicable to manage one or more resources in a distributed system, wherein respective resource data objects corresponding to a plurality of resources in the distributed system including the one or more resources are maintained in a hierarchical data structure in a hierarchical data store, wherein the respective resource data objects identify policies including the policy applicable to the resources in the distributed system;
identifying a remote validation agent according to the policy;
sending a request to the remote validation agent to validate the policy, wherein the request comprises validation information for the policy;
receiving a validation result from the remote validation agent; and
allowing or denying a policy action that triggered the validation event for the policy according to the validation result.
Clause 75. The non-transitory, computer-readable storage medium as recited in clause 74, wherein the policy is one of a plurality of policy types, and wherein the validation of the policy initiated at the remote validation agent is a syntactic policy evaluation that evaluates the policy with respect to a policy schema for the one policy type to determine whether the policy conforms to the policy schema.
Clause 76. The non-transitory, computer-readable storage medium as recited in clause 74, wherein the validation of the policy initiated at the remote validation agent is a semantic policy evaluation that evaluates content of the policy to determine whether the policy is enforceable.
Clause 77. The non-transitory, computer-readable storage medium as recited in clause 74, wherein the policy action is an action to create the policy, wherein the validation result indicates that the policy is valid, and wherein, in allowing or denying a policy action that triggered the policy validation event according to the received validation result, the program instructions cause the one or more computing devices to implement:
upon determining that the policy is valid, storing the policy.
Clause 78. The non-transitory, computer-readable storage medium as recited in clause 74, wherein the detecting a policy validation event, the identifying the remote validation agent, the receiving the validation result, and the allowing or denying the policy action are performed by a resource manager for the distributed system, and wherein the program instructions cause the one or more computing devices to further implement:
receiving, at the remote validation agent, the validation information for the policy;
evaluating, by the remote validation agent, the policy based, at least in part, on the validation information to determine whether the policy is valid; and
sending, by the remote validation agent, the validation result to the resource manager indicating whether the policy is valid.
Clause 79. The non-transitory, computer-readable storage medium as recited in clause 74, wherein the policy action is an action to enforce the policy, wherein the validation result indicates that the policy is valid, and wherein, in allowing or denying a policy action that triggered the policy validation event according to the received validation result, the program instructions cause the one or more computing devices to implement:
upon determining that the policy is valid, enforcing the policy with respect to at least one of the one or more resources.
Clause 80. The non-transitory, computer-readable storage medium as recited in clause 74, wherein the distributed system is a provider network that implements a plurality of different network-based services, wherein the one or more resources are implemented as part of the different network-based services, and wherein the detecting a policy validation event, the identifying the remote validation agent, the receiving the validation result, and the allowing or denying the policy action are performed by a resource manager for the distributed system implemented as another one of the network-based services.
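To illustrate the remote policy validation flow of Clauses 61-80, the following non-limiting Python sketch stands in for a resource manager that identifies a validation agent from an endpoint associated with the policy, sends validation information, and allows or denies the policy action based on the result. The endpoint registry, the in-process "remote" call, the backup-policy schema, and all identifiers are assumptions for illustration only; an actual deployment would make a network request to the agent.

```python
# Illustrative sketch only: names, the endpoint registry, and the in-process
# "remote" call are assumptions standing in for a real network round trip.
from dataclasses import dataclass


@dataclass
class Policy:
    policy_id: str
    policy_type: str
    content: dict
    validation_endpoint: str   # identifies the remote validation agent (cf. Clause 72)


def syntactic_agent(validation_info: dict) -> dict:
    """Remote-agent sketch: checks the policy against a per-type schema (cf. Clause 75)."""
    required_fields = {"backup": {"frequency", "retention_days"}}
    schema = required_fields.get(validation_info["policy_type"], set())
    missing = schema - validation_info["content"].keys()
    detail = f"missing fields: {sorted(missing)}" if missing else "ok"
    return {"valid": not missing, "detail": detail}


# Stand-in for resolving a network endpoint to a remote validation agent.
AGENTS = {"https://validators.example.com/syntactic": syntactic_agent}


def validate_policy(policy: Policy) -> bool:
    """Resource-manager side: send validation information, act on the result."""
    agent = AGENTS[policy.validation_endpoint]          # identify agent from the policy
    validation_info = {"policy_type": policy.policy_type, "content": policy.content}
    result = agent(validation_info)                     # "remote" validation call
    return result["valid"]


policy = Policy(
    policy_id="p-123",
    policy_type="backup",
    content={"frequency": "daily", "retention_days": 30},
    validation_endpoint="https://validators.example.com/syntactic",
)
if validate_policy(policy):
    print("policy valid; applying it to the resource data object")
else:
    print("policy invalid; rejecting the policy action")
```

A semantic validation agent (cf. Clause 76) could be registered at a different endpoint and invoked the same way, with the allow/deny decision of the triggering policy action depending on its result.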

Claims

CLAIMS:
1. A system, comprising:
at least one processor and a memory storing program instructions that when executed by the at least one processor cause the at least one processor to:
maintain different hierarchies of a plurality of resource data objects that are stored in a data store, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in a system;
receive a request to modify one of the different hierarchies of the resource data objects; and
modify the one hierarchy according to the request, wherein the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resources identified by other ones of the hierarchies.
2. The system of claim 1, wherein to modify the one hierarchy according to the request, the program instructions cause the at least one processor to determine that adding the new policy is permitted in the hierarchy, wherein adding the new policy is performed in response to determining that adding the new policy is permitted in the hierarchy.
3. The system of claim 1, wherein the modification to the one hierarchy is performed in response to a determination that the request is received from a client authorized to access the one hierarchy.
4. The system of claim 1, wherein the system is a provider network that implements a plurality of different network-based services, wherein the resources are implemented as part of the different network-based services, and wherein the policy lookup requests are received from different ones of the network-based services.
5. A method, comprising:
performing, by one or more computing devices:
maintaining different hierarchies of a plurality of resource data objects that are stored in a data store, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in a system;
receiving a request to modify one of the different hierarchies of the resource data objects; and
modifying the one hierarchy according to the request, wherein the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resources identified by other ones of the hierarchies.
6. The method of claim 5, wherein modifying the one hierarchy according to the request comprises changing the arrangement of at least one of the resource data objects within the hierarchy.
7. The method of claim 5, wherein modifying the one hierarchy according to the request comprises adding a new policy to be identified as applicable by the hierarchy to at least one of the resources in the system.
8. The method of claim 7, wherein modifying the one hierarchy according to the request further comprises determining that adding the new policy is permitted in the hierarchy, wherein adding the new policy is performed in response to determining that adding the new policy is permitted in the hierarchy.
9. The method of claim 5, further comprising:
receiving a request to add another resource data object to those resource data objects describing resources in the system;
determining one or more of the hierarchies to include the other resource data object; and
adding the other resource data object in the determined one or more hierarchies.
10. The method of claim 5, further comprising:
receiving a policy lookup request for one of the resource data objects;
evaluating one or more of the hierarchies to determine one or more policies identified as applicable to the resource corresponding to the resource data object; and
providing the determined one or more policies in response to the policy lookup request.
11. The method of claim 10, further comprising:
detecting one or more conflicts between the one or more determined policies identified as applicable to the resource corresponding to the resource data object; and
resolving the one or more conflicts to determine a resolved version of the determined one or more policies, wherein the resolved version of the one or more policies is provided in response to the policy lookup request.
12. The method of claim 5, further comprising:
receiving a request to create another hierarchy for the resource data objects; and
creating the other hierarchy for the resource data objects such that a subsequently received policy lookup request for one of the resource data objects evaluates the other hierarchy in addition to the hierarchies to determine policies applicable to the resource corresponding to the resource data object.
13. The method of claim 5, wherein the data store is a separate hierarchical data store, wherein modifying the one hierarchy according to the request comprises sending one or more requests to the hierarchical data store to modify the hierarchy in the separate hierarchical data store.
14. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
maintaining different hierarchies of a plurality of resource data objects that are stored in a data store, wherein the different hierarchies of the resource data objects identify policies applicable to the behavior of resources corresponding to the resource data objects in a system;
receiving a request to modify one of the different hierarchies of the resource data objects; and
modifying the one hierarchy according to the request, wherein the modification to the hierarchy of the resource data objects modifies the application of one or more policies identified by the one hierarchy to the resources without modifying the application of other policies to the resources identified by other ones of the hierarchies.
15. The non-transitory, computer-readable storage medium of claim 14, wherein, in modifying the one hierarchy according to the request, the program instructions cause the one or more computing devices to implement:
adding a new policy to be identified as applicable by the hierarchy to at least one of the resources in the system; or
removing a policy identified by the hierarchy as applicable to at least one of the resources in the system.
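To illustrate how independent hierarchies of resource data objects could each identify policies for the same resource, as recited in claims 5, 10, and 11, the following non-limiting Python sketch maintains two hierarchies and resolves a policy lookup across both. The Hierarchy class, the last-hierarchy-wins conflict-resolution rule, and the example node names are assumptions for illustration only, not the claimed implementation.

```python
# Illustrative sketch only: the data model and merge rule are assumptions chosen
# to show independent hierarchies, not the claimed implementation.
class Hierarchy:
    def __init__(self, name):
        self.name = name
        self.parent = {}    # resource data object -> parent object (None at the root)
        self.policies = {}  # resource data object -> list of (key, value) policies

    def attach(self, child, parent=None):
        self.parent[child] = parent

    def apply_policy(self, node, policy):
        self.policies.setdefault(node, []).append(policy)

    def applicable_policies(self, node):
        """Walk from the node toward the root, collecting inherited policies."""
        found = []
        while node is not None:
            found.extend(self.policies.get(node, []))
            node = self.parent.get(node)
        return found


def policy_lookup(hierarchies, node):
    """Evaluate every hierarchy; resolve conflicts by letting a later hierarchy's
    value for the same policy key win (one possible resolution rule)."""
    resolved = {}
    for hierarchy in hierarchies:
        for key, value in hierarchy.applicable_policies(node):
            resolved[key] = value
    return resolved


billing = Hierarchy("billing")
security = Hierarchy("security")

billing.attach("ou-marketing")
billing.attach("account-1", parent="ou-marketing")
billing.apply_policy("ou-marketing", ("cost-center", "mkt-42"))

security.attach("ou-restricted")
security.attach("account-1", parent="ou-restricted")
security.apply_policy("ou-restricted", ("encryption", "required"))

# Modifying the security hierarchy changes only the policies it identifies;
# the billing hierarchy's policies are unaffected.
print(policy_lookup([billing, security], "account-1"))
# -> {'cost-center': 'mkt-42', 'encryption': 'required'}
```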
PCT/US2017/052943 2016-09-23 2017-09-22 Different hierarchies of resource data objects for managing system resources Ceased WO2018057881A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US15/275,219 US11675774B2 (en) 2016-09-23 2016-09-23 Remote policy validation for managing distributed system resources
US15/275,219 2016-09-23
US15/276,708 2016-09-26
US15/276,708 US10489424B2 (en) 2016-09-26 2016-09-26 Different hierarchies of resource data objects for managing system resources
US15/276,714 US10545950B2 (en) 2016-09-26 2016-09-26 Atomic application of multiple updates to a hierarchical data structure
US15/276,711 2016-09-26
US15/276,714 2016-09-26
US15/276,711 US10454786B2 (en) 2016-09-26 2016-09-26 Multi-party updates to distributed systems

Publications (1)

Publication Number Publication Date
WO2018057881A1 true WO2018057881A1 (en) 2018-03-29

Family

ID=60022202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/052943 Ceased WO2018057881A1 (en) 2016-09-23 2017-09-22 Different hierarchies of resource data objects for managing system resources

Country Status (1)

Country Link
WO (1) WO2018057881A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090132557A1 (en) * 2007-11-19 2009-05-21 Cohen Richard J Using hierarchical groupings to organize grc guidelines, policies, categories, and rules
WO2012068488A2 (en) * 2010-11-19 2012-05-24 Alektrona Corporation Remote asset control systems and methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SYMANTEC CORPORATION: "E-security begins with sound security policies", ANNOUNCEMENT SYMANTEC, XX, XX, 14 June 2001 (2001-06-14), XP002265695 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114020207A (en) * 2021-09-06 2022-02-08 西安电子科技大学 A tree-structured data insertion method for distributed storage network
CN114327892A (en) * 2021-12-28 2022-04-12 武汉天喻信息产业股份有限公司 FLASH resource management method, storage medium, electronic equipment and device
CN114327892B (en) * 2021-12-28 2024-05-03 武汉天喻信息产业股份有限公司 FLASH resource management method, storage medium, electronic equipment and device
US20230269229A1 (en) * 2022-02-24 2023-08-24 Google Llc Protecting Organizations Using Hierarchical Firewalls

Similar Documents

Publication Publication Date Title
US11308126B2 (en) Different hierarchies of resource data objects for managing system resources
US11341118B2 (en) Atomic application of multiple updates to a hierarchical data structure
US10454786B2 (en) Multi-party updates to distributed systems
US11675774B2 (en) Remote policy validation for managing distributed system resources
US11574070B2 (en) Application specific schema extensions for a hierarchical data structure
US11550763B2 (en) Versioning schemas for hierarchical data structures
US12174854B2 (en) Versioned hierarchical data structures in a distributed data store
JP7500589B2 (en) Managing and Organizing Relational Data Using Distributed Ledger Technology (DLT)
US7546633B2 (en) Role-based authorization management framework
RU2686594C2 (en) File service using for interface of sharing file access and transmission of represent state
RU2586866C2 (en) Differentiation of set of features of participant of leased medium and user
US12346309B2 (en) System and method for using policy to achieve data segmentation
US20080184336A1 (en) Policy resolution in an entitlement management system
US20090234880A1 (en) Remote storage and management of binary object data
MX2007014551A (en) Unified authorization for heterogeneous applications.
US20090037197A1 (en) Multi-threaded Business Programming Library
US11100129B1 (en) Providing a consistent view of associations between independently replicated data objects
US11500837B1 (en) Automating optimizations for items in a hierarchical data store
WO2018057881A1 (en) Different hierarchies of resource data objects for managing system resources
US20230095230A1 (en) Separate relationship management for application data objects
US9229787B2 (en) Method and system for propagating modification operations in service-oriented architecture
CN115618387B (en) ABAC-based authentication method, apparatus, device and computer readable medium
US11669527B1 (en) Optimized policy data structure for distributed authorization systems
US10956250B1 (en) Database interface to obtain item state on conditional operation failure
Belyaev et al. On the formalization, design, and implementation of component-oriented access control in lightweight virtualized server environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17780593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17780593

Country of ref document: EP

Kind code of ref document: A1