
WO2025193725A1 - Techniques for compute service in an overlay network - Google Patents

Techniques for compute service in an overlay network

Info

Publication number
WO2025193725A1
Authority
WO
WIPO (PCT)
Prior art keywords
hypervisor
control plane
bare metal
vcn
compute service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/019407
Other languages
French (fr)
Inventor
Sean Matthew OSBORNE
Nima JAFROODI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oracle International Corp
Publication of WO2025193725A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4416Network booting; Remote initial program loading [RIPL]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45579I/O management, e.g. providing access to device drivers or storage
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/61Installation
    • G06F8/63Image based installation; Cloning; Build to order
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44536Selecting among different versions

Definitions

  • CSPs Cloud service providers
  • Cloud service providers can offer computing infrastructure for customers using resources in several data centers.
  • CSPs can improve the availability of cloud resources by scaling the data centers.
  • scaling can result in large data center footprints with a significant number of computing devices, which require a commensurate amount of resources to operate and reserve significant computing resources for the effective management of the cloud resources themselves.
  • Embodiments of the present disclosure relate to cloud computing networks. More particularly, the present disclosure describes architectures, infrastructure, and related techniques for implementing a block storage service in a reduced footprint data center.
  • a typical CSP may provide cloud services to multiple customers. Each customer may have the ability to customize and configure the infrastructure provisioned to support their allocated cloud resources.
  • the CSP may reserve computing resources within a data center to provide certain "core" services to both customers and to other services operated by the CSP. For example, services like block storage, object storage, identity and access management, and key management and secrets services are implemented within a "service enclave" of the data center.
  • the service enclave may connect via a substrate network of computing devices (virtual machines and/or bare metal instances) hosted within the data center.
  • the substrate network may be a part of the "underlay network" of the data center, which includes the physical network connecting bare metal devices, smart network interface cards (SmartNICs) of the computing devices, and networking infrastructure like top-of-rack switches.
  • SmartNICs smart network interface cards
  • CSP customers have infrastructure provisioned in an "overlay network" comprising one or more VCNs of virtualized environments to provide resources for the customer (e.g., compute, storage, etc.).
  • the service enclave exists on dedicated hardware within the data center. Because of this, the services hosted within the service enclave are difficult to scale.
  • the dedicated computing resources for the service enclave are typically of a fixed size that depends on the largest predicted size of the data center. Expanding the service enclave can require a complicated addition of computing resources that may impact the availability of the core services to customers. Additionally, unused resources within the service enclave (e.g., if the service enclave is sized too large for the customer demand from the data center) cannot be easily made available to the customers, since the service enclave does not typically allow network access from the customer overlay network.
  • Even as the demand for cloud services grows, CSPs may want to deploy data centers that initially have the smallest physical footprint possible to meet that demand.
  • Such a footprint can improve the ease of both deploying the physical components and configuring the initial infrastructure while still allowing the data center to scale to meet customer demand.
  • the "core services" that are hosted in the service enclave can instead be implemented in the overlay network. By doing so, the core services can be scaled as the data center footprint expands.
  • the computing devices used to construct the reduced footprint data center can be homogenized, improving the initial configuration and easing the expansion of the footprint when additional, homogeneous devices are added.
  • flexible overlay network shapes are made available for both CSP core services and customers.
  • Moving compute service to the overlay can create a circular dependency with block storage service (BSS).
  • BSS block storage service
  • Embodiments described herein relate to methods, systems, and computer-readable media for implementing a Compute service in an Overlay network of a reduced footprint data center.
  • a method for a compute service in a reduced footprint data center can include executing a compute service instance at a bare metal computing device of the reduced footprint data center.
  • a control plane of the compute service can use a live image to execute the compute service instance.
  • the method can also include receiving a first indication that the bare metal instance is successfully executing.
  • the compute control plane can receive the first indication from an agent executing in the bare metal instance.
  • the method can also include the control plane sending, to the agent, information identifying a hypervisor image accessible to the agent via a network connection of the reduced footprint data center and receiving, from the agent, a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device.
  • the method can also include initiating, by the control plane, a reboot of the compute service instance, the compute service instance configured to boot using the hypervisor image on the storage device.
  • a computer system including one or more processors and one or more memories storing computer-executable instructions that, when executed by the one or more processors, cause the computer system to perform the method described above.
  • Yet another embodiment is directed to a non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more processors of a computer system, cause the computer system to perform the method described above.
  • Example 1 A method for configuring virtual machine infrastructure in a reduced footprint data center.
  • the method can include implementing, by a compute service control plane executing in the reduced footprint data center at a bare metal computing device of the reduced footprint data center, an instance of the compute service, the bare metal computing device including computing resources; executing, by the compute service control plane at the instance, a hypervisor, the hypervisor including a configuration shape corresponding to the computing resources of the bare metal computing device, the configuration shape defining a shape family of virtual machine shapes; and provisioning, by the compute service control plane, a virtual machine at the hypervisor, the virtual machine corresponding to a virtual machine shape of the shape family, the virtual machine shape defining a portion of the computing resources of the bare metal computing device managed by the hypervisor and assigned to the virtual machine.
  • Example 1.1 The method of Example 1, wherein provisioning the virtual machine includes determining a virtual machine shape from the shape family, the virtual machine shape corresponding to a portion of the computing resources available from the bare metal computing device; updating a capacity object of the hypervisor, the capacity object maintaining the available capacity of the computing resources managed by the hypervisor; and executing the virtual machine using the portion of the computing resources of the virtual machine shape.
  • Example 1.2 The method of Example 1.1, further including provisioning a second virtual machine by at least: determining, using the capacity object of the hypervisor, a second virtual machine shape from the shape family; and executing the second virtual machine using a second portion of the computing resources defined by the second virtual machine shape.
  • Example 1.3 The method of Example 1.2, further including updating the capacity object based on the second virtual machine shape.
  • Example 1.4 The method of Example 1, wherein the computing resources include a plurality of processor cores, a quantity of memory, and a plurality of storage volumes.
  • Example 1.5 The method of Example 1.4, wherein the configuration shape is a dense shape corresponding to all of the plurality of processor cores and the quantity of memory.
  • Example 1.6 The method of Example 1, wherein the virtual machine includes a ring 0 service virtual machine, and wherein the portion of the computing resources defined by the virtual machine shape includes a reserved storage volume of the plurality of storage volumes.
  • Example 2 A computing system including one or more processors, and one or more memories storing computer-executable instructions that, when executed by the one or more processors, cause the computing system to perform any of the methods of Examples 1-1.6 above.
  • Example 3 A non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a computing system, cause the computing system to perform any of the methods of Examples 1-1.6 above.
  • embodiments may be implemented by using a computer program product, comprising computer program/instructions which, when executed by a processor, cause the processor to perform the methods of Examples 1-3.1.
  • FIG.1 is a block diagram illustrating an example system architecture of a reduced footprint data center including an initialization device, according to some embodiments.
  • FIG.2A is a block diagram illustrating a conventional data center including a plurality of server racks reserved for particular functionality, according to some embodiments.
  • FIG.2B is a block diagram illustrating a reduced footprint data center in which services are in an overlay network, according to some embodiments.
  • FIG.3 is a block diagram illustrating the expansion of a reduced footprint data center, according to some embodiments.
  • FIG.4 is a block diagram illustrating networking connections between an overlay network and an underlay network in a reduced footprint data center, according to some embodiments.
  • FIG.5 is a block diagram illustrating an example architecture of a reduced footprint data center with virtual machines booting from a local boot volume, according to some embodiments.
  • FIG.6 is a block diagram illustrating an example architecture for preparing a local volume to boot a hypervisor, according to some embodiments.
  • FIG.7 is example configuration data illustrating an example hypervisor shape, according to some embodiments.
  • FIG.8 is example configuration data illustrating example virtual machine shapes, according to some embodiments.
  • FIG.9 is example configuration data illustrating an example virtual machine shape selection, according to some embodiments.
  • FIG.10 is a flow diagram of an example process for a compute service in an overlay network of a reduced footprint data center, according to some embodiments.
  • FIG.11 is a flow diagram of an example process for determining virtual machine shapes, according to some embodiments.
  • FIG.12 is a block diagram illustrating one pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • FIG.13 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • FIG.14 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • FIG.15 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • FIG.16 is a block diagram illustrating an example computer system, according to at least one embodiment.

DETAILED DESCRIPTION

  • The adoption of cloud services has seen a rapid uptick in recent times. Various types of cloud services are now provided by various different cloud service providers (CSPs).
  • CSPs cloud service providers
  • cloud service is generally used to refer to a service or functionality that is made available by a CSP to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP.
  • systems and infrastructure cloud infrastructure
  • the servers and systems that make up the CSP's infrastructure, and which are used to provide a cloud service to a customer, are separate from the customer's own on-premises servers and systems.
  • Customers can thus avail themselves of cloud services provided by the CSP without having to purchase separate hardware and software resources for the services.
  • Cloud services are designed to provide a subscribing customer easy, scalable, and on-demand access to applications and computing resources without the customer having to invest in procuring the infrastructure that is used for providing the services or functions.
  • Various different types or models of cloud services may be offered such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and others.
  • SaaS Software-as-a-Service
  • PaaS Platform-as-a-Service
  • IaaS Infrastructure-as-a-Service
  • a customer can subscribe to one or more cloud services provided by a CSP.
  • the customer can be any entity such as an individual, an organization, an enterprise, and the like.
  • a CSP is responsible for providing the infrastructure and resources that are used for providing cloud services to subscribing customers.
  • the resources provided by the CSP can include both hardware and software resources.
  • An integrated lights out manager can be a processor or processing platform integrated with a bare metal host in a data center that provides functionality for managing and monitoring the host remotely in cases where the general functionality of the host may be impaired (e.g., fault occurrence).
  • the BIOS Device(s) may be designed to enable independent and resilient operations during various boot scenarios and network disruptions.
  • the BIOS Device(s) may be configured to facilitate the initial boot processes for the reduced footprint data center, provide essential services during recovery, and ensure the region's stability, especially in power-constrained environments.
  • the BIOS Device hosts a range of functions, all of which can allow the autonomous operation of the region. For example, these functions can include DNS resolution, NTP synchronization, DHCP/ZTP configuration, and various security and provisioning services. By offering these capabilities, the BIOS Device ensures that the rack can bootstrap itself, recover from power or network-related events, and maintain essential connectivity and management functions without relying on external resources.
  • each server rack can have one BIOS device, or can have two or three BIOS devices.
  • the BIOS device can have similar hardware specifications (e.g., number of processors, amount of memory, amount of attached storage devices) as other server devices on the rack.
  • a reduced footprint data center can have a new architecture for a region in which the initial network footprint is as small as feasible (e.g., six racks, four racks, and possibly even a single rack of server devices) while still providing core cloud services and scalability for customer demands.
  • a reduced footprint data center may not segregate resources for the Service Enclave (SE) from the Customer Enclave (CE). Instead, the Butterfly region will place SE services (e.g., Block Storage, Object Storage, Identity), which primarily operate in a Substrate Network, into an Overlay Network.
  • SE services e.g., Block Storage, Object Storage, Identity
  • FIGS.1-4 provide an overview of the concepts embodied by a reduced footprint data center.
  • FIG.1 is a block diagram illustrating an example system architecture of a reduced footprint data center 100 including an initialization device 102.
  • the reduced footprint data center 100 can include six racks of server devices.
  • the racks may be referred to as "Butterfly" racks.
  • the reduced footprint data center 100 can include Butterfly rack 110, Butterfly rack 120, Butterfly rack 130, Butterfly rack 140, Butterfly rack 150, and Butterfly rack 160.
  • the racks can be identical.
  • Butterfly rack 110 can include the same number of computing and/or networking devices as each other Butterfly rack 120-160.
  • Butterfly rack 110 can include two top-of-rack (TOR) switches 106, 108.
  • the TOR switches 106, 108 can each include one or more networking switches configured to provide network communication between the server devices and other computing devices within Butterfly rack 110 as well as one or more networking connections to the other Butterfly racks 120-160 and/or other networks including customer network 114.
  • the Butterfly rack 110 can also include one or more BIOS device(s) 102.
  • the BIOS device can be a server device configured to execute one or more processes to provide a set of "core services" within the reduced footprint data center 100 during startup/boot processes.
  • the BIOS device(s) 102 can configure one or more components of the reduced footprint data center 100 during startup. For example, the BIOS device(s) 102 can send network configuration information to a networking device within the Butterfly racks 110-160.
  • the networking device can be a SmartNIC attached to a server device within the Butterfly racks 110-160.
  • the BIOS device(s) 102 can send network configuration information to a substrate access VCN.
  • the substrate access VCN can be deployed to one or more hosts within the reduced footprint data center 100.
  • VMs executing in the Butterfly racks 110-160 can be configured to be a substrate access VCN.
  • the substrate access VCN can be configured to provide networking routes between one or more other VCNs (e.g., customer VCNs) and the networking devices (e.g., SmartNICs) and other networking components of the substrate services that now execute in their own VCN in the Overlay.
  • the BIOS device(s) 102 can also be configured to host one or more services like a key exchange service (KeS), a device encryption key (DEK) service, or other core services.
  • the BIOS device(s) 102 can include boot volumes for VMs that are started on host devices in the reduced footprint data center 100.
  • BIOS device(s) 102 can provide boot volumes for VMs on hypervisors hosted on server device(s) 104.
  • the Butterfly rack 110 can include one or more additional server device(s) 104.
  • the server device(s) 104 can each include one or more processors and one or more memories that together can store and execute instructions for implementing computing services as described herein, including, for example, compute, storage, VMs, CSP services, customer services and/or applications, and the like.
  • each of the Butterfly racks 110-160 can include an identical complement of server device(s) and TORs.
  • each of the Butterfly racks 110-160 can include a BIOS device, although the techniques described herein can be implemented using only a single BIOS device within the reduced footprint data center 100.
  • Each server device of the server device(s) 104 can include a trusted platform module (TPM).
  • TPM trusted platform module
  • the TPM on each device can be a microcontroller or other processor (or multiple processors) along with storage for performing cryptographical operations like hashing, encryption/decryption, key and key pair generation, and key storage.
  • the TPM may generally conform to a standard characterizing such devices, for example, ISO/IEC 11889.
  • the reduced footprint data center 100 can also include a networking rack 112.
  • the networking rack 112 can include one or more networking devices including switches, gateways, routers, and the like for communicatively coupling the Butterfly racks 110-160 to each other and to customer network 114.
  • the customer network 114 can include an on-premises network connected to the reduced footprint data center 100. In some embodiments, the customer network 114 can provide network connectivity to a public network, including the Internet.
  • FIG.2A is a block diagram illustrating a conventional data center 200 including a plurality of server racks reserved for particular functionality.
  • the plurality of server racks can each include multiple server devices as well as networking equipment (e.g., TORs) and power supply and distribution equipment.
  • the conventional data center 200 shown in FIG.2A can have a standard footprint of 13 server racks as shown, although additional server racks are possible in larger data centers.
  • server racks 1-4 may be included as service enclave racks 202.
  • a portion of the server racks can be provided as a customer enclave, so that the computing devices on those server racks can host customer services, applications, and associated customer data.
  • Racks 5-7 can be part of the customer enclave racks 204 within conventional data center 200.
  • the isolation between the service enclave and the customer enclave can be enforced by software-defined perimeters that define edge devices and/or software within the enclave as distinguished from hardware/software elements outside of the enclave. Access into and out of each enclave may be controlled, monitored, and/or policy driven. For example, access to the service enclave may be based on authorization, limited to authorized clients of the CSP. Such access may be based on one or more credentials provided to the enclave.
  • the conventional data center 200 can also include database racks 206 (racks 8-9) and networking racks 208 (racks 10-13).
  • the database racks 206 can include computing devices and storage devices that provide storage and management for databases, data stores, object storage, and similar data persistence techniques within the conventional data center 200.
  • the networking racks 208 can include networking devices that provide connectivity to the computing devices within conventional data center 200 and to other networks (e.g., customer networks, the internet, etc.).
  • FIG.2B is a block diagram illustrating a reduced footprint data center 210 in which services are in an overlay network, according to some embodiments.
  • the reduced footprint data center 210 may be an example of reduced footprint data center 100 of FIG.1, including six Butterfly racks, each having a plurality of server devices, networking devices, and power distribution devices.
  • the reduced footprint data center 210 can have an Overlay network 212 that spans computing devices in all of the server racks.
  • server devices on Butterfly Rack 1 and Butterfly Rack 6 can host VMs for a VCN in the Overlay network 212.
  • the Overlay network 212 can then include both core services 214 and customer services 216.
  • the core services 214 can include one or more VCNs for the CSP services that would be hosted within the Service Enclave of conventional data center 200 (e.g., on service enclave racks 202).
  • the core services 214 can exist in the overlay network 212 on any one or more of the server devices within Butterfly racks.
  • customer services 216 can exist in the overlay network 212 on host devices on any of the Butterfly racks.
  • the core services 214 may be hosted on specific devices of the reduced footprint data center.
  • the core services 214 may be hosted on Butterfly racks 1-3, while the customer services 216 may be hosted on Butterfly racks 4-6.
  • the core services 214 and the customer services 216 may be hosted on any of the Butterfly racks, as depicted in FIG. 2B.
  • FIG.3 is a block diagram illustrating the expansion of a reduced footprint data center 300, according to some embodiments.
  • the reduced footprint data center 300 can include a plurality of reduced footprint server racks 302.
  • Each of the plurality of reduced footprint server racks 302 can be an example of one of the Butterfly racks 110-160 described above with respect to FIG.1.
  • the plurality of reduced footprint server racks 302 can be connected in a ring network 304 using directional network connections between each set of TOR switches on each of the server racks. For example, a first TOR switch at each rack can be connected to a first TOR switch of two adjacent server racks, such that data communication from the server rack flows in one direction. A second TOR switch at each rack can be connected to a second TOR switch of two adjacent server racks, providing data communication between the racks in the opposite direction.
  • the first TOR switch and the second TOR switch at each rack can be connected to one another and to each server device on the rack, providing multiple, redundant network paths from any server device of any one server rack to another server device on another server rack.
  • the ring network 304 can therefore allow low latency and highly available network connections between resources hosted on any computing device (e.g., server device) in the plurality of reduced footprint server racks 302.
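The dual-direction ring just described can be modeled as two directional rings over the racks' TOR switches. The following Python sketch is purely illustrative; the rack and switch names and the (source, destination) link representation are assumptions and do not come from the patent text.

```python
# Illustrative sketch (not taken from the patent text): the dual directional
# TOR ring described above, for N reduced footprint server racks.

def build_tor_rings(num_racks: int):
    """Return (clockwise, counterclockwise) lists of (src_tor, dst_tor) links."""
    # The first TOR of each rack connects to the first TOR of the next rack,
    # so traffic on this ring flows in one direction.
    clockwise = [(f"rack{i}-tor1", f"rack{(i + 1) % num_racks}-tor1")
                 for i in range(num_racks)]
    # The second TOR of each rack connects to the second TOR of the previous
    # rack, giving a ring in the opposite direction.
    counterclockwise = [(f"rack{i}-tor2", f"rack{(i - 1) % num_racks}-tor2")
                        for i in range(num_racks)]
    return clockwise, counterclockwise

cw, ccw = build_tor_rings(6)  # six Butterfly racks, as in FIG. 1
```

Because the two TOR switches on a rack are also connected to each other and to every server on the rack, any server has at least two disjoint paths to a server on any other rack, which is the redundancy noted above.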
  • additional server racks can be connected to the reduced footprint server racks 302.
  • a networking rack 306 can be implemented at the reduced footprint data center 300.
  • the networking rack 306 can be an example of networking rack 112 described above with respect to FIG.1.
  • the networking rack 306 can include a plurality of networking ports that can be used to connect to one or more of the plurality of reduced footprint server racks.
  • the networking rack 306 can be connected to a first reduced footprint server rack using connection 308.
  • the networking rack 306 can be, for example, a two chassis system having 4 LCs each with 34x400G ports for a total of 384x100G links in each chassis.
  • the additional server racks 310 can be installed in the reduced footprint data center 300.
  • the additional server racks 310 can be different from the reduced footprint server racks of the plurality of reduced footprint server racks 302.
  • the additional server racks 310 can include a different number of server devices, with each server device including a different amount of computing and/or storage resources (e.g., processors, processing cores, dynamic memory, non-volatile storage, etc.).
  • computing and/or storage resources e.g., processors, processing cores, dynamic memory, non-volatile storage, etc.
  • FIG.4 is a block diagram illustrating an example network architecture of networking connections between one or more VCNs (substrate service VCNs) in an Overlay network 402 and the Underlay network 422 in a reduced footprint data center 400, according to some embodiments.
  • the reduced footprint data center 400 can be an example of other reduced footprint data centers described herein, including reduced footprint data center 100 of FIG.1.
  • substrate service VCN-1 404 can be a VCN for a Compute service control plane
  • substrate service VCN-2 406 can be a VCN for a PKI service
  • substrate service VCN-N 408 can be a VCN for a Block Storage service.
  • SE service control and data planes can be separated into different VCNs.
  • the substrate service VCNs 404-408 can exist in the Overlay network 402.
  • the Overlay network 402 can also include customer VCN(s) 416, which can be limited in their connectivity to the Underlay network 422.
  • Each substrate service VCN can have its own route table that defines the network traffic routing rules for forwarding network traffic within the network of the reduced footprint data center 400.
  • substrate service VCN-1 can have VCN-1 route table 410
  • substrate service VCN-2 can have VCN-2 route table 412
  • substrate service VCN-N 408 can have VCN-N route table 414.
  • the routing information of each of the substrate service VCNs 404-408 can be initially configured when the reduced footprint data center 400 is first built so that network traffic to/from the core SE services can be routed between the Overlay network 402 and the Underlay network 422.
  • the Underlay network 422 can include various devices and other networking endpoints that are connected via the physical networking components of the reduced footprint data center 400.
  • the Underlay network 422 can include, without limitation, ILOM(s) 424, Bastions 426, NTP server(s) 428, BIOS services 430, and VNIC(s) 432.
  • the ILOM(s) 424 can be computing devices and network targets that provide access to the server devices of reduced footprint data center 400 for both in-band and out-of-band management.
  • the ILOM(s) 424 can allow for remote management of the associated server devices within the server racks of reduced footprint data center 400 that is separate from the networking pathways defined for the region.
  • the Bastions 426 can be services executing on the server devices of the reduced footprint data center 400 that provide network access via the Underlay network 422 and do not have public network addresses.
  • the Bastions 426 can provide remote access to computing resources within the reduced footprint data center 400 in conjunction with a Bastion service that operates on the Underlay network 422.
  • the Bastion service may be an SE service that is not moved to the Overlay network 402 in the reduced footprint data center 400.
  • network time protocol (NTP) servers 428 may operate in the Underlay network 422 to provide accurate timing to devices and services within the reduced footprint data center 400.
  • BIOS services 430 can include services that are hosted on the one or more initialization devices on the server racks in the reduced footprint data center 400.
  • BIOS services 430 can include a key encryption service usable to encrypt/decrypt data on the server devices of reduced footprint data center 400 during the initial boot process.
  • the BIOS services 430 can include a network configuration service that can provide the initial network configuration for devices within the reduced footprint data center 400.
  • the VNIC(s) 432 can include network interfaces defined by SmartNICs connected to the server devices within the reduced footprint data center 400.
  • the SE services may still need network connectivity with the Underlay network 422 to properly function.
  • a substrate access VCN 418 can be implemented within the reduced footprint data center 400.
  • the substrate access VCN 418 can include a dynamic routing gateway (DRG) that allows communication between the substrate service VCNs 404-408 and the Underlay network 422.
  • DRG dynamic routing gateway
  • the substrate access VCN 418 can then have a DRG route table 420 that can define a single route rule for reaching the Underlay network 422 from the substrate service VCNs 404-408.
  • an initialization device e.g., BIOS device 102 of FIG.1
  • BIOS device 102 of FIG.1 can be used to configure the network addresses and routes for a substrate access VCN 418, a dynamic route gateway within the substrate access VCN 418, and/or one or more SmartNICs of the Underlay network 422.
  • the substrate access VCN 418 can be deployed to communicatively connect the one or more substrate service VCNs 404-408 with the Underlay network 422.
  • the initialization device can send network configuration information to the substrate access VCN 418 to configure the DRG route table 420 to provide initial network addresses (e.g., IP addresses) for each endpoint of the substrate service VCNs 404-408 in the Overlay network 402 until a DHCP service and other networking services are available in their respective substrate service VCNs.
  • the initialization device can send networking configuration information to define one or more static routes for the dynamic routing gateway as part of the DRG routing table 420.
  • the static routes can characterize a networking connection between the Underlay network 422, including a SmartNIC connected to each server device of the reduced footprint data center (e.g., server device(s) 104 of FIG. 1), and each substrate service VCN 404-408.
  • the initialization device can send network configuration information to each of the SmartNICs to provide each SmartNIC a network address (e.g., a network address for the SmartNICs' endpoints in the Underlay network 422).
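As a rough illustration of the kind of configuration the initialization device might push, the sketch below shows a hypothetical DRG route table and SmartNIC address assignments. Every CIDR, address, and field name here is invented for illustration; the patent text only states that static routes and initial network addresses are provided.

```python
# Hypothetical sketch of the configuration an initialization (BIOS) device
# could push: a DRG route table with a single rule toward the Underlay,
# initial Overlay endpoint addresses used until DHCP is available, and one
# Underlay address per SmartNIC. All CIDRs, addresses, and field names are
# invented for illustration.

drg_route_table = {
    "route_rules": [
        # Single route rule for reaching the Underlay network 422 from the
        # substrate service VCNs 404-408.
        {"destination": "10.0.0.0/8", "next_hop": "underlay-smartnic-endpoint"},
    ],
    "static_overlay_endpoints": {
        "compute-control-plane-vcn": "172.16.1.10",
        "pki-service-vcn": "172.16.2.10",
        "block-storage-vcn": "172.16.3.10",
    },
}

smartnic_configs = [
    {"smartnic_id": f"smartnic-{i}", "underlay_address": f"10.1.0.{10 + i}"}
    for i in range(3)
]
```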
Compute Service in Overlay Network

  • Multiple methods can be used to break the circular dependencies. One method is to implement a "Ring 0" concept, in which some or all of the Compute hypervisors that host the Block Storage data plane VMs will have local boot volumes.
  • Compute hypervisors that are configured to boot from a local boot volume are part of a Ring 0 of core services (potentially including DEK) that are brought online first in a reduced footprint data center using local boot volumes.
  • a primordial hypervisor and VMs boot using a local boot volume (e.g., a particular M.2 SSD device attached to the bare metal device hosting the hypervisors and VMs).
  • the M.2 devices will be configured with hypervisor images using live images built and maintained to support the initialization of Compute and BSS.
  • the live images can be minimal images that contain all libraries required to partition and encrypt the M.2 devices before installing hypervisor images on them.
  • the live images can persist in Object storage or the BIOS host device on the Butterfly racks.
  • Compute control plane can first launch a bare metal instance with network boot to load the live image. Once Compute CP confirms that the instance has booted, Compute CP can indicate to an agent running in the host the version of the hypervisor to boot. The agent can then partition and configure the attached M.2 device and install the corresponding hypervisor image (e.g., from the BIOS device). Compute CP can then reboot the bare metal instance using the M.2 device as the boot volume.
  • As a second option, all Compute hypervisors and all VMs can be provisioned with boot volumes provided by the BSS DP. The launch workflows for both hypervisors and VMs are similar to non-Butterfly regions.
  • both hypervisors and BSS data plane VMs are each required to maintain a rescue image and a small local storage to break circular dependencies.
  • the services function as any other customer service.
  • a reduced footprint data center does not dedicate a significant fraction of its computing resources to CSP services from the beginning in an unchangeable way.
  • the CSP services can be scaled down to meet customer needs, and the freed resources can be provided to the customer without the need for a physical scale-up.
  • CSP services can also scale-up in the same way as customer services, since the CSP services now reside in the CE Overlay network. In the particular case of Compute, the service can access the benefits of an Overlay service while preventing circular dependencies with other services like BSS.
  • FIG.5 is a block diagram illustrating an example architecture of a reduced footprint data center 500 with virtual machines booting from a local boot volume, according to some embodiments.
  • the reduced footprint data center 500 can include a plurality of server devices including server 1 502, server 2 520, and server 3 550, which may be examples of server device(s) 104 of FIG.1.
  • the server devices can be bare metal hosts for software applications that are used to host and provision other software within the reduced footprint data center 500.
  • each server device can host a hypervisor that can manage a plurality of VMs on each server device.
  • server 1 502 can host compute hypervisor 504, server 2 520 can host compute hypervisor 524, and server 3 550 can host compute hypervisor 554.
  • These compute hypervisors may be components of a Compute service that includes Compute data plane and Compute control plane components.
  • the hypervisors can be configured to each host a plurality of VMs for one or more services executing in the reduced footprint data center.
  • instances of a block storage service data plane can be hosted in VMs on hypervisors on each server device.
  • block storage data plane service 512 can be hosted in block storage data plane VM 508 on server 1 502
  • block storage data plane service 532 can be hosted on block storage data plane VM 528 on server 2 520
  • block storage data plane service 562 can be hosted on block storage data plane VM 558 on server 3 550.
  • VMs for other services can be managed by the various hypervisors, including a block storage service management plane service 540 hosted on block storage management plane VM 538 on server 2 520 and a key management service (KMS) 570 hosted on KMS VM 568 on server 3 550.
  • KMS key management service
  • These servers and the services hosted in the VMs may be considered the Ring 0 services for the reduced footprint data center 500.
  • Each server can include local boot volumes that are usable to boot both the bare metal hosts themselves as well as the VMs hosted in each hypervisor on each server device.
  • server 1 502 can boot from boot volume 506, server 2 520 can boot from boot volume 526, and server 3 550 can boot from boot volume 556.
  • the boot volumes may be stored in a locally attached storage device to each server device.
  • an NVMe SSD attached to each server device can store the local boot volumes for initially booting the bare metal instances of server device.
  • the local boot volumes may be stored in a dedicated boot storage device for each server device, since the main storage devices for the server devices may be encrypted.
  • the BSS data plane instances may need to be operational to initially boot the Compute VMs that host the BSS data plane and other components of BSS (e.g., BSS management plane service 540). This situation creates a circular dependency between BSS and Compute service.
  • server 1 502 can include a local boot volume 510 that is usable to boot block storage data plane VM 508.
  • server 2 520 can include local boot volume 530 usable to boot block storage data plane VM 528
  • server 3 550 can include local boot volume 560 usable to boot block storage data plane VM 558.
  • block storage data plane service 512 may manage and access data volumes including block storage management plane boot volume 516 and data volumes 514 that are stored on an attached NVMe SSD of server 1 502.
  • the block storage management plane boot volume 516 and data volumes 514 may be encrypted, so that block storage data plane service 512 cannot vend the block storage management plane boot volume 516 to boot block storage management plane VM 538 to host block storage management plane service 540.
  • an initialization device like BIOS device 580 (e.g., BIOS server 102 of FIG. 1) can host KeS 582 to provide encryption keys for at least the boot volumes for the DEK services within the reduced footprint data center 500.
  • BIOS device 580 e.g., BIOS server 102 of FIG. 1
  • KeS 582 can provide encryption keys for at least the boot volumes for the DEK services within the reduced footprint data center 500.
  • the BSS can be used to boot a key management service (KMS) 570 at a KMS VM 568 (on server 3 550).
  • the KMS VM 568 can initiate a remote iSCSI connection to KMS VM boot volume target 542, which is managed at server 2 520.
  • the KMS boot volume target 542 can obtain an encrypted device encryption key that is encrypted with a key provided by KeS 582.
  • the KMS boot volume target 542 can obtain, from KeS 582, the encryption key usable to decrypt the device encryption key.
  • the KeS 582 decrypts the encrypted DEK provided by the KMS boot volume target 542.
  • the KMS boot volume target 542 can connect to backend storage (e.g., an NVMe SSD of server 3 550 that includes encrypted KMS boot volume 566 as well as data volumes 564) and use the decrypted device encryption key to decrypt KMS boot volume 566.
  • the KMS boot volume target 542 can connect the KMS boot volume 566 to boot the KMS VM 568.
  • the KMS 570 can take over key vending/key exchange operations for services operating in the reduced footprint data center 500. In particular, boot volume targets may no longer need the KeS 582 to provide decryption of device encryption keys.
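A minimal sketch of the boot-volume unlock sequence described above is shown below, assuming hypothetical client objects for the boot volume target, KeS, and the backend storage; the method names are placeholders rather than an actual API.

```python
# Minimal sketch of the boot-volume unlock flow described above. The client
# objects and their methods (get_wrapped_dek, unwrap, decrypt_volume,
# attach_iscsi_target) are placeholders, not an actual API.

def boot_kms_vm(boot_volume_target, kes_client, backend_storage):
    # 1. The boot volume target holds a device encryption key (DEK) that is
    #    itself encrypted ("wrapped") with a key held by KeS 582.
    wrapped_dek = boot_volume_target.get_wrapped_dek()

    # 2. KeS decrypts (unwraps) the DEK on behalf of the boot volume target.
    dek = kes_client.unwrap(wrapped_dek)

    # 3. The plaintext DEK decrypts the encrypted KMS boot volume 566 on the
    #    backend storage (e.g., an NVMe SSD of server 3 550).
    boot_volume = backend_storage.decrypt_volume("kms-boot-volume", key=dek)

    # 4. The decrypted volume is connected (over iSCSI) so KMS VM 568 can boot
    #    from it; once KMS 570 is running it takes over key vending.
    boot_volume_target.attach_iscsi_target(boot_volume)
```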
  • the block storage management plane service 540 can be initiated by booting block storage management plane VM 538.
  • block storage management plane VM 538 can initiate a remote iSCSI connection to block storage management plane boot volume target 572, which is managed by the block storage data plane on server 3 550.
  • the block storage management plane VM boot volume target 572 can obtain an encrypted device encryption key that is encrypted with a key provided by KMS 570.
  • the encrypted device encryption key and related context can be obtained from local boot volume 560.
  • FIG.6 is a block diagram illustrating an example architecture 600 for preparing a local volume to boot a hypervisor, according to some embodiments.
  • the example architecture includes a bare metal computing device 602 (e.g., one of server device(s) 104 of FIG. 1) that can host a bare metal instance (e.g., an operating system and related process that can execute software and perform operations described herein for configuring the bare metal computing device).
  • a bare metal computing device 602 can include two storage devices, storage device 622 and storage device 624.
  • Each of storage device 622, 624 may be M.2 solid state disk drives operating using a non-volatile memory express (NVMe) bus on the bare metal computing device 602.
  • NVMe non-volatile memory express
  • the Compute service can support provisioning both hypervisors and Block Storage virtual machines using local boot volumes.
  • the storage devices 622, 624 can be prepared with hypervisor images on them.
  • This preparation can be achieved by using live images that are built and maintained by the CSP and available to the bare metal instance from either an Object Storage service 608 or from a BIOS device 610 (e.g., BIOS device 102 of FIG.1).
  • a live image can be a minimal image that contains all libraries required to partition and encrypt the storage devices 622, 624 before obtaining and installing hypervisor images on them.
  • Live images can be persisted in Object Storage 608 or the BIOS device 610 so that they can be easily obtained when launching hypervisors. Similar to hypervisor images, live images can be versioned and the latest released version can be used. The live images will be architecture dependent and are not expected to have frequent updates.
  • the compute control plane 604 can launch the bare metal instance on the bare metal host 602, including executing the live image agent 614.
  • the bare metal instance can be enabled with a network boot option.
  • a custom pre-boot execution environment (iPXE) script can also be provided for the live image agent 614 that allows the network boot to load the latest available live image.
  • the live image can be obtained from either object storage 608 or the BIOS device 610 and used to complete the boot of the bare metal instance.
  • Compute control plane 604 can wait until the bare metal instance boots using the live image.
  • Compute control plane 604 can periodically poll the live image agent 614 that eventually becomes available on the bare metal instance.
  • the live image agent 614 can be configured to instruct logical volume managers on the bare metal host 602 to create and manage suitable logical volumes on the storage devices 622, 624 for use with the hypervisor images.
  • Compute control plane 604 can pass information identifying the hypervisor image to use to execute a hypervisor on the bare metal host 602.
  • the Compute control plane 604 can pass the information to the live image agent 614 executing in the bare metal host 602. Compute control plane 604 can then wait until the live image agent 614 performs the following operations.
  • the live image agent 614 can partition the storage devices 622, 624 as appropriate for the volumes needed to support the hypervisor. As shown in FIG.6, the live image agent 614 can set up RAID 1 devices with partitions on each of storage device 622 and storage device 624. The live image agent 614 can work in conjunction with boot logical volume manager (LVM) 616, operating system LVM 618, and other LVM 620 to partition the storage devices 622, 624.
  • the RAID 1 device usable for booting the operating system under the hypervisor using the hypervisor image can include partition 1 626 of storage device 622 and partition 1 630 of storage device 624.
  • Other logical volumes can be partitioned with other LVM 620 and can include similar RAID 1 configurations, including partition 2 628 on storage device 622 and partition 2 632 on storage device 624 forming a RAID 1 device usable as storage accessible to the VMs on the hypervisor.
  • the live image agent 614 can obtain the hypervisor image using the identifying information provided by Compute control plane 604 and provision the volume with the hypervisor image.
  • the live image agent 614 can also update the boot loader information to reflect the correct boot to the hypervisor when the bare metal instance is rebooted.
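The sketch below illustrates one possible way the live image agent could carry out the partitioning, mirroring, and image-provisioning steps described above, using common Linux tooling (parted, mdadm, dd). The device paths, partition sizes, tool choices, and the omission of the encryption step are assumptions for illustration only.

```python
# Illustrative only: one way the live image agent could partition and mirror
# the two M.2 devices and write a hypervisor image. Device paths, sizes, and
# tool choices are assumptions; the encryption step mentioned above is
# omitted for brevity.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def prepare_local_boot(devices=("/dev/nvme0n1", "/dev/nvme1n1"),
                       hypervisor_image="/tmp/hypervisor.img"):
    # Partition each M.2 device identically: partition 1 for the hypervisor
    # operating system, partition 2 for storage accessible to the VMs.
    for dev in devices:
        run(["parted", "-s", dev, "mklabel", "gpt"])
        run(["parted", "-s", dev, "mkpart", "primary", "1MiB", "90GiB"])
        run(["parted", "-s", dev, "mkpart", "primary", "90GiB", "100%"])

    # Mirror partition 1 of both devices (boot/OS) and partition 2 of both
    # devices (VM storage) as RAID 1 devices.
    run(["mdadm", "--create", "/dev/md0", "--run", "--level=1",
         "--raid-devices=2", devices[0] + "p1", devices[1] + "p1"])
    run(["mdadm", "--create", "/dev/md1", "--run", "--level=1",
         "--raid-devices=2", devices[0] + "p2", devices[1] + "p2"])

    # Write the hypervisor image onto the mirrored OS volume; the boot loader
    # entries would then be updated so the next reboot uses the local volume.
    run(["dd", f"if={hypervisor_image}", "of=/dev/md0", "bs=4M", "conv=fsync"])
```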
  • the live image agent 614 can communicate to Compute control plane 604 to indicate that the configuration operations have been completed.
  • Compute control plane 604 can change the boot order of the bare metal instance to boot from the local volumes on storage devices 622, 624.
  • the Compute control plane 604 can make the boot order change via ILOM 612 of the bare metal host 602.
  • the Compute control plane 604 can then initiate a reboot operation of the bare metal instance on the bare metal host 602.
  • the bare metal host 602 can reboot using the hypervisor image prepared on the volumes of storage devices 622, 624.
  • the bare metal instance can include an operating system 636 that hosts the hypervisor. Once booted, the hypervisor on bare metal host 602 can become available to manage VMs on the bare metal host 602.
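Putting the FIG. 6 workflow together from the control plane's perspective, a hedged sketch might look like the following. The client objects (compute_cp, ilom) and their method names are placeholders; only the ordering of the steps follows the description above.

```python
# Hedged sketch of the control-plane side of the workflow above. The client
# objects and method names are placeholders, not an actual API.
import time

def provision_hypervisor(compute_cp, ilom, bare_metal_id, hypervisor_image_id,
                         poll_interval_s=10):
    # 1. Launch the bare metal instance with network boot so the iPXE script
    #    loads the latest live image from Object Storage or the BIOS device.
    compute_cp.launch_bare_metal_instance(bare_metal_id, boot="network")

    # 2. Poll until the live image agent becomes available on the instance.
    while not compute_cp.live_image_agent_ready(bare_metal_id):
        time.sleep(poll_interval_s)

    # 3. Tell the agent which hypervisor image to use; the agent partitions and
    #    encrypts the M.2 devices, installs the image, updates the boot loader,
    #    and then reports completion.
    compute_cp.send_hypervisor_image(bare_metal_id, hypervisor_image_id)
    while not compute_cp.agent_reports_provisioned(bare_metal_id):
        time.sleep(poll_interval_s)

    # 4. Change the boot order to the local volumes via the ILOM and reboot.
    ilom.set_boot_order(bare_metal_id, first="local-disk")
    ilom.reboot(bare_metal_id)

    # 5. After the reboot, poll the hypervisor agent to confirm the hypervisor
    #    is running from the local boot volume.
    while not compute_cp.hypervisor_agent_ready(bare_metal_id):
        time.sleep(poll_interval_s)
```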
  • FIG.7 is example configuration data illustrating an example hypervisor shape 700, according to some embodiments.
  • the hypervisor shape 700 can be configured to map to the computing resources available for the underlying bare metal computing device.
  • each server device e.g., server device(s) 104 of FIG.1
  • computing resources e.g., processors, processor cores, memory, storage, etc.
  • the hypervisor shape 700 can correspond to a computing device having 128 processor cores, 6 available NVMe storage devices, two NUMA nodes with two cores reserved for each node, and the corresponding network interface configuration for the network interface card of the computing device.
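As a concrete illustration, a hypervisor configuration shape carrying the resources described above might be represented as follows; the field names and structure are assumptions, and only the counts come from the description of FIG. 7.

```python
# Hypothetical representation of the hypervisor configuration shape described
# above; only the counts (128 cores, 6 NVMe devices, two NUMA nodes with two
# reserved cores each) come from the text, and the field names are assumptions.
HV_DENSEIO_E5_128 = {
    "shape_name": "HV.DenseIO.E5.128",
    "total_cores": 128,
    "nvme_devices": 6,
    "numa_nodes": [
        {"node": 0, "reserved_cores": 2},
        {"node": 1, "reserved_cores": 2},
    ],
    # The network interface configuration for the host's NIC would also be
    # carried here (omitted).
}
```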
  • FIG.8 is example configuration data illustrating example virtual machine shapes 800, according to some embodiments.
  • the VM shapes can also be referred to as a VM shape family.
  • a shape family can be a logical categorization of virtual machine shapes that is assigned to a hypervisor once it is provisioned.
  • VM_DENSE_E5_FLEX is a shape family of a hypervisor host that is provisioned on E5.DENSE bare-metal instances and is able to host both VM_DENSE_E5_FLEX and VM_STANDARD_E5_FLEX virtual machines.
  • Each hypervisor shape can support multiple shape families, but it may support only one after the hypervisor is provisioned.
  • the VM shapes 800 show which shape families are supported on HV.DenseIO.E5.128 hypervisors and which VM shapes each shape family supports.
  • Every hypervisor shape family has a strict configuration that defines several capacity buckets. Each capacity bucket can define which virtual machine shape of the VM shapes 800 can be placed on the hypervisor.
  • the VM shapes 800 define the capacity buckets available for each shape family of an HV.DenseIO.E5.128 hypervisor.
  • the shape family VM_DENSE_E5_FLEX defines four buckets, two per each NUMA node of the hypervisor.
  • the first and the third buckets specify that 48 cores and 576 GB of memory from each NUMA node can be used to provision VM_DENSE_E5_FLEX virtual machines.
  • the second and last buckets specify that the remaining 14 cores and 160 GB of memory from each NUMA node can be used to provision VM_STANDARD_E5_FLEX shape virtual machines.
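Using only the numbers given above, a hypothetical capacity-bucket configuration for the VM_DENSE_E5_FLEX shape family could be written as follows; the data layout itself is an illustrative assumption.

```python
# Hypothetical capacity-bucket layout for the VM_DENSE_E5_FLEX shape family on
# an HV.DenseIO.E5.128 hypervisor, using only the numbers above; the structure
# itself is an illustrative assumption.
VM_DENSE_E5_FLEX_BUCKETS = [
    {"numa_node": 0, "vm_shape": "VM_DENSE_E5_FLEX",    "cores": 48, "memory_gb": 576},
    {"numa_node": 0, "vm_shape": "VM_STANDARD_E5_FLEX", "cores": 14, "memory_gb": 160},
    {"numa_node": 1, "vm_shape": "VM_DENSE_E5_FLEX",    "cores": 48, "memory_gb": 576},
    {"numa_node": 1, "vm_shape": "VM_STANDARD_E5_FLEX", "cores": 14, "memory_gb": 160},
]
```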
  • the capacities defined for each bucket can be used to maximize the usage of the underlying computing resources.
  • a capacity object can exist for each bucket that a hypervisor can support. These capacity objects can be created when hypervisors are provisioned, for example by Compute control plane. Each capacity object will maintain remaining cores and memory of its associated bucket.
  • the placement logic can determine a set of capacity objects (possibly from different hypervisors) based on the shape families that can be used to provision the virtual machine at a particular hypervisor. The candidate capacity objects must have enough remaining cores and memory to fit the new VM instance.
  • disk resources can be proportional to the number of cores. For example, if a dense hypervisor has 6 NVMe drives, then a VM may use at least 48 cores to get all NVMe drives. For example, each NVMe may use 4 cores.
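A minimal sketch of the capacity-object bookkeeping and candidate selection described above is shown below; the class and method names are assumptions, not an actual Compute control plane API.

```python
# Minimal sketch of capacity-object bookkeeping and placement candidate
# selection. Class and method names are assumptions; the description only
# requires tracking remaining cores/memory per bucket and checking fit.
from dataclasses import dataclass

@dataclass
class CapacityObject:
    hypervisor_id: str
    numa_node: int
    vm_shape: str            # which VM shape this bucket can host
    remaining_cores: int
    remaining_memory_gb: int

    def fits(self, cores: int, memory_gb: int) -> bool:
        return cores <= self.remaining_cores and memory_gb <= self.remaining_memory_gb

    def reserve(self, cores: int, memory_gb: int) -> None:
        # Called when a VM is provisioned against this bucket.
        self.remaining_cores -= cores
        self.remaining_memory_gb -= memory_gb

def candidate_buckets(capacity_objects, vm_shape, cores, memory_gb):
    """Return capacity objects (possibly on different hypervisors) that can
    host the requested VM shape."""
    return [c for c in capacity_objects
            if c.vm_shape == vm_shape and c.fits(cores, memory_gb)]
```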
  • FIG.9 is example configuration data illustrating an example virtual machine shape selection 900, according to some embodiments.
  • a separate pool can be configured for Ring 0 hypervisors and VMs.
  • Core services use the new Ring 0 shapes when deploying VMs.
  • the total number of available local storage volumes per bucket can be configured to comport with the Ring 0 shapes (e.g., VM shapes 800).
  • the available volumes can be allocated as follows: each local volume can have a fixed size (e.g., 50 GB), each dense VM shape can be allocated a single volume, the hypervisor can be allocated a single 90 GB partition, and the remaining available storage can be divided equally between each other shape.
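The allocation rule above can be illustrated with a small worked example, assuming a total of 1 TB of usable local storage, two dense VM shapes, and four other shapes in the family; all of those totals and counts are assumptions, since the description does not fix them.

```python
# Worked example of the allocation rule above, under assumed totals: each
# local volume is 50 GB, the hypervisor takes a 90 GB partition, each dense
# VM shape gets one volume, and the rest is split equally among other shapes.
def allocate_local_volumes(total_gb=1000, volume_gb=50, hypervisor_gb=90,
                           num_dense_shapes=2, num_other_shapes=4):
    remaining = total_gb - hypervisor_gb - num_dense_shapes * volume_gb
    return {
        "hypervisor_partition_gb": hypervisor_gb,
        "dense_shape_volumes_gb": [volume_gb] * num_dense_shapes,
        "per_other_shape_gb": remaining / num_other_shapes,
    }

print(allocate_local_volumes())
# {'hypervisor_partition_gb': 90, 'dense_shape_volumes_gb': [50, 50],
#  'per_other_shape_gb': 202.5}
```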
  • FIG.10 is a flow diagram of an example process 1000 for a compute service in an overlay network of a reduced footprint data center, according to some embodiments.
  • the process 1000 can be performed by components of the reduced footprint data center (e.g., reduced footprint data center 100 of FIG.1), including one or more computing devices like server device(s) 104 of FIG.1.
  • Rebooting the compute service instance can include power cycling the bare metal computing device.
  • the compute service instance can be configured to boot using the hypervisor image on the storage device.
  • the hypervisor image can include software executable by the bare metal computing device to run an operating system and host a hypervisor on the operating system.
  • the control plane can, prior to rebooting the bare metal instance, change a boot order of the compute service instance to boot using the storage device.
  • the control plane can also, after rebooting the compute service instance, poll a hypervisor agent at the compute service instance to determine whether the hypervisor is successfully executing at the bare metal instance. For example, if the compute service instance successfully reboots using the hypervisor image at the storage device, the hypervisor agent should begin executing at the compute service instance.
  • the bare metal computing device can be one of the server devices (e.g., server device(s) 104 of FIG.1), and the compute instance can include applications and other software (e.g., an operating system) that is configured to host compute service data plane resources (e.g., a hypervisor).
  • the bare metal computing device can include computing resources.
  • the bare metal computing device can include a plurality of processors, processor cores, a quantity of memory, and a plurality of storage volumes (e.g., partitions of one or more storage devices like NVMe devices).
  • the compute service control plane can execute a hypervisor at the instance on the bare metal computing device.
  • the hypervisor can include a configuration shape corresponding to the computing resources of the bare metal computing device.
  • the configuration shape is a dense shape corresponding to all of the plurality of processor cores and the quantity of memory.
  • the configuration shape for the hypervisor may allocate all the computing resources of the bare metal computing device for the hypervisor.
  • the configuration shape can define a shape family of virtual machine shapes (e.g., VM shapes 800 of FIG.8).
  • the shape family can define possible allocations of the computing resources to virtual machines deployed at the hypervisor.
  • the compute service control plane can provision a virtual machine at the hypervisor.
  • the virtual machine can correspond to a virtual machine shape of the shape family.
  • the virtual machine shape can correspond to a portion of the computing resources available from the bare metal computing device.
  • the compute service control plane can then update a capacity object of the hypervisor.
  • the capacity object can maintain the available capacity of the computing resources managed by the hypervisor.
  • the compute service control plane can then execute the virtual machine using the portion of the computing resources of the virtual machine shape.
  • the compute service control plane can provision a second virtual machine by at least determining a second virtual machine shape from the shape family.
  • the second virtual machine shape can be determined using the capacity object of the hypervisor. For example, based on the portion of the computing resources allocated to the virtual machine, the capacity object may be updated to reflect the remaining computing resources available to the second virtual machine.
  • the compute service control plane can then execute the second virtual machine using a second portion of the computing resources defined by the second virtual machine shape.
  • the compute service control plane can then update the capacity object to reflect the allocation of computing resources to the second virtual machine according to the second virtual machine shape.
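  • The provision-then-update sequence described in the preceding items could be sketched in Python as follows; CapacityObject and provision_vm are illustrative names rather than the actual control plane interfaces, and the shape sizes in the commented example are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CapacityObject:
    """Remaining capacity of the computing resources managed by one hypervisor."""
    remaining_cores: int
    remaining_memory_gb: int

    def reserve(self, cores: int, memory_gb: int) -> None:
        if cores > self.remaining_cores or memory_gb > self.remaining_memory_gb:
            raise ValueError("requested shape does not fit the remaining capacity")
        self.remaining_cores -= cores
        self.remaining_memory_gb -= memory_gb

def provision_vm(hypervisor, capacity: CapacityObject, shape: dict) -> None:
    """Reserve the shape's resources in the capacity object, then launch the VM,
    so that the next placement decision sees the reduced remaining capacity."""
    capacity.reserve(shape["cores"], shape["memory_gb"])
    hypervisor.launch_vm(cores=shape["cores"], memory_gb=shape["memory_gb"])

# Two back-to-back provisioning requests against the same hypervisor (hypothetical):
# capacity = CapacityObject(remaining_cores=62, remaining_memory_gb=736)
# provision_vm(hypervisor, capacity, {"cores": 48, "memory_gb": 576})  # first VM
# provision_vm(hypervisor, capacity, {"cores": 8, "memory_gb": 96})    # second VM
```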
  • Example Infrastructure as a Service Architectures [0113] As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like).
  • an IaaS provider may also supply a variety of services to accompany those infrastructure components (example services include billing software, monitoring software, logging software, load balancing software, clustering software, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance.
  • IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack.
  • For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM.
  • Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc.
  • a cloud computing model may require the participation of a cloud provider.
  • the cloud provider may, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS.
  • An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services.
  • IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines (e.g., that can be spun up on demand)) or the like.
  • IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them.
  • In most cases, deployment does not include provisioning, and the provisioning may need to be performed first. [0118] In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively.
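  • As a minimal illustration of a declaratively defined topology (the resource names and dependency edges below are hypothetical, not the schema of any particular provisioning tool), a provisioning order can be derived from the declared dependencies rather than written out step by step.

```python
from graphlib import TopologicalSorter

# Each resource declares only the resources it depends on; the ordering is derived.
topology = {
    "vcn":           [],
    "subnet":        ["vcn"],
    "load_balancer": ["subnet"],
    "database":      ["subnet"],
    "vm":            ["subnet", "database"],
}

provisioning_order = list(TopologicalSorter(topology).static_order())
print(provisioning_order)  # e.g., ['vcn', 'subnet', 'load_balancer', 'database', 'vm']
```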
  • an infrastructure may have many interconnected elements.
  • For example, there may be one or more virtual private clouds (VPCs) and one or more virtual machines (VMs) to be provisioned.
  • Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like.
  • continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments.
  • service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world).
  • the infrastructure on which the code will be deployed may need to first be set up.
  • the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned.
  • FIG.12 is a block diagram 1200 illustrating an example pattern of an IaaS architecture, according to at least one embodiment.
  • Service operators 1202 can be communicatively coupled to a secure host tenancy 1204 that can include a virtual cloud network (VCN) 1206 and a secure host subnet 1208.
  • the service operators 1202 may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled.
  • the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems.
  • the client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS.
  • client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN 1206 and/or the Internet.
  • the VCN 1206 can include a local peering gateway (LPG) 1210 that can be communicatively coupled to a secure shell (SSH) VCN 1212 via an LPG 1210 contained in the SSH VCN 1212.
  • the SSH VCN 1212 can include an SSH subnet 1214, and the SSH VCN 1212 can be communicatively coupled to a control plane VCN 1216 via the LPG 1210 contained in the control plane VCN 1216.
  • the SSH VCN 1212 can be communicatively coupled to a data plane VCN 1218 via an LPG 1210.
  • the control plane VCN 1216 and the data plane VCN 1218 can be contained in a service tenancy 1219 that can be owned and/or operated by the IaaS provider.
  • the control plane VCN 1216 can include a control plane demilitarized zone (DMZ) tier 1220 that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks).
  • the DMZ-based servers may have restricted responsibilities and help keep breaches contained.
  • the DMZ tier 1220 can include one or more load balancer (LB) subnet(s) 1222, a control plane app tier 1224 that can include app subnet(s) 1226, a control plane data tier 1228 that can include database (DB) subnet(s) 1230 (e.g., frontend DB subnet(s) and/or backend DB subnet(s)).
  • the LB subnet(s) 1222 contained in the control plane DMZ tier 1220 can be communicatively coupled to the app subnet(s) 1226 contained in the control plane app tier 1224 and an Internet gateway 1234 that can be contained in the control plane VCN 1216, and the app subnet(s) 1226 can be communicatively coupled to the DB subnet(s) 1230 contained in the control plane data tier 1228 and a service gateway 1236 and a network address translation (NAT) gateway 1238.
  • the control plane VCN 1216 can include the service gateway 1236 and the NAT gateway 1238.
  • the control plane VCN 1216 can include a data plane mirror app tier 1240 that can include app subnet(s) 1226.
  • the app subnet(s) 1226 contained in the data plane mirror app tier 1240 can include a virtual network interface controller (VNIC) 1242 that can execute a compute instance 1244.
  • the compute instance 1244 can communicatively couple the app subnet(s) 1226 of the data plane mirror app tier 1240 to app subnet(s) 1226 that can be contained in a data plane app tier 1246.
  • the data plane VCN 1218 can include the data plane app tier 1246, a data plane DMZ tier 1248, and a data plane data tier 1250.
  • the data plane DMZ tier 1248 can include LB subnet(s) 1222 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246 and the Internet gateway 1234 of the data plane VCN 1218.
  • the app subnet(s) 1226 can be communicatively coupled to the service gateway 1236 of the data plane VCN 1218 and the NAT gateway 1238 of the data plane VCN 1218.
  • the data plane data tier 1250 can also include the DB subnet(s) 1230 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246.
  • the Internet gateway 1234 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to a metadata management service 1252 that can be communicatively coupled to public Internet 1254.
  • Public Internet 1254 can be communicatively coupled to the NAT gateway 1238 of the control plane VCN 1216 and of the data plane VCN 1218.
  • the service gateway 1236 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to cloud services 1256.
  • the service gateway 1236 of the control plane VCN 1216 or of the data plane VCN 1218 can make application programming interface (API) calls to cloud services 1256 without going through public Internet 1254.
  • the API calls to cloud services 1256 from the service gateway 1236 can be one-way: the service gateway 1236 can make API calls to cloud services 1256, and cloud services 1256 can send requested data to the service gateway 1236. But, cloud services 1256 may not initiate API calls to the service gateway 1236.
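  • The one-way calling relationship described above can be summarized as a small policy table; the Python sketch below is illustrative only and does not reflect any actual gateway implementation.

```python
# Which components may initiate requests to which others (illustrative).
ALLOWED_CALLS = {
    "service_gateway": {"cloud_services"},
    "cloud_services": set(),   # cloud services respond to calls but never initiate them
}

def may_initiate(caller: str, callee: str) -> bool:
    """Return True if the caller is allowed to open a request to the callee."""
    return callee in ALLOWED_CALLS.get(caller, set())

assert may_initiate("service_gateway", "cloud_services")
assert not may_initiate("cloud_services", "service_gateway")
```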
  • the secure host tenancy 1204 can be directly connected to the service tenancy 1219, which may be otherwise isolated.
  • the secure host subnet 1208 can communicate with the SSH subnet 1214 through an LPG 1210 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 1208 to the SSH subnet 1214 may give the secure host subnet 1208 access to other entities within the service tenancy 1219.
  • the control plane VCN 1216 may allow users of the service tenancy 1219 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 1216 may be deployed or otherwise used in the data plane VCN 1218.
  • the control plane VCN 1216 can be isolated from the data plane VCN 1218, and the data plane mirror app tier 1240 of the control plane VCN 1216 can communicate with the data plane app tier 1246 of the data plane VCN 1218 via VNICs 1242 that can be contained in the data plane mirror app tier 1240 and the data plane app tier 1246.
  • users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet 1254 that can communicate the requests to the metadata management service 1252.
  • the metadata management service 1252 can communicate the request to the control plane VCN 1216 through the Internet gateway 1234.
  • the request can be received by the LB subnet(s) 1222 contained in the control plane DMZ tier 1220.
  • the LB subnet(s) 1222 may determine that the request is valid, and in response to this determination, the LB subnet(s) 1222 can transmit the request to app subnet(s) 1226 contained in the control plane app tier 1224.
  • If the request is validated and requires a call to public Internet 1254, the call to public Internet 1254 may be transmitted to the NAT gateway 1238 that can make the call to public Internet 1254.
  • Metadata that may be desired to be stored by the request can be stored in the DB subnet(s) 1230.
  • the data plane mirror app tier 1240 can facilitate direct communication between the control plane VCN 1216 and the data plane VCN 1218. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN 1218. Via a VNIC 1242, the control plane VCN 1216 can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN 1218.
  • control plane VCN 1216 and the data plane VCN 1218 can be contained in the service tenancy 1219.
  • the user, or the customer, of the system may not own or operate either the control plane VCN 1216 or the data plane VCN 1218.
  • the IaaS provider may own or operate the control plane VCN 1216 and the data plane VCN 1218, both of which may be contained in the service tenancy 1219.
  • This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users’, or other customers’, resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet 1254, which may not have a desired level of threat prevention, for storage.
  • the LB subnet(s) 1222 contained in the control plane VCN 1216 can be configured to receive a signal from the service gateway 1236.
  • the control plane VCN 1216 and the data plane VCN 1218 may be configured to be called by a customer of the IaaS provider without calling public Internet 1254.
  • Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy 1219, which may be isolated from public Internet 1254.
  • FIG.13 is a block diagram 1300 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
  • Service operators 1302 can be communicatively coupled to a secure host tenancy 1304 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1306 (e.g., the VCN 1206 of FIG.12) and a secure host subnet 1308 (e.g., the secure host subnet 1208 of FIG.12).
  • the VCN 1306 can include a local peering gateway (LPG) 1310 (e.g., the LPG 1210 of FIG.12) that can be communicatively coupled to a secure shell (SSH) VCN 1312 (e.g., the SSH VCN 1212 of FIG.12) via an LPG 1310 contained in the SSH VCN 1312.
  • the SSH VCN 1312 can include an SSH subnet 1314 (e.g., the SSH subnet 1214 of FIG.12), and the SSH VCN 1312 can be communicatively coupled to a control plane VCN 1316 (e.g., the control plane VCN 1216 of FIG.12) via an LPG 1310 contained in the control plane VCN 1316.
  • the control plane VCN 1316 can be contained in a service tenancy 1319 (e.g., the service tenancy 1219 of FIG. 12), and the data plane VCN 1318 (e.g., the data plane VCN 1218 of FIG. 12) can be contained in a customer tenancy 1321 that may be owned or operated by users, or customers, of the system.
  • the control plane VCN 1316 can include a control plane DMZ tier 1320 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include LB subnet(s) 1322 (e.g., LB subnet(s) 1222 of FIG. 12), a control plane app tier 1324 (e.g., the control plane app tier 1224 of FIG. 12) that can include app subnet(s) 1326 (e.g., app subnet(s) 1226 of FIG. 12), and a control plane data tier 1328 (e.g., the control plane data tier 1228 of FIG. 12) that can include DB subnet(s) 1330 (e.g., similar to DB subnet(s) 1230 of FIG. 12).
  • the LB subnet(s) 1322 contained in the control plane DMZ tier 1320 can be communicatively coupled to the app subnet(s) 1326 contained in the control plane app tier 1324 and an Internet gateway 1334 (e.g., the Internet gateway 1234 of FIG.12) that can be contained in the control plane VCN 1316, and the app subnet(s) 1326 can be communicatively coupled to the DB subnet(s) 1330 contained in the control plane data tier 1328 and a service gateway 1336 (e.g., the service gateway 1236 of FIG. 12) and a network address translation (NAT) gateway 1338 (e.g., the NAT gateway 1238 of FIG. 12).
  • the control plane VCN 1316 can include the service gateway 1336 and the NAT gateway 1338.
  • the control plane VCN 1316 can include a data plane mirror app tier 1340 (e.g., the data plane mirror app tier 1240 of FIG.12) that can include app subnet(s) 1326.
  • the app subnet(s) 1326 contained in the data plane mirror app tier 1340 can include a virtual network interface controller (VNIC) 1342 (e.g., the VNIC of 1242) that can execute a compute instance 1344 (e.g., similar to the compute instance 1244 of FIG.12).
  • the compute instance 1344 can facilitate communication between the app subnet(s) 1326 of the data plane mirror app tier 1340 and the app subnet(s) 1326 that can be contained in a data plane app tier 1346 (e.g., the data plane app tier 1246 of FIG.12) via the VNIC 1342 contained in the data plane mirror app tier 1340 and the VNIC 1342 contained in the data plane app tier 1346.
  • the Internet gateway 1334 contained in the control plane VCN 1316 can be communicatively coupled to a metadata management service 1352 (e.g., the metadata management service 1252 of FIG. 12) that can be communicatively coupled to public Internet 1354 (e.g., public Internet 1254 of FIG.12).
  • Public Internet 1354 can be communicatively coupled to the NAT gateway 1338 contained in the control plane VCN 1316.
  • the service gateway 1336 contained in the control plane VCN 1316 can be communicatively coupled to cloud services 1356 (e.g., cloud services 1256 of FIG.12).
  • the data plane VCN 1318 can be contained in the customer tenancy 1321.
  • the IaaS provider may provide the control plane VCN 1316 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 1344 that is contained in the service tenancy 1319.
  • Each compute instance 1344 may allow communication between the control plane VCN 1316, contained in the service tenancy 1319, and the data plane VCN 1318 that is contained in the customer tenancy 1321.
  • the compute instance 1344 may allow resources, that are provisioned in the control plane VCN 1316 that is contained in the service tenancy 1319, to be deployed or otherwise used in the data plane VCN 1318 that is contained in the customer tenancy 1321.
  • the customer of the IaaS provider may have databases that live in the customer tenancy 1321.
  • the control plane VCN 1316 can include the data plane mirror app tier 1340 that can include app subnet(s) 1326.
  • the data plane mirror app tier 1340 can reside in the data plane VCN 1318, but the data plane mirror app tier 1340 may not live in the data plane VCN 1318. That is, the data plane mirror app tier 1340 may have access to the customer tenancy 1321, but the data plane mirror app tier 1340 may not exist in the data plane VCN 1318 or be owned or operated by the customer of the IaaS provider.
  • the data plane mirror app tier 1340 may be configured to make calls to the data plane VCN 1318 but may not be configured to make calls to any entity contained in the control plane VCN 1316.
  • the customer may desire to deploy or otherwise use resources in the data plane VCN 1318 that are provisioned in the control plane VCN 1316, and the data plane mirror app tier 1340 can facilitate the desired deployment, or other usage of resources, of the customer.
  • the customer of the IaaS provider can apply filters to the data plane VCN 1318.
  • the customer can determine what the data plane VCN 1318 can access, and the customer may restrict access to public Internet 1354 from the data plane VCN 1318.
  • the IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 1318 to any outside networks or databases.
  • cloud services 1356 can be called by the service gateway 1336 to access services that may not exist on public Internet 1354, on the control plane VCN 1316, or on the data plane VCN 1318.
  • the connection between cloud services 1356 and the control plane VCN 1316 or the data plane VCN 1318 may not be live or continuous.
  • Cloud services 1356 may exist on a different network owned or operated by the IaaS provider. Cloud services 1356 may be configured to receive calls from the service gateway 1336 and may be configured to not receive calls from public Internet 1354.
  • Some cloud services 1356 may be isolated from other cloud services 1356, and the control plane VCN 1316 may be isolated from cloud services 1356 that may not be in the same region as the control plane VCN 1316.
  • the control plane VCN 1316 may be located in "Region 1," and cloud service "Deployment 12" may be located in Region 1 and in "Region 2." If a call to Deployment 12 is made by the service gateway 1336 contained in the control plane VCN 1316 located in Region 1, the call may be transmitted to Deployment 12 in Region 1.
  • the control plane VCN 1316, or Deployment 12 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 12 in Region 2.
  • FIG.14 is a block diagram 1400 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
  • Service operators 1402 (e.g., service operators 1202 of FIG. 12) can be communicatively coupled to a secure host tenancy 1404 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1406 (e.g., the VCN 1206 of FIG. 12) and a secure host subnet 1408 (e.g., the secure host subnet 1208 of FIG. 12).
  • the VCN 1406 can include an LPG 1410 (e.g., the LPG 1210 of FIG.12) that can be communicatively coupled to an SSH VCN 1412 (e.g., the SSH VCN 1212 of FIG.12) via an LPG 1410 contained in the SSH VCN 1412.
  • the SSH VCN 1412 can include an SSH subnet 1414 (e.g., the SSH subnet 1214 of FIG.12), and the SSH VCN 1412 can be communicatively coupled to a control plane VCN 1416 (e.g., the control plane VCN 1216 of FIG.12) via an LPG 1410 contained in the control plane VCN 1416 and to a data plane VCN 1418 (e.g., the data plane 1218 of FIG.12) via an LPG 1410 contained in the data plane VCN 1418.
  • the control plane VCN 1416 and the data plane VCN 1418 can be contained in a service tenancy 1419 (e.g., the service tenancy 1219 of FIG. 12).
  • the control plane VCN 1416 can include a control plane DMZ tier 1420 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include load balancer (LB) subnet(s) 1422 (e.g., LB subnet(s) 1222 of FIG.12), a control plane app tier 1424 (e.g., the control plane app tier 1224 of FIG.12) that can include app subnet(s) 1426 (e.g., similar to app subnet(s) 1226 of FIG. 12), a control plane data tier 1428 (e.g., the control plane data tier 1228 of FIG.12) that can include DB subnet(s) 1430.
  • the LB subnet(s) 1422 contained in the control plane DMZ tier 1420 can be communicatively coupled to the app subnet(s) 1426 contained in the control plane app tier 1424 and to an Internet gateway 1434 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1416, and the app subnet(s) 1426 can be communicatively coupled to the DB subnet(s) 1430 contained in the control plane data tier 1428 and to a service gateway 1436 (e.g., the service gateway of FIG. 12) and a network address translation (NAT) gateway 1438 (e.g., the NAT gateway 1238 of FIG. 12).
  • the control plane VCN 1416 can include the service gateway 1436 and the NAT gateway 1438.
  • the data plane VCN 1418 can include a data plane app tier 1446 (e.g., the data plane app tier 1246 of FIG. 12), a data plane DMZ tier 1448 (e.g., the data plane DMZ tier 1248 of FIG.12), and a data plane data tier 1450 (e.g., the data plane data tier 1250 of FIG.12).
  • the data plane DMZ tier 1448 can include LB subnet(s) 1422 that can be communicatively coupled to trusted app subnet(s) 1460 and untrusted app subnet(s) 1462 of the data plane app tier 1446 and the Internet gateway 1434 contained in the data plane VCN 1418.
  • the trusted app subnet(s) 1460 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418, the NAT gateway 1438 contained in the data plane VCN 1418, and DB subnet(s) 1430 contained in the data plane data tier 1450.
  • the untrusted app subnet(s) 1462 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418 and DB subnet(s) 1430 contained in the data plane data tier 1450.
  • the data plane data tier 1450 can include DB subnet(s) 1430 that can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418.
  • the untrusted app subnet(s) 1462 can include one or more primary VNICs 1464(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1466(1)-(N).
  • Each tenant VM 1466(1)-(N) can be communicatively coupled to a respective app subnet 1467(1)-(N) that can be contained in respective container egress VCNs 1468(1)-(N) that can be contained in respective customer tenancies 1470(1)-(N).
  • Respective secondary VNICs 1472(1)-(N) can facilitate communication between the untrusted app subnet(s) 1462 contained in the data plane VCN 1418 and the app subnet contained in the container egress VCNs 1468(1)-(N).
  • Each container egress VCNs 1468(1)-(N) can include a NAT gateway 1438 that can be communicatively coupled to public Internet 1454 (e.g., public Internet 1254 of FIG.12).
  • the Internet gateway 1434 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to a metadata management service 1452 (e.g., the metadata management system 1252 of FIG. 12) that can be communicatively coupled to public Internet 1454.
  • Public Internet 1454 can be communicatively coupled to the NAT gateway 1438 contained in the control plane VCN 1416 and contained in the data plane VCN 1418.
  • the service gateway 1436 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to cloud services 1456.
  • the data plane VCN 1418 can be integrated with customer tenancies 1470.
  • This integration can be useful or desirable for customers of the IaaS provider in some cases, such as a case in which the customer desires support when executing code.
  • the customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects.
  • the IaaS provider may determine whether to run code given to the IaaS provider by the customer.
  • the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier 1446. Code to run the function may be executed in the VMs 1466(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN 1418.
  • Each VM 1466(1)-(N) may be connected to one customer tenancy 1470.
  • Respective containers 1471(1)-(N) contained in the VMs 1466(1)-(N) may be configured to run the code.
  • there can be a dual isolation (e.g., the containers 1471(1)-(N) running code, where the containers 1471(1)-(N) may be contained in at least the VMs 1466(1)-(N) that are contained in the untrusted app subnet(s) 1462), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer.
  • the containers 1471(1)-(N) may be communicatively coupled to the customer tenancy 1470 and may be configured to transmit or receive data from the customer tenancy 1470.
  • the containers 1471(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN 1418.
  • the IaaS provider may kill or otherwise dispose of the containers 1471(1)-(N).
  • the trusted app subnet(s) 1460 may run code that may be owned or operated by the IaaS provider.
  • the trusted app subnet(s) 1460 may be communicatively coupled to the DB subnet(s) 1430 and be configured to execute CRUD operations in the DB subnet(s) 1430.
  • the untrusted app subnet(s) 1462 may be communicatively coupled to the DB subnet(s) 1430, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s) 1430.
  • the containers 1471(1)-(N) that can be contained in the VM 1466(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s) 1430.
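  • The database access rules described in the preceding items (full CRUD for the trusted app subnet(s), read-only access for the untrusted app subnet(s), and no database access for customer containers) can be summarized as a small access matrix; the Python sketch below is illustrative only.

```python
# Illustrative access matrix for the DB subnet(s) described above.
DB_PERMISSIONS = {
    "trusted_app_subnet": {"create", "read", "update", "delete"},
    "untrusted_app_subnet": {"read"},
    "customer_container": set(),
}

def is_allowed(source: str, operation: str) -> bool:
    return operation in DB_PERMISSIONS.get(source, set())

assert is_allowed("trusted_app_subnet", "delete")
assert is_allowed("untrusted_app_subnet", "read")
assert not is_allowed("untrusted_app_subnet", "update")
assert not is_allowed("customer_container", "read")
```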
  • the control plane VCN 1416 and the data plane VCN 1418 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 1416 and the data plane VCN 1418.
  • FIG.15 is a block diagram 1500 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
  • Service operators 1502 can be communicatively coupled to a secure host tenancy 1504 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1506 (e.g., the VCN 1206 of FIG.12) and a secure host subnet 1508 (e.g., the secure host subnet 1208 of FIG.12).
  • the VCN 1506 can include an LPG 1510 (e.g., the LPG 1210 of FIG.12) that can be communicatively coupled to an SSH VCN 1512 (e.g., the SSH VCN 1212 of FIG.12) via an LPG 1510 contained in the SSH VCN 1512.
  • the SSH VCN 1512 can include an SSH subnet 1514 (e.g., the SSH subnet 1214 of FIG.12), and the SSH VCN 1512 can be communicatively coupled to a control plane VCN 1516 (e.g., the control plane VCN 1216 of FIG.12) via an LPG 1510 contained in the control plane VCN 1516 and to a data plane VCN 1518 (e.g., the data plane 1218 of FIG.12) via an LPG 1510 contained in the data plane VCN 1518.
  • the control plane VCN 1516 and the data plane VCN 1518 can be contained in a service tenancy 1519 (e.g., the service tenancy 1219 of FIG. 12).
  • the control plane VCN 1516 can include a control plane DMZ tier 1520 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include LB subnet(s) 1522 (e.g., LB subnet(s) 1222 of FIG. 12), a control plane app tier 1524 (e.g., the control plane app tier 1224 of FIG. 12) that can include app subnet(s) 1526 (e.g., app subnet(s) 1226 of FIG. 12), and a control plane data tier 1528 (e.g., the control plane data tier 1228 of FIG. 12) that can include DB subnet(s) 1530 (e.g., DB subnet(s) 1430 of FIG. 14).
  • the LB subnet(s) 1522 contained in the control plane DMZ tier 1520 can be communicatively coupled to the app subnet(s) 1526 contained in the control plane app tier 1524 and to an Internet gateway 1534 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1516, and the app subnet(s) 1526 can be communicatively coupled to the DB subnet(s) 1530 contained in the control plane data tier 1528 and to a service gateway 1536 (e.g., the service gateway 1236 of FIG. 12) and a network address translation (NAT) gateway 1538 (e.g., the NAT gateway 1238 of FIG. 12).
  • the control plane VCN 1516 can include the service gateway 1536 and the NAT gateway 1538.
  • the data plane VCN 1518 can include a data plane app tier 1546 (e.g., the data plane app tier 1246 of FIG. 12), a data plane DMZ tier 1548 (e.g., the data plane DMZ tier 1248 of FIG. 12), and a data plane data tier 1550 (e.g., the data plane data tier 1250 of FIG. 12).
  • the data plane DMZ tier 1548 can include LB subnet(s) 1522 that can be communicatively coupled to trusted app subnet(s) 1560 (e.g., trusted app subnet(s) 1460 of FIG. 14) and untrusted app subnet(s) 1562 (e.g., untrusted app subnet(s) 1462 of FIG. 14) of the data plane app tier 1546 and the Internet gateway 1534 contained in the data plane VCN 1518.
  • the trusted app subnet(s) 1560 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518, the NAT gateway 1538 contained in the data plane VCN 1518, and DB subnet(s) 1530 contained in the data plane data tier 1550.
  • the untrusted app subnet(s) 1562 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518 and DB subnet(s) 1530 contained in the data plane data tier 1550.
  • the data plane data tier 1550 can include DB subnet(s) 1530 that can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518.
  • the untrusted app subnet(s) 1562 can include primary VNICs 1564(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1566(1)-(N) residing within the untrusted app subnet(s) 1562.
  • Each tenant VM 1566(1)-(N) can run code in a respective container 1567(1)-(N), and be communicatively coupled to an app subnet 1526 that can be contained in a data plane app tier 1546 that can be contained in a container egress VCN 1568.
  • Respective secondary VNICs 1572(1)-(N) can facilitate communication between the untrusted app subnet(s) 1562 contained in the data plane VCN 1518 and the app subnet contained in the container egress VCN 1568.
  • the container egress VCN can include a NAT gateway 1538 that can be communicatively coupled to public Internet 1554 (e.g., public Internet 1254 of FIG.12).
  • the Internet gateway 1534 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to a metadata management service 1552 (e.g., the metadata management system 1252 of FIG. 12) that can be communicatively coupled to public Internet 1554.
  • Public Internet 1554 can be communicatively coupled to the NAT gateway 1538 contained in the control plane VCN 1516 and contained in the data plane VCN 1518.
  • the service gateway 1536 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to cloud services 1556.
  • the pattern illustrated by the architecture of block diagram 1500 of FIG.15 may be considered an exception to the pattern illustrated by the architecture of block diagram 1400 of FIG.14 and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region).
  • the respective containers 1567(1)-(N) that are contained in the VMs 1566(1)-(N) for each customer can be accessed in real-time by the customer.
  • the containers 1567(1)-(N) may be configured to make calls to respective secondary VNICs 1572(1)-(N) contained in app subnet(s) 1526 of the data plane app tier 1546 that can be contained in the container egress VCN 1568.
  • the secondary VNICs 1572(1)-(N) can transmit the calls to the NAT gateway 1538 that may transmit the calls to public Internet 1554.
  • the containers 1567(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN 1516 and can be isolated from other entities contained in the data plane VCN 1518.
  • the containers 1567(1)-(N) may also be isolated from resources from other customers.
  • the customer can use the containers 1567(1)-(N) to call cloud services 1556.
  • the customer may run code in the containers 1567(1)-(N) that requests a service from cloud services 1556.
  • the containers 1567(1)-(N) can transmit this request to the secondary VNICs 1572(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet 1554.
  • Public Internet 1554 can transmit the request to LB subnet(s) 1522 contained in the control plane VCN 1516 via the Internet gateway 1534.
  • the LB subnet(s) can transmit the request to app subnet(s) 1526 that can transmit the request to cloud services 1556 via the service gateway 1536.
  • IaaS architectures 1200, 1300, 1400, 1500 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
  • the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner.
  • An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.
  • FIG.16 illustrates an example computer system 1600, in which various embodiments may be implemented. The system 1600 may be used to implement any of the computer systems described above. As shown in the figure, computer system 1600 includes a processing unit 1604 that communicates with a number of peripheral subsystems via a bus subsystem 1602.
  • peripheral subsystems may include a processing acceleration unit 1606, an I/O subsystem 1608, a storage subsystem 1618 and a communications subsystem 1624.
  • Storage subsystem 1618 includes tangible computer-readable storage media 1622 and a system memory 1610.
  • Bus subsystem 1602 provides a mechanism for letting the various components and subsystems of computer system 1600 communicate with each other as intended. Although bus subsystem 1602 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 1602 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • Such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
  • Processing unit 1604, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 1600.
  • One or more processors may be included in processing unit 1604. These processors may include single core or multicore processors.
  • processing unit 1604 may be implemented as one or more independent processing units 1632 and/or 1634 with single or multicore processors included in each processing unit. In other embodiments, processing unit 1604 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip. [0163] In various embodiments, processing unit 1604 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s) 1604 and/or in storage subsystem 1618. Through suitable programming, processor(s) 1604 can provide various functionalities described above.
  • Computer system 1600 may additionally include a processing acceleration unit 1606, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
  • I/O subsystem 1608 may include user interface input devices and user interface output devices.
  • User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices.
  • User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands.
  • User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®).
  • user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
  • User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode reader 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices.
  • user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices.
  • User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like.
  • User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc.
  • the display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like.
  • In general, the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 1600 to a user or other computer.
  • user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
  • Computer system 1600 may comprise a storage subsystem 1618 that provides a tangible non-transitory computer-readable storage medium for storing software and data constructs that provide the functionality of the embodiments described in this disclosure.
  • the software can include programs, code, instructions, scripts, etc., that when executed by one or more cores or processors of processing unit 1604 provide the functionality described above.
  • Storage subsystem 1618 may also provide a repository for storing data used in accordance with the present disclosure.
  • storage subsystem 1618 can include various components including a system memory 1610, computer-readable storage media 1622, and a computer readable storage media reader 1620.
  • System memory 1610 may store program instructions that are loadable and executable by processing unit 1604.
  • System memory 1610 may also store data that is used during the execution of the instructions and/or data that is generated during the execution of the program instructions.
  • Various different kinds of programs may be loaded into system memory 1610 including but not limited to client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), virtual machines, containers, etc.
  • System memory 1610 may also store an operating system 1616.
  • operating system 1616 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems.
  • the virtual machines along with their guest operating systems (GOSs) may be loaded into system memory 1610 and executed by one or more processors or cores of processing unit 1604.
  • System memory 1610 can come in different configurations depending upon the type of computer system 1600.
  • system memory 1610 may be volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.). Different types of RAM configurations may be provided, including a static random access memory (SRAM), a dynamic random access memory (DRAM), and others.
  • system memory 1610 may include a basic input/output system (BIOS) containing basic routines that help to transfer information between elements within computer system 1600, such as during start-up.
  • Computer-readable storage media 1622 may represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing and storing computer-readable information for use by computer system 1600, including instructions executable by processing unit 1604 of computer system 1600.
  • Computer-readable storage media 1622 can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information.
  • This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media.
  • computer-readable storage media 1622 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media.
  • Computer-readable storage media 1622 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like.
  • Computer-readable storage media 1622 may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs.
  • the disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program services, and other data for computer system 1600.
  • Machine-readable instructions executable by one or more processors or cores of processing unit 1604 may be stored on a non-transitory computer-readable storage medium.
  • a non-transitory computer-readable storage medium can include physically tangible memory or storage devices that include volatile memory storage devices and/or non-volatile storage devices.
  • Examples of non-transitory computer-readable storage medium include magnetic storage media (e.g., disk or tapes), optical storage media (e.g., DVDs, CDs), various types of RAM, ROM, or flash memory, hard drives, floppy drives, detachable memory drives (e.g., USB drives), or other type of storage device.
  • Communications subsystem 1624 provides an interface to other computer systems and networks. Communications subsystem 1624 serves as an interface for receiving data from and transmitting data to other systems from computer system 1600. For example, communications subsystem 1624 may enable computer system 1600 to connect to one or more devices via the Internet.
  • communications subsystem 1624 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards, or other mobile communication technologies, or any combination thereof)), global positioning system (GPS) receiver components, and/or other components.
  • communications subsystem 1624 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.
  • communications subsystem 1624 may also receive input communication in the form of structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like on behalf of one or more users who may use computer system 1600.
  • communications subsystem 1624 may be configured to receive data feeds 1626 in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources.
  • communications subsystem 1624 may also be configured to receive data in the form of continuous data streams, which may include event streams 1628 of real-time events and/or event updates 1630, that may be continuous or unbounded in nature with no explicit end.
  • applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
  • Communications subsystem 1624 may also be configured to output the structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 1600.
  • Computer system 1600 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
  • Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
  • the specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
  • Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
  • Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Abstract

Techniques are disclosed for implementing a compute service in a reduced footprint data center. A control plane of the compute service can execute a compute service instance at a bare metal computing device of the reduced footprint data center. The control plane can receive, from an agent executing in the bare metal instance, a first indication that the bare metal instance is successfully executing. The control plane can then send to the agent information identifying a hypervisor image accessible to the agent via a network connection of the reduced footprint data center and receive a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device. The control plane can then initiate a reboot of the compute service instance.

Description

PATENT Attorney Docket No.: 088325-1461163 (429230PC) Client Reference No.: ORC24138805-WO-PCT (IaaS #728.12) TECHNIQUES FOR COMPUTE SERVICE IN AN OVERLAY NETWORK [0001] This international application claims priority to and the benefit of the following applications, the entire contents of which are hereby incorporated by reference in their entirety for all purposes: 1. U.S. Provisional Patent Application 63/564,195, filed on March 12, 2024, entitled "SCALABLE FOOTPRINT FOR DEDICATED CLOUD TECHNIQUES"; 2. U.S. Provisional Patent Application 63/568,061, filed on March 21, 2024, entitled "NETWORKING FOR A SCALABLE DEDICATED CLOUD FOOTPRINT"; 3. U.S. Provisional Patent Application 63/568,234, filed on March 21, 2024, entitled "SCALABLE FOOTPRINT FOR DEDICATED CLOUD TECHNIQUES"; 4. U.S. Provisional Patent Application 63/633,966, filed on April 15, 2024, entitled "SCALABLE FOOTPRINT FOR DEDICATED CLOUD TECHNIQUES"; 5. U.S. Provisional Patent Application 63/637,691, filed on April 23, 2024, entitled "SCALABLE FOOTPRINT FOR DEDICATED CLOUD TECHNIQUES"; 6. U.S. Provisional Patent Application 63/660,377, filed on June 14, 2024, entitled "DRCC ARCHITECTURE UPDATE"; 7. U.S. Provisional Patent Application 63/690,270, filed on September 3, 2024, entitled "TECHNIQUES FOR VIRTUAL MACHINE INFRASTRUCTURE IN A REDUCED FOOTPRINT DATA CENTER"; 8. U.S. Provisional Patent Application 63/691,174, filed on September 5, 2024, entitled "TECHNIQUES FOR COMPUTE SERVICE IN AN OVERLAY NETWORK"; and 9. U.S. Patent Application No.19/075,724, filed on March 10, 2025, entitled "TECHNIQUES FOR COMPUTE SERVICE IN AN OVERLAY NETWORK." FIELD [0002] This disclosure is generally concerned with data centers. More specifically, this disclosure relates to data centers in which a Compute service is implemented in an overlay network of the data center. BACKGROUND [0003] Cloud service providers (CSPs) can offer computing infrastructure for customers using resources in several data centers. As cloud computing demand increases, CSPs can improve the availability of cloud resources by scaling the data centers. However, scaling can result in large data center footprints with a significant number of computing devices requiring a commensurate amount of resources to operate as well as reserving significant computing resources for the effective management of the cloud resources themselves. BRIEF SUMMARY [0004] Embodiments of the present disclosure relate to cloud computing networks. More particularly, the present disclosure describes architectures, infrastructure, and related techniques for implementing a block storage service in a reduced footprint data center. A typical CSP may provide cloud services to multiple customers. Each customer may have the ability to customize and configure the infrastructure provisioned to support their allocated cloud resources. To manage the infrastructure provisioning for multiple customers, the CSP may reserve computing resources within a data center to provide certain "core" services to both customers and to other services operated by the CSP. For example, services like block storage, object storage, identity and access management, and key management and secrets services are implemented within a "service enclave" of the data center. The service enclave may connect via a substrate network of computing devices (virtual machines and/or bare metal instances) hosted within the data center. 
The substrate network may be a part of the "underlay network" of the data center, which includes the physical network connecting bare metal devices, smart network interface cards (SmartNICs) of the computing devices, and networking infrastructure like top-of-rack switches. By contrast, CSP customers have infrastructure provisioned in an "overlay network" comprising one or more VCNs of virtualized environments to provide resources for the customer (e.g., compute, storage, etc.). [0005] The service enclave exists on dedicated hardware within the data center. Because of this, the services hosted within the service enclave are difficult to scale. Whereas additional racks and servers can be implemented within the data center to expand the resources available to CSP customers, the dedicated computing resources for the service enclave are typically of a fixed size that depends on the largest predicted size of the data center. Expanding the service enclave can require a complicated addition of computing resources that may impact the availability of the core services to customers. Additionally, unused resources within the service enclave (e.g., if the service enclave is sized too large for the customer demand from the data center) cannot be easily made available to the customers, since the service enclave does not typically allow network access from the customer overlay network. [0006] Even as the demand for cloud services grows, CSPs may want to deploy data centers to meet that demand that initially have the smallest physical footprint possible. Such a footprint can improve the ease of both deploying the physical components and configuring the initial infrastructure while still allowing the data center to scale to meet customer demand. In the reduced footprint, rather than dedicate a portion of the computing hardware to providing the service enclave, the "core services" that are hosted in the service enclave can instead be implemented in the overlay network. By doing so, the core services can be scaled as the data center footprint expands. The computing devices used to construct the reduced footprint data center can be homogenized, improving the initial configuration and the easing the expansion of the footprint when additional, homogeneous devices are added. In addition, by eliminating the substrate network, flexible overlay network shapes are made available for both CSP core services and customers. [0007] Moving compute service to the overlay can create a circular dependency with block storage service (BSS). All server devices for compute should be virtual machines on compute hypervisors running in the overlay network, which use BSS to provide their boot volumes. However, BSS may not be running in the Overlay initially because the BSS runs on VMs managed by the compute hypervisors. The techniques described herein describe breaking the circular dependencies in ways that are consistent with the overlay network architecture of the reduced footprint data center while providing appropriate virtual machine infrastructure that can appropriately utilize the resources of bare metal computing devices for core services like BSS while remaining dynamic for other services including Compute provided to customers. [0008] Embodiments described herein relate to methods, systems, and computer-readable media for implementing a Compute service in an Overlay network of a reduced footprint data center. 
A method for a compute service in a reduced footprint data center can include executing a compute service instance at a bare metal computing device of the reduced footprint data center. A control plane of the compute service can use a live image to execute the compute service instance. The method can also include receiving a first indication that the bare metal instance is successfully executing. The compute control plane can receive the first indication from an agent executing in the bare metal instance. The method can also include the control plane sending, to the agent, information identifying a hypervisor image accessible to the agent via a network connection of the reduced footprint data center and receiving, from the agent, a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device. The method can also include initiating, by the control plane, a reboot of the compute service instance, the compute service instance configured to boot using the hypervisor image on the storage device. [0009] Another embodiment is directed to a computer system including one or more processors and one or more memories storing computer-executable instructions that, when executed by the one or more processors, cause the computer system to perform the method described above. [0010] Yet another embodiment is directed to a non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more processors of a computer system, cause the computer system to perform the method described above. In addition, embodiments may be implemented by using a computer program product, comprising computer program/instructions which, when executed by a processor, cause the processor to perform any of the methods described in the disclosure. [0011] Additional aspects of the disclosure are provided in the following examples: [0012] Example 1: A method for configuring virtual machine infrastructure in a reduced footprint data center. The method can include implementing, by a compute service control plane executing in the reduced footprint data center at a bare metal computing device of the reduced footprint data center, an instance of the compute service, the bare metal computing device including computing resources; executing, by the compute service control plane at the instance, a hypervisor, the hypervisor including a configuration shape corresponding to the computing resources of the bare metal computing device, the configuration shape defining a shape family of virtual machine shapes; and provisioning, by the compute service control plane, a virtual machine at the hypervisor, the virtual machine corresponding to a virtual machine shape of the shape family, the virtual machine shape defining a portion of the computing resources of the bare metal computing device managed by the hypervisor and assigned to the virtual machine. [0013] Example 1.1: The method of Example 1, wherein provisioning the virtual machine includes determining a virtual machine shape from the shape family, the virtual machine shape corresponding to a portion of the computing resources available from the bare metal computing device; updating a capacity object of the hypervisor, the capacity object maintaining the available capacity of the computing resources managed by the hypervisor; and executing the virtual machine using the portion of the computing resources of the virtual machine shape. 
[0014] Example 1.2: The method of Example 1.1, further including provisioning a second virtual machine by at least: determining, using the capacity object of the hypervisor, a second virtual machine shape from the shape family; and executing the second virtual machine using a second portion of the computing resources defined by the second virtual machine shape. [0015] Example 1.3: The method of Example 1.2, further including updating the capacity object based on the second virtual machine shape. [0016] Example 1.4: The method of Example 1, wherein the computing resources include a plurality of processor cores, a quantity of memory, and a plurality of storage volumes. [0017] Example 1.5: The method of Example 1.4, wherein the configuration shape is a dense shape corresponding to all of the plurality of processor cores and the quantity of memory. [0018] Example 1.6: The method of Example 1, wherein the virtual machine includes a ring 0 service virtual machine, and wherein the portion of the computing resources defined by the virtual machine shape includes a reserved storage volume of the plurality of storage volumes. [0019] Example 2: A computing system including one or more processors, and one or more memories storing computer-executable instructions that, when executed by the one or more processors, cause the computing system to perform any of the methods of Examples 1-1.6 above. [0020] Example 3: A non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a computing system, cause the computing system to perform any of the methods of Examples 1-1.6 above. In addition, embodiments may be implemented by using a computer program product, comprising computer program/instructions which, when executed by a processor, cause the processor to perform the methods of Examples 1-3.1. BRIEF DESCRIPTION OF DRAWINGS [0021] FIG.1 is a block diagram illustrating an example system architecture of a reduced footprint data center including an initialization device, according to some embodiments. [0022] FIG.2A is a block diagram illustrating a conventional data center including a plurality of server racks reserved for particular functionality, according to some embodiments. [0023] FIG.2B is a block diagram illustrating a reduced footprint data center in which services are in an overlay network, according to some embodiments. [0024] FIG.3 is a block diagram illustrating the expansion of a reduced footprint data center, according to some embodiments. [0025] FIG.4 is a block diagram illustrating networking connections between an overlay network and an underlay network in a reduced footprint data center, according to some embodiments. [0026] FIG.5 is a block diagram illustrating an example architecture of a reduced footprint data center with virtual machines booting from a local boot volume, according to some embodiments. [0027] FIG.6 is a block diagram illustrating an example architecture for preparing a local volume to boot a hypervisor, according to some embodiments. [0028] FIG.7 is example configuration data illustrating an example hypervisor shape, according to some embodiments. [0029] FIG.8 is example configuration data illustrating example virtual machine shapes, according to some embodiments. [0030] FIG.9 is example configuration data illustrating an example virtual machine shape selection, according to some embodiments. 
[0031] FIG.10 is a flow diagram of an example process for a compute service in an overlay network of a reduced footprint data center, according to some embodiments. [0032] FIG.11 is a flow diagram of an example process for determining virtual machine shapes, according to some embodiments. [0033] FIG.12 is a block diagram illustrating one pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment. [0034] FIG.13 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment. [0035] FIG.14 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment. [0036] FIG.15 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment. [0037] FIG.16 is a block diagram illustrating an example computer system, according to at least one embodiment. DETAILED DESCRIPTION [0038] The adoption of cloud services has seen a rapid uptick in recent times. Various types of cloud services are now provided by various different cloud service providers (CSPs). The term cloud service is generally used to refer to a service or functionality that is made available by a CSP to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP's infrastructure and which is used to provide a cloud service to a customer are separate from the customer's own on-premises servers and systems. Customers can thus avail themselves of cloud services provided by the CSP without having to purchase separate hardware and software resources for the services. Cloud services are designed to provide a subscribing customer easy, scalable, and on-demand access to applications and computing resources without the customer having to invest in procuring the infrastructure that is used for providing the services or functions. Various different types or models of cloud services may be offered such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and others. A customer can subscribe to one or more cloud services provided by a CSP. The customer can be any entity such as an individual, an organization, an enterprise, and the like. [0039] As indicated above, a CSP is responsible for providing the infrastructure and resources that are used for providing cloud services to subscribing customers. The resources provided by the CSP can include both hardware and software resources. These resources can include, for example, compute resources (e.g., virtual machines, containers, applications, processors), memory resources (e.g., databases, data stores), networking resources (e.g., routers, host machines, load balancers), identity, and other resources. In certain implementations, the resources provided by a CSP for providing a set of cloud services CSP are organized into data centers. A data center may be configured to provide a particular set of cloud services. The CSP is responsible for equipping the data center with infrastructure and resources that are used to provide that particular set of cloud services. A CSP may build one or more data centers. 
[0040] The following definitions are useful for portions of a data center built by a CSP: [0041] Underlay Network - The physical network that sits below the Overlay Network and virtual cloud networks (VCNs) therein. The existing Substrate network is a portion of the Underlay Network. ILOM ports, management and SmartNIC substrate addresses are also part of the underlay network. [0042] Overlay Network – The network environment that is available for use by executing services and applications, including virtualization environments, that provide the functionality of the data center to both customers and the CSP. The Overlay Network can include VCN(s), virtualization environments, and networking connections from these VCNs in the reduced footprint data center to other cloud computing services of the CSP (e.g., services provided in other data center environments). Specific details about network virtualization and VCNs as part of Infrastructure as a Service are provided below with respect to FIGS.9-13. [0043] Substrate Network - A portion of the Underlay Network that contains host devices (e.g., bare metal computing devices and/or VMs) running only Substrate Services. In existing environments these host devices may not have SmartNICs. The host devices may be managed by service teams responsible for one or more of the Substrate Services. [0044] Substrate Services - The list of services that currently run in the Substrate Network, while most of these run in Service Enclave (Block Storage, Object Storage, Identity Service etc.) some substrate service live outside of the service enclave. Currently substrate services have a mix of services that must talk to the underlay network (e.g. Network Monitoring) and services that due to historical reasons reside in service enclave (e.g. Object Storage). With the elimination of dedicated substrate host, we expect substrate services to converge into only services that must communicate with the underlay network. [0045] SmartNIC – A computing component that combines a network interface card with additional functionality for network virtualization to create layers of network abstraction that can be run on top of the physical networking components (e.g., the Underlay Network). The SmartNIC can include processors and memory that can perform computing operations to provide the additional functionality. [0046] Integrated lights out managers (ILOMs) - An ILOM can be a processor or processing platform integrated with bare metal hosts in a data center that can provide functionality for managing and monitoring the hosts remotely in cases where the general functionality of the host may be impaired (e.g., fault occurrence). [0047] BIOS Device(s) – A computing device or a plurality of computing devices on a server rack in the reduced footprint data center. The BIOS Device(s) may be designed to enable independent and resilient operations during various boot scenarios and network disruptions. The BIOS Device(s) may be configured to facilitate the initial boot processes for the reduced footprint data center, provide essential services during recovery, and ensure the region's stability, especially in power-constrained environments. The BIOS Device hosts a range of functions, all of which can allow the autonomous operation of the region. For example, these functions can include DNS resolution, NTP synchronization, DHCP/ZTP configuration, and various security and provisioning services. 
By offering these capabilities, the BIOS Device ensures that the rack can bootstrap itself, recover from power or network-related events, and maintain essential connectivity and management functions without relying on external resources. For example, each server rack can have one BIOS device, or can have two or three BIOS devices. In various embodiments, the BIOS device can have similar hardware specifications (e.g., number of processors, amount of memory, amount of attached storage devices) as other server devices on the rack. [0048] A reduced footprint data center can have a new architecture for a region in which the initial network footprint is as small as feasible (e.g., six racks, four racks, and possibly even a single rack of server devices) while still providing core cloud services and scalability for customer demands. In particular, a reduced footprint data center may not segregate resources for the Service Enclave (SE) from the Customer Enclave (CE). Instead, the Butterfly region will place SE services (e.g., Block Storage, Object Storage, Identity), which primarily operate in a Substrate Network, into an Overlay Network. This means that a reduced footprint data center may not have dedicated hosts for the Substrate Network, but can require particular solutions for connectivity with the Substrate services now in the Overlay. In addition, a small portion of fundamental boot services is needed to ensure initial route configuration for the services in the Overlay during startup and/or recovery. Since the "core services" now operate in the Overlay network, services like Block Storage can have circular dependencies with other services like Compute. For example, Compute uses Block Storage to provide boot volumes for the VMs on which Block Storage operates, and Block Storage uses a Device Encryption Key service to decrypt the boot volumes that uses Compute for its VMs. [0049] FIGS.1-4 provide an overview of the concepts embodied by a reduced footprint data center. [0050] FIG.1 is a block diagram illustrating an example system architecture of a reduced footprint data center 100 including an initialization device 102. As shown in FIG.1, the reduced footprint data center 100 can include six racks of server devices. The racks may be referred to as "Butterfly" racks. The reduced footprint data center 100 can include Butterfly rack 110, Butterfly rack 120, Butterfly rack 130, Butterfly rack 140, Butterfly rack 150, and Butterfly rack 160. In some embodiments, the racks can be identical. For example, Butterfly rack 110 can include the same number of computing and/or networking devices as each other Butterfly rack 120-160. [0051] Butterfly rack 110 can include two top-of-rack (TOR) switches 106, 108. The TOR switches 106, 108 can each include one or more networking switches configured to provide network communication between the server devices and other computing devices within Butterfly rack 110 as well as one or more networking connections to the other Butterfly racks 120-160 and or other networks including customer network 114. [0052] The Butterfly rack 110 can also include one or more BIOS device(s) 102. The BIOS device can be a server device configured to execute one or more processes to provide a set of "core services" within the reduced footprint data center 100 during startup/boot processes. The BIOS device(s) 102 can configure one or more components of the reduced footprint data center 100 during startup. 
For example, the BIOS device(s) 102 can send network configuration information to a networking device within the Butterfly racks 110-160. The networking device can be a SmartNIC attached to a server device within the Butterfly racks 110-160. As another example, the BIOS device(s) 102 can send network configuration information to a substrate access VCN. The substrate access VCN can be deployed to one or more hosts within the reduced footprint data center 100. For example, VMs executing in the Butterfly racks 110-160 can be configured to be a substrate access VCN. The substrate access VCN can be configured to provide networking routes between one or more other VCNs (e.g., customer VCNs) and the networking devices (e.g., SmartNICs) and other networking components of the substrate services that now execute in their own VCN in the Overlay. [0053] In some embodiments, the BIOS device(s) 102 can also be configured to host one or more services like a key exchange service (KeS), a device encryption key (DEK) service, or other core services. In addition, the BIOS device(s) 102 can include boot volumes for VMs that are started on host devices in the reduced footprint data center 100. For example, BIOS device(s) 102 can provide boot volumes for VMs on hypervisors hosted on server device(s) 104. [0054] The Butterfly rack 110 can include one or more additional server device(s) 104. The server device(s) 104 can each include one or more processors and one or more memories that together can store and execute instructions for implementing computing services as described herein, including, for example, compute, storage, VMs, CSP services, customer services and/or applications, and the like. As depicted in FIG.1, each of the Butterfly racks 110-160 can include an identical complement of server device(s) and TORs. In some embodiments, each of the Butterfly racks 110-160 can include a BIOS device, although the techniques described herein can be implemented using only a single BIOS device within the reduced footprint data center 100. Each server device of the server device(s) 104 can include a trusted platform module (TPM). The TPM on each device can be a microcontroller or other processor (or multiple processors) along with storage for performing cryptographical operations like hashing, encryption/decryption, key and key pair generation, and key storage. The TPM may generally conform to a standard characterizing such devices, for example, ISO/IEC 11889. [0055] The reduced footprint data center 100 can also include a networking rack 112. The networking rack 112 can include one or more networking devices including switches, gateways, routers, and the like for communicatively coupling the Butterfly racks 110-160 to each other and to customer network 114. The customer network 114 can include an on-premises network connected to the reduced footprint data center 100. In some embodiments, the customer network 114 can provide network connectivity to a public network, including the Internet. As described below with respect to FIG. 2, the networking rack 112 may not be part of an initial "Small" reduced footprint data center and may be added to support the scaling of the reduced footprint data center 100. [0056] FIG.2A is a block diagram illustrating a conventional data center 200 including a plurality of server racks reserved for particular functionality. 
In a conventional data center 200, the plurality of server racks can each include multiple server devices as well as networking equipment (e.g., TORs) and power supply and distribution equipment. The conventional data center 200 shown in FIG.2A can have a standard footprint of 13 server racks as shown, although additional server racks are possible in larger data centers. [0057] To provide networking isolation between customer data and CSP data for CSP services executing in the conventional data center 200, a portion of the server racks can be reserved as a service enclave, so that the computing devices on those server racks can host and provide CSP services within the conventional data center 200 without also hosting customer data. As shown in FIG.2A, server racks 1-4 may be included as service enclave racks 202. [0058] Similarly, a portion of the server racks can be provided as a customer enclave, so that the computing devices on those server racks can host customer services, applications, and associated customer data. Racks 5-7 can be part of the customer enclave racks 204 within conventional data center 200. [0059] The isolation between the service enclave and the customer enclave can be enforced by software-defined perimeters that define edge devices and/or software within the enclave as distinguished from hardware/software elements outside of the enclave. Access into and out of each enclave may be controlled, monitored, and/or policy driven. For example, access to the service enclave may be based on authorization, limited to authorized clients of the CSP. Such access may be based on one or more credentials provided to the enclave. [0060] The conventional data center 200 can also include database racks 206 (racks 8-9) and networking racks 208 (racks 10-13). The database racks 206 can include computing devices and storage devices that provide storage and management for databases, data stores, object storage, and similar data persistence techniques within the conventional data center 200. The networking racks 208 can include networking devices that provide connectivity to the computing devices within conventional data center 200 and to other networks (e.g., customer networks, the internet, etc.). [0061] FIG.2B is a block diagram illustrating a reduced footprint data center 210 in which services are in an overlay network, according to some embodiments. The reduced footprint data center 210 may be an example of reduced footprint data center 100 of FIG.1, including six Butterfly racks, each having a plurality of server devices, networking devices, and power distribution devices. [0062] Unlike the conventional data center 200, in which particular server racks are reserved as service enclave racks 202 and customer enclave racks 204, the reduced footprint data center 210 can have an Overlay network 212 that spans computing devices in all of the server racks. For example, server devices on Butterfly Rack 1 and Butterfly Rack 6 can host VMs for a VCN in the Overlay network 212. The Overlay network 212 can then include both core services 214 and customer services 216. The core services 214 can include one or more VCNs for the CSP services that would be hosted within the Service Enclave of conventional data center 200 (e.g., on service enclave racks 202). In the reduced footprint data center 210, the core services 214 can exist in the overlay network 212 on any one or more of the server devices within Butterfly racks. 
Similarly, customer services 216 can exist in the overlay network 212 on host devices on any of the Butterfly racks. In some embodiments, the core services 214 may be hosted on specific devices of the reduced footprint data center. For example, the core services 214 may be hosted on Butterfly racks 1-3, while the customer services 216 may be hosted on Butterfly racks 4-6. In other embodiments, the core services 214 and the customer services 216 may be hosted on any of the Butterfly racks, as depicted in FIG. 2B. [0063] FIG.3 is a block diagram illustrating the expansion of a reduced footprint data center 300, according to some embodiments. The reduced footprint data center 300 can include a plurality of reduced footprint server racks 302. Each of the plurality of reduced footprint server racks 302 can be an example of one of the Butterfly racks 110-160 described above with respect to FIG.1. [0064] The plurality of reduced footprint server racks 302 can be connected in a ring network 304 using directional network connections between each set of TOR switches on each of the server racks. For example, a first TOR switch at each rack can be connected to a first TOR switch of two adjacent server racks, such that data communication from the server rack flows in one direction. A second TOR switch at each rack can be connected to a second TOR switch of two adjacent server racks, providing data communication between the racks in the opposite direction. The first TOR switch and the second TOR switch at each rack can be connected to one another and to each server device on the rack, providing multiple, redundant network paths from any server device of any one server rack to another server device on another server rack. The ring network 304 can therefore allow low latency and highly available network connections between resources hosted on any computing device (e.g., server device) in the plurality of reduced footprint server racks 302. [0065] To scale the reduced footprint data center 300 from the initial footprint provided by reduced footprint server racks 302, additional server racks can be connected to the reduced footprint server racks 302. A networking rack 306 can be implemented at the reduced footprint data center 300. The networking rack 306 can be an example of networking rack 112 described above with respect to FIG.1. The networking rack 306 can include a plurality of networking ports that can be used to connect to one or more of the plurality of reduced footprint server racks. For example, the networking rack 306 can be connected to a first reduced footprint server rack using connection 308. The networking rack 306 can be, for example, a two-chassis system having 4 LCs each with 34x400G ports for a total of 384x100G links in each chassis. [0066] Once the networking rack 306 has been implemented and connected, the additional server racks 310 can be installed in the reduced footprint data center 300. The additional server racks 310 can be different from the reduced footprint server racks of the plurality of reduced footprint server racks 302. For example, the additional server racks 310 can include a different number of server devices, with each server device including a different amount of computing and/or storage resources (e.g., processors, processing cores, dynamic memory, non-volatile storage, etc.). 
Once the additional server racks 310 have been connected to the networking rack 306, a cloud service hosted on the plurality of reduced footprint server racks 302 can be expanded to utilize the computing resources of the additional server racks 310. As one example, a cloud service (e.g., Compute) hosted in the plurality of reduced footprint server racks 302 can have a portion of its data plane provisioned on one of the server devices of the additional server racks 310, thereby allowing the Compute service to instantiate VMs on the additional server racks 310. [0067] FIG.4 is a block diagram illustrating an example network architecture of networking connections between one or more VCNs (substrate service VCNs) in an Overlay network 402 and the Underlay network 422 in a reduced footprint data center 400, according to some embodiments. The reduced footprint data center 400 can be an example of other reduced footprint data centers described herein, including reduced footprint data center 100 of FIG.1. [0068] In the reduced footprint data center 400, the CSP services that were previously implemented in the SE (e.g., hosted on service enclave racks 202 of FIG.2) can now execute in one or more substrate service VCNs 404-408. For example, substrate service VCN-1404 can be a VCN for a Compute service control plane, substrate service VCN-2406 can be a VCN for a PKI service, and substrate service VCN-N 408 can be a VCN for a Block Storage service. SE service control and data planes can be separated into different VCNs. The substrate service VCNs 404-408 can exist in the Overlay network 402. The Overlay network 402 can also include customer VCN(s) 416, which can be limited in their connectivity to the Underlay network 422. [0069] Each substrate service VCN can have its own route table that defines the network traffic routing rules for forwarding network traffic within the network of the reduced footprint data center 400. As shown in FIG.4, substrate service VCN-1 can have VCN-1 route table 410, substrate service VCN-2 can have VCN-2 route table 412, and substrate service VCN-N 408 can have VCN-N route table 414. The routing information of each of the substrate service VCNs 404-408 can be initially configured when the reduced footprint data center 400 is first built so that network traffic to/from the core SE services can be routed between the Overlay network 402 and the Underlay network 422. [0070] The Underlay network 422 can include various devices and other networking endpoints that are connected via the physical networking components of the reduced footprint data center 400. As shown in FIG. 4, the Underlay network 422 can include, without limitation, ILOM(s) 424, Bastions 426, NTP server(s) 428, BIOS services 430, and VNIC(s) 432. The ILOM(s) 424 can be computing devices and network targets that provide access to the server devices of reduced footprint data center 400 for both in-band and out-of-band management. For example, the ILOM(s) 424 can allow for remote management of the associated server devices within the server racks of reduced footprint data center 400 that is separate from the networking pathways defined for the region. The Bastions 426 can be services executing on the server devices of the reduced footprint data center 400 that provide network access via the Underlay network 422 and do not have public network addresses. 
The Bastions 426 can provide remote access to computing resources within the reduced footprint data center 400 in conjunction with a Bastion service that operates on the Underlay network 422. The Bastion service may be an SE service that is not moved to the Overlay network 402 in the reduced footprint data center 400. Similarly, network time protocol (NTP) servers 428 may operate in the Underlay network 422 to provide accurate timing to devices and services within the reduced footprint data center 400. BIOS services 430 can include services that are hosted on the one or more initialization devices on the server racks in the reduced footprint data center 400. For example, BIOS services 430 can include a key encryption service usable to encrypt/decrypt data on the server devices of reduced footprint data center 400 during the initial boot process. As another example, the BIOS services 430 can include a network configuration service that can provide the initial network configuration for devices within the reduced footprint data center 400. The VNIC(s) 432 can include network interfaces defined by SmartNICs connected to the server devices within the reduced footprint data center 400. [0071] With SE services moved to the Overlay network 402, the SE services may still need network connectivity with the Underlay network 422 to properly function. To provide this connectivity, a substrate access VCN 418 can be implemented within the reduced footprint data center 400. The substrate access VCN 418 can include a dynamic routing gateway (DRG) that allows communication between the substrate service VCNs 404-408 and the Underlay network 422. The substrate access VCN 418 can then have a DRG route table 420 that can define a single route rule for reaching the Underlay network 422 from the substrate service VCNs 404-408. [0072] To avoid circular dependencies when the reduced footprint data center 400 is first built or recovers from a shutdown event, an initialization device (e.g., BIOS device 102 of FIG.1) can be used to configure the network addresses and routes for a substrate access VCN 418, the dynamic routing gateway within the substrate access VCN 418, and/or one or more SmartNICs of the Underlay network 422. When the server devices of each of the reduced footprint data center 400 server racks are booted, the substrate access VCN 418 can be deployed to communicatively connect the one or more substrate service VCNs 404-408 with the Underlay network 422. The initialization device can send network configuration information to the substrate access VCN 418 to configure the DRG route table 420 to provide initial network addresses (e.g., IP addresses) for each endpoint of the substrate service VCNs 404-408 in the Overlay network 402 until a DHCP service and other networking services are available in their respective substrate service VCNs. [0073] In addition, the initialization device can send networking configuration information to define one or more static routes for the dynamic routing gateway as part of the DRG routing table 420. The static routes can characterize a networking connection between the Underlay network 422, including a SmartNIC connected to each server device of the reduced footprint data center (e.g., server device(s) 104 of FIG. 1), and each substrate service VCN 404-408. 
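To make the route configuration concrete, the following Python sketch shows one way an initialization device might assemble the static routes described above for the DRG route table 420. The class names, helper function, and the example CIDR blocks and VCN names are illustrative assumptions for this sketch only and are not an interface or addressing plan defined by this disclosure.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StaticRoute:
    # Destination CIDR block and the next-hop entity that should receive the traffic.
    destination_cidr: str
    next_hop: str

@dataclass
class DrgRouteTable:
    # Simplified stand-in for DRG route table 420 in FIG. 4.
    routes: List[StaticRoute] = field(default_factory=list)

    def add_route(self, destination_cidr: str, next_hop: str) -> None:
        self.routes.append(StaticRoute(destination_cidr, next_hop))

def build_initial_drg_routes(underlay_cidr: str, substrate_service_vcns: Dict[str, str]) -> DrgRouteTable:
    """Assemble the routes an initialization device might push to the substrate access VCN.

    A single rule sends Underlay-bound traffic from the substrate service VCNs toward the
    Underlay network, and one static route per substrate service VCN lets Underlay endpoints
    (e.g., SmartNICs) reach each service VCN before DHCP and other services are available.
    """
    table = DrgRouteTable()
    table.add_route(underlay_cidr, next_hop="underlay-network")
    for vcn_name, vcn_cidr in substrate_service_vcns.items():
        table.add_route(vcn_cidr, next_hop=vcn_name)
    return table

if __name__ == "__main__":
    # The CIDR blocks below are placeholders used only to make the sketch runnable.
    routes = build_initial_drg_routes(
        underlay_cidr="10.0.0.0/16",
        substrate_service_vcns={
            "compute-control-plane-vcn": "172.16.1.0/24",
            "pki-service-vcn": "172.16.2.0/24",
            "block-storage-vcn": "172.16.3.0/24",
        },
    )
    for route in routes.routes:
        print(f"{route.destination_cidr} -> {route.next_hop}")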
[0074] Finally, the initialization device can send network configuration information to each of the SmartNICs to provide each SmartNIC a network address (e.g., a network address for the SmartNICs' endpoints in the Underlay network 422). Configuring each of these components of the reduced footprint data center can be done in response to the initialization device receiving indications that the corresponding component has been brought up to an active state (e.g., SmartNIC powered on and reachable over the Underlay network 422, substrate access VCN 418 deployed to one or more hosts within the reduced footprint data center 400, etc.). Compute Service in Overlay Network [0075] Multiple methods can be used to break the circular dependencies. One method is to implement a "Ring 0" concept, in which some or all of the Compute hypervisors that host the Block Storage data plane VMs will have local boot volumes. The Compute hypervisors that are configured to boot from a local boot volume are part of a Ring 0 of core services (potentially including DEK) that are brought online first in a reduced footprint data center using local boot volumes. To boot the core service VMs, a primordial hypervisor and VMs boot using a local boot volume (e.g., a particular M.2 SSD device attached to the bare metal device hosting the hypervisors and VMs). The M.2 devices will be configured with hypervisor images using live images built and maintained by the CSP to support the initialization of Compute and BSS. The live images can be minimal images that contain all libraries required to partition and encrypt the M.2 devices before installing hypervisor images on them. The live images can persist in Object storage or the BIOS host device on the Butterfly racks. To boot a VM for BSS using the local device, Compute control plane (CP) can first launch a bare metal instance with network boot to load the live image. Once Compute CP confirms that the instance has booted, Compute CP can indicate the version of the hypervisor to boot to an agent running in the host. The agent can then partition and configure the attached M.2 device and install the corresponding hypervisor image (e.g., from the BIOS device). Compute CP can then reboot the bare metal instance using the M.2 device as the boot volume. [0076] As a second option, all Compute hypervisors and all VMs can be provisioned with boot volumes provided by the BSS DP. The launch workflows for both hypervisors and VMs are similar to non-Butterfly regions. In order to break the circular dependencies in cases when the rack is powered on, both hypervisors and Block Storage data-plane VMs are each required to maintain a rescue image and a small local storage. [0077] Numerous advantages can be realized by removing services from the Substrate. Services teams can eliminate duplicated work (e.g., networking configuration for Underlay and Overlay connectivity), a CSP can completely eliminate some services, compute and storage capacity becomes fungible across all services in the Overlay, and service connectivity is greatly simplified. Services teams can be agnostic about the configuration of their services. If the hosts are provisioned to communicate with the Substrate Access VCN, then the services function as any other customer service. 
Importantly, a reduced footprint data center does not dedicate a significant fraction of its computing resources to CSP services from the beginning in an unchangeable way. If the CSP services can be scaled down to meet customer needs, the freed resources can be provided to the customer without the need for a physical scale-up. In a complementary way, CSP services can also scale up in the same way as customer services, since the CSP services now reside in the CE Overlay network. In the particular case of Compute, the service can access the benefits of an Overlay service while preventing circular dependencies with other services like BSS. [0078] FIG.5 is a block diagram illustrating an example architecture of a reduced footprint data center 500 with virtual machines booting from a local boot volume, according to some embodiments. The reduced footprint data center 500 can include a plurality of server devices including server 1 502, server 2 520, and server 3 550, which may be examples of server device(s) 104 of FIG.1. The server devices can be bare metal hosts for software applications that are used to host and provision other software within the reduced footprint data center 500. For example, each server device can host a hypervisor that can manage a plurality of VMs on each server device. As depicted, server 1 502 can host compute hypervisor 504, server 2 520 can host compute hypervisor 524, and server 3 550 can host compute hypervisor 554. These compute hypervisors may be components of a Compute service that includes Compute data plane and Compute control plane components. The hypervisors can be configured to each host a plurality of VMs for one or more services executing in the reduced footprint data center. As depicted in FIG.5, instances of a block storage service data plane can be hosted in VMs on hypervisors on each server device. For example, block storage data plane service 512 can be hosted in block storage data plane VM 508 on server 1 502, block storage data plane service 532 can be hosted on block storage data plane VM 528 on server 2 520, and block storage data plane service 562 can be hosted on block storage data plane VM 558 on server 3 550. In addition, VMs for other services can be managed by the various hypervisors, including a block storage management plane service 540 hosted on block storage management plane VM 538 on server 2 520 and a key management service (KMS) 570 hosted on KMS VM 568 on server 3 550. These servers and the services hosted in the VMs may be considered the Ring 0 services for the reduced footprint data center 500. [0079] Each server can include local boot volumes that are usable to boot both the bare metal hosts themselves as well as the VMs hosted in each hypervisor on each server device. For example, server 1 502 can boot from boot volume 506, server 2 520 can boot from boot volume 526, and server 3 550 can boot from boot volume 556. The boot volumes may be stored in a storage device locally attached to each server device. For example, an NVMe SSD attached to each server device can store the local boot volumes for initially booting the bare metal instances of each server device. In some embodiments, the local boot volumes may be stored in a dedicated boot storage device for each server device, since the main storage devices for the server devices may be encrypted. 
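One way to summarize the Ring 0 arrangement of FIG.5 is as configuration data that a control plane could consume. The following Python sketch mirrors the server, hypervisor, VM, and local boot volume relationships described above; the dictionary layout and the helper function are assumptions made for illustration and do not represent an actual data model of the Compute service.

# Illustrative model of the Ring 0 layout in FIG. 5. Only the element relationships
# come from the description above; the key names are assumptions for readability.
RING_0_LAYOUT = {
    "server 1 502": {
        "hypervisor": "compute hypervisor 504",
        "bare_metal_boot_volume": "boot volume 506",
        "vms": [
            {"name": "block storage data plane VM 508", "local_boot_volume": "local boot volume 510"},
        ],
    },
    "server 2 520": {
        "hypervisor": "compute hypervisor 524",
        "bare_metal_boot_volume": "boot volume 526",
        "vms": [
            {"name": "block storage data plane VM 528", "local_boot_volume": "local boot volume 530"},
            {"name": "block storage management plane VM 538", "local_boot_volume": None},  # boots remotely
        ],
    },
    "server 3 550": {
        "hypervisor": "compute hypervisor 554",
        "bare_metal_boot_volume": "boot volume 556",
        "vms": [
            {"name": "block storage data plane VM 558", "local_boot_volume": "local boot volume 560"},
            {"name": "KMS VM 568", "local_boot_volume": None},  # boots remotely from an encrypted volume
        ],
    },
}

def locally_booted_vms(layout: dict) -> list:
    """Return the VMs that can start before the block storage service is fully operational."""
    return [
        vm["name"]
        for server in layout.values()
        for vm in server["vms"]
        if vm["local_boot_volume"] is not None
    ]

if __name__ == "__main__":
    print(locally_booted_vms(RING_0_LAYOUT))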
[0080] Because each VM uses a boot volume, and the boot volumes are provided by BSS, the BSS data plane instances may need to be operational to initially boot the Compute VMs that host the BSS data plane and other components of BSS (e.g., BSS management plane service 540). This situation creates a circular dependency between BSS and the Compute service. By providing local boot volumes for bare metal instances and/or BSS data plane VMs, this first circular dependency can be avoided. For example, server 1 502 can include a local boot volume 510 that is usable to boot block storage data plane VM 508. Similarly, server 2 520 can include local boot volume 530 usable to boot block storage data plane VM 528, and server 3 550 can include local boot volume 560 usable to boot block storage data plane VM 558. [0081] Once the BSS VMs have booted from the local boot volume on each server device, there can still be a circular dependency with DEK services before BSS is fully operational, since the boot volumes in the BSS data plane (e.g., boot volumes in the Storage NVMe SSDs on each server device) may be encrypted and may not be accessible as boot volumes without obtaining the corresponding encryption key from DEK. For example, block storage data plane service 512 may manage and access data volumes including block storage management plane boot volume 516 and data volumes 514 that are stored on an attached NVMe SSD of server 1 502. The block storage management plane boot volume 516 and data volumes 514 may be encrypted, so that block storage data plane service 512 cannot vend the block storage management plane boot volume 516 to boot block storage management plane VM 538 to host block storage management plane service 540. For BSS to become fully operational, an initialization device like BIOS device 580 (e.g., BIOS server 102 of FIG. 1) can host KeS 582 to provide encryption keys for at least the boot volumes for the DEK services within the reduced footprint data center 500. [0082] The process for breaking the circular dependency between BSS and DEK services in the reduced footprint data center 500 generally follows a flow defined by the arrows 1-4 in FIG.5. As shown, after the BSS data plane instances boot from local boot volumes, the BSS can be used to boot a key management service (KMS) 570 at a KMS VM 568 (on server 3 550). At step 1, the KMS VM 568 can initiate a remote iSCSI connection to KMS VM boot volume target 542, which is managed at server 2 520. At step 2, the KMS boot volume target 542 can obtain an encrypted device encryption key that is encrypted with a key provided by KeS 582. At step 3, the KMS boot volume target 542 can obtain, from KeS 582, the encryption key usable to decrypt the device encryption key. In some embodiments, the KeS 582 decrypts the encrypted DEK provided by the KMS boot volume target 542. At step 4, the KMS boot volume target 542 can connect to backend storage (e.g., an NVMe SSD of server 3 550 that includes encrypted KMS boot volume 566 as well as data volumes 564) and use the decrypted device encryption key to decrypt KMS boot volume 566. The KMS boot volume target 542 can connect the KMS boot volume 566 to boot the KMS VM 568. [0083] Once the KMS 570 is operating, the KMS 570 can take over key vending/key exchange operations for services operating in the reduced footprint data center 500. In particular, boot volume targets may no longer need the KeS 582 to provide decryption of device encryption keys. 
This allows BSS to provide access to boot volumes for other services, thereby allowing BSS to be fully operational without depending on the continued operation of the limited KeS 582. The arrows 5-8 generally outline the operations of the process for bringing additional services online using BSS to provide remote boot volumes. As an example, the block storage management plane service 540 can be initiated by booting block storage management plane VM 538. At step 5, block storage management plane VM 538 can initiate a remote iSCSI connection to block storage management plane boot volume target 572, which is managed by the block storage data plane on server 3 550. At step 6, the block storage management plane VM boot volume target 572 can obtain an encrypted device encryption key that is encrypted with a key provided by KMS 570. The encrypted device encryption key and related context can be obtained from local boot volume 560. At step 7, the block storage management plane VM boot volume target 572 can work in conjunction with KMS 570 to decrypt the device encryption key. At step 8, the block storage management plane VM boot volume target 572 can connect to backend storage (e.g., an NVMe SSD of server 1 502 that includes encrypted block storage management plane boot volume 516) and use the decrypted device encryption key to decrypt block storage management plane boot volume 516. The block storage management plane VM boot volume target 572 can connect the block storage management plane boot volume 516 to boot the block storage management plane VM 538. [0084] FIG.6 is a block diagram illustrating an example architecture 600 for preparing a local volume to boot a hypervisor, according to some embodiments. The example architecture includes a bare metal computing device 602 (e.g., one of server device(s) 104 of FIG. 1) that can host a bare metal instance (e.g., an operating system and related process that can execute software and perform operations described herein for configuring the bare metal computing device). [0085] As depicted in FIG.6, the bare metal computing device 602 can include two storage devices, storage device 622 and storage device 624. Each of storage devices 622, 624 may be an M.2 solid state disk drive operating using a non-volatile memory express (NVMe) bus on the bare metal computing device 602. Given that there exist two storage devices attached to the bare metal computing device 602, and therefore accessible by a bare metal instance on the bare metal computing device 602, the circular dependencies between hypervisors and Block Storage data-plane VMs can be broken by booting such instances using these local volumes. In order to achieve this, the Compute service can support provisioning both hypervisors and Block Storage virtual machines using local boot volumes. [0086] In order to launch hypervisors using the storage devices 622, 624, the storage devices 622, 624 can be prepared with hypervisor images on them. This preparation can be achieved by using live images that are built and maintained by the CSP and available to the bare metal instance from either an Object Storage service 608 or from a BIOS device 610 (e.g., BIOS device 102 of FIG.1). A live image can be a minimal image that contains all libraries required to partition and encrypt the storage devices 622, 624 before obtaining and installing hypervisor images on them. Live images can be persisted in Object Storage 608 or the BIOS device 610 so that they can be easily obtained when launching hypervisors. 
Similar to hypervisor images, live images can be versioned and the latest released version can be used. The live images will be architecture dependent and are not expected to have frequent updates. [0087] To launch a hypervisor on the bare metal instance of the bare metal host 602, at step 1, the compute control plane 604 can launch the bare metal instance on the bare metal host 602, including executing the live image agent 614. The bare metal instance can be enabled with a network boot option. A custom pre-boot execution environment (iPXE) script can also be provided for the live image agent 614 that allows the network boot to load the latest available live image. The live image can be obtained from either object storage 608 or from the BIOS device 610 and used to complete the boot of the bare metal instance. [0088] Compute control plane 604 can wait until the bare metal instance boots using the live image. Compute control plane 604 can periodically poll the live image agent 614 that eventually becomes available on the bare metal instance. The live image agent 614 can be configured to instruct logical volume managers on the bare metal host 602 to create and manage suitable logical volumes on the storage devices 622, 624 for use with the hypervisor images. [0089] At step 2, Compute control plane 604 can pass information identifying the hypervisor image to use to execute a hypervisor on the bare metal host 602. The Compute control plane 604 can pass the information to the live image agent 614 executing in the bare metal host 602. Compute control plane 604 can then wait until the live image agent 614 performs the following operations. [0090] At step 3, the live image agent 614 can partition the storage devices 622, 624 as appropriate for the volumes needed to support the hypervisor. As shown in FIG.6, the live image agent 614 can set up RAID 1 devices with partitions on each of storage device 622 and storage device 624. The live image agent 614 can work in conjunction with boot logical volume manager (LVM) 616, operating system LVM 618, and other LVM 620 to partition the storage devices 622, 624. The RAID 1 device usable for booting the operating system under the hypervisor using the hypervisor image can include partition 1 626 of storage device 622 and partition 1 630 of storage device 624. Other logical volumes can be partitioned with other LVM 620 and can include similar RAID 1 configurations, including partition 2 628 on storage device 622 and partition 2 632 on storage device 624 forming a RAID 1 device usable as storage accessible to the VMs on the hypervisor. At step 4, the live image agent 614 can obtain the hypervisor image using the identifying information provided by Compute control plane 604 and provision the volume with the hypervisor image. The live image agent 614 can also update the boot loader information so that the bare metal instance boots into the hypervisor when it is rebooted. Once the local volumes have been correctly configured with the hypervisor image to support a local boot, the live image agent 614 can communicate to Compute control plane 604 to indicate that the configuration operations have been completed. [0091] At step 5, Compute control plane 604 can change the boot order of the bare metal instance to boot from the local volumes on storage devices 622, 624. The Compute control plane 604 can make the boot order change via ILOM 612 of the bare metal host 602. 
The Compute control plane 604 can then initiate a reboot operation of the bare metal instance on the bare metal host 602. [0092] At step 6, the bare metal host 602 can reboot using the hypervisor image prepared on the volumes of storage devices 622, 624. The bare metal instance can include an operating system 636 that hosts the hypervisor. Once booted, the hypervisor on bare metal host 602 can become available to manage VMs on the bare metal host 602. At step 7, the Compute control plane 604 can poll a hypervisor agent 634 of the hypervisor to determine when the hypervisor becomes available. [0093] FIG.7 is example configuration data illustrating an example hypervisor shape 700, according to some embodiments. The hypervisor shape 700 can be configured to map to the computing resources available for the underlying bare metal computing device. For example, in the reduced footprint data center, each server device (e.g., server device(s) 104 of FIG.1) can be a computing device having as many computing resources (e.g., processors, processor cores, memory, storage, etc.) as feasible for a single device in a server rack. As depicted in FIG. 7, the hypervisor shape 700 can correspond to a computing device having 128 processor cores, 6 available NVMe storage devices, two NUMA nodes with two cores reserved for each node, and the corresponding network interface configuration for the network interface card of the computing device. [0094] FIG.8 is example configuration data illustrating example virtual machine shapes 800, according to some embodiments. The VM shapes can also be referred to as a VM shape family. A shape family can be a logical categorization of virtual machine shapes that is assigned to a hypervisor once it is provisioned. For example, VM_DENSE_E5_FLEX is a shape family of a hypervisor host that is provisioned on E5.DENSE bare-metal instances and is able to host both VM_DENSE_E5_FLEX and VM_STANDARD_E5_FLEX virtual machines. Each hypervisor shape can support multiple shape families, but it may support only one after the hypervisor is provisioned. The VM shapes 800 show which shape families are supported on HV.DenseIO.E5.128 hypervisors and which VM shapes each shape family supports. [0095] Every hypervisor shape family has a strict configuration that defines several capacity buckets. Each capacity bucket can define which virtual machine shape of the VM shapes 800 can be placed on the hypervisor. For example, the VM shapes 800 define the capacity buckets available for each shape family of an HV.DenseIO.E5.128 hypervisor. In this configuration, the shape family VM_DENSE_E5_FLEX defines four buckets, two for each NUMA node of the hypervisor. The first and the third buckets specify that 48 cores and 576 GB of memory from each NUMA node can be used to provision VM_DENSE_E5_FLEX virtual machines. Similarly, the second and last buckets specify that the remaining 14 cores and 160 GB of memory from each NUMA node can be used to provision VM_STANDARD_E5_FLEX shape virtual machines. [0096] For provisioning VMs within the VM shapes 800, the capacities defined for each bucket can be used to maximize the usage of the underlying computing resources. A capacity object can exist for each bucket that a hypervisor can support. These capacity objects can be created when hypervisors are provisioned, for example by the Compute control plane. Each capacity object will maintain the remaining cores and memory of its associated bucket.
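For illustration only, the shape and bucket configuration described in the two preceding paragraphs could be represented roughly as follows; the field names are assumptions, and the values simply restate the HV.DenseIO.E5.128 example above rather than reproduce the actual contents of FIG. 7 or FIG. 8.

from dataclasses import dataclass

@dataclass(frozen=True)
class CapacityBucket:
    numa_node: int
    vm_shape: str
    cores: int
    memory_gb: int

# Illustrative hypervisor shape: 128 cores, 6 NVMe devices, 2 cores reserved per NUMA node,
# and the four capacity buckets of the VM_DENSE_E5_FLEX shape family (two per NUMA node).
HV_DENSE_E5_128 = {
    "shape": "HV.DenseIO.E5.128",
    "total_cores": 128,
    "nvme_devices": 6,
    "reserved_cores_per_numa_node": 2,
    "shape_families": {
        "VM_DENSE_E5_FLEX": [
            CapacityBucket(numa_node=0, vm_shape="VM_DENSE_E5_FLEX", cores=48, memory_gb=576),
            CapacityBucket(numa_node=0, vm_shape="VM_STANDARD_E5_FLEX", cores=14, memory_gb=160),
            CapacityBucket(numa_node=1, vm_shape="VM_DENSE_E5_FLEX", cores=48, memory_gb=576),
            CapacityBucket(numa_node=1, vm_shape="VM_STANDARD_E5_FLEX", cores=14, memory_gb=160),
        ],
    },
}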
When a virtual machine is being provisioned, the placement logic can determine a set of capacity objects (possibly from different hypervisors) based on the shape families that can be used to provision the virtual machine at a particular hypervisor. The candidate capacity objects must have enough remaining cores and memory to fit the new VM instance. These candidates are then sorted so as to pack as many VMs as possible onto already occupied hypervisors, and finally one of them is selected. A new VM attachment object is then created, and the remaining cores and memory of the chosen hypervisor capacity object are updated to reflect the placement and to prevent future launch failures. When the same virtual machine is terminated, its attachment object is deleted and the capacity object of the hypervisor on which the VM was placed is updated. [0097] In some examples, disk resources can be proportional to the number of cores. For example, if a dense hypervisor has 6 NVMe drives, then a VM may use at least 48 cores to get all NVMe drives. For example, each NVMe may use 4 cores. [0098] FIG.9 is example configuration data illustrating an example virtual machine shape selection 900, according to some embodiments. To deploy a Ring 0 hypervisor, a separate pool can be configured for Ring 0 hypervisors and VMs. Core services use the new Ring 0 shapes when deploying VMs. The total number of available local storage volumes per bucket can be configured to comport with the Ring 0 shapes (e.g., VM shapes 800). To distinguish between a dense shape and a regular shape, the available volumes can be allocated as follows: each local volume can have a fixed size (e.g., 50 GB), each dense VM shape can be allocated a single volume, the hypervisor can be allocated a single 90 GB partition, and the remaining available storage can be divided equally among the other shapes. For example, for two NVMe SSDs each of size 480 GB, the allocation can include: nine total local volumes (e.g., 480 GB/50 GB per volume in a RAID configuration), one reserved for the hypervisor, two reserved for dense VMs (one for each NUMA node), and six reserved for standard VMs (three for each NUMA node). [0099] FIG.10 is a flow diagram of an example process 1000 for a compute service in an overlay network of a reduced footprint data center, according to some embodiments. The process 1000 can be performed by components of the reduced footprint data center (e.g., reduced footprint data center 100 of FIG.1), including one or more computing devices like server device(s) 104 of FIG. 1 configured to execute the control plane of a Compute service in the reduced footprint data center. [0100] The process 1000 can begin at block 1002 with the control plane of the compute service executing a compute service instance at a bare metal computing device of a reduced footprint data center. The compute service instance can be a bare metal instance including an operating system and additional processes configured to perform operations to prepare one or more storage devices of the bare metal computing device. The compute service instance can be executed using a live image that includes software executable by the bare metal computing device to execute an agent (e.g., a live image agent) to partition the storage device and provision a hypervisor image on the storage device. In some embodiments, the storage device can be an M.2 storage device.
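Referring back to the placement logic described at the start of this paragraph, a minimal sketch of how candidate capacity objects might be filtered, packed, and updated is shown below. The class, the packing heuristic, and the attachment bookkeeping are illustrative assumptions rather than the actual Compute control plane implementation.

from dataclasses import dataclass, field

@dataclass
class CapacityObject:
    hypervisor_id: str
    vm_shape: str
    remaining_cores: int
    remaining_memory_gb: int
    attachments: list = field(default_factory=list)

def place_vm(vm_id: str, shape: str, cores: int, memory_gb: int,
             capacities: list) -> CapacityObject:
    # Candidates must match the shape and have enough remaining cores and memory.
    candidates = [c for c in capacities
                  if c.vm_shape == shape
                  and c.remaining_cores >= cores
                  and c.remaining_memory_gb >= memory_gb]
    if not candidates:
        raise RuntimeError("out of capacity for shape " + shape)
    # Prefer the most-used candidate (least remaining capacity) to pack occupied hypervisors densely.
    chosen = min(candidates, key=lambda c: (c.remaining_cores, c.remaining_memory_gb))
    chosen.remaining_cores -= cores
    chosen.remaining_memory_gb -= memory_gb
    chosen.attachments.append(vm_id)          # stands in for the VM attachment object
    return chosen

def terminate_vm(vm_id: str, cores: int, memory_gb: int, chosen: CapacityObject) -> None:
    # Delete the attachment and return the capacity, mirroring the termination path above.
    chosen.attachments.remove(vm_id)
    chosen.remaining_cores += cores
    chosen.remaining_memory_gb += memory_gb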
[0101] At block 1004, the control plane can receive a first indication that the bare metal instance is successfully executing. The first indication can be received from an agent executing in the bare metal instance. For example, the live image can be configured to execute a live image agent (e.g., a process) on the compute service instance once the compute service instance has been booted. [0102] At block 1006, the control plane can send information identifying a hypervisor image to the agent. The hypervisor image can be accessible to the agent via a network connection of the reduced footprint data center. For example, the hypervisor image can be stored at an initialization device (e.g., BIOS device 102 of FIG.1) that is connected to the bare metal computing device via a networking connection in the reduced footprint data center. [0103] In some embodiments, the agent can be configured to at least partition the storage device, obtain the hypervisor image from the initialization device of the reduced footprint data center, and provision the hypervisor image onto a partition of the storage device. The hypervisor image can be obtained using the information sent by the control plane to the agent. The information can identify the name of the hypervisor image and a version of the hypervisor image. [0104] At block 1008, the control plane can receive a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device. The second indication can be received from the agent. [0105] At block 1010, the control plane can reboot the compute service instance. Rebooting the compute service instance can be in response to receiving the second indication. Rebooting the compute service instance can include power cycling the bare metal computing device. The compute service instance can be configured to boot using the hypervisor image on the storage device. The hypervisor image can include software executable by the bare metal computing device to run an operating system and host a hypervisor on the operating system. [0106] In some embodiments, the control plane can, prior to rebooting the bare metal instance, change a boot order of the compute service instance to boot using the storage device. The control plane can also, after rebooting the compute service instance, poll a hypervisor agent at the compute service instance to determine whether the hypervisor is successfully executing at the bare metal instance. For example, if the compute service instance successfully reboots using the hypervisor image at the storage device, the hypervisor agent should begin executing at the compute service instance. The control plane can poll the hypervisor agent until a response is received indicating that the hypervisor agent is operational. [0107] FIG.11 is a flow diagram of an example process 1100 for determining virtual machine shapes, according to some embodiments. The process can be performed by components of the reduced footprint data center (e.g., reduced footprint data center 100 of FIG. 1), including one or more computing devices like server device(s) 104 of FIG.1 configured to execute the control plane of a Compute service in the reduced footprint data center. [0108] The process 1100 can begin at block 1102 with the compute service control plane implementing an instance of the compute service at a bare metal computing device. 
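A condensed sketch of the control-plane workflow of process 1000 (blocks 1002 through 1010, plus the boot-order change and hypervisor-agent polling described above) might look like the following; the host, agent, and ILOM client objects are hypothetical stand-ins for whatever management APIs a given deployment exposes.

import time

def wait_until(probe, timeout_s=1800, interval_s=10):
    """Poll `probe` until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return
        time.sleep(interval_s)
    raise TimeoutError("probe did not become ready in time")

def provision_hypervisor(host, live_image_agent, ilom, hypervisor_agent, image_ref):
    host.launch_bare_metal_instance(boot="network")   # block 1002: network-boot the live image
    wait_until(live_image_agent.is_ready)             # block 1004: first indication from the agent
    live_image_agent.provision(image_ref)             # block 1006: send hypervisor image name/version
    wait_until(live_image_agent.is_provisioned)       # block 1008: second indication
    ilom.set_boot_order("local-disk")                 # change boot order to the prepared local volume
    host.reboot()                                     # block 1010: reboot into the hypervisor image
    wait_until(hypervisor_agent.is_ready)             # hypervisor is now available to host VMs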
For example, the bare metal computing device can be one of the server devices (e.g., server device(s) 104 of FIG.1), and the compute instance can include applications and other software (e.g., an operating system) that is configured to host compute service data plane resources (e.g., a hypervisor). The bare metal computing device can include computing resources. For example, the bare metal computing device can include a plurality of processors, processor cores, a quantity of memory, and a plurality of storage volumes (e.g., partitions of one or more storage devices like NVMe devices). [0109] At block 1104, the compute service control plane can execute a hypervisor at the instance on the bare metal computing device. The hypervisor can include a configuration shape corresponding to the computing resources of the bare metal computing device. In some embodiments, the configuration shape is a dense shape corresponding to all of the plurality of processor cores and the quantity of memory. For example, the configuration shape for the hypervisor may allocate all the computing resources of the bare metal computing device for the hypervisor. The configuration shape can define a shape family of virtual machine shapes (e.g., VM shapes 800 of FIG.8). For example, the shape family can define possible allocations of the computing resources to virtual machines deployed at the hypervisor. [0110] At block 1106, the compute service control plane can provision a virtual machine at the hypervisor. The virtual machine can correspond to a virtual machine shape of the shape family. For example, the virtual machine can be provisioned using a dense shape of the shape family so that the virtual machine is allocated a local volume of a plurality of storage volumes of the computing resources. The virtual machine shape can define a portion of the computing resources of the bare metal computing device managed by the hypervisor and assigned to the virtual machine. For example, the virtual machine shape can specify the number of processor cores, memory, and storage volumes allocated to the virtual machine on the hypervisor. In some embodiments, the virtual machine can be a ring 0 service virtual machine, and the portion of the computing resources defined by the virtual machine shape can include a reserved storage volume of the plurality of storage volumes. [0111] In some embodiments, provisioning the virtual machine can include determining a virtual machine shape from the shape family. The virtual machine shape can correspond to a portion of the computing resources available from the bare metal computing device. The compute service control plane can then update a capacity object of the hypervisor. The capacity object can maintain the available capacity of the computing resources managed by the hypervisor. The compute service control plane can then execute the virtual machine using the portion of the computing resources of the virtual machine shape. [0112] In some embodiments, the compute service control plane can provision a second virtual machine by at least determining a second virtual machine shape from the shape family. The second virtual machine shape can be determined using the capacity object of the hypervisor. For example, based on the portion of the computing resources allocated to the virtual machine, the capacity object may be updated to reflect the remaining computing resources available to the second virtual machine.
The compute service control plane can then execute the second virtual machine using a second portion of the computing resources defined by the second virtual machine shape. The compute service control plane can then update the capacity object to reflect the allocation of computing resources to the second virtual machine according to the second virtual machine shape. Example Infrastructure as a Service Architectures [0113] As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (example services include billing software, monitoring software, logging software, load balancing software, clustering software, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. [0114] In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc. [0115] In most cases, a cloud computing model may require the participation of a cloud provider. The cloud provider may be, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services. [0116] In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines (e.g., that can be spun up on demand)) or the like. [0117] In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first. [0118] In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.)
once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files. [0119] In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more inbound/outbound traffic group rules provisioned to define how the inbound and/or outbound traffic of the network will be set up and one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve. [0120] In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed may need to first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned. [0121] FIG.12 is a block diagram 1200 illustrating an example pattern of an IaaS architecture, according to at least one embodiment. Service operators 1202 can be communicatively coupled to a secure host tenancy 1204 that can include a virtual cloud network (VCN) 1206 and a secure host subnet 1208. In some examples, the service operators 1202 may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled. Alternatively, the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. 
The client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS. Alternatively, or in addition, client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN 1206 and/or the Internet. [0122] The VCN 1206 can include a local peering gateway (LPG) 1210 that can be communicatively coupled to a secure shell (SSH) VCN 1212 via an LPG 1210 contained in the SSH VCN 1212. The SSH VCN 1212 can include an SSH subnet 1214, and the SSH VCN 1212 can be communicatively coupled to a control plane VCN 1216 via the LPG 1210 contained in the control plane VCN 1216. Also, the SSH VCN 1212 can be communicatively coupled to a data plane VCN 1218 via an LPG 1210. The control plane VCN 1216 and the data plane VCN 1218 can be contained in a service tenancy 1219 that can be owned and/or operated by the IaaS provider. [0123] The control plane VCN 1216 can include a control plane demilitarized zone (DMZ) tier 1220 that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep breaches contained. Additionally, the DMZ tier 1220 can include one or more load balancer (LB) subnet(s) 1222, a control plane app tier 1224 that can include app subnet(s) 1226, a control plane data tier 1228 that can include database (DB) subnet(s) 1230 (e.g., frontend DB subnet(s) and/or backend DB subnet(s)). The LB subnet(s) 1222 contained in the control plane DMZ tier 1220 can be communicatively coupled to the app subnet(s) 1226 contained in the control plane app tier 1224 and an Internet gateway 1234 that can be contained in the control plane VCN 1216, and the app subnet(s) 1226 can be communicatively coupled to the DB subnet(s) 1230 contained in the control plane data tier 1228 and a service gateway 1236 and a network address translation (NAT) gateway 1238. The control plane VCN 1216 can include the service gateway 1236 and the NAT gateway 1238. [0124] The control plane VCN 1216 can include a data plane mirror app tier 1240 that can include app subnet(s) 1226. The app subnet(s) 1226 contained in the data plane mirror app tier 1240 can include a virtual network interface controller (VNIC) 1242 that can execute a compute instance 1244. The compute instance 1244 can communicatively couple the app subnet(s) 1226 of the data plane mirror app tier 1240 to app subnet(s) 1226 that can be contained in a data plane app tier 1246. [0125] The data plane VCN 1218 can include the data plane app tier 1246, a data plane DMZ tier 1248, and a data plane data tier 1250. The data plane DMZ tier 1248 can include LB subnet(s) 1222 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246 and the Internet gateway 1234 of the data plane VCN 1218. The app subnet(s) 1226 can be communicatively coupled to the service gateway 1236 of the data plane VCN 1218 and the NAT gateway 1238 of the data plane VCN 1218. 
The data plane data tier 1250 can also include the DB subnet(s) 1230 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246. [0126] The Internet gateway 1234 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to a metadata management service 1252 that can be communicatively coupled to public Internet 1254. Public Internet 1254 can be communicatively coupled to the NAT gateway 1238 of the control plane VCN 1216 and of the data plane VCN 1218. The service gateway 1236 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to cloud services 1256. [0127] In some examples, the service gateway 1236 of the control plane VCN 1216 or of the data plane VCN 1218 can make application programming interface (API) calls to cloud services 1256 without going through public Internet 1254. The API calls to cloud services 1256 from the service gateway 1236 can be one-way: the service gateway 1236 can make API calls to cloud services 1256, and cloud services 1256 can send requested data to the service gateway 1236. But, cloud services 1256 may not initiate API calls to the service gateway 1236. [0128] In some examples, the secure host tenancy 1204 can be directly connected to the service tenancy 1219, which may be otherwise isolated. The secure host subnet 1208 can communicate with the SSH subnet 1214 through an LPG 1210 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 1208 to the SSH subnet 1214 may give the secure host subnet 1208 access to other entities within the service tenancy 1219. [0129] The control plane VCN 1216 may allow users of the service tenancy 1219 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 1216 may be deployed or otherwise used in the data plane VCN 1218. In some examples, the control plane VCN 1216 can be isolated from the data plane VCN 1218, and the data plane mirror app tier 1240 of the control plane VCN 1216 can communicate with the data plane app tier 1246 of the data plane VCN 1218 via VNICs 1242 that can be contained in the data plane mirror app tier 1240 and the data plane app tier 1246. [0130] In some examples, users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet 1254 that can communicate the requests to the metadata management service 1252. The metadata management service 1252 can communicate the request to the control plane VCN 1216 through the Internet gateway 1234. The request can be received by the LB subnet(s) 1222 contained in the control plane DMZ tier 1220. The LB subnet(s) 1222 may determine that the request is valid, and in response to this determination, the LB subnet(s) 1222 can transmit the request to app subnet(s) 1226 contained in the control plane app tier 1224. If the request is validated and requires a call to public Internet 1254, the call to public Internet 1254 may be transmitted to the NAT gateway 1238 that can make the call to public Internet 1254. Metadata that may be desired to be stored by the request can be stored in the DB subnet(s) 1230. [0131] In some examples, the data plane mirror app tier 1240 can facilitate direct communication between the control plane VCN 1216 and the data plane VCN 1218. 
For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN 1218. Via a VNIC 1242, the control plane VCN 1216 can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN 1218. [0132] In some embodiments, the control plane VCN 1216 and the data plane VCN 1218 can be contained in the service tenancy 1219. In this case, the user, or the customer, of the system may not own or operate either the control plane VCN 1216 or the data plane VCN 1218. Instead, the IaaS provider may own or operate the control plane VCN 1216 and the data plane VCN 1218, both of which may be contained in the service tenancy 1219. This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users’, or other customers’, resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet 1254, which may not have a desired level of threat prevention, for storage. [0133] In other embodiments, the LB subnet(s) 1222 contained in the control plane VCN 1216 can be configured to receive a signal from the service gateway 1236. In this embodiment, the control plane VCN 1216 and the data plane VCN 1218 may be configured to be called by a customer of the IaaS provider without calling public Internet 1254. Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy 1219, which may be isolated from public Internet 1254. [0134] FIG.13 is a block diagram 1300 illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators 1302 (e.g., service operators 1202 of FIG.12) can be communicatively coupled to a secure host tenancy 1304 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1306 (e.g., the VCN 1206 of FIG.12) and a secure host subnet 1308 (e.g., the secure host subnet 1208 of FIG.12). The VCN 1306 can include a local peering gateway (LPG) 1310 (e.g., the LPG 1210 of FIG.12) that can be communicatively coupled to a secure shell (SSH) VCN 1312 (e.g., the SSH VCN 1212 of FIG.12) via an LPG 1210 contained in the SSH VCN 1312. The SSH VCN 1312 can include an SSH subnet 1314 (e.g., the SSH subnet 1214 of FIG.12), and the SSH VCN 1312 can be communicatively coupled to a control plane VCN 1316 (e.g., the control plane VCN 1216 of FIG.12) via an LPG 1310 contained in the control plane VCN 1316. The control plane VCN 1316 can be contained in a service tenancy 1319 (e.g., the service tenancy 1219 of FIG. 12), and the data plane VCN 1318 (e.g., the data plane VCN 1218 of FIG. 12) can be contained in a customer tenancy 1321 that may be owned or operated by users, or customers, of the system. [0135] The control plane VCN 1316 can include a control plane DMZ tier 1320 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include LB subnet(s) 1322 (e.g., LB subnet(s) 1222 of FIG.12), a control plane app tier 1324 (e.g., the control plane app tier 1224 of FIG. 
12) that can include app subnet(s) 1326 (e.g., app subnet(s) 1226 of FIG.12), a control plane data tier 1328 (e.g., the control plane data tier 1228 of FIG.12) that can include database (DB) subnet(s) 1330 (e.g., similar to DB subnet(s) 1230 of FIG.12). The LB subnet(s) 1322 contained in the control plane DMZ tier 1320 can be communicatively coupled to the app subnet(s) 1326 contained in the control plane app tier 1324 and an Internet gateway 1334 (e.g., the Internet gateway 1234 of FIG.12) that can be contained in the control plane VCN 1316, and the app subnet(s) 1326 can be communicatively coupled to the DB subnet(s) 1330 contained in the control plane data tier 1328 and a service gateway 1336 (e.g., the service gateway 1236 of FIG. 12) and a network address translation (NAT) gateway 1338 (e.g., the NAT gateway 1238 of FIG. 12). The control plane VCN 1316 can include the service gateway 1336 and the NAT gateway 1338. [0136] The control plane VCN 1316 can include a data plane mirror app tier 1340 (e.g., the data plane mirror app tier 1240 of FIG.12) that can include app subnet(s) 1326. The app subnet(s) 1326 contained in the data plane mirror app tier 1340 can include a virtual network interface controller (VNIC) 1342 (e.g., the VNIC of 1242) that can execute a compute instance 1344 (e.g., similar to the compute instance 1244 of FIG.12). The compute instance 1344 can facilitate communication between the app subnet(s) 1326 of the data plane mirror app tier 1340 and the app subnet(s) 1326 that can be contained in a data plane app tier 1346 (e.g., the data plane app tier 1246 of FIG.12) via the VNIC 1342 contained in the data plane mirror app tier 1340 and the VNIC 1342 contained in the data plane app tier 1346. [0137] The Internet gateway 1334 contained in the control plane VCN 1316 can be communicatively coupled to a metadata management service 1352 (e.g., the metadata management service 1252 of FIG. 12) that can be communicatively coupled to public Internet 1354 (e.g., public Internet 1254 of FIG.12). Public Internet 1354 can be communicatively coupled to the NAT gateway 1338 contained in the control plane VCN 1316. The service gateway 1336 contained in the control plane VCN 1316 can be communicatively coupled to cloud services 1356 (e.g., cloud services 1256 of FIG.12). [0138] In some examples, the data plane VCN 1318 can be contained in the customer tenancy 1321. In this case, the IaaS provider may provide the control plane VCN 1316 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 1344 that is contained in the service tenancy 1319. Each compute instance 1344 may allow communication between the control plane VCN 1316, contained in the service tenancy 1319, and the data plane VCN 1318 that is contained in the customer tenancy 1321. The compute instance 1344 may allow resources, that are provisioned in the control plane VCN 1316 that is contained in the service tenancy 1319, to be deployed or otherwise used in the data plane VCN 1318 that is contained in the customer tenancy 1321. [0139] In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy 1321. In this example, the control plane VCN 1316 can include the data plane mirror app tier 1340 that can include app subnet(s) 1326. The data plane mirror app tier 1340 can reside in the data plane VCN 1318, but the data plane mirror app tier 1340 may not live in the data plane VCN 1318. 
That is, the data plane mirror app tier 1340 may have access to the customer tenancy 1321, but the data plane mirror app tier 1340 may not exist in the data plane VCN 1318 or be owned or operated by the customer of the IaaS provider. The data plane mirror app tier 1340 may be configured to make calls to the data plane VCN 1318 but may not be configured to make calls to any entity contained in the control plane VCN 1316. The customer may desire to deploy or otherwise use resources in the data plane VCN 1318 that are provisioned in the control plane VCN 1316, and the data plane mirror app tier 1340 can facilitate the desired deployment, or other usage of resources, of the customer. [0140] In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN 1318. In this embodiment, the customer can determine what the data plane VCN 1318 can access, and the customer may restrict access to public Internet 1354 from the data plane VCN 1318. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 1318 to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN 1318, contained in the customer tenancy 1321, can help isolate the data plane VCN 1318 from other customers and from public Internet 1354. [0141] In some embodiments, cloud services 1356 can be called by the service gateway 1336 to access services that may not exist on public Internet 1354, on the control plane VCN 1316, or on the data plane VCN 1318. The connection between cloud services 1356 and the control plane VCN 1316 or the data plane VCN 1318 may not be live or continuous. Cloud services 1356 may exist on a different network owned or operated by the IaaS provider. Cloud services 1356 may be configured to receive calls from the service gateway 1336 and may be configured to not receive calls from public Internet 1354. Some cloud services 1356 may be isolated from other cloud services 1356, and the control plane VCN 1316 may be isolated from cloud services 1356 that may not be in the same region as the control plane VCN 1316. For example, the control plane VCN 1316 may be located in "Region 1," and cloud service "Deployment 12," may be located in Region 1 and in "Region 2." If a call to Deployment 12 is made by the service gateway 1336 contained in the control plane VCN 1316 located in Region 1, the call may be transmitted to Deployment 12 in Region 1. In this example, the control plane VCN 1316, or Deployment 12 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 12 in Region 2. [0142] FIG.14 is a block diagram 1400 illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators 1402 (e.g., service operators 1202 of FIG.12) can be communicatively coupled to a secure host tenancy 1404 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1406 (e.g., the VCN 1206 of FIG.12) and a secure host subnet 1408 (e.g., the secure host subnet 1208 of FIG.12). The VCN 1406 can include an LPG 1410 (e.g., the LPG 1210 of FIG.12) that can be communicatively coupled to an SSH VCN 1412 (e.g., the SSH VCN 1212 of FIG.12) via an LPG 1410 contained in the SSH VCN 1412. 
The SSH VCN 1412 can include an SSH subnet 1414 (e.g., the SSH subnet 1214 of FIG.12), and the SSH VCN 1412 can be communicatively coupled to a control plane VCN 1416 (e.g., the control plane VCN 1216 of FIG.12) via an LPG 1410 contained in the control plane VCN 1416 and to a data plane VCN 1418 (e.g., the data plane 1218 of FIG.12) via an LPG 1410 contained in the data plane VCN 1418. The control plane VCN 1416 and the data plane VCN 1418 can be contained in a service tenancy 1419 (e.g., the service tenancy 1219 of FIG. 12). [0143] The control plane VCN 1416 can include a control plane DMZ tier 1420 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include load balancer (LB) subnet(s) 1422 (e.g., LB subnet(s) 1222 of FIG.12), a control plane app tier 1424 (e.g., the control plane app tier 1224 of FIG.12) that can include app subnet(s) 1426 (e.g., similar to app subnet(s) 1226 of FIG. 12), a control plane data tier 1428 (e.g., the control plane data tier 1228 of FIG.12) that can include DB subnet(s) 1430. The LB subnet(s) 1422 contained in the control plane DMZ tier 1420 can be communicatively coupled to the app subnet(s) 1426 contained in the control plane app tier 1424 and to an Internet gateway 1434 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1416, and the app subnet(s) 1426 can be communicatively coupled to the DB subnet(s) 1430 contained in the control plane data tier 1428 and to a service gateway 1436 (e.g., the service gateway of FIG. 12) and a network address translation (NAT) gateway 1438 (e.g., the NAT gateway 1238 of FIG. 12). The control plane VCN 1416 can include the service gateway 1436 and the NAT gateway 1438. [0144] The data plane VCN 1418 can include a data plane app tier 1446 (e.g., the data plane app tier 1246 of FIG. 12), a data plane DMZ tier 1448 (e.g., the data plane DMZ tier 1248 of FIG.12), and a data plane data tier 1450 (e.g., the data plane data tier 1250 of FIG.12). The data plane DMZ tier 1448 can include LB subnet(s) 1422 that can be communicatively coupled to trusted app subnet(s) 1460 and untrusted app subnet(s) 1462 of the data plane app tier 1446 and the Internet gateway 1434 contained in the data plane VCN 1418. The trusted app subnet(s) 1460 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418, the NAT gateway 1438 contained in the data plane VCN 1418, and DB subnet(s) 1430 contained in the data plane data tier 1450. The untrusted app subnet(s) 1462 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418 and DB subnet(s) 1430 contained in the data plane data tier 1450. The data plane data tier 1450 can include DB subnet(s) 1430 that can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418. [0145] The untrusted app subnet(s) 1462 can include one or more primary VNICs 1464(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1466(1)-(N). Each tenant VM 1466(1)-(N) can be communicatively coupled to a respective app subnet 1467(1)-(N) that can be contained in respective container egress VCNs 1468(1)-(N) that can be contained in respective customer tenancies 1470(1)-(N). Respective secondary VNICs 1472(1)-(N) can facilitate communication between the untrusted app subnet(s) 1462 contained in the data plane VCN 1418 and the app subnet contained in the container egress VCNs 1468(1)-(N). 
Each container egress VCNs 1468(1)-(N) can include a NAT gateway 1438 that can be communicatively coupled to public Internet 1454 (e.g., public Internet 1254 of FIG.12). [0146] The Internet gateway 1434 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to a metadata management service 1452 (e.g., the metadata management system 1252 of FIG. 12) that can be communicatively coupled to public Internet 1454. Public Internet 1454 can be communicatively coupled to the NAT gateway 1438 contained in the control plane VCN 1416 and contained in the data plane VCN 1418. The service gateway 1436 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to cloud services 1456. [0147] In some embodiments, the data plane VCN 1418 can be integrated with customer tenancies 1470. This integration can be useful or desirable for customers of the IaaS provider in some cases such as a case that may desire support when executing code. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run code given to the IaaS provider by the customer. [0148] In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier 1446. Code to run the function may be executed in the VMs 1466(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN 1418. Each VM 1466(1)-(N) may be connected to one customer tenancy 1470. Respective containers 1471(1)-(N) contained in the VMs 1466(1)-(N) may be configured to run the code. In this case, there can be a dual isolation (e.g., the containers 1471(1)-(N) running code, where the containers 1471(1)-(N) may be contained in at least the VM 1466(1)-(N) that are contained in the untrusted app subnet(s) 1462), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer. The containers 1471(1)- (N) may be communicatively coupled to the customer tenancy 1470 and may be configured to transmit or receive data from the customer tenancy 1470. The containers 1471(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN 1418. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers 1471(1)-(N). [0149] In some embodiments, the trusted app subnet(s) 1460 may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted app subnet(s) 1460 may be communicatively coupled to the DB subnet(s) 1430 and be configured to execute CRUD operations in the DB subnet(s) 1430. The untrusted app subnet(s) 1462 may be communicatively coupled to the DB subnet(s) 1430, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s) 1430. The containers 1471(1)-(N) that can be contained in the VM 1466(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s) 1430. [0150] In other embodiments, the control plane VCN 1416 and the data plane VCN 1418 may not be directly communicatively coupled. 
In this embodiment, there may be no direct communication between the control plane VCN 1416 and the data plane VCN 1418. However, communication can occur indirectly through at least one method. An LPG 1410 may be established by the IaaS provider that can facilitate communication between the control plane VCN 1416 and the data plane VCN 1418. In another example, the control plane VCN 1416 or the data plane VCN 1418 can make a call to cloud services 1456 via the service gateway 1436. For example, a call to cloud services 1456 from the control plane VCN 1416 can include a request for a service that can communicate with the data plane VCN 1418. [0151] FIG.15 is a block diagram 1500 illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators 1502 (e.g., service operators 1202 of FIG.12) can be communicatively coupled to a secure host tenancy 1504 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1506 (e.g., the VCN 1206 of FIG.12) and a secure host subnet 1508 (e.g., the secure host subnet 1208 of FIG.12). The VCN 1506 can include an LPG 1510 (e.g., the LPG 1210 of FIG.12) that can be communicatively coupled to an SSH VCN 1512 (e.g., the SSH VCN 1212 of FIG.12) via an LPG 1510 contained in the SSH VCN 1512. The SSH VCN 1512 can include an SSH subnet 1514 (e.g., the SSH subnet 1214 of FIG.12), and the SSH VCN 1512 can be communicatively coupled to a control plane VCN 1516 (e.g., the control plane VCN 1216 of FIG.12) via an LPG 1510 contained in the control plane VCN 1516 and to a data plane VCN 1518 (e.g., the data plane 1218 of FIG.12) via an LPG 1510 contained in the data plane VCN 1518. The control plane VCN 1516 and the data plane VCN 1518 can be contained in a service tenancy 1519 (e.g., the service tenancy 1219 of FIG. 12). [0152] The control plane VCN 1516 can include a control plane DMZ tier 1520 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include LB subnet(s) 1522 (e.g., LB subnet(s) 1222 of FIG.12), a control plane app tier 1524 (e.g., the control plane app tier 1224 of FIG. 12) that can include app subnet(s) 1526 (e.g., app subnet(s) 1226 of FIG.12), a control plane data tier 1528 (e.g., the control plane data tier 1228 of FIG.12) that can include DB subnet(s) 1530 (e.g., DB subnet(s) 1430 of FIG.14). The LB subnet(s) 1522 contained in the control plane DMZ tier 1520 can be communicatively coupled to the app subnet(s) 1526 contained in the control plane app tier 1524 and to an Internet gateway 1534 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1516, and the app subnet(s) 1526 can be communicatively coupled to the DB subnet(s) 1530 contained in the control plane data tier 1528 and to a service gateway 1536 (e.g., the service gateway of FIG.12) and a network address translation (NAT) gateway 1538 (e.g., the NAT gateway 1238 of FIG.12). The control plane VCN 1516 can include the service gateway 1536 and the NAT gateway 1538. [0153] The data plane VCN 1518 can include a data plane app tier 1546 (e.g., the data plane app tier 1246 of FIG. 12), a data plane DMZ tier 1548 (e.g., the data plane DMZ tier 1248 of FIG.12), and a data plane data tier 1550 (e.g., the data plane data tier 1250 of FIG.12). The data plane DMZ tier 1548 can include LB subnet(s) 1522 that can be communicatively coupled to trusted app subnet(s) 1560 (e.g., trusted app subnet(s) 1460 of FIG. 
14) and untrusted app subnet(s) 1562 (e.g., untrusted app subnet(s) 1462 of FIG. 14) of the data plane app tier 1546 and the Internet gateway 1534 contained in the data plane VCN 1518. The trusted app subnet(s) 1560 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518, the NAT gateway 1538 contained in the data plane VCN 1518, and DB subnet(s) 1530 contained in the data plane data tier 1550. The untrusted app subnet(s) 1562 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518 and DB subnet(s) 1530 contained in the data plane data tier 1550. The data plane data tier 1550 can include DB subnet(s) 1530 that can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518. [0154] The untrusted app subnet(s) 1562 can include primary VNICs 1564(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1566(1)-(N) residing within the untrusted app subnet(s) 1562. Each tenant VM 1566(1)-(N) can run code in a respective container 1567(1)-(N), and be communicatively coupled to an app subnet 1526 that can be contained in a data plane app tier 1546 that can be contained in a container egress VCN 1568. Respective secondary VNICs 1572(1)-(N) can facilitate communication between the untrusted app subnet(s) 1562 contained in the data plane VCN 1518 and the app subnet contained in the container egress VCN 1568. The container egress VCN can include a NAT gateway 1538 that can be communicatively coupled to public Internet 1554 (e.g., public Internet 1254 of FIG.12). [0155] The Internet gateway 1534 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to a metadata management service 1552 (e.g., the metadata management system 1252 of FIG. 12) that can be communicatively coupled to public Internet 1554. Public Internet 1554 can be communicatively coupled to the NAT gateway 1538 contained in the control plane VCN 1516 and contained in the data plane VCN 1518. The service gateway 1536 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to cloud services 1556. [0156] In some examples, the pattern illustrated by the architecture of block diagram 1500 of FIG.15 may be considered an exception to the pattern illustrated by the architecture of block diagram 1400 of FIG.14 and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The respective containers 1567(1)-(N) that are contained in the VMs 1566(1)-(N) for each customer can be accessed in real-time by the customer. The containers 1567(1)-(N) may be configured to make calls to respective secondary VNICs 1572(1)-(N) contained in app subnet(s) 1526 of the data plane app tier 1546 that can be contained in the container egress VCN 1568. The secondary VNICs 1572(1)-(N) can transmit the calls to the NAT gateway 1538 that may transmit the calls to public Internet 1554. In this example, the containers 1567(1)-(N) that can be accessed in real- time by the customer can be isolated from the control plane VCN 1516 and can be isolated from other entities contained in the data plane VCN 1518. The containers 1567(1)-(N) may also be isolated from resources from other customers. [0157] In other examples, the customer can use the containers 1567(1)-(N) to call cloud services 1556. 
In this example, the customer may run code in the containers 1567(1)-(N) that requests a service from cloud services 1556. The containers 1567(1)-(N) can transmit this request to the secondary VNICs 1572(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet 1554. Public Internet 1554 can transmit the request to LB subnet(s) 1522 contained in the control plane VCN 1516 via the Internet gateway 1534. In response to determining the request is valid, the LB subnet(s) can transmit the request to app subnet(s) 1526 that can transmit the request to cloud services 1556 via the service gateway 1536. [0158] It should be appreciated that IaaS architectures 1200, 1300, 1400, 1500 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components. [0159] In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee. [0160] FIG.16 illustrates an example computer system 1600, in which various embodiments may be implemented. The system 1600 may be used to implement any of the computer systems described above. As shown in the figure, computer system 1600 includes a processing unit 1604 that communicates with a number of peripheral subsystems via a bus subsystem 1602. These peripheral subsystems may include a processing acceleration unit 1606, an I/O subsystem 1608, a storage subsystem 1618 and a communications subsystem 1624. Storage subsystem 1618 includes tangible computer-readable storage media 1622 and a system memory 1610. [0161] Bus subsystem 1602 provides a mechanism for letting the various components and subsystems of computer system 1600 communicate with each other as intended. Although bus subsystem 1602 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 1602 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard. [0162] Processing unit 1604, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 1600. One or more processors may be included in processing unit 1604. These processors may include single core or multicore processors. In certain embodiments, processing unit 1604 may be implemented as one or more independent processing units 1632 and/or 1634 with single or multicore processors included in each processing unit. 
In other embodiments, processing unit 1604 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip. [0163] In various embodiments, processing unit 1604 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s) 1604 and/or in storage subsystem 1618. Through suitable programming, processor(s) 1604 can provide various functionalities described above. Computer system 1600 may additionally include a processing acceleration unit 1606, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like. [0164] I/O subsystem 1608 may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands. [0165] User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like. [0166] User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 1600 to a user or other computer.
For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems. [0167] Computer system 1600 may comprise a storage subsystem 1618 that provides a tangible non-transitory computer-readable storage medium for storing software and data constructs that provide the functionality of the embodiments described in this disclosure. The software can include programs, code, instructions, scripts, etc., that when executed by one or more cores or processors of processing unit 1604 provide the functionality described above. Storage subsystem 1618 may also provide a repository for storing data used in accordance with the present disclosure. [0168] As depicted in the example in FIG. 16, storage subsystem 1618 can include various components including a system memory 1610, computer-readable storage media 1622, and a computer readable storage media reader 1620. System memory 1610 may store program instructions that are loadable and executable by processing unit 1604. System memory 1610 may also store data that is used during the execution of the instructions and/or data that is generated during the execution of the program instructions. Various different kinds of programs may be loaded into system memory 1610 including but not limited to client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), virtual machines, containers, etc. [0169] System memory 1610 may also store an operating system 1616. Examples of operating system 1616 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems. In certain implementations where computer system 1600 executes one or more virtual machines, the virtual machines along with their guest operating systems (GOSs) may be loaded into system memory 1610 and executed by one or more processors or cores of processing unit 1604. [0170] System memory 1610 can come in different configurations depending upon the type of computer system 1600. For example, system memory 1610 may be volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.) Different types of RAM configurations may be provided including a static random access memory (SRAM), a dynamic random access memory (DRAM), and others. In some implementations, system memory 1610 may include a basic input/output system (BIOS) containing basic routines that help to transfer information between elements within computer system 1600, such as during start-up. [0171] Computer-readable storage media 1622 may represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, computer-readable information for use by computer system 1600 including instructions executable by processing unit 1604 of computer system 1600. 
[0172] Computer-readable storage media 1622 can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media. [0173] By way of example, computer-readable storage media 1622 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD-ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media 1622 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media 1622 may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program services, and other data for computer system 1600. [0174] Machine-readable instructions executable by one or more processors or cores of processing unit 1604 may be stored on a non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can include physically tangible memory or storage devices that include volatile memory storage devices and/or non-volatile storage devices. Examples of non-transitory computer-readable storage media include magnetic storage media (e.g., disk or tapes), optical storage media (e.g., DVDs, CDs), various types of RAM, ROM, or flash memory, hard drives, floppy drives, detachable memory drives (e.g., USB drives), or other types of storage devices. [0175] Communications subsystem 1624 provides an interface to other computer systems and networks. Communications subsystem 1624 serves as an interface for receiving data from and transmitting data to other systems from computer system 1600. For example, communications subsystem 1624 may enable computer system 1600 to connect to one or more devices via the Internet. In some embodiments, communications subsystem 1624 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. 
In some embodiments communications subsystem 1624 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface. [0176] In some embodiments, communications subsystem 1624 may also receive input communication in the form of structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like on behalf of one or more users who may use computer system 1600. [0177] By way of example, communications subsystem 1624 may be configured to receive data feeds 1626 in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources. [0178] Additionally, communications subsystem 1624 may also be configured to receive data in the form of continuous data streams, which may include event streams 1628 of real-time events and/or event updates 1630, that may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. [0179] Communications subsystem 1624 may also be configured to output the structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 1600. [0180] Computer system 1600 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system. [0181] Due to the ever-changing nature of computers and networks, the description of computer system 1600 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. [0182] Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure. Embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly. 
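By way of illustration only, and not as a description of any particular implementation, the short Python sketch below shows one way a consumer could process the continuous, unbounded event streams described in paragraph [0178]; the generator and function names are hypothetical placeholders introduced solely for this example.

```python
# Illustrative only: consuming a continuous, unbounded event stream of the
# kind described in paragraph [0178]; all names are hypothetical placeholders.
import itertools
import time


def event_stream():
    """Yield event updates indefinitely, e.g., sensor readings or clickstream data."""
    for sequence_number in itertools.count():
        yield {"seq": sequence_number, "ts": time.time()}


def consume(stream, demo_limit=5):
    # A real consumer would run until shut down; this demo stops after a few
    # events so that the example terminates.
    for event in itertools.islice(stream, demo_limit):
        print("event update:", event)


if __name__ == "__main__":
    consume(event_stream())
```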
[0183] Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or services are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times. [0184] The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims. [0185] The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to,") unless otherwise noted. The term "connected" is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. [0186] Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). 
Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. [0187] Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein. [0188] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. [0189] In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.

Claims

WHAT IS CLAIMED IS: 1. A method for a compute service in a reduced footprint data center, the method comprising: executing, by a control plane of the compute service using a live image, a compute service instance at a bare metal computing device of the reduced footprint data center; receiving, from an agent executing in the bare metal instance, a first indication that the bare metal instance is successfully executing; sending, to the agent, information identifying a hypervisor image accessible to the agent via a network connection of the reduced footprint data center; receiving, from the agent, a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device; and initiating, by the control plane, a reboot of the compute service instance, the compute service instance configured to boot using the hypervisor image on the storage device.
2. The method of claim 1, wherein the agent is configured to at least: partition the storage device; obtain, using the information, the hypervisor image from an initialization device of the reduced footprint data center; and provision the hypervisor image onto a partition of the storage device.
3. The method of claim 1, further comprising, prior to rebooting the bare metal instance, changing, by the control plane, a boot order of the compute service instance to boot using the storage device.
4. The method of claim 1, further comprising, after rebooting the compute service instance, polling, by the control plane, a hypervisor agent at the compute service instance to determine whether the hypervisor is successfully executing at the compute service instance.
5. The method of claim 1, wherein the hypervisor image comprises software executable by the bare metal computing device to run an operating system and host a hypervisor on the operating system.
6. The method of claim 1, wherein the live image comprises software executable by the bare metal computing device to execute the agent to partition the storage device and provision the hypervisor image on the storage device.
7. The method of claim 1, wherein the storage device comprises a solid state storage device.
8. A computer system comprising: one or more processors; and one or more memories storing computer-executable instructions that, when executed by the one or more processors, cause the computer system to at least: execute, by a control plane of a compute service using a live image, a compute service instance at a bare metal computing device of a reduced footprint data center; receive, from an agent executing in the bare metal instance, a first indication that the bare metal instance is successfully executing; send, to the agent, information identifying a hypervisor image accessible to the agent via a network connection of the reduced footprint data center; receive, from the agent, a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device; and initiate, by the control plane, a reboot of the compute service instance, the compute service instance configured to boot using the hypervisor image on the storage device.
9. The computer system of claim 8, wherein the agent is configured to at least: partition the storage device; obtain, using the information, the hypervisor image from an initialization device of the reduced footprint data center; and provision the hypervisor image onto a partition of the storage device.
10. The computer system of claim 8, further comprising, prior to rebooting the bare metal instance, changing, by the control plane, a boot order of the compute service instance to boot using the storage device.
11. The computer system of claim 8, further comprising, after rebooting the compute service instance, polling, by the control plane, a hypervisor agent at the compute service instance to determine whether the hypervisor is successfully executing at the compute service instance.
12. The computer system of claim 8, wherein the hypervisor image comprises software executable by the bare metal computing device to run an operating system and host a hypervisor on the operating system.
13. The computer system of claim 8, wherein the live image comprises software executable by the bare metal computing device to execute the agent to partition the storage device and provision the hypervisor image on the storage device.
14. The computer system of claim 8, wherein the storage device comprises an M.2 storage device.
15. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a computer system, cause the computer system to at least: execute, by a control plane of a compute service using a live image, a compute service instance at a bare metal computing device of a reduced footprint data center; receive, from an agent executing in the bare metal instance, a first indication that the bare metal instance is successfully executing; send, to the agent, information identifying a hypervisor image accessible to the agent via a network connection of the reduced footprint data center; receive, from the agent, a second indication that the hypervisor image has been successfully provisioned on a storage device of the bare metal computing device; and initiate, by the control plane, a reboot of the compute service instance, the compute service instance configured to boot using the hypervisor image on the storage device.
16. The non-transitory computer-readable medium of claim 15, wherein the agent is configured to at least: partition the storage device; obtain, using the information, the hypervisor image from an initialization device of the reduced footprint data center; and provision the hypervisor image onto a partition of the storage device.
17. The non-transitory computer-readable medium of claim 15, further comprising, prior to rebooting the bare metal instance, changing, by the control plane, a boot order of the compute service instance to boot using the storage device.
18. The non-transitory computer-readable medium of claim 15, further comprising, after rebooting the compute service instance, polling, by the control plane, a hypervisor agent at the compute service instance to determine whether the hypervisor is successfully executing at the compute service instance.
19. The non-transitory computer-readable medium of claim 15, wherein the hypervisor image comprises software executable by the bare metal computing device to run an operating system and host a hypervisor on the operating system.
20. The non-transitory computer-readable medium of claim 15, wherein the live image comprises software executable by the bare metal computing device to execute the agent to partition the storage device and provision the hypervisor image on the storage device.
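By way of illustration only, and not by way of limitation of the foregoing claims: the following Python sketch renders the provisioning workflow recited in claims 1 through 4 as executable pseudocode. It is not the claimed implementation; every class, method, device path, and URL shown here (ControlPlane, Agent, provision_hypervisor, and so on) is a hypothetical placeholder introduced solely for this example.

```python
# Hypothetical, illustrative sketch of the workflow recited in claims 1-4.
# All names, paths, and URLs are placeholders, not the claimed implementation.
import time


class Agent:
    """Stand-in for the agent that executes inside the live-image instance."""

    def report_instance_running(self) -> bool:
        # First indication: the bare metal instance is successfully executing.
        return True

    def provision(self, hypervisor_image_url: str,
                  storage_device: str = "/dev/nvme0n1") -> bool:
        # Claim 2: partition the storage device, obtain the hypervisor image
        # over the data center network, and write it onto a partition.
        print(f"partitioning {storage_device} and writing {hypervisor_image_url}")
        return True  # second indication: image successfully provisioned


class ControlPlane:
    """Stand-in for the compute service control plane."""

    def launch_live_instance(self, device: str, live_image: str) -> None:
        print(f"booting {device} from live image {live_image}")

    def set_boot_order(self, device: str, target: str) -> None:
        # Claim 3: change the boot order before rebooting.
        print(f"{device}: boot order now prefers {target}")

    def reboot(self, device: str) -> None:
        print(f"rebooting {device} into the provisioned hypervisor image")

    def hypervisor_running(self, device: str) -> bool:
        # Claim 4: poll a hypervisor agent on the rebooted instance.
        return True


def provision_hypervisor(control_plane: ControlPlane, agent: Agent, device: str,
                         live_image: str, hypervisor_image_url: str) -> None:
    control_plane.launch_live_instance(device, live_image)
    assert agent.report_instance_running()          # first indication
    assert agent.provision(hypervisor_image_url)    # second indication
    control_plane.set_boot_order(device, target="local storage device")
    control_plane.reboot(device)
    while not control_plane.hypervisor_running(device):
        time.sleep(5)                               # keep polling until healthy


if __name__ == "__main__":
    provision_hypervisor(ControlPlane(), Agent(), "bm-host-01", "live-image-v1",
                         "http://init-device.local/images/hypervisor.img")
```

In this sketch the control plane drives the sequence end to end, while the agent performs the storage-device partitioning and hypervisor-image provisioning attributed to it in claim 2; the stubs and five-second polling interval are assumptions made only so the example is self-contained and runnable.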
PCT/US2025/019407 2024-03-12 2025-03-11 Techniques for compute service in an overlay network Pending WO2025193725A1 (en)

Applications Claiming Priority (18)

Application Number Priority Date Filing Date Title
US202463564195P 2024-03-12 2024-03-12
US63/564,195 2024-03-12
US202463568234P 2024-03-21 2024-03-21
US202463568061P 2024-03-21 2024-03-21
US63/568,234 2024-03-21
US63/568,061 2024-03-21
US202463633966P 2024-04-15 2024-04-15
US63/633,966 2024-04-15
US202463637691P 2024-04-23 2024-04-23
US63/637,691 2024-04-23
US202463660377P 2024-06-14 2024-06-14
US63/660,377 2024-06-14
US202463690270P 2024-09-03 2024-09-03
US63/690,270 2024-09-03
US202463691174P 2024-09-05 2024-09-05
US63/691,174 2024-09-05
US202519075724A 2025-03-10 2025-03-10
US19/075,724 2025-03-10

Publications (1)

Publication Number Publication Date
WO2025193725A1 true WO2025193725A1 (en) 2025-09-18

Family

ID=95249054

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/019407 Pending WO2025193725A1 (en) 2024-03-12 2025-03-11 Techniques for compute service in an overlay network

Country Status (1)

Country Link
WO (1) WO2025193725A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110293097A1 (en) * 2010-05-27 2011-12-01 Maino Fabio R Virtual machine memory compartmentalization in multi-core architectures
US20160364252A1 (en) * 2015-06-15 2016-12-15 International Business Machines Corporation Migrating servers into a secured environment
US20200409600A1 (en) * 2019-06-28 2020-12-31 Amazon Technologies, Inc. Virtualized block storage servers in cloud provider substrate extension
US20230367607A1 (en) * 2019-01-29 2023-11-16 Walmart Apollo, Llc Methods and apparatus for hypervisor boot up

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25715988

Country of ref document: EP

Kind code of ref document: A1