[go: up one dir, main page]

WO2016121879A1 - Virtualization control apparatus, installation destination selection method, and program - Google Patents


Info

Publication number
WO2016121879A1
Authority
WO
WIPO (PCT)
Prior art keywords
vim
selection
vnf
selection policy
nfvo
Prior art date
Legal status
Ceased
Application number
PCT/JP2016/052514
Other languages
English (en)
Japanese (ja)
Inventor
麻代 大平
淳一 極樂寺
直哉 吉川
亮太 壬生
茂人 竹森
博一 篠澤
芳紀 菊池
直哉 籔下
裕貴 吉村
一廣 江頭
Current Assignee
NEC Corp
Original Assignee
NEC Corp
Priority date
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Publication of WO2016121879A1 publication Critical patent/WO2016121879A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the present invention is based on Japanese Patent Application No. 2015-015972 (filed on January 29, 2015), the entire contents of which are incorporated in the present specification by reference.
  • the present invention relates to a virtualization control apparatus, an arrangement destination selection method, and a program related to network virtualization.
  • FIG. 19 is a diagram cited from FIG. 5.1 (The NFV-MANO architecture framework with reference points) on page 23 of Non-Patent Document 1.
  • VNF Virtualized Network Function
  • MME Mobility Management Entity
  • S-GW Serving Gateway
  • P-GW Packet Data Network Gateway
  • EPC Evolved Packet Core
  • LTE Long Term Evolution
  • EM Element Manager
  • NFVI Network Function Virtualization Infrastructure
  • NFVI is a platform on which hardware resources of servers, such as computing, storage, and network functions, are virtualized by a virtualization layer such as a hypervisor and can be flexibly handled as virtual hardware resources such as virtual computing, virtual storage, and virtual networks.
  • NFV MANO Management & Orchestration
  • NFVO NFV-Orchestrator
  • VNFM VNF-Manager
  • VIM Virtualized Infrastructure Manager
  • NFV-Orchestrator performs NFVI resource orchestration and NS (Network Service) lifecycle management (instantiation, scaling, termination, update, etc. of NS instances). It also manages NS catalogs (NSD / VLD / VNFFGD) and VNF catalogs (VNFD / VM images / manifest files, etc.) and has a repository for NFV instances and a repository for NFVI resources.
  • VNFM VNF-Manager
  • VNF lifecycle management (instantiation, update, query, scaling, termination, etc.)
  • event notification
  • the Virtualized Infrastructure Manager controls the NFVI through the virtualization layer (management of computing, storage, and network resources; fault monitoring of the NFVI, which is the VNF execution platform; monitoring of resource information; etc.).
  • OSS Operations Support Systems
  • BSS Business Support Systems
  • information systems (devices, software, mechanisms, etc.)
  • NS catalog represents a network service (NS) repository.
  • NS catalog supports creation and management of network service (NS) deployment templates (Network Service Descriptor (NSD), Virtual Link Descriptor (VLD), VNF Forwarding Graph Descriptor (VNFFGD)).
  • NSD Network Service Descriptor
  • VLD Virtual Link Descriptor
  • VNFFGD VNF Forwarding Graph Descriptor
  • Deployment refers to customization according to, for example, required specifications and deployment in an actual usage environment.
  • VNF catalog represents a repository of VNF packages.
  • VNF catalog supports the creation and management of VNF packages such as VNF Descriptor (VNFD), software images, and manifest files.
  • VNFD VNF Descriptor
  • NFV instance repository holds instance information of all VNFs and all network services (NS).
  • the VNF instance and NS instance are described in the VNF and NS records, respectively. These records are updated to reflect the execution results of the VNF life cycle management operation and NS life cycle management operation in the life cycle of each instance.
  • the NFVI Resource Repository holds information on the available/reserved/allocated NFVI resources abstracted by each VIM across the operator's infrastructure domain.
  • the reference point Os-Ma-nfvo is a reference point between OSS (Operations Support Systems) / BSS (Business Support Systems) and the NFVO.
  • it is used for network service lifecycle management requests, VNF lifecycle management requests, transfer of NFV-related state information, exchange of policy management information, and the like.
  • the reference point Vnfm-Vi is used for resource allocation requests from VNFM, virtual resource configuration and status information exchange.
  • the reference point Ve-Vnfm-em is used for VNF instantiation; VNF instance query, update, termination, scale out/in, and scale up/down; configuration and event forwarding from the EM to the VNFM; and configuration and event notification from the VNFM to the EM.
  • the reference point Ve-Vnfm-vnf, between the VNF and the VNFM, is used for VNF instantiation; VNF instance query, update, termination, scale out/in, and scale up/down; configuration and event forwarding from the VNF to the VNFM; and configuration and event notification from the VNFM to the VNF.
  • the reference point Nf-Vi is used for allocating virtual resources in response to resource allocation requests (VM allocation, update of VM resource allocation, VM migration, VM termination, and creation/deletion of connections between VMs, together with instructions on computing/storage resources), for forwarding virtual resource state information, and for exchanging hardware resource configuration and state information.
  • the reference point Vn-Nf represents the execution environment provided to VNF by NFVI.
  • the reference point Nfvo-Vnfm is used for resource-related requests (authentication, reservation, allocation, etc.) by VNF-Manager (VNFM), transfer of configuration information to VNFM, and collection of VNF status information.
  • VNFM VNF-Manager
  • the reference point Nfvo-Vi is used for resource reservation / allocation request from NFVO, virtual resource configuration and status information exchange (see Non-Patent Document 1 for details).
  • FIG. 20 is a diagram cited from Figure 6.2 (Information elements in different context) on page 40 of Non-Patent Document 1; instantiation input parameters are entered there.
  • a Network Service Descriptor (NSD) is a deployment template for a network service that references other descriptors describing the components that form part of the network service (NS).
  • VNF Descriptor (VNF Descriptor: VNFD) is a deployment template that describes VNF in terms of requirements for deployment and operational behavior.
  • VNFD is mainly used by VNFM in VNF instantiation (realization, instantiation) and VNF instance lifecycle management.
  • VNFD is used by NFVO to manage and orchestrate network services and virtual resources on NFVI (automated deployment / configuration / management of computer systems / middleware / services). It includes connectivity, interface, and key performance indicators (KPI) requirements used in NFVO for building virtual links between VNFC instances in NFVI or VNF instances and endpoints to other network functions.
  • KPI key performance indicators
  • the VNF Forwarding Graph Descriptor (VNFFGD) is a deployment template that describes the network service topology, or part of it, by referencing VNFs, PNFs, and the Virtual Links that connect them.
  • the virtual link descriptor (Virtual Link Descriptor) is a deployment template that describes the resource requirements required for links between VNFs, PNFs, and NS endpoints that can be used with NFVI.
  • the Physical Network Function Descriptor (PNFD) describes the connectivity, interface, and KPI requirements of the virtual links to an attached physical network function. It is necessary when a physical device is incorporated into an NS, and it facilitates network expansion.
  • VNFD is included in the VNF catalog as a VNF package.
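As a rough, non-normative sketch of the catalog idea above (the field names and functions below are hypothetical illustrations, not the actual ETSI VNFD information elements):

```python
# Illustrative only: field names are assumptions, not the normative
# VNFD information elements described above.
vnfd = {
    "vnfd_id": "VNFD-sgw",
    "deployment_flavour": {"vcpus": 4, "memory_gb": 8, "storage_gb": 100},
    "connectivity": ["S11", "Gxc", "S5/S8-C"],      # connection points
    "kpi_requirements": {"max_latency_ms": 10},
}

# The VNF catalog acts as a repository of VNF packages keyed by descriptor ID.
vnf_catalog = {}

def register_vnf_package(catalog, descriptor):
    """Register a VNFD (as part of a VNF package) in the VNF catalog."""
    catalog[descriptor["vnfd_id"]] = descriptor
    return descriptor["vnfd_id"]

register_vnf_package(vnf_catalog, vnfd)
```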
  • NS Network service
  • NSR Network Service Record
  • VNFFGR VNFFG Record
  • VLR Virtual Link Record
  • VNF Virtualized Network Function
  • VNFR Virtualized Network Function Record
  • PNF Physical Network Function
  • the NSR, VNFR, VNFFGR, and VLR information elements provide a set of data items necessary for modeling the state of NS, VNF, VNFFG, and VL instances.
  • the PNF record represents an instance of a PNF that pre-exists the NS, and includes the runtime attributes of the PNF information (connectivity to the NFVO).
  • VNF Virtualized Network Function
  • VNFC VNF Components
  • VDU Virtualization Deployment Unit
  • FIG. 21 schematically illustrates an example in which a VNFC is set for each logical interface in a VNF in which an S-GW (Serving gateway) is virtualized.
  • VDU is a construct used in an information model that supports the description of part or all of VNF's deployment and operation behavior.
  • the NFVI that provides the VNF execution platform includes virtual computing, virtual storage, and virtual networks virtualized on a virtualization layer such as a hypervisor.
  • a virtual machine (virtual CPU (Central Processing Unit), virtual memory, virtual storage, guest OS (Operating System)) on the virtualization layer is provided, and an application is executed on the guest OS.
  • Compute, Storage, and “Network” below the virtualization layer schematically represent hardware resources such as CPU, storage, and network interface controller (NIC).
  • NIC network interface controller
  • logical interfaces S11, Gxc, S5 / S8-C related to C-Plane are collectively defined as one VDU (VM)
  • the logical interfaces S1U, S5 / S8-U, and S12 related to U-Plane are collectively defined as one VDU (VM).
  • C in S5 / S8-C represents a control plane (Control Plane).
  • U in S1U and S5 / S8-U represents a user plane (User-Plane).
  • S11 is the control plane interface between MME and SGW in EPC
  • S5/S8 is the interface between the S-GW and the P-GW
  • S1U is the interface between eNodeB (evolved NodeB) and the core network
  • Gxc is an interface between the S-GW and the Policy and Charging Rules Function (PCRF)
  • S12 is an interface between UTRAN (Universal Terrestrial Radio Access Network) and S-GW.
  • PCRF Policy and Charging Rules Function
  • NFV Network Function Virtualization
  • PM Physical Machine
  • various network functions with different characteristics are realized on a single PM.
  • a redundant configuration may be employed in which one of a plurality of VMs is used as an active system and the other one VM is used as a standby system.
  • if the active VM and the standby VM are built on the same PM, the significance of making the network redundant may be lost. For example, when a failure such as an interruption of the power supply to the PM occurs, the standby VM cannot be activated.
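The redundancy concern above amounts to an anti-affinity constraint between the active and standby VMs. A minimal sketch (hypothetical function names, not the patented selection procedure):

```python
def pick_standby_pm(candidate_pms, active_pm):
    """Choose a PM for the standby VM that differs from the PM hosting the
    active VM, so that a single PM failure (e.g. loss of power) cannot take
    down both the active and the standby VM at once."""
    candidates = [pm for pm in candidate_pms if pm != active_pm]
    if not candidates:
        raise RuntimeError("no PM available that satisfies anti-affinity")
    return candidates[0]

# The active VM runs on PM-1, so the standby VM must land elsewhere.
standby = pick_standby_pm(["PM-1", "PM-2", "PM-3"], active_pm="PM-1")
```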
  • Non-Patent Document 1 and other documents related to the NFV-MANO standard specifications do not define a selection method or selection requirements for the VIM and PM that serve as placement destinations. Therefore, in some cases, a PM under a VIM different from the one desired by the user who provides the network service or by the data center administrator may be selected as the placement destination of a VM or the like.
  • the present invention has been made in view of the above problems, and its main purpose is to provide a virtualization control device, a placement destination selection method, and a program that allow a user to flexibly set a policy for selecting a placement destination such as a VM or VNF.
  • NFVI Network Function Virtualization Infrastructure
  • VNF Virtualized Network Function
  • NFVO NFV-Orchestrator
  • VIM Virtualized Infrastructure Manager
  • a placement destination selection method comprising: referring to selection information related to a placement destination of a component of a virtual network that realizes a network service; and selecting a placement destination of the component based on the selection information.
  • NFVI Network Function Virtualization Infrastructure
  • VNF Virtual Network Function
  • NFVO NFV-Orchestrator
  • VIM Virtualized Infrastructure Manager
  • This program can be recorded on a computer-readable storage medium.
  • the storage medium may be non-transient such as a semiconductor memory, a hard disk, a magnetic recording medium, an optical recording medium, or the like.
  • the present invention can also be embodied as a computer program product.
  • according to the present invention, it is possible to provide a virtualization control device, an arrangement destination selection method, and a program that allow a user to flexibly set a policy for selecting an arrangement destination such as a VM or VNF.
  • the virtualization control device 100 includes an NFVO 101 and a VIM 102.
  • the NFVO 101 manages VNF, VIM, and NFVI resources on an NFVI (Network Function Virtualization Infrastructure), which provides an execution base for virtual network functions (Virtual Network Function: VNF) implemented by software running on virtual machines, and performs orchestration to realize network functions.
  • the VIM 102 performs NFVI resource management and control.
  • at least one of the NFVO 101 and the VIM 102 in the virtualization control device 100 refers to selection information (for example, the configuration information and selection policies described later) regarding the placement destination of a virtual network component (for example, a VM), and selects the placement destination of the component based on that information.
  • a user of a virtual network system using the virtualization control device 100 (for example, a service administrator or an infrastructure administrator) reflects the matters to be prioritized from his or her standpoint in the selection information regarding the placement destination, and sets that information in the virtualization control device 100.
  • by selecting the placement destination of a VM or the like based on the selection information, the virtualization control device 100 can provide a system environment in which a user can flexibly set the policy for selecting a placement destination such as a VM.
  • FIG. 2 is a diagram illustrating an example of a system configuration according to the first embodiment.
  • the system shown in FIG. 2 includes a terminal 10, a virtualization control device 20, and resource pools 30-1 to 30-3 including a plurality of PMs (Physical Machines).
  • the resource pool 30-1 includes PMs 31-1 to 31-l (l is a positive integer; the same applies hereinafter), and the resource pool 30-2 includes PMs 31-(l+1) to 31-m (m is a positive integer; the same applies hereinafter).
  • the resource pool 30-3 includes PMs 31-(m+1) to 31-n (n is a positive integer; the same applies hereinafter). If there is no particular need to distinguish PMs 31-1 to 31-n, they are simply written as “PM 31”.
  • the terminal 10 is a terminal used by a user (service manager or the like). For example, VM management, setting, maintenance, and the like are performed via the terminal 10.
  • the terminal 10 communicates with the virtualization control device 20.
  • the virtualization control device 20 can correspond to the NFV Management and Orchestration (MANO) of the NFV reference architecture.
  • the virtualization control device 20 monitors and controls the VM on the PM 31.
  • the virtualization control device 20 communicates with the virtualization layer (such as the hypervisor) of each PM 31 to exchange VM configuration and state information, exchanges configuration and state information of the virtual hardware resources (such as the virtual CPUs) allocated to VMs, places and controls VMs, and communicates with the terminal 10.
  • the virtualization control device 20 sets the resource pool 30 as a unit for managing the PM 31.
  • one resource pool 30 is composed of a plurality of PMs arranged in a data center or the like.
  • three resource pools to be controlled by the virtualization control device 20 are shown, but this is not intended to limit the number of resource pools.
  • FIG. 3 is a diagram illustrating an example of the configuration of the PM 31.
  • the PM 31 includes a hardware (HW) resource 32 such as computing hardware (for example, CPU cores), storage hardware (HDD (Hard Disk Drive), RAM (Random Access Memory), etc.), and network hardware.
  • the PM 31 further includes a virtualization layer 33 such as a hypervisor that constitutes a virtualization function, a virtual hardware resource 34 obtained by virtualizing the hardware resource 32 using the virtualization layer 33 such as a virtual CPU (vCPU), and a VM 35.
  • the VM 35 executes an application (not shown) on the guest OS 36, and realizes, for example, network function (NF) virtualization (NFV).
  • NF network function
  • the hardware specifications (for example, the number of CPU cores) of the PM 31 are registered as server specification information.
  • the virtualization control device 20 performs placement of the VM 35 in the PM 31 and application control on the guest OS 36 based on the server specification information and the like.
  • FIG. 4 is a diagram illustrating an example of the configuration of the virtualization control device 20.
  • the virtualization control device 20 includes an NFVO 21, a VNFM 22, and VIMs 23-1 to 23-3. If there is no particular reason for distinguishing between VIMs 23-1 to 23-3, it is simply expressed as “VIM23”.
  • NFVO21 performs orchestration and management of VNF and NFVI (Network Function Virtualization Infrastructure) that forms the execution base of VNF.
  • VNF Virtualized Network Function
  • NFVI Network Function Virtualization Infrastructure
  • the NFVI that forms the VNF execution base is a platform on which, for example, the computing, storage, and network functions included in the hardware resource 32 of the PM 31 can be flexibly handled as the virtual hardware resource 34 virtualized by the virtualization layer 33.
  • the VNFM 22 performs VNF life cycle management (installation, update, search, scaling, termination, etc.) and event notification. For example, the VNFM 22 arranges the VM 35 on the PM 31 via the virtualization layer 33 of the PM 31.
  • the VIM 23 performs NFVI resource management and control. Specifically, the VIM 23 manages resources such as computing, storage, and network functions included in the hardware resource 32 of the PM 31 (resource reservation, allocation according to request, monitoring of resource information, etc.).
  • Each of the VIMs 23-1 to 23-3 performs resource management and control of the NFVI composed of the PM 31 constituting each of the resource pools 30-1 to 30-3.
  • VNFM and VIM can have a many-to-many relationship.
  • a VNFM may be provided corresponding to each of the VIMs 23-1 to 23-3.
  • alternatively, a configuration in which a plurality of VNFMs are connected to one VIM, or a configuration in which a plurality of VIMs are connected to one VNFM, may be used.
  • FIG. 5 is a diagram illustrating an example of the internal configuration of the NFVO 21.
  • the NFVO 21 includes a configuration information registration unit 201, a first selection policy registration unit 202, a second selection policy registration unit 203, a resource management unit 204, a VIM & PM selection unit 205, various databases, and a communication control unit 221 that controls communication with the terminal 10.
  • the various databases included in the NFVO 21 include a configuration information database (DB; Data Base) 211, a first selection policy database 212, a second selection policy database 213, an NFVI resource database 214, and a VNF instance database 215.
  • a user uses the terminal 10 to input configuration information regarding a network to which the NFV is applied to the virtualization control device (MANO) 20.
  • the configuration information registration unit 201 is a unit that acquires the configuration information via the communication control unit 221 and registers the acquired configuration information in the configuration information database 211.
  • a user sets a policy related to placement destination selection in placing the network component to which the NFV is applied in the virtualization control device 20 using the terminal 10.
  • a policy input by the service manager is referred to as a first selection policy.
  • Network components whose arrangement destination is determined according to the first selection policy input by the service manager include network service (NS), VM, VNF, VNFc, VDU, and the like.
  • NS network service
  • VM Virtual Machine
  • VNF Virtualized Network Function
  • VNFc VNF Component
  • VDU Virtualization Deployment Unit
  • the service administrator can set a policy for selecting the location of the network component to which the NFV is applied in an arbitrary unit.
  • the first selection policy registration unit 202 acquires the first selection policy set by the service administrator via the communication control unit 221, and registers the acquired first selection policy in the first selection policy database 212.
  • a user (infrastructure administrator) sets, in the virtualization control device 20, a policy regarding how a VIM or PM is to be selected as the placement destination of a VM or the like. Specifically, when adding a VNF or VNFc to a network to which NFV is applied, the infrastructure administrator sets a policy that determines whether VMs and the like are to be concentrated on a specific VIM or distributed across VIMs. Similarly, the infrastructure administrator determines whether VMs and the like are to be concentrated on a specific PM or distributed across PMs. In the following description, a policy input by the infrastructure administrator is referred to as a second selection policy.
  • the second selection policy registration unit 203 acquires the second selection policy set by the infrastructure administrator via the communication control unit 221, and registers the acquired second selection policy in the second selection policy database 213.
  • a user can set the first selection policy and the second selection policy in the virtualization control device (MANO) 20 at an arbitrary timing. That is, the service manager and the infrastructure manager can change the contents of the first selection policy and the second selection policy at an arbitrary timing.
  • the service administrator and the infrastructure administrator may input the first selection policy and the second selection policy to the virtualization control device 20 using any format.
  • a service administrator or the like can set the first selection policy and the second selection policy in the virtualization control device 20 by defining them in a template describing the requirements and constraints necessary for deploying an NS or VNF, such as an NSD (Network Service Descriptor), VNFD (VNF Descriptor), or VDUD (VDU Descriptor).
  • alternatively, the service administrator or the like may input a file describing the contents of the first and second selection policies to the virtualization control device 20, or may input the contents of these selection policies to the virtualization control device 20 using commands.
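As an illustration of how such policy input might be held, a registration step in the spirit of the first selection policy registration unit 202 could be sketched as follows (the record layout and names are assumptions, not the actual database schema):

```python
first_selection_policy_db = []

def register_first_selection_policy(db, target, vim_policy=None, pm_policy=None):
    """Store a policy record for a network component (NS, VNF, VDU, ...).
    Either the VIM policy or the PM policy may be omitted, as in FIG. 7."""
    db.append({"target": target, "vim_policy": vim_policy, "pm_policy": pm_policy})

# e.g. a resource priority policy for NS-1 (cf. the first row of FIG. 7)
register_first_selection_policy(first_selection_policy_db, "NS-1",
                                vim_policy="fewest_built_vms",
                                pm_policy="fewest_built_vms")
```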
  • the resource management unit 204 manages NFVI resource information. Specifically, the resource management unit 204 requests the VIM 23 to provide resource information of each NFVI.
  • the resource information provided by the VIM 23 includes the ID of each PM 31 (server group ID), information on the zone assigned to each PM 31, the hardware specifications of each PM 31 (for example, CPU clock rate, memory capacity, and number of physical NICs), the number of VMs built under each VIM, information on the usable network bandwidth of each PM 31, and the like.
  • the resource management unit 204 acquires the various types of information exemplified above (denoted as NFVI resource information) from the VIM 23 and registers the acquired NFVI resource information in the NFVI resource database 214.
  • the VIM & PM selection unit 205 is a means for selecting the placement destination (VIM, PM, PM group) of a VM or the like based on at least one of the configuration information, the first selection policy, and the second selection policy. Specifically, when it becomes necessary to determine the placement destination of a VM or the like (for example, upon a system start request issued via the terminal 10, VNF instantiation, VNF healing, or VNF scaling), the VIM & PM selection unit 205 refers to the configuration information and the selection policies. Based on the referenced information (configuration information, selection policies), the VIM & PM selection unit 205 selects the resource pool 30 to which the PM 31 that will be the placement destination of the VM or the like belongs. In the following description, selecting the resource pool 30 to which the placement-destination PM 31 belongs is referred to as selecting the VIM 23.
  • the VIM & PM selection unit 205 requests the selected VIM 23 to select the PM 31 that will be the placement destination of the VM or the like. Specifically, the VIM & PM selection unit 205 provides the VIM 23 with the information necessary for PM selection by the VIM 23 (the PM selection policy, NFVI resource information, etc. described later) and asks it to select the PM that will be the placement destination of the VM or the like.
  • that is, the VIM & PM selection unit 205 includes a selection means (VIM selection unit 251) that selects the VIM that will be the placement destination of the VM or the like based on the selection policies, and a requesting means (PM selection requesting unit 252) that, by providing the selection policies and the like to the VIM 23, requests the VIM 23 to select the PM that will be the placement destination.
  • the VIM & PM selection unit 205 refers to not only the configuration information and the selection policy but also the NFVI resource information and the VNF instance information as necessary when selecting a placement destination such as VNF.
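The two-step flow described above (an NFVO-side unit first selects a VIM, then delegates PM selection to that VIM) can be sketched as follows (hypothetical names; the callbacks stand in for the VIM selection unit 251 and the PM selection requesting unit 252):

```python
def place_component(select_vim, request_pm_selection, config, policies, resources):
    """Two-step placement: first pick the VIM (resource pool), then ask that
    VIM to pick a concrete PM, passing along the policy and resource info."""
    vim = select_vim(config, policies, resources)        # cf. VIM selection unit 251
    pm = request_pm_selection(vim, policies, resources)  # cf. PM selection requesting unit 252
    return vim, pm

# Trivial stand-ins for the two steps, for illustration only.
vim, pm = place_component(
    select_vim=lambda c, p, r: "VIM-1",
    request_pm_selection=lambda v, p, r: "PM-7",
    config={}, policies={}, resources={})
```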
  • the configuration information database 211 is means for storing configuration information related to a network to which NFV is applied.
  • FIG. 6 is a diagram illustrating an example of the configuration information database 211. Referring to FIG. 6, the configuration information database 211 stores, for example, two types of information.
  • the first information is information related to the connection configuration between the PM under the VIM and the PNF (Physical Network Function).
  • the first row in FIG. 6(a) shows that PNF-1 is connected to a PM under VIM-1. Therefore, for example, when realizing a configuration in which VNF-1 is connected to PNF-1, referring to the information in FIG. 6(a) reveals that PNF-1 is connected to a PM under VIM-1, so VIM-1 can be selected as a placement destination candidate for VNF-1.
  • the second information is information related to the connection configuration between PMs under the VIM.
  • the first row in FIG. 6B shows that the PMs under the VIM-1 and the PMs under the VIM-2 are connected.
  • for example, suppose VNF-2 is built on a PM under VIM-2. If the VIMs whose PMs are connected to that PM are recognized as VIM-1 and VIM-3, then VIM-1 and VIM-3 become placement destination candidates for VNF-3.
  • VNF is exemplified as a network component for which the placement destination is determined, but configuration information regarding other elements (for example, VNFc, VDU, etc.) may be registered in the configuration information database 211.
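A minimal sketch of how the FIG. 6(a)-style connectivity information could be consulted when listing candidate VIMs (the dictionary layout and function name are assumed for illustration):

```python
# FIG. 6(a)-style connectivity: which VIMs have PMs connected to each PNF.
pnf_to_vims = {"PNF-1": ["VIM-1"], "PNF-2": ["VIM-2", "VIM-3"]}

def candidate_vims_for_pnf(config, pnf_id):
    """Return the VIMs whose PMs are connected to the given PNF; these are
    the placement destination candidates for a VNF that must reach the PNF."""
    return config.get(pnf_id, [])

candidates = candidate_vims_for_pnf(pnf_to_vims, "PNF-1")
```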
  • the first selection policy database 212 stores the first selection policy.
  • the first selection policy set by the service administrator may have various contents depending on which items the service administrator considers important (prioritizes) when selecting the placement destination of a VM or the like. Specific examples include a policy that prioritizes the resources of the placement destination (resource priority selection policy), a policy that prioritizes the network bandwidth that can be secured on each PM (network bandwidth priority selection policy), a policy that prioritizes dependencies between components (dependency priority selection policy), and a policy that prioritizes placement in the same VIM (VIM matching priority selection policy).
  • FIG. 7 is a diagram illustrating an example of the first selection policy database 212.
  • the first selection policy shown in FIG. 7 is an example of a resource priority selection policy.
  • a policy applied when selecting a VIM as the placement destination of a VM or the like is referred to as a VIM selection policy, and a policy applied when selecting a PM as the placement destination of a VM or the like is referred to as a PM selection policy.
  • the VIM selection policy in the first row indicates that, when selecting the VIM that will be the placement destination of NS-1, priority is given to the number of VMs built under each VIM (a VIM with fewer built VMs is preferentially selected).
  • the PM selection policy in the first row likewise indicates that, when selecting the PM that will be the placement destination of NS-1, priority is given to the number of VMs built on each PM 31.
  • both a VIM selection policy and a PM selection policy may be registered in the first selection policy, or only one of them may be registered (see the second row, the fourth row, etc. in FIG. 7).
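The resource priority selection of FIG. 7 (prefer the VIM with the fewest built VMs) can be sketched as follows (a simplified illustration using assumed data, not the actual implementation):

```python
def select_vim_fewest_vms(vms_per_vim):
    """Resource priority: prefer the VIM with the fewest VMs already built.
    vms_per_vim maps VIM ID -> number of built VMs (from NFVI resource info)."""
    return min(vms_per_vim, key=vms_per_vim.get)

# VIM-2 currently hosts the fewest VMs, so it is preferred for NS-1.
chosen = select_vim_fewest_vms({"VIM-1": 12, "VIM-2": 3, "VIM-3": 7})
```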
  • FIG. 8 is a diagram illustrating an example of the first selection policy database 212.
  • the first selection policy shown in FIG. 8 is an example of a network bandwidth priority selection policy.
  • in FIG. 8, priority is given to network bandwidth when selecting the PM that will be the placement destination of a VDU or VNF.
  • the first row in FIG. 8 shows that when VDU-1 is placed, a PM with free network bandwidth of 1 Gbps or more is selected.
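In its simplest reading, the network bandwidth priority selection of FIG. 8 reduces to filtering PMs by free bandwidth; a sketch under that assumption (names and data are illustrative):

```python
def pms_meeting_bandwidth(free_bandwidth_gbps, required_gbps):
    """Network bandwidth priority: keep only PMs whose free network bandwidth
    is at least the required amount (1 Gbps for VDU-1 in FIG. 8)."""
    return [pm for pm, free in free_bandwidth_gbps.items() if free >= required_gbps]

# Only PM-2 and PM-3 have at least 1 Gbps of free bandwidth for VDU-1.
eligible = pms_meeting_bandwidth({"PM-1": 0.5, "PM-2": 1.0, "PM-3": 2.5}, 1.0)
```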
  • FIG. 9 is a diagram illustrating an example of the first selection policy database 212.
  • the first selection policy shown in FIG. 9 is an example of a dependency priority selection policy.
  • the dependency priority selection policy is a policy that determines an arrangement destination in accordance with a place where other components are arranged. That is, the policy for determining the placement destination of the VM or the like depending on the placement destination of other components is the dependency relationship priority selection policy.
  • In FIG. 9, VIM_ID1 is set as the ID of the placement destination VIM in the VIM selection policies of both VNF-1 and VNF-2. This setting is interpreted to mean that these VNFs must not be placed in VIMs having the same ID. Consequently, the placement destination candidates for VNF-1 are VIMs other than VIM-1; that is, the placement destination of VNF-1 is selected depending on the placement destination of VNF-2.
  • The dependency priority selection policy can also be applied when selecting a placement destination PM. For example, referring to the fifth and seventh rows in FIG. 9, since the PM_IDs set for VNF-1 and VNF-3 match, VNF-1 and VNF-3 are not placed in the same PM 31. Alternatively, by setting a zone ID (Zone_ID) in the PM selection policy, it can be specified that PMs belonging to the same zone are not selected as placement destinations: referring to the ninth and eleventh rows in FIG. 9, VNF-1 and VNF-3 are not placed in PMs belonging to the same zone.
  • Conversely, by setting PM or zone priorities in the PM selection policy, the same PM or zone can be selected as the placement destination of VMs as far as possible. For example, referring to the 13th to 15th rows in FIG. 9, since the same Zone_ID is set for VDU-2 and VDU-3, they are preferentially placed in PMs belonging to the same zone.
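The zone-based anti-affinity and affinity rules above can be sketched as follows (a hypothetical illustration; the `mode` strings and record fields are assumptions introduced for this sketch):

```python
# Hypothetical sketch of a dependency priority selection policy by zone.
# "different" excludes PMs in the zone of an already-placed component
# (anti-affinity, as for VNF-1 and VNF-3 in FIG. 9); "same" prefers PMs
# in that zone (affinity, as for VDU-2 and VDU-3), falling back to the rest.

def candidates_by_zone(pms, peer_zone, mode):
    same = [pm for pm in pms if pm["zone"] == peer_zone]
    other = [pm for pm in pms if pm["zone"] != peer_zone]
    if mode == "different":
        return other          # exclude the peer's zone entirely
    return same + other       # same zone first, others as fallback

pms = [
    {"id": "PM-1", "zone": "A"},
    {"id": "PM-2", "zone": "A"},
    {"id": "PM-3", "zone": "B"},
]

anti = candidates_by_zone(pms, "A", "different")  # peer placed in zone A
aff = candidates_by_zone(pms, "A", "same")
```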
  • FIG. 10 is a diagram illustrating an example of the first selection policy database 212.
  • the first selection policy shown in FIG. 10 is an example of a VIM matching priority selection policy.
  • The VIM matching priority selection policy preferentially selects, as the placement destination, the same VIM as that of an existing VM or the like. It is set, for example, when a VNF is healed and the VIM to which the PM hosting the VM to be healed belongs should be preferentially selected, or when a VNF is scaled and the VIM to which the PMs hosting the VMs constituting the current VNF belong should be preferentially selected.
  • For example, referring to the first and second rows in FIG. 10, "same VIM" is set in the VIM selection policies of NS-1 and NS-2, so NS-1 and NS-2 are preferentially placed in the same VIM.
  • Different PMs can also be designated as placement destinations while placement in the same VIM is prioritized: when "same VIM" is set in the VIM selection policies of VNF-1 and VNF-2 but different PMs are set in their PM selection policies, the same VIM is selected as the placement destination of VNF-1 and VNF-2 while different PMs within that VIM are selected. If the same PM is set instead, the same VIM and the same PM are preferentially selected as the placement destinations.
  • the above four first selection policies are merely examples, and are not intended to limit the policies input by the service manager or the like.
  • The first selection policy may be anything that reflects the items a user (for example, a service administrator) prioritizes when the virtualization control device (MANO) 20 selects a placement destination (VIM, PM, PM group) of a VM or the like.
  • FIGS. 7 to 10 illustrate two policies, the VIM selection policy and the PM selection policy, as policies set by the service administrator, but either one of them alone may be set as the first selection policy.
  • the first selection policy may be a combination of a plurality of selection policies within a feasible range.
  • the first selection policy may be a combination of the VIM matching priority selection policy and the network bandwidth priority policy.
  • the second selection policy database 213 stores the second selection policy.
  • In the second selection policy, items that the infrastructure administrator prioritizes when the virtualization control device 20 selects the placement destination of a VM or the like are set.
  • For example, the infrastructure administrator can reduce system power consumption by setting, in the virtualization control device (MANO) 20, a second selection policy that concentrates VMs and the like on the same VIM and PM.
  • Conversely, the infrastructure administrator can distribute and balance the processing load by setting, in the virtualization control device 20, a second selection policy that distributes the VIMs and PMs in which VMs are placed.
  • FIG. 11 is a diagram illustrating an example of the second selection policy database 213. Similar to the first selection policy, the second selection policy is composed of a VIM selection policy (FIG. 11A) and a PM selection policy (FIG. 11B). For example, when selecting a VIM that is a placement destination of a VM or the like, the infrastructure administrator sets a policy such as whether to concentrate the VM or the like on the same VIM or to distribute the placement destination VIM.
  • FIG. 11A shows a policy for selecting a VIM so that VMs and the like are concentrated in the same VIM. When it is instead desired to distribute the VIMs serving as placement destinations, "distribution" is set in the policy shown in FIG. 11A.
  • FIG. 11B shows policies that determine whether VMs and the like are concentrated on the same PM (first row in FIG. 11B) or distributed across PMs (second row in FIG. 11B).
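The concentration/distribution choice of the second selection policy can be sketched as a candidate ordering (a hypothetical illustration; the policy strings and VM counts are assumptions made for this sketch):

```python
# Hypothetical sketch of the infrastructure administrator's second selection
# policy: order PM (or VIM) candidates so that the busiest one is tried
# first ("concentration", e.g. to allow powering down idle servers) or the
# least busy one is tried first ("distribution", to balance load).

def order_by_policy(pms, policy):
    reverse = (policy == "concentration")  # most-loaded candidate first
    return sorted(pms, key=lambda pm: pm["vm_count"], reverse=reverse)

pms = [
    {"id": "PM-1", "vm_count": 3},
    {"id": "PM-2", "vm_count": 0},
    {"id": "PM-3", "vm_count": 5},
]

concentrated = order_by_policy(pms, "concentration")
distributed = order_by_policy(pms, "distribution")
```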
  • In FIG. 11, two policies are illustrated as policies set by the infrastructure administrator, but either one of the selection policies alone may be set as the second selection policy.
  • Although the first selection policy and the second selection policy have been described as policies that a user can set in the virtualization control device 20, this does not limit the number of policies a user may set: only one of the first and second selection policies may be set, or three or more selection policies may be set.
  • the NFVI resource database 214 stores the NFVI resource information acquired from the VIM 23 by the resource management unit 204 as described above.
  • the NFVI resource information stored in the NFVI resource database 214 is information that is referred to when the placement destination (VIM, PM, PM group) is selected by the VIM & PM selection unit 205.
  • the VNF instance database 215 is a means for storing VNF instance information constructed in NFVI. For example, VM and accommodation link information is registered in the VNF instance database 215 as VNF instance information.
  • FIG. 12 is a diagram illustrating an example of the VNF instance database 215.
  • Referring to FIG. 12, it can be seen that VNF-1 is composed of VDU-1 and VDU-2 belonging to VIM-1.
  • Referring to FIG. 12B, the PM in which each VNF is placed can be identified: VNF-1 is placed in PM-1 and VNF-2 is placed in PM-2. That is, by referring to the VNF instance database 215, link information regarding PMs can be acquired.
  • the VNF instance information is mainly referred to when a placement destination (VIM, PM, PM group) such as a VM is selected using the VIM matching priority selection policy.
  • For example, when the placement location of VNF-1 is selected, the VIM & PM selection unit 205 refers to the VNF instance database 215 to identify the VIM in which VNF-2 has been built; conversely, when the placement destination of VNF-2 is selected, it identifies the VIM in which VNF-1 has been built.
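The instance lookup above can be sketched with a small mapping (a hypothetical illustration of the FIG. 12 records; the dictionary layout is an assumption, not the actual database schema):

```python
# Hypothetical sketch of a VNF instance lookup: each record links a VNF to
# the VIM and PM where it was built, so the VIM hosting a peer VNF can be
# identified when applying the VIM matching priority selection policy.

vnf_instances = {
    "VNF-1": {"vim": "VIM-1", "pm": "PM-1", "vdus": ["VDU-1", "VDU-2"]},
    "VNF-2": {"vim": "VIM-1", "pm": "PM-2", "vdus": ["VDU-3"]},
}

def vim_of(vnf_id):
    """Return the VIM in which the given VNF instance was built."""
    return vnf_instances[vnf_id]["vim"]

# When placing VNF-1, look up where its peer VNF-2 already runs.
peer_vim = vim_of("VNF-2")
```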
  • FIG. 13 is a diagram illustrating an example of the internal configuration of the VIM 23.
  • the VIM 23 includes a resource collection unit 301, a PM selection unit 302, and a communication control unit 303 that controls communication with the PM 31.
  • the resource collection unit 301 is a unit that collects necessary resource information when receiving a request for providing resource information of each NFVI from the resource management unit 204 of the NFVO 21.
  • the resource collection unit 301 communicates with the virtualization layer 33 of the PM 31 and collects hardware specifications of each PM 31.
  • The hardware specifications of the PM 31 collected by the resource collection unit 301 include, for example, information on computing hardware such as the CPU clock speed and the number of physical cores, on storage hardware such as the memory capacity, and on network hardware such as the number of physical NICs.
  • the resource collection unit 301 may collect necessary resource information before receiving a request from the NFVO 21. For example, information relating to an available network bandwidth per PM may be information input in advance by a system administrator or the like.
  • Upon receiving a virtual resource reservation request from the NFVO 21, the PM selection unit 302 reserves, from among the PMs 31 under the control of its VIM, a PM that conforms to the PM selection policy and the NFVI resource information. If the reservation of a PM conforming to the PM selection policy and NFVI resource information provided from the NFVO 21 succeeds, the PM selection unit 302 notifies the VIM & PM selection unit 205 of the NFVO 21 that the requested PM reservation has succeeded. When the PM selection unit 302 receives a notification from the VIM & PM selection unit 205 to cancel a reserved PM, it releases the reservation of that PM.
  • FIG. 14 is a sequence diagram illustrating an example of the operation of the network system according to the first embodiment.
  • the service administrator sets the first selection policy in the virtualization control device (MANO) 20 or changes the first selection policy that has already been set (step S101).
  • the infrastructure administrator sets the second selection policy in the virtualization control device 20, or changes the second selection policy that has already been set (step S101).
  • the setting and changing of the selection policy by the service manager or the like is performed on the NFVO 21 via the terminal 10.
  • the selection policy may be set using OSS.
  • the system administrator sets the configuration information in the virtualization control device 20, or changes the configuration information that has already been set (step S102). Also in this case, setting and changing of the configuration information by the system administrator may be performed in the NFVO 21 via the terminal 10 or may be performed using OSS.
  • the setting of the selection policy (first selection policy, second selection policy) and configuration information may be performed before the network system is activated, or may be performed while the network is in operation. That is, the selection policy and configuration information can be set and changed at an arbitrary timing.
  • During operation, requests such as VNF instantiation, VNF healing, and VNF scaling occur (step S103).
  • the VNFM 22 confirms the content of the request (step S104) and makes a virtual resource allocation request corresponding to the request to the NFVO 21 (step S105).
  • the NFVO 21 that has received the request narrows down VIMs that are placement destination candidates for VMs and the like based on the configuration information and the selection policy (first selection policy, second selection policy) (steps S106 to S108).
  • Specifically, the VIM & PM selection unit 205 narrows down the VIMs that are placement destination candidates for VMs and the like based on information such as the VNFD (VNF Descriptor) and the configuration information registered in the configuration information database 211 (step S106). For example, suppose that VNF-1 needs to be connected to PNF-1. When selecting VIM placement destination candidates for VNF-1, the VIM & PM selection unit 205 then checks the PM-PNF connection configuration under each VIM shown in FIG. 6A and selects VIM-1, to which PNF-1 is connected, as one of the VIM placement destination candidates.
  • The VIM & PM selection unit 205 also narrows down the VIMs that are placement destination candidates for VMs and the like based on the NFVI resource information registered in the NFVI resource database 214 and the first selection policy (VIM selection policy) registered in the first selection policy database 212 (step S107). For example, referring to the first row in FIG. 9, when selecting VIM placement destination candidates for VNF-1, VNF-1 needs to be placed in a VIM different from that of VNF-2. Accordingly, the VIM in which VNF-2 is placed is identified from the NFVI resource information, and VIMs other than the identified VIM are selected as placement destination candidates for VNF-1.
  • Furthermore, the VIM & PM selection unit 205 narrows down the VIMs that are placement destination candidates for VNFs and the like based on the NFVI resource information and the second selection policy (VIM selection policy) registered in the second selection policy database 213 (step S108). For example, when the VIM & PM selection unit 205 refers to the second selection policy database 213 and a policy to concentrate VNFs and the like in the same VIM is set (see FIG. 11A), it selects VIM placement destination candidates from among the VIMs remaining after the narrowing based on the configuration information (step S106) and on the first selection policy (step S107), so that VNFs and the like are concentrated as much as possible.
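The three-stage narrowing of steps S106 to S108 can be sketched as a filter pipeline (a hypothetical illustration; the data model, field names, and policy strings are assumptions, not the actual NFVO interfaces):

```python
# Hypothetical sketch of the narrowing in steps S106-S108: VIM candidates
# are filtered by configuration information (connectivity to a required
# PNF), then by a first selection policy (here: anti-affinity with an
# already-placed VNF), then ordered by a second selection policy.

def narrow_vims(vims, required_pnf, excluded_vim, second_policy):
    # S106: configuration information -- keep VIMs that can reach the PNF.
    cands = [v for v in vims if required_pnf in v["pnfs"]]
    # S107: first selection policy -- drop the VIM hosting the peer VNF.
    cands = [v for v in cands if v["id"] != excluded_vim]
    # S108: second selection policy -- order the remaining candidates.
    reverse = (second_policy == "concentration")
    return sorted(cands, key=lambda v: v["vm_count"], reverse=reverse)

vims = [
    {"id": "VIM-1", "pnfs": {"PNF-1"}, "vm_count": 4},
    {"id": "VIM-2", "pnfs": {"PNF-1"}, "vm_count": 1},
    {"id": "VIM-3", "pnfs": set(), "vm_count": 7},
]

# Peer VNF already runs in VIM-1, so VIM-1 is excluded by anti-affinity.
result = narrow_vims(vims, "PNF-1", "VIM-1", "concentration")
```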
  • the NFVO 21 makes a virtual resource reservation request to the remaining VIMs as a result of narrowing the VIM placement destination candidates in steps S106 to S108 (step S109).
  • In FIG. 14, VIM 23-1 and VIM 23-2 are the VIM placement destination candidates remaining as a result of the narrowing down.
  • At this time, the VIM & PM selection unit 205 passes, to each candidate VIM, the PM selection policies of the first and second selection policies and the NFVI resource information necessary to apply those policies, together with the virtual resource reservation request.
  • The VIMs that received the virtual resource reservation request (VIM 23-1 and VIM 23-2 in FIG. 14) select a PM as a placement destination candidate based on the selection policies (the PM selection policy of the first selection policy and the PM selection policy of the second selection policy) and the NFVI resource information, and reserve the resources of the selected PM (steps S110 and S111).
  • Specifically, the PM selection unit 302 of the VIM 23 selects a PM that is a placement destination candidate for a VM or the like based on the PM selection policy of the first selection policy provided from the NFVO 21 and the NFVI resource information (step S110). For example, when the PM selection unit 302 acquires the PM selection policy shown in the fifth to seventh rows in FIG. 9 and selects a placement destination candidate for VNF-1, it selects a PM different from the PM in which VNF-3 is built and sets it as a placement destination candidate for VNF-1.
  • Similarly, the PM selection unit 302 of the VIM 23 selects a PM that is a placement destination candidate for a VM or the like based on the PM selection policy of the second selection policy provided from the NFVO 21 and the NFVI resource information (step S111). For example, if the VIM selecting the PM placement destination candidates is VIM-1, for which "concentration" is set (see FIG. 11B), PM placement destination candidates are selected from among the candidates of step S110 so that VMs and the like are concentrated on one PM. The PM selection unit 302 then reserves the virtual resources of the PM selected as the placement destination of the VM or the like.
  • the VIM that has received the virtual resource reservation request from the NFVO 21 returns a result for the virtual resource reservation request (step S112). Specifically, the PM selection unit 302 of the VIM that has successfully reserved a PM having a resource that matches the virtual resource reservation request notifies the NFVO 21 to that effect (successful reservation). Alternatively, the PM selection unit 302 of the VIM that has failed to reserve a PM having a suitable resource notifies the NFVO 21 to that effect (reservation failure).
  • The NFVO 21 returns a result of the allocation request to the VNFM 22 that made the virtual resource allocation request (step S113). Specifically, when the requested virtual resource allocation (resource reservation) succeeds, the NFVO 21 responds to the VNFM 22 with a result including information on the VIM and PM to which the virtual resources can be allocated; when the requested virtual resource allocation fails, the NFVO 21 responds to the VNFM 22 to that effect.
  • Thereafter, among the VIMs and PMs for which virtual resources were reserved, those not actually selected as the placement destination of the VM or the like have their resource reservations released (step S114). Note that there are cases where the release in step S114 is unnecessary, such as when there is no VIM that reserved the virtual resources.
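The reserve-then-release flow of steps S109 to S114 can be sketched as follows (a hypothetical illustration; the `Vim` class, vCPU accounting, and "pick the first success" rule are assumptions made for this sketch):

```python
# Hypothetical sketch of steps S109-S114: virtual resources are reserved on
# every candidate VIM, one successful reservation is kept as the actual
# placement destination, and the remaining reservations are released.

class Vim:
    def __init__(self, vim_id, free_vcpus):
        self.vim_id = vim_id
        self.free_vcpus = free_vcpus
        self.reserved = 0

    def reserve(self, vcpus):
        # S110/S111: try to reserve resources; S112: report success/failure.
        if self.free_vcpus - self.reserved >= vcpus:
            self.reserved += vcpus
            return True
        return False

    def release(self):
        # S114: cancel a reservation that was not actually used.
        self.reserved = 0

def place(vims, vcpus):
    succeeded = [v for v in vims if v.reserve(vcpus)]
    if not succeeded:
        return None               # allocation failure reported to the VNFM
    chosen = succeeded[0]         # keep one successful reservation
    for v in succeeded[1:]:
        v.release()               # release the unused reservations
    return chosen

vims = [Vim("VIM-1", 2), Vim("VIM-2", 8), Vim("VIM-3", 8)]
chosen = place(vims, 4)           # VIM-1 lacks capacity and fails
```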
  • In the above description, the NFVO 21 narrows down the VIMs based on the configuration information and the selection policies (steps S106 to S108), but FIG. 14 and the related description are not intended to limit the execution order of these processes. For example, the NFVO 21 may first provide each VIM 23 with the PM selection policies and the NFVI resource information necessary for PM selection and request the reservation of virtual resources conforming to those policies, and then select the placement destination VIM based on the configuration information and the selection policies.
  • In the above description, the VIMs are narrowed down by the first selection policy in step S107 and then by the second selection policy in step S108. Depending on the result of the narrowing by the first selection policy, however, it may be impossible to satisfy the second selection policy. In that case, the NFVO 21 may make the virtual resource reservation request to the VIMs 23 using only the narrowing result based on the first selection policy.
  • The same applies to PM narrowing: the VIM 23 may reserve virtual resources using only the PM narrowing result based on the first selection policy.
  • In the above description, the placement destination (VIM, PM, PM group, etc.) is selected based on the first selection policy (steps S107 and S110) before the selection based on the second selection policy (steps S108 and S111), but the order in which the two policies are applied may be reversed; that is, the placement destination may be selected by the first selection policy after the selection by the second selection policy. More generally, although the sequence diagram of FIG. 14 describes a plurality of steps (processes) in order, the execution order of the steps executed in each embodiment is not limited to the described order. In each embodiment, the order of the illustrated steps can be changed, for example by executing processes in parallel, within a range that does not affect the contents.
  • FIG. 15 is a diagram for explaining the first application example.
  • PMs 51-1 to 51-4 are accommodated in a ToR (Top Of Rack) switch 50-1.
  • PMs 51-5 to 51-8 are accommodated in the ToR switch 50-2.
  • PMs 51-1 to 51-8 are assigned to four groups (zones). Specifically, the group A includes PMs 51-1, 51-2, 51-5, and 51-6 that receive power supply from the power supply system A.
  • Group B includes PMs 51-3, 51-4, 51-7, and 51-8 that receive power supply from the power supply system B.
  • Group C includes PMs 51-1 to 51-4 accommodated in the ToR switch 50-1.
  • Group D includes PMs 51-5 to 51-8 accommodated in the ToR switch 50-2.
  • Assume the service manager has set a first selection policy consisting of "(1) VMs in the same cluster (ACT/SBY) are distributed to different PMs and different power-supply systems" and "(2) considering the network bandwidth, VMs are arranged so as not to cross the ToR switch as much as possible".
  • As a result, group A is selected as the placement destination of the ACT (active) VM, group B is selected as the placement destination of the SBY (standby) VM, and group C is selected according to the priority.
  • Assume further that the infrastructure administrator has set the second selection policy "distribute the placement destination PMs as much as possible".
  • In that case, PMs 51-2 and 51-4, in which no (or few) VMs have yet been placed, are selected so that the placement destination PMs are dispersed as much as possible.
  • FIG. 16 is a diagram for explaining the second application example.
  • PMs 61-1 to 61-5 are accommodated in the ToR switch 60-1
  • PMs 61-6 to 61-10 are accommodated in the ToR switch 60-2.
  • the ToR switches 60-1 and 60-2 are connected to an AR (Access Router) 62.
  • In FIG. 16, the maximum network bandwidth per PM 61 is 10 Gbps, and the network bandwidth each VM requires is specified by the user (for example, set in the VNFD).
  • PMs 61-1 to 61-10 shown in FIG. 16 are allocated to two PMGs, and a high priority is given to the group A accommodated in the ToR switch 60-1.
  • Assume the service administrator has set a first selection policy that selects a PM so as to secure the network bandwidth specified by the user and that preferentially places VMs in PMs accommodated in the same ToR switch. As a result, when a new VM is allocated, PM 61-2, which has sufficient free network bandwidth, is selected from among PMs 61-1 to 61-5 accommodated in the high-priority group A.
  • Assume further that, when setting the first selection policy, the service administrator has specified that "PMs may be selected across the ToR switch in an emergency". In that case, the VM placement destination is selected from among PMs 61-6 to 61-10 accommodated in the low-priority group B.
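The priority-group selection with an emergency fallback described in this application example can be sketched as follows (a hypothetical illustration; the group layout and bandwidth figures loosely follow FIG. 16 but are assumptions of this sketch):

```python
# Hypothetical sketch of the second application example: pick a PM with
# enough free bandwidth from the high-priority group (same ToR switch),
# falling back to lower-priority groups only when the policy permits
# selecting across the ToR switch "in an emergency".

def pick_pm(groups, required_gbps, allow_fallback):
    """`groups` is ordered from high to low priority; return a PM id or None."""
    for i, group in enumerate(groups):
        if i > 0 and not allow_fallback:
            break
        for pm in group:
            if pm["free_bw_gbps"] >= required_gbps:
                return pm["id"]
    return None

group_a = [{"id": "PM61-1", "free_bw_gbps": 1.0},
           {"id": "PM61-2", "free_bw_gbps": 6.0}]
group_b = [{"id": "PM61-6", "free_bw_gbps": 10.0}]

# Normal case: a PM in the high-priority group has enough bandwidth.
first = pick_pm([group_a, group_b], 4.0, allow_fallback=True)

# Emergency case: group A is exhausted, so group B is used instead.
exhausted_a = [{"id": "PM61-2", "free_bw_gbps": 0.5}]
fallback = pick_pm([exhausted_a, group_b], 4.0, allow_fallback=True)
```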
  • FIG. 17 shows FIG. B. on page 115 of Non-Patent Document 1 and discloses a flow of VNF instantiation by EM (Element Management); FIG. 18 likewise shows FIG. B. on page 115 of Non-Patent Document 1 and discloses a flow of VNF instantiation by the NFVO.
  • In both flows, the NFVO (NFV Orchestrator) that has received the request for new VNF instantiation via the VNFM (VNF Manager) checks that free resources are available (Check free resources are available) and can optionally reserve resources towards the VIM (optionally reserve towards VIM).
  • However, Non-Patent Document 1, which relates to the standard specification of NFV-MANO, only describes checking free resources when selecting a VNF placement destination and does not mention how the placement destination should be selected. Consequently, the placement destination (VIM, PM, PM group, etc.) of a VM or the like is determined solely from information on free resources, which may cause the following problems.
  • VMs adopting a redundant configuration should be placed in different PMs, but the same PM may be selected, so that neither the active system nor the standby system functions when the server fails.
  • VMs adopting a redundant configuration should receive power from power sources of different systems, but both the active and standby VMs may be placed on the same power-supply system, so that neither system functions when the power supply fails.
  • VMs that exchange a large amount of traffic should be placed in the same PM, but they may be placed in different PMs, possibly degrading performance.
  • VMs that exchange a large amount of traffic should be accommodated under the same ToR switch, but they may be placed across ToR switches, possibly causing switch congestion and degrading performance.
  • In contrast, the network system according to the first embodiment can provide a system environment in which the user can flexibly set a policy for selecting the placement destination (VIM, PM, PM group) of a VM or the like.
  • A virtualization control device comprising an NFVO (NFV-Orchestrator) that realizes a network service on an NFVI (Network Function Virtualization Infrastructure) providing an execution base for VNFs (Virtual Network Functions), and a VIM (Virtualized Infrastructure Manager) that performs resource management and control of the NFVI, wherein at least one of the NFVO and the VIM selects an arrangement destination of a constituent element of the virtual network based on selection information regarding the arrangement destination of the constituent element.
  • the virtualization control device according to mode 1 or 2, wherein the selection information includes configuration information related to a connection destination of the component.
  • The selection information includes a first selection policy comprising a first VIM selection policy regarding the selection of a resource pool to be the arrangement destination of the constituent element and a first PM selection policy regarding the selection of a physical machine to be the arrangement destination of the constituent element (the virtualization control device according to mode 2 or 3).
  • The selection information includes a second selection policy comprising: a second VIM selection policy that determines whether to select resource pools so that the constituent elements are distributed or so that they are concentrated; and a second PM selection policy that determines whether to select physical machines so that the constituent elements are distributed or so that they are concentrated.
  • The NFVO selects a resource pool as the arrangement destination of the constituent element based on the first and/or second VIM selection policy, and the VIM selects a physical machine as the arrangement destination of the constituent element based on the first and/or second PM selection policy (the virtualization control device according to mode 5).
  • The NFVO provides the first and/or second PM selection policy to the VIM and requests the VIM to select a physical machine to be the arrangement destination of the constituent element.
  • The VIM reserves physical machine resources that conform to the first and/or second PM selection policy in response to the request from the NFVO (the virtualization control device according to mode 7).
  • The NFVO requests a plurality of the VIMs to select a physical machine to be the arrangement destination of the constituent element, and upon receiving notifications from a plurality of VIMs that the physical machine resource reservation has succeeded, selects one of the VIMs for which the resource reservation succeeded; the resources reserved by the VIMs other than the selected VIM are released.
  • An arrangement destination selection method in a virtualization control device comprising an NFVO (NFV-Orchestrator) that realizes a network service on an NFVI (Network Function Virtualization Infrastructure) providing an execution base for VNFs (Virtual Network Functions), and a VIM (Virtualized Infrastructure Manager) that performs resource management and control of the NFVI, the method comprising: referring to selection information regarding the arrangement destination of a constituent element of the virtual network; and selecting the arrangement destination of the constituent element based on the selection information.
  • A program for causing a computer controlling the virtualization control device to execute: a process of referring to selection information regarding the arrangement destination of a constituent element of the virtual network; and a process of selecting the arrangement destination of the constituent element based on the selection information.
  • Modes 10 and 11 can be expanded into modes 2 to 9 in the same manner as mode 1.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a virtualization control apparatus that allows a user to flexibly set a policy when selecting an installation destination of a virtual machine (VM), a virtual network function (VNF), or the like. This virtualization control apparatus comprises an NFVO and a VIM. The NFVO is implemented as software running on a VM to realize a network service on a network function virtualization infrastructure (NFVI), which provides an execution base for VNFs. The VIM performs resource management and control of the NFVI. Furthermore, the NFVO and/or the VIM in the virtualization control apparatus select an installation destination of an element based on selection information (such as, among others, configuration information or a selection policy, described below) pertaining to the installation destination of the element (for example, a VM or the like) of the virtual network.
PCT/JP2016/052514 2015-01-29 2016-01-28 Virtualization control apparatus, installation destination selection method, and program Ceased WO2016121879A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015015972 2015-01-29
JP2015-015972 2015-01-29

Publications (1)

Publication Number Publication Date
WO2016121879A1 true WO2016121879A1 (fr) 2016-08-04

Family

ID=56543484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/052514 Ceased WO2016121879A1 (fr) Virtualization control apparatus, installation destination selection method, and program

Country Status (1)

Country Link
WO (1) WO2016121879A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016208068A (ja) Functional unit allocation device and functional unit allocation method
CN106856441A (zh) VIM selection method and apparatus in an NFVO
CN110351104A (zh) VIM selection method and apparatus
WO2023032105A1 (fr) Task control system and control method therefor
CN115915205A (zh) Service-oriented radio access network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010026699A (ja) Network operation management method and apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MASAAKI KOSUGI ET AL.: "Availability analysis of NFV-based Mobile Network System", IEICE TECHNICAL REPORT, vol. 114, no. 417, 19 January 2015 (2015-01-19), pages 7 - 12, ISSN: 0913-5685 *
MASASHI KANEKO ET AL.: "A robust VNF allocation method in NFV", IEICE TECHNICAL REPORT, vol. 114, no. 400, 15 January 2015 (2015-01-15), pages 29 - 34, ISSN: 0913-5685 *


Similar Documents

Publication Publication Date Title
US11593149B2 (en) Unified resource management for containers and virtual machines
US10972542B2 (en) Data storage method and apparatus
JP6729399B2 (ja) System, virtualization control device, method of controlling a virtualization control device, and program
JP6790835B2 (ja) Network function virtualization management and orchestration method, apparatus, and program
JP6614340B2 (ja) Network function virtualization management orchestration apparatus, method, and program
EP3468151B1 (fr) Acceleration resource processing method and apparatus
CN112256423B (zh) System, device, and process for dynamic tenant structure adjustment in a distributed resource management system
CN107430528B (zh) Opportunistic resource migration to optimize resource placement
KR101932872B1 (ko) Method, device, and program for management and orchestration of network functions virtualization
JP6174716B2 (ja) Management system, overall management node, and management method
JP6196322B2 (ja) Management system, virtual communication function management node, and management method
CN102947796B (zh) Method and apparatus for moving virtual resources in a data center environment
JP6658882B2 (ja) Control device, VNF placement destination selection method, and program
WO2016039963A2 (fr) Resource sharing between two resource allocation systems
US20180004563A1 (en) Orchestrator apparatus, system, virtual machine creation method, and computer-readable recording medium
JP6668658B2 (ja) Job management method, job management device, and program
WO2016121879A1 (fr) Virtualization control apparatus, installation destination selection method, and program
KR20200080458A (ko) Cloud multi-cluster apparatus
US10572412B1 (en) Interruptible computing instance prioritization
KR20180000204A (ko) Method, apparatus, and system for providing automatic scaling
CN110018898A (zh) Method and apparatus for selecting a virtualized infrastructure manager
Tiwari et al. Resource management using virtual machine migrations
JP2017058734A (ja) Virtual machine management method, virtual machine management device, and virtual machine management program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16743482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16743482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP