CN114640554B - Multi-tenant communication isolation method and hybrid networking method - Google Patents
- Publication number
- CN114640554B (granted publication of application CN202210139250.8A)
- Authority
- CN
- China
- Prior art keywords
- tenant
- network
- virtual network
- network card
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4641—Virtual LANs, VLANs, e.g. virtual private networks [VPN]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4633—Interconnection of networks using encapsulation techniques, e.g. tunneling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/64—Hybrid switching systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/02—Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
Abstract
The disclosure provides a multi-tenant communication isolation method and a hybrid networking method. The communication isolation method comprises the following steps: creating a first virtual network card and a second virtual network card, wherein the first virtual network card is located in a default network namespace; creating an outbound network namespace, and joining the second virtual network card to the outbound network namespace; creating a private network namespace and a first virtual network card pair for each tenant, wherein a first end of each first virtual network card pair is connected to the default network namespace and a second end is connected to the tenant's private network namespace; and creating a second virtual network card pair for each tenant, the first end of each second virtual network card pair being connected to the tenant's private network namespace and the second end being connected to the outbound network namespace. An instance implementing the method can serve as a vxlan gateway node, enabling devices not registered in a VPC to form a hybrid network with the original VPC.
Description
Technical Field
The disclosure relates to cloud technology, in particular to a multi-tenant communication isolation method and a hybrid networking method.
Background
As server virtualization in data centers advances rapidly, data-center agility and flexibility improve markedly as well. Network virtualization and the decoupling of virtual networks from physical networks make management, automation, and orchestration simpler. Once a server is virtualized, a single physical server can host multiple virtual machine instances, each with its own independent IP address and MAC address, effectively multiplying the number of servers attached to the data center.
Public clouds and other large virtualized cloud data centers often need to accommodate tens of thousands of tenants or more. With the increasing importance of data security, Virtual Private Cloud (VPC) services must be provided to these numerous tenants to isolate tenant data from one another. However, when devices that have not been registered in the VPC must be used to provide services for tenants, achieving interworking between those devices and the VPC while maintaining reliable isolation between tenants becomes a technical problem that needs to be solved in the art.
To this end, there is a need for an improved multi-tenant communication isolation scheme.
Disclosure of Invention
One technical problem to be solved by the present disclosure is to provide a multi-tenant communication isolation scheme that isolates multiple tenants from one another while still interworking with their respective VPC networks. It does so by dividing multiple namespaces on one virtual machine instance, using virtual network card pairs to transfer information between namespaces, and using different virtual network cards to handle, respectively, packet reception and packet transmission between the virtual machine instance and the outside.
According to a first aspect of the present disclosure, there is provided a multi-tenant communication isolation method, including: creating a first virtual network card and a second virtual network card, wherein the first virtual network card is positioned in a default network naming space; creating an outbound network namespace, and joining the second virtual network card into the outbound network namespace; creating a private network namespace and a first virtual network card pair for each tenant, wherein a first end of each first virtual network card pair is connected to the default network namespace, and a second end is connected to the private network namespace of the tenant; and creating a second pair of virtual network cards for each tenant, the first end of each second pair of virtual network cards being connected to the tenant's private network namespace and the second end being connected to the outbound network namespace.
Optionally, the method further comprises: creating a vxlan tunnel interface and a first bridge within the default network namespace; and adding the tunnel interface device and the first end of each first virtual network card pair to the first bridge, whereby the first bridge forwards network information received by the first virtual network card through the tunnel interface to the first end of the first virtual network card pair of the tenant to which the network information is addressed.
Optionally, the method further includes creating a plurality of VLAN subinterfaces on the second end of the corresponding first virtual network card pair within the private network namespace of at least one tenant; and configuring a gateway interface address of the corresponding VLAN for each VLAN subinterface.
Optionally, the method further includes configuring IP translation rules to perform address translation on packets whose source IP addresses belong to the VLAN subinterfaces and which are sent to the outbound network namespace via the first end of the corresponding second virtual network card pair.
Optionally, performing address translation on a data packet whose source IP address belongs to a VLAN subinterface comprises: dynamically acquiring the IP address of the first end of the corresponding second virtual network card pair, and masquerading the source IP address based on the dynamically acquired address segment.
Optionally, when the network information received by the first virtual network card is communication between different subnets of the same tenant, the information is exchanged between that tenant's VLAN subinterfaces and sent out via the first virtual network card.
Optionally, the method further comprises creating a second bridge within the outbound network namespace; and adding the second virtual network card and the second end of each second virtual network card pair to the second bridge, whereby network information to be sent is delivered by the second end of each tenant's second virtual network card pair to the second bridge and converged onto the second virtual network card for transmission.
Optionally, the method further comprises configuring FDB entries within the default network namespace for forwarding guidance of first packets of data link layer broadcasts and unknown unicasts.
Optionally, the first virtual network card pair of each tenant is used to acquire network information addressed to that tenant and received by the first virtual network card; and the second virtual network card pair of each tenant delivers network information to be sent to the second virtual network card.
According to a second aspect of the present disclosure, there is provided a hybrid networking method, including: the first virtual network card of the virtual machine executing the method according to the first aspect obtains a data packet from a tenant in an unregistered node; and the virtual machine encapsulates the data packet into a data packet conforming to a vxlan protocol and sends the encapsulated data packet to a corresponding tenant resource in a registered node through the second virtual network card.
According to a third aspect of the present disclosure, there is provided a computing device comprising: a processor; and a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described in the first aspect above.
According to a fourth aspect of the present disclosure there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method as described in the first aspect above.
Thus, the multi-tenant communication isolation scheme of the present invention can provide a vxlan gateway system running in a virtual machine on the cloud. While an upper-layer system uses vxlan to construct an overlay network supporting multiple VLANs, the system can still access the original native VPC network in the underlay, so that devices not registered in the VPC can be networked together with the native VPC. The scheme supports multiple tenants and allows isolation configuration of each tenant's network. It is simple to configure, involves no complex control system, and is completed entirely with native Linux components, making it very lightweight.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout exemplary embodiments of the disclosure.
Fig. 1 shows a schematic diagram of communication across virtual machine instances in an SDN network.
Fig. 2A-B illustrate examples of SDN networks communicating across virtual machine instances.
Fig. 3 shows a schematic flow chart of a multi-tenant communication isolation method according to one embodiment of the invention.
Fig. 4 illustrates a schematic diagram of multi-tenant communication isolation.
Fig. 5 shows a schematic diagram of a virtual machine instance as a multi-tenant vxlan gateway node.
Fig. 6 shows an example of hybrid networking with the vxlan gateway node of the present invention.
Fig. 7 illustrates a schematic architecture of a computing device that may be used to implement the multi-tenant communication isolation method or hybrid networking method described above according to an embodiment of the invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be understood that the use of "first" and "second" in this disclosure is used merely to distinguish between different instances of a homogeneous object, and does not imply any order or importance.
As previously mentioned, as server virtualization in data centers advances rapidly, so do data-center agility and flexibility. Network virtualization and the decoupling of virtual networks from physical networks make management, automation, and orchestration simpler. Once a server is virtualized, a single physical server can host multiple virtual machine instances, each with its own independent IP address and MAC address, effectively multiplying the number of servers attached to the data center.
Public clouds and other large virtualized cloud data centers often need to accommodate tens of thousands of tenants or more, and traditional VLANs, which support only about 4000 usable virtual networks, are clearly insufficient. VXLAN introduces a VLAN-ID-like network identifier into the frame header, the 24-bit VXLAN Network Identifier (VNI), and can therefore theoretically support up to 16M VXLAN segments, meeting the identification requirements of large numbers of distinct networks.
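The difference in identifier space follows directly from the field widths (12-bit VLAN ID versus 24-bit VNI); a quick arithmetic check:

```shell
# Identifier-space arithmetic: 12-bit VLAN ID vs. 24-bit VXLAN VNI.
# Two VLAN IDs (0 and 4095) are reserved, hence "about 4000" usable.
echo $(( 1 << 12 ))   # 4096 raw VLAN ID values
echo $(( 1 << 24 ))   # 16777216 VXLAN segments, i.e. the "16M" figure
```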
In addition, because the firmware of conventional network devices (switches, routers) is locked down and controlled by the device manufacturers, it cannot satisfy the rapidly changing requirements of data-center tenants. SDN (Software Defined Networking) technology was therefore introduced, splitting the tightly coupled architecture of legacy network devices into three decoupled layers: application, control, and forwarding. Control functions are moved onto servers, and the upper-layer applications and underlying forwarding facilities are abstracted into logical entities. Separating network control from the physical network topology thus removes the hardware's constraints on the network architecture, so that a user can modify the network architecture much as one upgrades or installs software, meeting the need to adjust, expand, or upgrade the entire network architecture. Since underlying switches, routers, and other hardware need not be replaced, a great amount of cost is saved and the iteration cycle of the network architecture is greatly shortened.
With ever-increasing data security requirements, SDN technology has been used to implement Virtual Private Clouds (VPCs). Each virtual private cloud VPC consists of a private network segment, a routing table, and at least one subnet. Cloud resources (e.g., cloud servers, cloud databases) must be deployed within a subnet. After a virtual private cloud VPC is created, one or more subnets may be partitioned within it. When the VPC is created, the system automatically generates a default routing table, which ensures interworking among all subnets under the same VPC. When the routing policy in the default routing table cannot meet an application's requirement (for example, a cloud server without a bound elastic public IP needs to access the external network), the problem can be solved by creating a custom routing table.
A cloud service provider may implement one VPC per tenant. Here, a tenant refers to an end user to whom resources deployed on the cloud, such as services and containers, may belong; one user may be one tenant. Network isolation may be required between tenants, i.e., each tenant is isolated in its own VPC. In a particular network arrangement, a tenant may be distributed across multiple virtual machine instances. For virtual machine instances running on the cloud to communicate with one another, the SDN system must know the corresponding network five-tuple information (the IP addresses, port information, and protocol of sender and receiver) in advance and inject the relevant flow table information throughout the SDN network. If a container system runs inside a virtual machine, or additional virtual machine network cards are configured, and these IP addresses cannot be registered with the SDN, then those container systems or other unplanned devices cannot communicate across virtual machine instances. Likewise, if software running inside the virtual machine needs to add a vlan subinterface to the network card, that is, if the interior of the virtual machine wants to reach into the underlying network and use vlan for isolation, such messages cannot be forwarded when communicating across ECS instances.
To this end, fig. 1 shows a schematic diagram of communication across virtual machine instances in an SDN network. The Elastic Compute Service (ECS) normally creates virtual machine instances as the underlying resources; each instance needs an ordinary management-level network card whose IP address is automatically allocated and configured internally by the ECS, denoted eth0 as shown. As shown in fig. 1, resources belonging to the same tenant (e.g., tenant A) are deployed on virtual machine instance 1 and instance 2. Resources on an instance may be implemented, for example, in multiple containers (not shown in fig. 1; see figs. 2A-B below for specific examples). Since a container is an independent system that sits above the operating system, it cannot see the network card eth0 configured for the virtual machine. Therefore, to achieve interworking between different instances within the same VPC, the following operations may be performed:
1. creating, inside each virtual machine (illustrated as instances 1 and 2), a vxlan tunnel interface using the Linux-provided driver, denoted: vxlan-0;
2. creating a bridge (the illustrated example is a Linux bridge rather than an openvswitch-based one), denoted: mock-br;
3. configuring fdb entries (illustrated as a bridge fdb append operation) for guiding the forwarding of first packets of data link layer broadcasts and unknown unicasts (i.e., configuring first-packet broadcast entries; subsequent detailed forwarding entries are self-learned);
4. creating a veth network card pair, one end of which is denoted veth-br, with the opposite end named after the network card the corresponding service actually wants to use, for example: eth1;
5. adding veth-br to mock-br;
6. bringing up the link state of all newly created network cards and bridge device interfaces.
At this point, eth1 is a virtual Ethernet card resembling that of an ordinary physical server: vlan subinterfaces can additionally be created on it, and the card or its vlan subinterfaces can be added to a service-specific bridge to fulfill various service requirements.
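The six steps above can be sketched with iproute2 commands roughly as follows. The VNI (100), UDP port (4789), and peer VTEP address (192.0.2.20) are illustrative assumptions, not values from this disclosure:

```shell
# 1. vxlan tunnel interface using the Linux driver, over management card eth0
ip link add vxlan-0 type vxlan id 100 dstport 4789 dev eth0

# 2. bridge, denoted mock-br, with the tunnel interface enslaved
ip link add mock-br type bridge
ip link set vxlan-0 master mock-br

# 3. first-packet broadcast / unknown-unicast guidance: one all-zeros
#    fdb entry per peer VTEP; detailed entries are then self-learned
bridge fdb append 00:00:00:00:00:00 dev vxlan-0 dst 192.0.2.20

# 4. veth pair: bridge-side end veth-br, service-side end named eth1
ip link add veth-br type veth peer name eth1

# 5. add veth-br to mock-br
ip link set veth-br master mock-br

# 6. bring up all newly created interfaces
ip link set mock-br up; ip link set vxlan-0 up
ip link set veth-br up; ip link set eth1 up
```

These commands require root privileges and assume eth0 already exists in the current namespace.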
Fig. 2A-B illustrate examples of SDN networks communicating across virtual machine instances.
As shown in FIG. 2A, eth1 of FIG. 1 may be used as the uplink interface of a container network (containers 1 and 2 as shown). Thus, containers 1 and 2 in virtual machine instance 1 can communicate, through the bridge and out via eth1, the vxlan interface, and network card eth0, with other instances in the VPC (e.g., illustrated instances 2…n) and with the containers on them.
Further, when the VPC of one tenant is divided into multiple subnets, different containers in the same virtual machine instance may belong to different subnets. In that case, as shown in fig. 2B, VLAN subinterfaces belonging to the respective VLANs may be created on the eth1 network card. As is known, VLANs are used to partition a LAN; Linux can receive packets carrying VLAN tags and treats each VLAN id as a distinct network interface. Accordingly, eth1.9 and eth1.11 are illustrated, corresponding to vlan id 9 and vlan id 11, respectively. Thus, container 1, belonging to VLAN 9, can exchange messages via subinterface eth1.9, while containers 2, 3, and 4, belonging to VLAN 11, can exchange messages via subinterface eth1.11.
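Creating the illustrated VLAN subinterfaces is a standard iproute2 operation; a minimal sketch, assuming eth1 already exists:

```shell
# VLAN subinterfaces on eth1 for VLAN IDs 9 and 11 (cf. Fig. 2B)
ip link add link eth1 name eth1.9  type vlan id 9
ip link add link eth1 name eth1.11 type vlan id 11
ip link set eth1.9 up
ip link set eth1.11 up
```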
In the above example, a vxlan interface driven by openvswitch may be used instead of the vxlan interface driven by Linux, and openvswitch flow table entries may be used instead of the bridge fdb forwarding entries of the Linux system. In addition, the container uplink shown in fig. 2A and the vlan subinterfaces shown in fig. 2B may both be present in a single example.
Thus, by configuring first-packet broadcast entries and letting subsequent detailed forwarding entries be self-learned, users (i.e., different tenants) are ultimately offered a service whose experience is exactly the same as that of a traditional network card, in other words transparent to the user. At the same time, the frequent entry-update actions caused by frequent dynamic changes in a specific service system are decoupled, and the upper-layer service system need not be aware of them.
The examples shown in figs. 1 and 2A-B above may be regarded as devices that have not yet been registered in the vpc. Within the VPC arrangement, it is required that the network devices of such a cross-node vxlan tunnel system running on the cloud can interwork with the original VPC of the cloud service, while also achieving customized multi-tenant network isolation.
The present scheme is therefore proposed: by dividing multiple namespaces on one virtual machine instance, using virtual network card pairs to transfer information between namespaces, and using different virtual network cards to handle, respectively, packet reception and transmission between the virtual machine instance and the outside, multi-tenant isolation is achieved while each tenant still interworks with its respective VPC network.
Fig. 3 shows a schematic flow chart of a multi-tenant communication isolation method according to one embodiment of the invention. The method may be a method implemented on one virtual machine instance of the SDN network. Fig. 4 illustrates a schematic diagram of multi-tenant communication isolation.
In step S310, a first virtual network card (e.g., network card eth0 in fig. 4) and a second virtual network card (e.g., network card eth1 in fig. 4) are created. The first virtual network card eth0 is located in a default network namespace (default net namespace). Here, a namespace refers to an operating-system-level resource isolation technology in Linux that partitions the global resources of Linux into per-namespace scopes; resources in different namespaces are invisible to one another, so processes in one namespace cannot perceive the processes and resources of another.
In step S320, an outbound network namespace (i.e., the EXT network namespace) is created, and the additionally created second virtual network card eth1 is added to the outbound network namespace.
In step S330, a private network namespace and a first virtual network card pair are created for each tenant, with a first end of each first virtual network card pair connected to the default network namespace and a second end connected to the tenant's private network namespace. As shown in fig. 4, assuming there are multiple tenants A-X, a dedicated network namespace may be created for each tenant, e.g., a tenant A network namespace, a tenant B network namespace, and so on up to a tenant X network namespace, while a first virtual network card pair (veth) is created for each tenant. Here, veth refers to a virtual Ethernet card; veth cards are virtual network cards provided by the Linux system that come in pairs, with their transmit and receive ends directly connected, forming a veth network card pair. A message sent into one end of the pair emerges directly at the other end, and vice versa. As shown, a virtual network card pair veth-tenant-a may be created for tenant A, comprising two network cards (i.e., the two ends of the pair), one card connected to the default network namespace and the other card connected to that tenant's dedicated tenant A network namespace. Similarly, a virtual network card pair veth-tenant-B may be created for tenant B, one card connected to the default network namespace and the other connected to the dedicated tenant B network namespace, and so on up to the pair veth-tenant-X for tenant X, one end connected to the default network namespace and the other connected to the dedicated tenant X network namespace.
Further, in step S340, a second virtual network card pair (veth-ext) is created for each tenant, with a first end of each second virtual network card pair connected to the tenant's private network namespace and a second end connected to the outbound network namespace. As shown in fig. 4, a second virtual network card pair may be created for each of the tenant A and tenant B network namespaces, and so on up to the tenant X network namespace. As shown, a virtual network card pair veth-EXT-tenant-a may be created for tenant A, comprising two network cards (i.e., the two ends of the pair), one connected to that tenant's dedicated tenant A network namespace and the other connected to the EXT network namespace. Similarly, a virtual network card pair veth-EXT-tenant-B may be created for tenant B, one card connected to the dedicated tenant B network namespace and the other connected to the EXT network namespace, and so on up to the pair veth-EXT-tenant-X for tenant X, one end connected to the dedicated tenant X network namespace and the other likewise connected to the EXT network namespace.
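Steps S320 through S340 can be sketched for a single tenant A with iproute2 commands as follows. The namespace name "ext" and the "-peer"/"-br" suffixes on the far ends of each pair are illustrative assumptions (the disclosure names both ends of a pair identically):

```shell
# S320: outbound (EXT) namespace, with eth1 moved into it
ip netns add ext
ip link set eth1 netns ext

# S330: tenant A's private namespace and first veth pair;
#       first end stays in the default namespace
ip netns add tenant-a
ip link add veth-tenant-a type veth peer name veth-tenant-a-peer
ip link set veth-tenant-a-peer netns tenant-a

# S340: second veth pair; first end into the tenant namespace,
#       second end into the EXT namespace
ip link add veth-ext-tenant-a type veth peer name veth-ext-tenant-a-br
ip link set veth-ext-tenant-a netns tenant-a
ip link set veth-ext-tenant-a-br netns ext
```

The same sequence would be repeated for tenants B through X.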
Thus, the present invention isolates the EXT namespace and the multiple tenant namespaces outside the default namespace to achieve isolation between tenants. Meanwhile, information exchange between the virtual machine and the outside is carried out through the two configured virtual network cards eth0 and eth1, and each tenant space, through its two veth pairs, can respectively receive information from eth0 (e.g., via a vxlan tunnel interface) and send information via eth1 of the EXT namespace.
Specifically, the first virtual network card pair of each tenant may be used to obtain the network information addressed to that tenant and received by the first virtual network card, while the second virtual network card pair of each tenant delivers network information to be sent to the second virtual network card. As shown in fig. 4, for example, veth-tenant-a in the network namespace of tenant A may receive network information belonging to tenant A from eth0; after some internal processing, the information is passed into the EXT namespace through veth-EXT-tenant-a and sent out via eth1.
This multi-namespace, multi-virtual-network-card structure is particularly suitable for implementing a vxlan gateway system inside a virtual machine. The system enables an upper-layer system on the cloud computing service, while using vxlan to construct an overlay network supporting multiple VLANs, to still access the original native VPC network in the underlay, so that devices not registered in the VPC can be networked together with the native VPC. When multiple tenants exist, the scheme allows isolated configuration of each tenant's network.
To this end, the method further comprises: creating a vxlan tunnel interface and a first bridge within the default network namespace; and adding the vxlan tunnel interface device and the first end of each first virtual network card pair to the first bridge, whereby the first bridge forwards the network information received by the first virtual network card through the tunnel interface to the first end of the first virtual network card pair of the tenant to which the network information is addressed.
As shown in fig. 4, the default network namespace includes a Linux bridge (the first bridge), to which the vxlan tunnel interface device and veth-tenant-A, veth-tenant-B, … up to veth-tenant-X as described above are added. Here, vxlan is a UDP-based network tunneling and isolation technique that encapsulates original data link layer network data in UDP messages for transmission. Specifically, the vxlan tunneling protocol encapsulates layer 2 Ethernet frames into layer 3 UDP packets to create virtualized layer 2 subnets, or segments, spanning the physical layer 3 network. Each layer 2 subnet is uniquely identified by a VXLAN Network Identifier (VNI), which segments the traffic. In other words, the vxlan tunnel interface shown in fig. 4 may encapsulate the packets acquired from eth0 in accordance with the vxlan tunneling protocol.
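The default-namespace side of Fig. 4 could be assembled roughly as follows; the bridge name br-default, VNI 100, and UDP port 4789 are illustrative assumptions:

```shell
# First (Linux) bridge in the default namespace, with the vxlan tunnel
# interface and the first end of each tenant's first veth pair enslaved.
ip link add br-default type bridge
ip link add vxlan-0 type vxlan id 100 dstport 4789 dev eth0
ip link set vxlan-0 master br-default
ip link set veth-tenant-a master br-default   # repeated per tenant
ip link set br-default up
ip link set vxlan-0 up
```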
Because data packets are encapsulated in accordance with the vxlan tunnel protocol, the method may further comprise: configuring FDB entries in the default network namespace for first-packet forwarding guidance of data link layer broadcast and unknown unicast traffic. Configuring the first-packet broadcast entry in this way facilitates the subsequent self-learning of the detailed forwarding entries.
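Such an FDB entry could be configured as below; the all-zero MAC address acts as the default destination for broadcast and unknown-unicast first packets, and the remote VTEP address 192.0.2.10 is an illustrative assumption:

```shell
# Default FDB entry on the vxlan device: flood broadcast/unknown-unicast
# first packets to the assumed remote VTEP at 192.0.2.10.
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.0.2.10
```

Once the first packets have been flooded this way, the bridge and vxlan device learn the detailed per-MAC forwarding entries automatically.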
Accordingly, to process data from each tenant namespace, the method may further include: creating a second bridge within the outbound network namespace; and adding the second virtual network card and the second end of each second virtual network card pair to the second bridge, the second end of each tenant's second virtual network card pair delivering the network information to be sent to the second bridge, where it converges on the second virtual network card for sending. Also, as shown in fig. 4, the EXT network namespace includes a Linux bridge (the second bridge), to which eth1 and veth-EXT-tenant-A, veth-EXT-tenant-B, … up to veth-EXT-tenant-X as described above are added. eth1 is thus able to correctly transmit network data coming from the different tenant namespaces.
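A corresponding sketch of the EXT namespace side (again with illustrative names, and assuming the veth-ext pairs have been created so that one end sits in the EXT namespace) might look like:

```shell
# Create the outbound (EXT) namespace and move eth1 into it.
ip netns add EXT
ip link set eth1 netns EXT

# Create the second bridge inside EXT and attach eth1 to it.
ip netns exec EXT ip link add br-ext type bridge
ip netns exec EXT ip link set eth1 master br-ext

# Attach the EXT-side end of tenant A's second veth pair
# (repeated for tenant B … X).
ip netns exec EXT ip link set veth-ext-tenant-A master br-ext

ip netns exec EXT ip link set br-ext up
ip netns exec EXT ip link set eth1 up
```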
Furthermore, the invention can further support subnet division within a tenant. To this end, the method may further comprise: creating a plurality of VLAN subinterfaces on the second end of the corresponding first virtual network card pair within the private network namespace of at least one tenant; and configuring, for each VLAN subinterface, the gateway interface address of the corresponding VLAN. Specifically, IP conversion rules may be configured so that data packets whose source IP addresses belong to the VLAN subinterfaces undergo address conversion when sent into the outbound network namespace by the first end of the corresponding second virtual network card pair. For example, the IP address of the first end of the corresponding second virtual network card pair may be dynamically acquired, and the source IP address masqueraded based on the dynamically acquired address segment.
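Inside one tenant's namespace, the VLAN subinterface and masquerade configuration could be sketched as follows; the vlan id (10), the subnet 10.0.10.0/24, and the interface names veth-A-in and veth-ext-A-in are all illustrative assumptions:

```shell
# VLAN sub-interface on the tenant-side end of the first veth pair,
# carrying the gateway address of the assumed 10.0.10.0/24 subnet.
ip netns exec tenant-A ip link add link veth-A-in name veth-A-in.10 type vlan id 10
ip netns exec tenant-A ip addr add 10.0.10.1/24 dev veth-A-in.10
ip netns exec tenant-A ip link set veth-A-in.10 up

# Masquerade packets sourced from the vlan subnet when they leave
# via the tenant's dedicated veth-ext card toward the EXT namespace.
ip netns exec tenant-A iptables -t nat -A POSTROUTING \
    -s 10.0.10.0/24 -o veth-ext-A-in -j MASQUERADE
```

The MASQUERADE target picks up the current IP address of the outgoing interface dynamically, which matches the "dynamically acquired address" behavior described above.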
Fig. 5 shows a schematic diagram of a virtual machine instance as a multi-tenant vxlan gateway node. Like fig. 4, fig. 5 includes a default network namespace, an EXT network namespace, and network namespaces specific to individual tenants. Due to space limitations, only the network namespaces of tenant A and tenant B, with the configuration of the corresponding virtual ethernet cards, are shown in fig. 5, but it will be appreciated that in the example shown in fig. 5, more tenant-dedicated network namespaces may be included on the virtual machine serving as the vxlan gateway. In addition, unlike fig. 4, the multi-tenant vxlan gateway node shown in fig. 5 also supports partitioning of each tenant's internal subnets (corresponding to different vlans).
A cloud computing service can normally create virtual machine instances as basic resources; management-level network cards need to be provisioned in the virtual machines, and the IP addresses of these network cards are automatically allocated and configured into the virtual machines by the cloud computing service. These network cards may be denoted eth0, for example.
For a virtual machine instance that serves as a multi-tenant vxlan gateway node, the following operations need to be performed:
0. The virtual machine serving as the vxlan gateway needs to be provisioned with 2 virtual network cards, namely eth0 and eth1;
1. Creating an EXT network namespace, moving eth1 into this namespace, creating a dedicated linux bridge (i.e., corresponding to the second bridge in fig. 4) within the namespace, and adding eth1 to the bridge;
2. Creating a vxlan tunnel interface, using the driver provided by linux, in the default network namespace;
3. Creating a bridge device based on a linux bridge (i.e., corresponding to the first bridge in fig. 4) within the default network namespace, and adding the vxlan device to the linux bridge;
4. Configuring fdb entries in the default network namespace for first-packet forwarding guidance of data link layer broadcast and unknown unicast (because data packets are encapsulated with the vxlan protocol);
5. Creating a veth network card pair for each tenant in the default network namespace, and adding one end of each veth pair to the linux bridge;
6. Creating a dedicated network namespace for each individual tenant, i.e., tenant A, tenant B, … tenant X net namespaces, and moving the other veth card of the corresponding tenant into the network namespace to which that tenant belongs;
7. Creating, on the veth network card in each individual tenant's private network namespace, the vlan sub-interfaces belonging to each vlan, where the specific vlan values are decided by the tenant's actual needs together with global configuration considerations, the vlan ids must not conflict with those of other tenants, and each sub-interface is configured with the gateway interface address corresponding to the CIDR (Classless Inter-Domain Routing) block held by that vlan;
8. Creating a veth-EXT network card pair for each tenant in the EXT network namespace, and adding one end of each pair to the linux bridge of that namespace;
9. In the EXT network namespace, moving the other veth-EXT network card of the corresponding tenant into the network namespace of that tenant;
10. Within each individual tenant's private network namespace, configuring veth-ext with an external IP allocated to the tenant; this IP address is in fact an IP address legally registered in the VPC and supported by the cloud computing service (if multiple IP addresses are needed, the auxiliary IP function is required), so that the data (the ext-IP for tenant X, which belongs to the ecs VPC) will not be discarded in the sdn network; and, within each individual tenant's private network namespace, configuring a default route pointing to the gateway IP address of the VPC so that the virtual machine can communicate normally with the other virtual machines in the VPC;
11. Within each individual tenant's private network namespace, enabling the linux ip_forward capability (thereby enabling forwarding between the different vlan ports);
12. Within each individual tenant's private network namespace, configuring iptables rules so that packets whose source IP addresses belong to the CIDRs of vlan1–vlanX are source-address masqueraded (MASQUERADE) when sent out via the tenant's dedicated veth-ext network card; for example, packets whose source IP addresses belong to CIDR 1 of vlanA are masqueraded to the dynamically acquired IP address of the veth-ext-tenant-A network card when sent out via that card.
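Of the steps above, the per-tenant external path (steps 8 through 11) can be sketched for a single tenant as follows; the interface names, the external address 172.16.0.100/24 and the gateway 172.16.0.1 are illustrative assumptions standing in for the VPC-registered addresses the cloud service would actually allocate:

```shell
# Steps 8-9: create the tenant's veth-EXT pair inside the EXT namespace,
# attach one end to the EXT bridge, move the other end into the tenant.
ip netns exec EXT ip link add veth-ext-tenant-A type veth peer name veth-ext-A-in
ip netns exec EXT ip link set veth-ext-tenant-A master br-ext
ip netns exec EXT ip link set veth-ext-A-in netns tenant-A

# Step 10: assign the VPC-registered external IP and a default route
# toward the VPC gateway (addresses assumed).
ip netns exec tenant-A ip addr add 172.16.0.100/24 dev veth-ext-A-in
ip netns exec tenant-A ip route add default via 172.16.0.1
ip netns exec tenant-A ip link set veth-ext-A-in up

# Step 11: enable forwarding between the tenant's vlan ports.
ip netns exec tenant-A sysctl -w net.ipv4.ip_forward=1
```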
At this point, data packets received by the vxlan interface in the default network namespace can, after the processing inside each individual tenant's network namespace, be masqueraded as the IP of the dedicated veth-ext network card and forwarded over the eth1 link to the tenant's vpc network in the cloud for communication. This achieves multi-tenant network isolation while at the same time allowing interworking with each tenant's own vpc network.
Fig. 6 shows an example of hybrid networking with the vxlan gateway node of the present invention. As shown in examples 1 and 2 on the left side of fig. 6, multiple tenants on nodes not registered in the vpc use multiple vlan sub-interfaces based on eth1 (a veth-driven card) for cross-cloud network communication over the vlan-capable overlay. In order to interwork with a conventional virtual machine that is located in the sdn network of the vpc and not on the overlay, the virtual machine implemented as the vxlan gateway node by the multi-tenant isolation method of the invention forwards data packets within the corresponding namespaces to achieve direct interworking. For example, the data packets from each tenant of examples 1 and 2 can normally access the nodes in the vpc on the right side through the ext namespace of the vxlan gateway node, while the containers of different tenants on the left side cannot communicate with one another directly.
Therefore, the invention can also be realized as a hybrid networking method, comprising: the first virtual network card of a virtual machine executing the multi-tenant communication isolation method acquiring a data packet from a tenant in an unregistered node; and the virtual machine encapsulating the data packet into a data packet conforming to the vxlan protocol and sending the encapsulated data packet to a corresponding tenant resource in a registered node (i.e., a node of the original vpc) through the second virtual network card.
It should be understood that, in the example shown in fig. 6, if a tenant container in a virtual machine on the left side that is not registered in the vpc wants to communicate with a container of the same tenant registered in the vpc on the right side, the message must pass through the vxlan gateway node proposed by the present invention: it is received by eth0 of the illustrated vxlan gateway node and then sent out by eth1. In yet another embodiment, if a tenant container in a virtual machine on the left side that is not registered in the vpc wants to interact with a container in a different subnet of the same tenant (also in an unregistered virtual machine on the left side of the drawing), the sending and receiving of the message still go through the illustrated vxlan gateway node, but in this case the message is both received and sent back out by eth0. Specifically, when containers of the same tenant communicate across vlan IDs, that is, in intra-tenant inter-subnet communication between different CIDRs, the message still enters from eth0, is routed between the tenant's vlan sub-interfaces, and is then sent back out from eth0, without passing through the ext veth or eth1. In other words, the vxlan gateway node provided by the invention can be used both for interworking between unregistered virtual machines and the original vpc network with isolation between different tenants, and as a gateway for cross-subnet communication inside each tenant among the unregistered virtual machines (through the vlan sub-interface interworking inside the tenant namespace shown in fig. 5), thereby further improving the communication performance of the hybrid network.
Fig. 7 illustrates a schematic architecture of a computing device that may be used to implement the multi-tenant communication isolation method described above according to an embodiment of the invention.
Referring to fig. 7, a computing device 700 includes a memory 710 and a processor 720.
Processor 720 may be a multi-core processor or may include multiple processors. In some embodiments, processor 720 may include a general-purpose host processor and one or more special coprocessors, such as a graphics processing unit (GPU), a digital signal processor (DSP), and so on. In some embodiments, processor 720 may be implemented using custom circuitry, for example an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
Memory 710 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions required by processor 720 or other modules of the computer. The persistent storage may be a readable and writable storage device, i.e., a non-volatile memory device that does not lose its stored instructions and data even after the computer is powered down. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the persistent storage. In other embodiments, the persistent storage may be a removable storage device (e.g., a diskette or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Furthermore, memory 710 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory); magnetic disks and/or optical disks may also be employed. In some embodiments, memory 710 may include readable and/or writable removable storage devices such as compact discs (CDs), digital versatile discs (e.g., DVD-ROMs, dual-layer DVD-ROMs), read-only Blu-ray discs, super-density discs, flash memory cards (e.g., SD cards, mini SD cards, micro-SD cards, etc.), magnetic floppy disks, and the like. The computer-readable storage media do not contain carrier waves or transient electronic signals transmitted wirelessly or over wires.
The memory 710 has stored thereon executable code that, when processed by the processor 720, causes the processor 720 to perform the multi-tenant communication isolation method described above.
The multi-tenant communication isolation scheme according to the present invention has been described in detail above with reference to the accompanying drawings. The scheme can provide a vxlan gateway system running in a virtual machine on the cloud; the system enables an upper-layer system, when using vxlan to construct an overlay network supporting a multi-vlan format, to access the original VPC native network in the underlay, so that devices not registered in the VPC can be networked together with the native VPC. The scheme supports the presence of multiple tenants and allows the tenant networks to be configured in isolation. The scheme is simple to configure, involves no complex control system, and completes its configuration using native linux components, making it very lightweight.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for performing the steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (11)
1. A multi-tenant communication isolation method, comprising:
Creating a first virtual network card and a second virtual network card, wherein the first virtual network card is positioned in a default network naming space;
Creating an outbound network namespace, and joining the second virtual network card into the outbound network namespace;
Creating a private network namespace and a first virtual network card pair for each tenant, wherein a first end of each first virtual network card pair is connected to the default network namespace, and a second end is connected to the private network namespace of the tenant; and
Creating a second pair of virtual network cards for each tenant, the first end of each second pair of virtual network cards being connected to the tenant's private network namespace, the second end being connected to the outbound network namespace,
The first virtual network card pair of each tenant is used for acquiring network information which is received by the first virtual network card and addressed to the tenant; and the second virtual network card pair of each tenant delivers network information to be sent to the second virtual network card.
2. The method of claim 1, further comprising:
Creating a vxlan tunnel interface and a first bridge within the default network namespace;
and adding the tunnel interface device and the first end of each first virtual network card pair into the first network bridge, the first network bridge forwarding the network information received by the first virtual network card, via the tunnel interface, to the first end of the first virtual network card pair of the tenant to which the network information is addressed.
3. The method of claim 1, further comprising:
Creating a plurality of VLAN subinterfaces on the second end of the corresponding first virtual network card pair in the private network naming space of at least one tenant; and
And configuring a gateway interface address of a corresponding VLAN for each VLAN sub-interface.
4. A method as in claim 3, further comprising:
And configuring an IP conversion rule to perform address conversion on data packets whose source IP addresses belong to the VLAN subinterfaces and which are sent into the outbound network namespace by the first end of the corresponding second virtual network card pair.
5. The method of claim 4, wherein address translating the data packet having the source IP address belonging to the VLAN sub-interface comprises:
and dynamically acquiring the IP address of the first end of the corresponding second virtual network card pair, and masquerading the source IP address based on the dynamically acquired address segment.
6. The method of claim 3, wherein when the network information received by the first virtual network card is used for communication between different subnets within the same tenant, the network information is communicated between VLAN subinterfaces of the tenant and transmitted via the first virtual network card.
7. The method of claim 1, further comprising:
Creating a second bridge within the outbound network namespace;
and adding the second virtual network card and the second end of each second virtual network card pair into the second network bridge, and delivering the network information to be sent by the second end of each second virtual network card pair of each tenant to the second network bridge and converging the network information to the second virtual network card for sending.
8. The method of claim 1, further comprising:
And configuring FDB entries in the default network namespace for first-packet forwarding guidance of data link layer broadcast and unknown unicast.
9. A hybrid networking method comprising:
the first virtual network card of the virtual machine performing the method of any of claims 1 to 8 obtaining a data packet from a tenant in an unregistered node; and
And the virtual machine encapsulates the data packet into a data packet conforming to the vxlan protocol and sends the encapsulated data packet to a corresponding tenant resource in the registered node through the second virtual network card.
10. A computing device, comprising:
A processor; and
A memory having executable code stored thereon, which when executed by the processor causes the processor to perform the method of any of claims 1 to 9.
11. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1 to 9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210139250.8A CN114640554B (en) | 2022-02-15 | 2022-02-15 | Multi-tenant communication isolation method and hybrid networking method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114640554A CN114640554A (en) | 2022-06-17 |
| CN114640554B true CN114640554B (en) | 2024-07-12 |
Family
ID=81946077
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210139250.8A Active CN114640554B (en) | 2022-02-15 | 2022-02-15 | Multi-tenant communication isolation method and hybrid networking method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114640554B (en) |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115277419B (en) * | 2022-08-09 | 2024-01-26 | 湖南大学 | Acceleration network starting method in service-free calculation |
| CN115473760B (en) * | 2022-08-31 | 2023-12-26 | 上海仙途智能科技有限公司 | Data transmission method and device, terminal equipment and computer readable storage medium |
| CN117857062A (en) * | 2022-09-29 | 2024-04-09 | 中兴通讯股份有限公司 | Tenant management method, equipment and storage medium |
| CN116049896A (en) * | 2023-03-29 | 2023-05-02 | 中孚安全技术有限公司 | Method, system, equipment and medium for realizing data isolation under linux system |
| CN116488959A (en) * | 2023-05-11 | 2023-07-25 | 阿里巴巴(中国)有限公司 | Network system, node and communication method based on virtual expansion local area network |
| CN119052196A (en) * | 2024-09-05 | 2024-11-29 | 浪潮云信息技术股份公司 | Multicast traffic forwarding device, method, equipment and medium |
| CN119696956B (en) * | 2024-12-31 | 2025-09-30 | 苏州元脑智能科技有限公司 | Network deployment method, device, equipment and computer readable storage medium |
| CN119865524A (en) * | 2025-01-03 | 2025-04-22 | 浪潮云信息技术股份公司 | Dynamic cloud connection method, device, equipment and medium applied to VPC |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107959614A (en) * | 2017-10-30 | 2018-04-24 | 广东睿江云计算股份有限公司 | A kind of self-defined network-building method of multi-tenant based on network namespace, system |
| CN112953858A (en) * | 2021-03-05 | 2021-06-11 | 网宿科技股份有限公司 | Message transmission method in virtual network, electronic device and storage medium |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104394130B (en) * | 2014-11-12 | 2017-07-25 | 国云科技股份有限公司 | A kind of multi-tenant virtual network partition method |
| US10148611B2 (en) * | 2015-03-30 | 2018-12-04 | EMC IP Holding Company LLC | Network address sharing in a multitenant, monolithic application environment |
| US11422840B2 (en) * | 2015-08-28 | 2022-08-23 | Vmware, Inc. | Partitioning a hypervisor into virtual hypervisors |
| CN105812222A (en) * | 2016-03-10 | 2016-07-27 | 汉柏科技有限公司 | Multi-tenant virtual network and realization method based on virtual machine and container |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114640554B (en) | Multi-tenant communication isolation method and hybrid networking method | |
| US11765000B2 (en) | Method and system for virtual and physical network integration | |
| EP3984181B1 (en) | L3 underlay routing in a cloud environment using hybrid distributed logical router | |
| KR102054338B1 (en) | Routing vlan tagged packets to far end addresses of virtual forwarding instances using separate administrations | |
| US10530656B2 (en) | Traffic replication in software-defined networking (SDN) environments | |
| JP5410614B2 (en) | Enterprise layer 2 seamless site expansion in cloud computing | |
| EP3664383B1 (en) | Scalable handling of bgp route information in vxlan with evpn control plane | |
| US9967182B2 (en) | Enabling hardware switches to perform logical routing functionalities | |
| US10367733B2 (en) | Identifier-based virtual networking | |
| US20150124823A1 (en) | Tenant dhcp in an overlay network | |
| US20130124750A1 (en) | Network virtualization without gateway function | |
| CN108199963B (en) | Message forwarding method and device | |
| US10020954B2 (en) | Generic packet encapsulation for virtual networking | |
| US20250310240A1 (en) | Communication Method, Gateway, and Management Method and Apparatus in Hybrid Cloud Environment | |
| WO2023109398A1 (en) | Packet transmission method and apparatus | |
| US12375394B2 (en) | Method and system for facilitating multi-tenancy routing in virtual private cloud |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||