Friday, September 20, 2019
Multi-Campus ICT Equipment Virtualization Architecture
Multi-campus ICT equipment virtualization architecture for cloud and NFV integrated service

Abstract: We propose a virtualization architecture for multi-campus information and communication technology (ICT) equipment with integrated cloud and NFV capabilities. The aim of this proposal is to migrate most of the ICT equipment on campus premises into cloud and NFV platforms. Adopting this architecture would make most ICT services secure and reliable and their disaster recovery (DR) economically manageable. We also analyze a cost function and show the cost advantages of the proposed architecture, describe implementation design issues, and report a preliminary experiment on an NFV DR transaction. This architecture would encourage academic institutions to migrate ICT systems located on their premises into a cloud environment.

Keywords: NFV, data center migration, disaster recovery, multi-campus network

I. INTRODUCTION

Many academic institutions have multiple campuses located in different cities. These institutions need to provide information and communication technology (ICT) services, such as e-learning, equally to all students on every campus. Usually, information technology (IT) infrastructure, such as application servers, is deployed at a main campus, and these servers are accessed by students on each campus. For this purpose, the local area network (LAN) on each campus is connected to the main campus LAN via a virtual private network (VPN) over a wide area network (WAN). In addition, Internet access service is provided to all students in the multi-campus environment. To access the Internet, security devices such as firewalls and intrusion detection systems (IDSs) are indispensable, as they protect computing resources from malicious cyber activities.

With the emergence of virtualization technologies such as cloud computing [1] and network functions virtualization (NFV) [2], [3], we expect that ICT infrastructure such as compute servers, storage devices, and network equipment can be moved economically from campuses to data centers (DCs). Some organizations have begun to move their ICT infrastructure from their own premises to outside DCs in order to improve security, stability, and reliability. There are also many contributions on achieving DR capabilities with cloud technologies [4], [5], [6]. Active-passive replication and active-active replication are typical techniques for achieving DR; in these replication schemes, a dedicated redundant backup system is required at a secondary site. With migration recovery [4], these backup resources can instead be shared among many users. These studies mainly focus on application servers, while integrated DR capability for ICT infrastructure, covering both application and network infrastructure, is still immature.
We propose a multi-campus ICT equipment virtualization architecture with integrated cloud and NFV capabilities. The aim of this proposal is to migrate entire ICT infrastructures on campus premises into cloud and NFV platforms. Adopting this architecture for multi-campus networks would improve access link utilization, security device utilization, network transmission delay, disaster tolerance, and manageability at the same time. We also analyze a cost function and show the cost advantages of the proposed architecture. To evaluate the feasibility of the proposed architecture, we built a test bed on SINET5 (Science Information NETwork 5) [7], [8], [9]. We describe the test-bed design and report a preliminary experiment on reducing the recovery time of a VNF.

The rest of this paper is organized as follows. Section II gives the background of this work. Section III presents the proposed multi-campus network virtualization architecture. Section IV evaluates the proposed architecture in terms of cost advantages and reports implementation results. Section V concludes the paper and discusses future work.

II. BACKGROUND OF THIS WORK

SINET5 is a Japanese academic backbone network that serves about 850 research institutes and universities and provides network services to about 30 million academic users. SINET5 was fully constructed and put into operation in April 2016. It plays an important role in supporting a wide range of research fields that need high-performance connectivity, such as high-energy physics, nuclear fusion science, astronomy, geodesy, seismology, and computer science. Figure 1 shows the SINET5 architecture. SINET5 provides points of presence, called SINET data centers (DCs), in each prefecture in Japan. At each SINET DC, an Internet protocol (IP) router, an MPLS-TP system, and a ROADM are deployed. The IP router accommodates access lines from research institutes and universities. Every pair of IP routers is connected by a pair of MPLS-TP paths, which provide low latency and high reliability. The IP routers and MPLS-TP systems are connected by 100-Gbps-based optical paths, so data can be transmitted from one SINET DC to another at up to 100 Gbps, and users with 100-Gbps access lines can transmit data to other users at up to 100 Gbps.

Currently, SINET5 provides a direct cloud connection service. In this service, commercial cloud providers connect their data centers directly to SINET5 with high-speed links, such as 10-Gbps links. Academic users can therefore access cloud computing resources via SINET5 with very low latency and high bandwidth and enjoy high-performance communication between campuses and cloud computing resources. Today, 17 cloud service providers are directly connected to SINET5, and more than 70 universities use cloud resources directly via SINET5.

To evaluate virtualization technologies such as cloud computing and NFV, we constructed a test-bed platform (shown as the NFV platform in Fig. 1) and will use it to evaluate the effect of network delay on ICT services. The NFV platform is deployed at four SINET DCs in major cities in Japan: Sapporo, Tokyo, Osaka, and Fukuoka. At each site, the facilities consist of computing resources such as servers and storage, network resources such as layer-2 switches, and controllers such as an NFV orchestrator and a cloud controller.
The layer-2 switch at each site is connected to the SINET5 router at the same site with a high-speed (100-Gbps) link. The cloud controller configures the servers and storage, and the NFV orchestrator configures the VNFs on the NFV platform. Users can dynamically set up and release VPNs between universities, commercial clouds, and the NFV platform over SINET with an on-demand controller. This on-demand controller configures the routers through a NETCONF interface and sets up the VPNs associated with the NFV platform through a REST interface.

Today, many universities have multiple campuses spread over a wide area. In such multi-campus universities, many VPNs (VLANs), for example hundreds of VPNs, need to be configured over SINET to extend the inter-campus LANs. To satisfy this demand, SINET has started a new VPN service, called the virtual campus LAN service. With this service, the layer-2 domains of multiple campuses can be connected as if through a single layer-2 switch, using preconfigured VLAN ranges (e.g., 1000-2000).
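As an illustration of the on-demand control described above, the short Python sketch below shows how a controller might push a VLAN sub-interface to a router over NETCONF and then associate the resulting VPN with the NFV platform over REST. The device data model, addresses, credentials, and REST endpoint are assumptions made for this sketch, not the actual SINET on-demand controller interfaces.

```python
# Illustrative sketch only: the YANG namespace, endpoint names, credentials, and
# payload structure are assumptions, not the actual SINET on-demand controller API.
import requests
from ncclient import manager

VLAN_ID = 1001  # hypothetical VLAN chosen from a preconfigured campus range (e.g., 1000-2000)

def configure_router_netconf(host: str, vlan_id: int) -> None:
    """Push a (vendor-specific, here hypothetical) VLAN sub-interface config via NETCONF."""
    config = f"""
    <config>
      <interfaces xmlns="urn:example:params:xml:ns:yang:interfaces">
        <interface>
          <name>ge-0/0/0.{vlan_id}</name>
          <vlan-id>{vlan_id}</vlan-id>
        </interface>
      </interfaces>
    </config>"""
    with manager.connect(host=host, port=830, username="admin",
                         password="secret", hostkey_verify=False) as m:
        # Target datastore support varies by device; "running" is used here for simplicity.
        m.edit_config(target="running", config=config)

def attach_vpn_to_nfv_platform(controller_url: str, vlan_id: int) -> None:
    """Ask the on-demand controller (REST) to associate the VPN with the NFV platform."""
    payload = {"vlan_id": vlan_id, "site": "nfv-dc-tokyo", "action": "attach"}
    r = requests.post(f"{controller_url}/vpn/attachments", json=payload, timeout=10)
    r.raise_for_status()

if __name__ == "__main__":
    configure_router_netconf("192.0.2.1", VLAN_ID)
    attach_vpn_to_nfv_platform("https://ondemand.example.net/api", VLAN_ID)
```

In the actual service, such requests are issued through the on-demand controller portal, and the VLAN ID would be taken from the preconfigured virtual campus LAN range.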
III. PROPOSED MULTI-CAMPUS ICT EQUIPMENT VIRTUALIZATION ARCHITECTURE

In this section, the proposed architecture is described. It consists of two parts. First, we describe the network architecture and clarify its issues. Next, the NFV/cloud control architecture is described.

A. Proposed multi-campus network architecture

The multi-campus network architecture is shown in Figure 2. There are two legacy network architectures and the proposed network architecture. In legacy architecture 1 (LA1), Internet traffic for all campuses is delivered to the main campus (shown as a green line) and checked by security devices there; the Internet traffic is then distributed to each campus (shown as a blue line). ICT applications, such as e-learning services, are deployed at the main campus, and access traffic to these applications is carried by a VPN over SINET (shown as a blue line). In legacy architecture 2 (LA2), Internet access differs from LA1: it is delivered directly to each campus and checked by security devices deployed at each campus. In the proposed architecture (PA), the main ICT applications are moved from the main campus to an external NFV/cloud DC. Students on both the main and sub-campuses access the ICT applications via a VPN over SINET. Internet traffic traverses virtual network functions (VNFs), such as virtual routers and virtual security devices, located at the NFV/cloud DC; it is checked by the virtual security devices and then delivered to each main or sub-campus via the VPN over SINET.

There are pros and cons among these architectures. Here, they are compared across five points: access link utilization, security device utilization, network transmission delay, disaster tolerance, and manageability.

(1) Access link utilization

The cost of an access link from a sub-campus to the WAN is the same in LA1, LA2, and PA. The cost of the access link from the main campus to the WAN is larger in LA1 than in LA2 and PA because redundant traffic traverses that link. In PA, on the other hand, an additional access link from the NFV/cloud DC to the WAN is required. Thus, evaluating the total access link cost is important. In this evaluation, it is assumed that the additional access links from the NFV/cloud DC to the WAN are shared among the multiple academic institutions that use the NFV/cloud platform, and the cost is evaluated taking this sharing into account.

(2) Security device utilization

LA1 and PA are more efficient than LA2 because Internet traffic is concentrated in LA1 and PA and a statistical multiplexing effect is expected. In addition, in PA, the amount of physical computing resources can be reduced because virtual security devices share physical computing resources among multiple users. Therefore, the cost of virtual security devices for each user will be reduced.

(3) Network transmission delay

The network delay of Internet traffic in LA1 is longer than in LA2 and PA because Internet traffic to the sub-campuses is detoured through the main campus in LA1. In LA2, Internet traffic to a sub-campus is delivered directly from an Internet exchange point on the WAN to that sub-campus, so the delay is suppressed. In PA, the delay can also be suppressed because the NFV/cloud data center can be located near an Internet access gateway on the WAN. However, the network delay for ICT application services will be longer in PA than in LA1 and LA2. Therefore, the effect of the longer network delay on the quality of IT application services has to be evaluated.

(4) Disaster tolerance

Regarding Internet service, LA1 is less disaster tolerant than LA2. In LA1, when a disaster occurs around the main campus and the network functions of that campus go down, students on the other sub-campuses cannot access the Internet. Regarding IT application services, in both LA1 and LA2 the services cannot be accessed by students when a disaster occurs around the main campus or the data center. In PA, the NFV/cloud DC is located in an environment robust against earthquakes and flooding, so robustness is improved compared with LA1 and LA2.

Today, disaster recovery capability is mandatory for academic institutions, so service DR functionality is required. In PA, the backup ICT infrastructure located at a secondary data center can be shared with other users. Thus, no dedicated redundant resources are required in steady-state operation, and the resource cost can be reduced. However, if VM migration cannot be fast enough to continue services, active-passive or active-active replication has to be adopted. Therefore, reducing recovery time is required in order to adopt migration recovery and achieve DR manageability more economically.

(5) Manageability

LA1 and PA are easier to manage than LA2. Because the security devices are concentrated at one site (the main campus or the NFV/cloud data center), the number of devices can be reduced, which improves manageability.

There are three issues to consider when adopting the PA:
- Evaluating the access link cost of an NFV/cloud data center.
- Evaluating the network delay effect on ICT services.
- Evaluating the migration period for migration recovery replication.

B. NFV and cloud control architecture

For the following two reasons, there is strong demand to continue using legacy ICT systems, so legacy ICT systems have to be moved to NFV/cloud DCs as virtual application servers and virtual network functions. One reason is that institutions have developed their own legacy ICT systems on their premises with vendor-specific features. The other is that an institution's workflows are not easily changed, and the same usability for end users is required. Therefore, the legacy ICT infrastructure deployed on campus premises should continue to be used in the NFV/cloud environment.

In the proposed multi-campus architecture, these application servers and network functions are controlled by per-user orchestrators. Figure 3 shows the proposed control architecture. Each institution deploys its ICT system on IaaS services. VMs are created and deleted through the application programming interface (API) provided by the IaaS providers. Each institution sets up an NFV orchestrator, an application orchestrator, and a management orchestrator on VMs. Active and standby orchestrators run in the primary and secondary data centers, respectively, and check each other's aliveness. The NFV orchestrator creates the VMs, installs the virtual network functions, such as virtual routers and virtual firewalls, and configures them. The application orchestrator installs the applications on VMs and sets them up. The management orchestrator registers these applications and virtual network functions with monitoring tools and saves the logs output by the IT service applications and network functions.

When the active data center suffers a disaster and the active orchestrators go down, the standby orchestrators detect that the active orchestrators are down and start establishing the virtual network functions and the application and management functions. After that, the VPN is connected to the secondary data center in cooperation with the VPN controller of the WAN. In this architecture, each institution can select NFV orchestrators that support its legacy systems.
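The active/standby orchestrator behavior described above can be summarized in a small sketch. The Python fragment below only illustrates the failover logic, assuming a hypothetical health-check URL on the active orchestrators and placeholder recovery hooks; it is not the interface of any particular NFV orchestrator or IaaS provider.

```python
# Illustrative failover logic for the standby orchestrators; every name here is a
# placeholder, not the API of any particular NFV orchestrator or IaaS platform.
import time
import requests

ACTIVE_HEALTH_URL = "https://orchestrator.primary-dc.example/health"  # assumed endpoint
CHECK_INTERVAL_SEC = 10
MAX_FAILURES = 3  # consecutive failed checks before declaring the active site down

def active_is_alive() -> bool:
    """Liveness check of the active orchestrators in the primary data center."""
    try:
        return requests.get(ACTIVE_HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

# Placeholder recovery hooks; in a real deployment these would call the NFV,
# application, and management orchestrators and the WAN VPN controller.
def start_virtual_network_functions(): print("recreating VNFs at the secondary DC")
def start_applications():              print("reinstalling and starting applications")
def register_monitoring_and_logging(): print("registering monitoring and log collection")
def reconnect_vpn_to_secondary_dc():   print("asking the WAN VPN controller to switch the VPN")

def standby_loop() -> None:
    failures = 0
    while True:
        failures = 0 if active_is_alive() else failures + 1
        if failures >= MAX_FAILURES:           # active site considered down
            start_virtual_network_functions()  # order follows the control architecture above
            start_applications()
            register_monitoring_and_logging()
            reconnect_vpn_to_secondary_dc()
            break
        time.sleep(CHECK_INTERVAL_SEC)
```

Because the backup resources at the secondary data center are shared among users, the standby side holds no dedicated VNFs until the take-over sequence runs.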
IV. EVALUATION OF PROPOSED NETWORK ARCHITECTURE

This section presents an evaluation of the access link cost of the proposed network architecture. The test-bed configuration is also introduced, and an evaluation of the migration period for migration recovery is shown.

A. Access link cost of NFV/cloud data center

In this sub-section, an evaluation of the access link cost of PA compared with LA1 is described. First, the network cost is defined as follows. There is an institution, u, that has a main campus and n_u sub-campuses. The traffic amount of institution u is defined as follows.

B. Test-bed configuration

Different sites can be connected between a user site and cloud sites by a SINET VPLS (Fig. 7). This VPLS can be established dynamically through a portal that uses the REST interface of the on-demand controller. For upper-layer services such as Web-based services, virtual network appliances, such as virtual routers, virtual firewalls, and virtual load balancers, are created on the servers through the NFV orchestrator. DR capabilities for the NFV orchestrator are under deployment.

C. Migration period for disaster recovery

We evaluated the VNF recovery process for disaster recovery. This process has four steps:

Step 1: Host OS installation
Step 2: VNF image copy
Step 3: VNF configuration copy
Step 4: VNF process activation

The process starts with host OS installation because some VNFs are tightly coupled with the host OS and hypervisor; there are several kinds and versions of host OS, so the host OS can be changed to suit the VNF. After host OS installation, the VNF images are copied into the created VMs. Then, the VNF configuration parameters are adjusted to the attributes of the secondary data center environment (for example, VLAN ID and IP address), and the configuration parameters are installed into the VNF. After that, the VNF is activated.
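To make the four-step recovery procedure concrete, the following Python sketch outlines the flow. The helper functions, image names, and configuration fields are placeholders introduced for illustration; they do not correspond to a specific hypervisor or orchestrator product.

```python
# Sketch of the four-step VNF recovery flow at the secondary data center.
# All helper functions are placeholders; measured durations are reported below.
from dataclasses import dataclass

@dataclass
class VnfConfig:
    vlan_id: int      # adjusted to the secondary DC network environment
    ip_address: str   # adjusted to the secondary DC addressing plan

def install_host_os(host: str, os_image: str) -> None:
    """Step 1: install a host OS/hypervisor version compatible with the VNF."""
    print(f"installing {os_image} on {host}")

def copy_vnf_image(host: str, vnf_image: str) -> None:
    """Step 2: copy the VNF image into the VM created on the host."""
    print(f"copying {vnf_image} to {host}")

def apply_vnf_configuration(host: str, config: VnfConfig) -> None:
    """Step 3: install configuration rewritten for the secondary DC (VLAN ID, IP address)."""
    print(f"configuring VNF on {host}: vlan={config.vlan_id}, ip={config.ip_address}")

def activate_vnf(host: str) -> None:
    """Step 4: start the VNF process."""
    print(f"activating VNF on {host}")

def recover_vnf(host: str, os_image: str, vnf_image: str, config: VnfConfig) -> None:
    install_host_os(host, os_image)        # ~3 min 13 s in the experiment reported below
    copy_vnf_image(host, vnf_image)        # ~3 min 19 s
    apply_vnf_configuration(host, config)  # ~11 s
    activate_vnf(host)                     # ~17 s

if __name__ == "__main__":
    recover_vnf("secondary-dc-host1", "hostos-x.y", "vrouter-image",
                VnfConfig(vlan_id=1001, ip_address="192.0.2.10"))
```

Steps 1 and 2 dominate the recovery time, which motivates the snapshot-based shortcut discussed next.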
In our test environment, a virtual router could be recovered from the primary data center to the secondary data center, and the total recovery time was about 6 minutes. The durations of Steps 1-4 were 3 min 13 s, 3 min 19 s, 11 s, and 17 s, respectively.

To shorten the recovery time, the standby VNF can currently be pre-set-up and activated. If the same configuration can be applied in the secondary data center's network environment, snapshot recovery is also available. In this case, Step 1 is eliminated, and Steps 2 and 3 are replaced by copying a snapshot of the active VNF image, which takes about 30 s, so the recovery time is about 30 s.

V. CONCLUSION

Our method using cloud and NFV functions can achieve DR at lower cost. We proposed a multi-campus equipment virtualization architecture for cloud and NFV integrated service. The aim of this proposal is to migrate entire ICT infrastructures on campus premises into cloud and NFV platforms. This architecture would encourage academic institutions to migrate ICT systems developed and located on their premises into a cloud environment. Adopting this architecture would make entire ICT systems secure and reliable, and the DR of ICT services could become economically manageable. In addition, we analyzed the cost function, showed the cost advantages of the proposed architecture, described implementation design issues, and reported a preliminary experiment on the NFV DR transaction.