Survey results: 3 questions about virtualizing the network edge

SDxCentral recently hosted a webinar with speakers from ACG Research, Intel and Enea: “NFVI as a Foundation for Service Value at the Network Edge”. The goal was to discuss how open source-based NFV infrastructure can ensure vCPE service flexibility, performance and cost-efficiency, while providing the foundation for more advanced capabilities such as service chaining.

By Erik Larsson


Paul Parker-Johnson of ACG Research described the role of the network edge in service enablement, and how business networking has reached an inflection point where open, cloud-native designs now yield order-of-magnitude improvements in cost-effectiveness.

Chandresh Ruparel from Intel talked about the importance of common infrastructure and software scalability across cloud and edge for service innovation and agility. He also explained the role played by initiatives such as the Intel Network Builders community and the Intel Select Solutions for NFVI.

Nicolas Bouthors, NFV CTO at Enea, discussed how a lightweight NFVI software platform based on open source can provide the foundation for vCPE agility and innovation, with high networking performance, mixed virtualization technologies (KVM and/or Docker containers), complete VNF lifecycle management, and value-added Service Function Chaining (SFC).

During the webinar, three poll questions were answered by a highly qualified audience of solution vendors, systems integrators, and service providers.

Question 1: How important is it that an NFVI solution for the network edge provides flexibility for the operator to deploy VNFs agnostically and with equal ease at the customer premise or in the cloud?

More than half of the respondents (57%) deemed it absolutely necessary, and 39% answered that this flexibility is nice to have but not necessary. This confirms that flexibility is a key attribute operators are looking for, especially for the transparent deployment of VNFs between customer premises and the cloud. That is not surprising, since such flexibility is key for service agility and cost optimization.

Depending on network topology and branch size, some network functions could be deployed at a PoP data center or on the CPE. For example, a small branch office might run only routing and SD-WAN on the CPE, while VPN, firewall (FW), and intrusion detection (IDS) functions run in the data center. On the other hand, a sizable branch office could use a large CPE running the VPN, FW, and IDS functions locally.
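The placement logic described above can be sketched as a simple policy function. This is a toy illustration only: the function name, the user-count threshold, and the VNF labels are assumptions made for the example, not part of any product discussed in the webinar.

```python
# Illustrative sketch of a VNF placement policy: small branches keep the
# CPE light and push heavy functions to the data center, while sizable
# branches run the full chain on a larger CPE. All names and thresholds
# here are hypothetical.

SMALL_BRANCH_USERS = 50  # assumed cutoff for a "small" branch office

def place_vnfs(branch_users: int) -> dict:
    """Return a VNF -> location map for a branch of the given size."""
    if branch_users < SMALL_BRANCH_USERS:
        # Small branch: only routing and SD-WAN on the CPE.
        return {
            "routing": "cpe",
            "sd-wan": "cpe",
            "vpn": "datacenter",
            "firewall": "datacenter",
            "ids": "datacenter",
        }
    # Sizable branch: a large CPE hosts all functions locally.
    return {vnf: "cpe" for vnf in ("routing", "sd-wan", "vpn", "firewall", "ids")}
```

In a real deployment this decision would be driven by the orchestrator's placement policies rather than a hard-coded table, but the trade-off it encodes is the one described above.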

Question 2: Where will container-based services play the most important role?

Respondents broadly agree that containers will play a significant role, which is to be expected given the technology's clear advantages in terms of resource consumption, fast instantiation, and OS independence.

Container-based services are seen as equally important at the uCPE and in telco data centers (42% each), but probably for different reasons. Using containers in uCPEs translates into lower cost per hardware unit, while data centers benefit from increased flexibility and overall cost-efficiencies spread over a large volume of racks.

A lower percentage of respondents (15%) thought that containers showed the greatest benefits at the aggregation/PoP level.

Question 3: When do you think service chaining will be widely available for commercial use?

There has been a lot of interest in service chaining over the past couple of years, based on the technology's ability to automatically optimize the service sequence and service mix. Although there have already been some initial deployments, the webinar audience expects a gradual ramp-up of SFC during 2018 and a move to wider use in 2019.
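The core idea behind service chaining, steering traffic through an ordered sequence of network functions, can be sketched as function composition. This is a toy model; the function names and the packet representation are illustrative assumptions, not drawn from the webinar.

```python
# Toy model of Service Function Chaining: each service function takes a
# packet and returns it (possibly modified); a chain is just an ordered
# list of functions the packet is steered through. Names are hypothetical.

def firewall(pkt: dict) -> dict:
    pkt.setdefault("trace", []).append("fw")
    return pkt

def ids(pkt: dict) -> dict:
    pkt.setdefault("trace", []).append("ids")
    return pkt

def vpn(pkt: dict) -> dict:
    pkt.setdefault("trace", []).append("vpn")
    return pkt

def apply_chain(pkt: dict, chain) -> dict:
    """Steer a packet through an ordered chain of service functions."""
    for fn in chain:
        pkt = fn(pkt)
    return pkt

# Changing the service mix or sequence is just changing the list:
business_chain = [firewall, ids, vpn]
```

What SFC automates in practice is exactly this reordering and re-selection of functions per traffic class, without re-cabling or redeploying anything.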

Lots of things happening… all leading to service innovation at the edge!