Technology approach for NFV Traffic Acceleration
NFV Traffic Acceleration
In the NFV approach of virtualizing traditional network services, low latency and high VNF throughput are desired, and achieving them has been a challenge.
With a standard virtual switch implementation, extra latency is added because traffic processing happens in the hypervisor OS kernel.
With VNF transformation, a larger network service is divided into multiple smaller virtual functions, which increases inter-communication between VNF components. Smaller packet sizes can generate a higher interrupt count in the kernel, which increases latency and results in a lower packets-per-second (pps) rate.
To achieve near-line-rate performance in VNFs, a kernel-bypass mechanism for traffic processing is required in the NFV context. DPDK and SR-IOV are the two widely accepted methods for achieving this goal.
SR-IOV (Single Root I/O Virtualization) - Virtualizes the physical network interface into virtual functions (VFs) that are allocated and attached directly to VM instances, bypassing the virtual switch in the hypervisor environment. This provides a higher packet rate for North-South traffic patterns.
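As a minimal sketch of how VFs are carved out of a physical function before being attached to VM instances (the interface name enp3s0f0 and the VF count are illustrative assumptions, not from this document):

```shell
# Check how many VFs this physical function supports (NIC-dependent)
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Create 4 virtual functions on the physical function
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# The VFs now appear as separate PCI devices that can be
# passed through to VM instances
lspci | grep -i "Virtual Function"
```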
DPDK (Data Plane Development Kit) - A set of user space libraries and drivers for fast packet processing. OVS-DPDK (DPDK-accelerated Open vSwitch) replaces the standard OVS kernel forwarding path with a DPDK-based forwarding path that runs in user space. Poll Mode Drivers (PMDs) enable direct transfer of packets between user space and the physical interface, bypassing the kernel network stack. This offers a significant performance boost over kernel forwarding.
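A hedged sketch of enabling the DPDK datapath in OVS, assuming an OVS build with DPDK support; the bridge name, port name, memory/core values, and PCI address are illustrative:

```shell
# Initialize DPDK support in OVS
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Reserve hugepage memory and PMD polling cores for the user space datapath
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Create a user space (netdev) bridge and bind a physical NIC to DPDK
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0
```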
Both methods bypass the kernel and provide higher line rates for VNF network traffic, but each method has its own strengths and weaknesses.
VNF Traffic Types
VNF traffic is classified into two types:
North-South traffic - Network traffic between a VNF instance and an external network element outside the NFVi scope. North-South traffic always goes through the physical infrastructure.
East-West traffic - Network traffic between two VNF instances within the same NFVi. East-West traffic may or may not travel through the physical network, depending on the network architecture.
SR-IOV behaviour on VNF Traffic
When a VNF uses SR-IOV interfaces, the network traffic bypasses the virtual networking layer in the hypervisor and communicates directly with the physical network via a virtual function.
For both East-West and North-South traffic on a VNF with SR-IOV interfaces, packets need to traverse the physical switch.
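As a hypothetical sketch of attaching a VF directly to a VNF instance on a libvirt-managed hypervisor (the PCI address 0000:03:10.0 and the domain name vnf-instance-1 are illustrative assumptions):

```shell
# Describe the VF as a hostdev-type interface (PCI address is illustrative)
cat > vf-interface.xml <<'EOF'
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
EOF

# Attach the VF to the VNF instance; its traffic now bypasses the
# hypervisor's virtual switch entirely
virsh attach-device vnf-instance-1 vf-interface.xml --live --config
```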
OVS-DPDK behaviour on VNF Traffic
When a VNF uses OVS-DPDK interfaces, traffic bypasses the kernel space and is handled in user space. East-West traffic within the same hypervisor traverses only the virtual switch. North-South traffic always goes through the physical interface of the hypervisor, and DPDK does not accelerate the physical-network leg of North-South traffic patterns.
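A minimal sketch of how a VNF's virtio interface is typically plugged into the user space datapath with a vhost-user port (the bridge name, port name, and socket path are illustrative assumptions):

```shell
# Add a vhost-user client port for the VNF's virtio interface;
# the socket path must match the one configured in the VM definition
ovs-vsctl add-port br-dpdk vhost-vnf1 -- set Interface vhost-vnf1 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/var/run/openvswitch/vhost-vnf1
```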
SR-IOV vs OVS-DPDK
In a situation where the traffic is East-West within the same server, OVS-DPDK wins against SR-IOV: if traffic is routed or switched within the server and never reaches the NIC, there is no advantage in bringing in SR-IOV. Rather, SR-IOV can become a bottleneck, since the traffic path becomes longer and NIC resources are consumed. Therefore, workloads dominated by East-West traffic within the same server can leverage the capabilities of OVS-DPDK.
In a scenario where VNF traffic is North-South, SR-IOV wins against OVS-DPDK. OVS-DPDK introduces a bottleneck for server-to-server traffic, since packets must pass through the virtual switch on the way to and from the physical NICs.
Both SR-IOV and OVS-DPDK provide traffic acceleration and have been designed for different use cases.
VNF Designer perspective
VNF designers need to consider the traffic pattern and VNF placement when choosing interface types.
VNFs or VNFCs configured to run on the same hypervisor can leverage OVS-DPDK-based interfaces to accelerate traffic, whereas SR-IOV VF interfaces are better suited for traffic that is external, or between VNFs or VNFCs distributed across multiple hypervisors.
NFV Platform Architect perspective
The expected traffic workload needs to be taken into consideration when designing the cloud hardware deployment.
OVS-DPDK-enabled interfaces can be used for tenant and external networks where most of the traffic is East-West and stays within the hypervisor.
SR-IOV virtual functions can be utilized on external provider networks.
However, mixing OVS-DPDK and regular kernel OVS interfaces on the same hypervisor is not recommended and not supported, since traffic crossing between kernel space and user space causes severe performance degradation.