As VMware pushes NSX into the spotlight, Cisco responds that the network is already well utilized.
Scrolling through hundreds of post-VMworld articles, I ran into one from Embrane showing that network virtualization is still not well understood. In Embrane’s post they referenced a blog post by Padmasree Warrior, CTO of Cisco, about the limitations of a software-only approach to datacenter networking. In Warrior’s post, she comments “Underutilization is not a problem in the network. In fact, server virtualization is pushing the limits of today’s network utilization and driving demand for higher port counts ….” Let’s discuss network utilization.
What is Network Utilization?
While network interfaces may be well utilized (at least at the aggregation layer), the power of networking is underutilized. Networking is not just about speeds and feeds. Cisco and other networking hardware vendors are limited, by design, in the processing power, memory and other resources they can offer in their hardware products. It’s hard, if not impossible, to slap the latest processor technology into a product designed around earlier technology, never mind the possible software issues. This leaves a gap where features and functions may be underutilized due to limitations of the onboard system hardware running the software that manages the networking hardware (brains vs. brawn).
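To make the "speeds and feeds" sense of utilization concrete: interface utilization is simply the bits moved across a window divided by the link's capacity over that window. A minimal sketch in Python (the counter values, window, and link speed below are hypothetical, the kind of numbers you might pull from SNMP octet counters):

```python
def link_utilization(bytes_t0, bytes_t1, interval_s, link_bps):
    """Percent utilization of a link between two octet-counter samples.

    bytes_t0, bytes_t1: interface byte counters at the start/end of the window
    interval_s: seconds between the two samples
    link_bps: link speed in bits per second
    """
    bits_moved = (bytes_t1 - bytes_t0) * 8  # counters are in bytes
    return 100.0 * bits_moved / (interval_s * link_bps)

# Hypothetical example: a 10 GbE port that moved 45 GB in a 60-second window
print(link_utilization(0, 45_000_000_000, 60, 10_000_000_000))  # 60.0 (%)
```

Even a port running at 60% like this can still host an underutilized *network*, in the sense the rest of this post uses the word: plenty of bits flowing, but features left on the table.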
The demand for more ports can push companies toward less expensive ToR hardware, putting the more expensive hardware in the EoR or aggregation position (if the company upgrades the core network on an ongoing basis). While you can apply features at the EoR and aggregation layers, they are still necessary at the ToR and even at the server virtualization level. Network Virtualization can help put features in the right places.
Why Network Virtualization Makes Sense
As I discussed earlier, networking hardware vendors are limited in how much processing power, memory and other resources they can offer in their products. Builders of PC-based systems are far less constrained, and many vendors can offer prototype hardware for development. Companies like Nutanix offer systems that can be stacked to add more processing, memory and storage quickly and efficiently.
Essentially, server hardware is in a better position to make fast networking decisions than the processor of a dedicated networking device. While there is still a need for NPUs, TCAMs and other high-speed forwarding hardware, the brain doesn’t have to stay with it. A software-only network either does not exist or always exists, depending on how you look at things: there is always hardware in a software-driven network, and hardware requires software to function. Utilizing the latest processing hardware to offload and assist your networking hardware can benefit both sides.
Don’t Replace The Network, Enhance The Network
I am a firm believer in utilizing hardware to the fullest. If your current networking infrastructure can handle the traffic but needs some extra features, look for ways to provide those features without a forklift upgrade. One of the easier ways to do this is to push some of the traffic control onto the servers themselves. You can accomplish this with a software router like the one from Brocade/Vyatta or even Cisco’s CSR 1000V. I have not had time to look into NSX, so I cannot comment there other than to say it sounds like many of the same features are available.
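As a rough illustration of what "traffic control on the servers" can look like, here is a minimal configuration sketch in the style of VyOS (the open-source descendant of Vyatta), giving a VM-hosted router an uplink, a VM-facing network, a default route and a basic stateful firewall. The interface names and addresses are placeholders, and exact command syntax varies by release:

```
set interfaces ethernet eth0 address '203.0.113.2/24'
set interfaces ethernet eth1 address '192.168.10.1/24'
set protocols static route 0.0.0.0/0 next-hop '203.0.113.1'
set firewall name VM-IN default-action 'drop'
set firewall name VM-IN rule 10 action 'accept'
set firewall name VM-IN rule 10 state established 'enable'
set interfaces ethernet eth1 firewall in name 'VM-IN'
commit
```

The point is that this is ordinary router configuration, just running on server hardware next to the workloads, which is why it can be handed to the same team that runs the physical routers.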
Virtualized routers can be managed by the same team who manages your physical routers and switches, leaving network control where it has been, in the hands of the network engineers.
Jason Iannone says
My vision is skewed as a service provider. I would like to reduce and homogenize the number of CPE devices installed at customer locations. I look at NFV as a solution to that problem.
We sell access with a number of value-adds, such as security and managed router solutions. These managed services result in over one hundred deployment scenarios. If I can virtualize my CPE with industry-standard hardware and whatever flavor of hypervisor at the central office, I can reduce the number of deployment scenarios by an order of magnitude. If I can push the forwarding plane to the access edge, even better.
I understand, to a degree, the disruption to operations and the organizational impact noted by John Vincenzo at Embrane, but these are growing pains related to streamlining service instantiation and modification. One of the open questions for me is troubleshooting. What are the newly introduced problems? What are the proper steps for troubleshooting the standard-issue problems?