Technology is always evolving, but two significant changes have recently emerged in the world of networking. First, networking is moving to software that can run on commodity off-the-shelf hardware. Second, we are witnessing the adoption of many open-source technologies, which lower the barrier to entry for new product innovation and speed up market access.
Networking is the last bastion of IT to adopt open source. Consequently, the networking industry has suffered from slow innovation and high costs. Every other element of IT has seen radical technology and cost-model changes over the past 10 years, yet IP networking has not changed much since the mid-’90s.
When I became aware of these trends, I sat down with Sorell Slaymaker to analyze the evolution and determine how it will shape the market in the coming years.
The open development process
Open source refers to software developed through an open process, which has allowed compute functions to become virtually free. In the past, networking was expensive and licensing came at a high cost. Traditional networking software still has to run on proprietary hardware that is often protected by patents or trade secrets.
The main disadvantages of proprietary hardware are the cost and vendor software-release lock-in. Major companies, such as Facebook, AT&T, and Google, are using open-source software and commodity white-box hardware at huge scale. This has slashed costs dramatically and broken down the barriers to innovation.
As software eats the world, agility is one of the great benefits: the speed of change is no longer inhibited by long product-development cycles, and major new functionality can be delivered in days or months, not years. BlackBerry is a great example of a company that did nothing wrong beyond having multi-year development cycles, yet it still got eaten by Apple and Google.
The white box and grey box
A white box is truly off-the-shelf gear, while a grey box is off-the-shelf white-box hardware fitted with, for example, specific drivers or operating-system versions so that it is optimized for and supports the vendor’s software. Today, many vendors claim to ship a white box when, in reality, it is a grey box.
With a grey box, we are back to “I have a specific box with a specific configuration,” which keeps us from being totally free. Freedom is essentially the reason we want white-box hardware and open-source software in the first place.
When networking became software-based, the whole objective was to let you run other software stacks on the same box. For example, you can run a security stack, a wide-area network (WAN) optimization stack, and a whole bunch of other functions side by side.
However, in a grey-box environment, the specific drivers required for networking, for example, may inhibit other software functions you might want to run on that stack. It becomes a tradeoff, and a lot of testing needs to be performed to ensure there are no conflicts.
SD-WAN vendors and open source
Many SD-WAN vendors use open source as the foundation of their solution and add functionality on top of that baseline. The major SD-WAN vendors did not start from zero code: a lot came from open-source projects, with their own utilities layered on top.
SD-WAN hit a sore spot of networking that needed attention: the WAN edge. One could argue that a key reason SD-WAN took off so quickly was the availability of open source, which let vendors leverage existing open-source components and build their solutions on top.
For example, consider FRRouting (FRR), a fork of the Quagga routing suite that many SD-WAN vendors are using. FRR is an open-source IP routing protocol suite for Linux and Unix platforms that includes protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, and RIP. It continues to grow and today supports EVPN route types 2, 3, and 5. You can even pair it with a Cisco device running EIGRP.
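To give a flavor of what building on FRR looks like, here is a minimal sketch of an FRR BGP configuration; the AS numbers, router ID, and prefixes are illustrative placeholders only:

```
router bgp 65001
 bgp router-id 192.0.2.1
 neighbor 192.0.2.2 remote-as 65002
 !
 address-family ipv4 unicast
  network 10.0.0.0/24
 exit-address-family
```

The syntax is deliberately close to the industry-standard CLI most network engineers already know, which is part of why adopting FRR as a foundation is low-friction.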
There are over 60 SD-WAN vendors at the moment. Realistically, these vendors do not have 500 people writing code every day; they all take open-source software stacks and use them as the foundation of their solutions. This allows rapid entrance into the SD-WAN market: new vendors can enter quickly and at a low cost.
SD-WAN vendors and Cassandra
Today, many SD-WAN vendors use Cassandra as the database for storing all their stats. Cassandra, licensed under Apache 2.0, is a free and open-source, distributed, wide-column NoSQL database-management system.
One issue some SD-WAN vendors found with Cassandra was that the code consumed a lot of hardware resources and did not scale very well. In a large network where every router generates 500 records per second, and where most SD-WAN vendors track all flows and flow stats, managing all of that data can bog the system down.
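A quick back-of-the-envelope calculation shows why this volume matters. The 500 records per second per router comes from the text; the network size is a hypothetical figure for illustration:

```python
# Estimate daily flow-record volume for a large SD-WAN deployment.
records_per_sec_per_router = 500    # figure from the text
routers = 1000                      # hypothetical large network
seconds_per_day = 24 * 60 * 60      # 86,400

records_per_day = records_per_sec_per_router * routers * seconds_per_day
print(f"{records_per_day:,} records/day")  # 43,200,000,000 records/day
```

At tens of billions of records a day, an unoptimized storage stack quickly becomes the bottleneck, which is exactly the scaling pain described above.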
A couple of SD-WAN vendors moved to a different NoSQL database-management stack that consumed fewer hardware resources and distributed and scaled much better. This can be viewed as both an advantage and a disadvantage of using open-source components.
Yes, open source allows you to move quickly and at your own pace, but the disadvantage is that you sometimes end up with a fat stack: the code is not optimized, and you may need more processing power than you would with an optimized stack.
The disadvantages of open source
The biggest gap in open source is probably management and support. Vendors keep making additions to the code; for example, zero-touch provisioning is not part of the open-source stack, but many SD-WAN vendors have added that capability to their products.
Low-code/no-code development can also become a problem. Now that we have APIs, users mix and match stacks together rather than writing raw code. We have GUIs whose modules communicate over REST APIs; essentially, you take the open-source modules and aggregate them together.
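The aggregation pattern can be sketched in a few lines. The payloads below stand in for JSON responses from two hypothetical REST modules (a routing module and a security module); the endpoint shapes and field names are illustrative, not from any real product:

```python
import json

# Hypothetical JSON payloads as returned by two REST modules.
routing_status = json.loads('{"device": "edge-1", "bgp_peers_up": 4}')
security_status = json.loads('{"device": "edge-1", "ips_alerts": 2}')

def aggregate(*module_views):
    """Merge per-module views into one combined dashboard record."""
    merged = {}
    for view in module_views:
        merged.update(view)
    return merged

print(aggregate(routing_status, security_status))
# {'device': 'edge-1', 'bgp_peers_up': 4, 'ips_alerts': 2}
```

The glue code is trivial; the hard part, as the next paragraph notes, is the integration and support burden of keeping many independently developed stacks consistent.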
The problem with pure network functions virtualization (NFV) is that a bunch of different software stacks run on a common virtual hardware platform, and the configuration, support, and logging of each stack still require quite a bit of integration work.
Some SD-WAN vendors are taking a “single pane of glass” approach where all the network and security functions are administered from a common management view. Alternatively, other SD-WAN vendors partner with security companies where security is a totally separate stack.
AT&T’s 5G rollout and open source
Part of AT&T’s 5G rollout consisted of open-source components in its cell towers. The company deployed over 60,000 5G routers compliant with a newly released white-box spec hosted by the Open Compute Project.
This freed AT&T from the constraints of proprietary silicon and the feature roadmaps of traditional vendors. It uses the disaggregated network operating system (dNOS) as the operating system within the white boxes; dNOS separates the router’s operating-system software from the router’s underlying hardware.
Previously, the barriers to entry for creating a network operating system (NOS) were formidable. However, advances in software, such as Intel’s DPDK and the power of YANG models, and in hardware, such as Broadcom’s silicon chips, have lowered those barriers. Hence, we are witnessing a rapid acceleration in network innovation.
Intel’s DPDK, the Data Plane Development Kit, is a set of software libraries that lets commodity chipsets process and forward packets much faster. It boosts packet-processing performance and throughput, leaving more CPU time for data-plane applications.
Intel built libraries that move packet processing out of the kernel’s normal path so packets can be handled much faster. Intel also added AES New Instructions (AES-NI), an instruction set that accelerates Advanced Encryption Standard (AES) encryption and decryption in hardware, allowing an Intel chip to encrypt and decrypt data much faster.
Five years ago, no one wanted to enable encryption on WAN routers because of the 10x performance hit. Today, with Intel’s hardware acceleration, the CPU cost of encryption and decryption is far lower than before.
The power of open source
In the past, the common network strategy was to switch when you can and route when you must: switching was considerably faster and cheaper at gigabit speeds. With open source, however, the cost of routing is coming down, and with routing in software you can scale horizontally, not just vertically.
To put it another way, instead of a $1 million Terabit router, one can deploy ten 100-Gigabit routers at roughly $10K each, or about $100K in total, which is a 10x reduction in cost. It is close to 20x once redundancy is figured in: today’s big routers require a 1:1 primary/redundant configuration, whereas horizontal scaling allows an N+1 model in which one router serves as the backup for 10 or more production routers.
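The arithmetic behind these claims can be worked through directly, using the dollar figures quoted above:

```python
# Cost comparison from the text: one Terabit chassis vs. ten 100-Gbit boxes.
terabit_router_cost = 1_000_000           # single 1-Tbit/s chassis
small_router_cost = 10_000                # one 100-Gbit/s white box
horizontal_cost = 10 * small_router_cost  # ten boxes = 1 Tbit/s aggregate

print(terabit_router_cost / horizontal_cost)  # 10.0 (10x cheaper)

# Redundancy: a 1:1 scheme doubles the big-router spend, while an N+1
# scheme adds just one spare to the ten production boxes.
redundant_big = 2 * terabit_router_cost
redundant_small = 11 * small_router_cost
print(redundant_big / redundant_small)        # ~18.2 (close to 20x)
```

The second ratio lands near 18x rather than exactly 20x, consistent with the article’s “close to 20x” framing.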
In the past, a Terabit router meant paying a premium for a single big box. Today, you can combine a number of Gigabit-class servers and let horizontal scaling deliver aggregate Terabit speeds.
The future of open source
Evidently, the role of open source in networking will only grow. Traditional networking leaders, such as Cisco and Juniper, are likely to see significant pressure on their revenues, and especially their margins, as the value-add of proprietary solutions shrinks.
The number of vendors entering networking will also increase as the cost to create and deploy a solution falls, which will further challenge the big vendors. In addition, giants like Facebook and AT&T will continue to use more open source in their networks to keep costs down and scale out next-generation networks, such as 5G, edge computing, and IoT.
Open source will also change the design of networks, continuing to push routing to the edge so that traffic does not need to be backhauled. Significantly, open source makes it far cheaper to deploy routing everywhere.
The biggest challenge with all the open-source initiatives is standardization. Source-code branches and the teams working on them split on a regular basis; look at all the variations of Linux. So even when AT&T or another big company bets on a specific open-source stack and contributes to it openly, there is no guarantee that it will be the industry standard in three years.
A large retailer in the U.S. has chosen an overall IT strategy of using open source wherever possible, including in the network. It feels that to compete with Amazon, it has to become like Amazon.
Where to go from here?
Every technology and product has its place and time. Enterprises should start investigating where open-source networking fits into their strategy. Some common use cases include:
- OpenVPN – Moving remote connectivity to open source.
- Open container internetworking – Networking Kubernetes or other container environments in hybrid, multi-cloud architectures, and evolving from VNFs to CNFs.
- Labs – Testing new concepts and features virtually for free.
- Network Management – Open-source and/or freemium tools that can add value with minimal investment.
- Adding open-source-based networking vendors into the RFP process, if nothing more than to put price pressure on the incumbent vendor.