The runway is running out on the Theory of Aggregation
Cheshire, Connecticut: Mon, 2/23/15 - 2:02pm
Data center networks have long been designed with core switches (e.g. Nexus 7xxx) at the center of the network, aggregation switches (e.g. Nexus 5xxx) connected to the core, top-of-rack (ToR) switches connected to the aggregation switches, and in turn servers connected to the ToR switches. This Hub & Spoke model has been prevalent for quite some time, and it is now changing rapidly, with Spine/Leaf configurations emerging as alternatives. Although these newer architectures are better, they are still fixed architectures. Can we imagine a world where we do not have to rely on such fixed architectures, where network administrators can reconfigure the physical infrastructure as well as the packet forwarding plane dynamically via software? In such a world any port could connect to any other port through a change to the physical infrastructure made in software. Can you imagine what would happen if you could cohesively control both the physical connectivity and the packet flow rules and decisions from the same programmatic software interface?
In our office in Cheshire, Connecticut we have a framed picture of a lighthouse in a stormy ocean, with the following words under it: "The voyage of discovery is not in seeking new landscapes, but in having new eyes" (Marcel Proust). In that spirit I want to stand back a little and look at how we are evolving our data center architecture, so that we can keep what was good in past designs while still being able to think differently when we need to. The end goal for all of us is to provide unrestricted growth to data centers at a controlled cost and with as much flexibility as possible. If we can simplify the network in the process, that is a bonus. I would ask the reader to look at data center networking with new eyes for the next few paragraphs, and for this short journey to break away from what I call the "Theory of Aggregation".
The Theory of Aggregation can be seen in the development of switches from incumbents and new players alike: switches with 48 x 1Gbps ports plus 4 x 10Gbps aggregation ports, or 48 x 10Gbps ports plus 4 x 40Gbps aggregation ports. This type of aggregation is designed to support a multi-tier Hub & Spoke model, and we then live with the restrictions that come with that structure. Over its evolution, the Theory of Aggregation saw us uplink 1Gbps ports into 10Gbps ports, a 10:1 ratio; then we aggregated 10Gbps into 40Gbps, a 4:1 ratio; now we are aggregating 40Gbps into 100Gbps, a 2.5:1 ratio. As this ratio shrinks, the aggregation pipe becomes more constricted, indicating that the Theory of Aggregation has a shortening runway. At the same time, the reason we even have these switches in the data center is to allow connectivity between thousands of servers. Server processors and core counts also continue to evolve (more VMs, etc.), which drives up the fill rates of the pipes leaving the server and further shortens the runway of aggregation. We need to rethink how we build networks, and we expect data centers to shift towards a more flexible architecture that can provide the benefits of a purpose-built network; an architecture that can be reconfigured dynamically as the network grows and its needs change.
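To make those shrinking ratios concrete, here is a back-of-the-envelope sketch in Python. The first two switch profiles are the ones mentioned above; the 48 x 40Gbps + 4 x 100Gbps profile is my own extrapolation for illustration, not a product cited in this article. Note what the arithmetic shows: as the uplink-to-access speed ratio falls from 10:1 to 2.5:1, the oversubscription of the aggregation pipe actually grows.

```python
# Back-of-the-envelope look at the shrinking uplink ratios described above.
# Port counts are illustrative; the third profile is a hypothetical next step.

profiles = [
    ("48 x 1G  + 4 x 10G",  48, 1,  4, 10),
    ("48 x 10G + 4 x 40G",  48, 10, 4, 40),
    ("48 x 40G + 4 x 100G", 48, 40, 4, 100),  # hypothetical extrapolation
]

for name, down_ports, down_gbps, up_ports, up_gbps in profiles:
    speed_ratio = up_gbps / down_gbps                           # uplink speed vs. access port speed
    oversub = (down_ports * down_gbps) / (up_ports * up_gbps)   # access capacity vs. uplink capacity
    print(f"{name}: speed ratio {speed_ratio:.1f}:1, oversubscription {oversub:.1f}:1")
```

Running this gives oversubscription of roughly 1.2:1, 3:1 and 4.8:1 respectively: the faster the access ports get, the more constricted the aggregation pipe becomes relative to the traffic feeding it.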
The Glass Core architecture from Fiber Mountain provides thousands of fiber strands that can be configured as the network grows or its needs change.
FIBER STRANDS ARE A REUSABLE RESOURCE THAT CAN BE CONFIGURED VIA SOFTWARE.
Cable once, use software to define how a particular strand will be used, and reconfigure it as many times as you desire. You can use a strand in a 10Gbps connection today and connect it from one point to destination X, and tomorrow reuse the same strand in a 40Gbps connection to the same destination X, or to a different destination Y, with zero packet-processing hops in between and latency of less than 10 nanoseconds. All of this is controlled via software: no human hands need to touch the network, no 40Gbps cable has to be pulled to replace the 10Gbps cable, and all of the work is performed from the comfort of your desktop. Add to this the luxury of programmatically optimizing your data transport plane and packet flow rules as the network requires.
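The sketch below is purely illustrative of that cable-once, reconfigure-in-software idea; the client class, method names and port identifiers are hypothetical and do not represent Fiber Mountain's actual AOS interfaces.

```python
# Illustrative sketch only: GlassCoreClient and its methods are hypothetical,
# not Fiber Mountain's actual AOS API.

from dataclasses import dataclass

@dataclass
class CrossConnect:
    strand_id: str    # physical fiber strand, cabled once
    src_port: str     # e.g. "rack12-server3:mpo1/f5" (hypothetical naming)
    dst_port: str     # e.g. "rack40-storage1:mpo2/f5"
    speed_gbps: int   # how the strand is being used today

class GlassCoreClient:
    """Hypothetical controller client for software-defined cross-connects."""
    def __init__(self):
        self.connections = {}

    def connect(self, xc: CrossConnect) -> None:
        # Program the optical path end to end; no packet-processing hops in between.
        self.connections[xc.strand_id] = xc

    def reconfigure(self, strand_id: str, *, dst_port=None, speed_gbps=None) -> None:
        # Reuse the same physical strand for a new destination or a new speed.
        xc = self.connections[strand_id]
        if dst_port:
            xc.dst_port = dst_port
        if speed_gbps:
            xc.speed_gbps = speed_gbps

# Day 1: use strand f5 as part of a 10Gbps connection to destination X.
aos = GlassCoreClient()
aos.connect(CrossConnect("strand-f5", "rack12-server3:mpo1/f5",
                         "rack40-storage1:mpo2/f5", 10))

# Day 2: reuse the same strand in a 40Gbps connection, or point it at destination Y,
# all from the desktop, with no cable swap.
aos.reconfigure("strand-f5", speed_gbps=40)
aos.reconfigure("strand-f5", dst_port="rack7-server9:mpo1/f5")
```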
If server virtualization saved the industry billions of dollars in the past decade, surely packet/network/cable virtualization is the new frontier that will save the industry billions in the next decade.
Let us talk a little bit about servers and see how the Theory of Aggregation has influenced this most important component of the data center. The network exists to serve these glorious machines that grow in performance and capability every day. Everything other than the server is plumbing. The cables, the switches, the routers, all plumbing, installed to serve the needs of the host computer and its applications.
Servers need to send traffic to thousands of destinations. Yet they have been told to send everything to one master, the switch, which will then determine how to reach those thousands of destinations. They have been told: you have a few interfaces to the switch, and as long as you can get a packet out of one of those interfaces, the switch knows what to do; your only job is to aggregate packets to go out of those few interfaces.
Now let us imagine that a central software system, such as Fiber Mountain's Alpine Orchestration System (AOS), is aware of the entire network and can give that server a better view of the landscape. Imagine also that the interfaces on the server carry many fibers (24-fiber MTP, or 64-fiber MXC). With that many FIBER STRANDS THAT ARE A RECONFIGURABLE RESOURCE CONTROLLED BY SOFTWARE, the server can send traffic down many paths and break away from the Theory of Aggregation. You can programmatically designate as many connections as are required, directly to storage, to different servers or server clusters, or to create a server cluster, all without any intermediary switches. Can packet forwarding decisions be made in the server? How about thinking of switching itself as an NFV, in the hands of the server for which we built the network? The power is clearly in software, and compute virtualization and VMware could have a field day with this type of network control. Add the innovations and ongoing work in silicon photonics into the equation, and the future of networking gets even more exciting.
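As a thought experiment, the sketch below shows what server-resident forwarding over software-assigned strands might look like. The PathTable class, the strand names, and the controller-supplied topology are all hypothetical; this is not an actual AOS or NIC API, just an illustration of forwarding decisions moving into the server.

```python
# Illustrative sketch: with many fiber strands per server interface and a
# controller-supplied view of the fabric, forwarding decisions can live in the
# server itself. All names below are hypothetical.

topology = {
    # destination          -> directly reachable strands (no intermediate switch)
    "storage-cluster-a":    ["mpo1/f1", "mpo1/f2"],
    "server-rack40-node7":  ["mpo1/f5"],
    "db-cluster-b":         ["mxc0/f12", "mxc0/f13", "mxc0/f14"],
}

class PathTable:
    """Server-resident forwarding table over software-assigned fiber strands."""
    def __init__(self, topology):
        self.topology = topology
        self.rr = {}   # round-robin counters per destination

    def pick_strand(self, destination: str) -> str:
        strands = self.topology[destination]
        i = self.rr.get(destination, 0)
        self.rr[destination] = i + 1
        return strands[i % len(strands)]   # spread traffic over the direct paths

paths = PathTable(topology)
for dst in ["db-cluster-b", "db-cluster-b", "storage-cluster-a"]:
    print(dst, "->", paths.pick_strand(dst))
```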
The enhancements to servers will take some time, and they depend on several vendors and how they work together; but with the benefits being what they are, we expect to see them sooner than most people anticipate, and much sooner than some incumbent vendors would hope.
Apart from those server enhancements, the technology mentioned in this write-up is working today, and Fiber Mountain, a young startup, continues to evolve it every day.
Related story:
Should data centers replace core and aggregation switches with hundreds of intelligent fiber cables?