The power of the open compute model for data center networking
For data centers, there’s an evolving need to increase network bandwidth without installing more fiber. Dense Wavelength Division Multiplexing (DWDM) has become the solution of choice for transporting large amounts of data efficiently between sites. DWDM carries multiple data streams simultaneously over a single optical fiber, each on its own wavelength, which helps maximize the return on the existing network investment.
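The core idea can be illustrated with a minimal sketch: each data stream is assigned its own carrier wavelength, and all the channels travel over one shared fiber. The 100 GHz channel spacing and 193.1 THz anchor used below are a common grid convention, chosen here purely for illustration.

```python
# A minimal sketch of the DWDM idea: each data stream gets its own
# wavelength (channel), and all channels share a single physical fiber.
# The grid values here are illustrative assumptions, not requirements.

def assign_channels(streams, base_thz=193.1, spacing_ghz=100):
    """Map each stream name to a distinct carrier frequency (THz) on one fiber."""
    return {
        name: base_thz + i * spacing_ghz / 1000.0  # channel offset in THz
        for i, name in enumerate(streams)
    }

channels = assign_channels(["ethernet-1", "ethernet-2", "fibre-channel-1"])
for name, freq in channels.items():
    print(f"{name}: {freq:.1f} THz")
```

Every stream keeps its own channel, so adding capacity means lighting another wavelength rather than pulling another cable.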
It’s an effective technology, but it was designed primarily for large telco applications, not for data center networking. As a result, traditional DWDM platforms tend to be large, complex, and vertically integrated; they are costly to own and operate, take up significant space, and require staff with a depth of DWDM expertise that a data center wouldn’t otherwise need. A system scaled and designed for the data center instead could be more efficient, more cost-effective, and far less complex than these telco-grade platforms.
Beyond size, there’s a more fundamental problem with traditional DWDM solutions in corporate and mega-data centers: their very rigid architecture. Telco systems require transponders to convert the output of Ethernet and Fibre Channel switches into DWDM signals, and these transponders usually take the form of traffic-dependent line cards. For every individual Ethernet or Fibre Channel service, the system needs a dedicated line card that takes the output of the switch and converts it to a DWDM signal. The user must also install additional monitoring and control cards to manage these traffic signals.
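A rough back-of-the-envelope sketch shows why this per-service design scales poorly. The chassis size and overhead-card counts below are illustrative assumptions, not figures from any particular vendor.

```python
# Illustrative sketch: in a per-service transponder design, every Ethernet or
# Fibre Channel service consumes one line card, and each chassis also needs
# monitoring/control cards. All numbers here are assumptions for illustration.
import math

def cards_needed(services, slots_per_chassis=16, overhead_cards_per_chassis=2):
    """Estimate chassis and total cards for a per-service line-card design."""
    transponders = services  # one traffic-dependent line card per service
    usable_slots = slots_per_chassis - overhead_cards_per_chassis
    chassis = math.ceil(transponders / usable_slots)
    total_cards = transponders + chassis * overhead_cards_per_chassis
    return chassis, total_cards

for n in (8, 32, 128):
    chassis, cards = cards_needed(n)
    print(f"{n} services -> {chassis} chassis, {cards} cards")
```

The card count grows linearly with the number of services, plus per-chassis overhead, which is exactly the cost and space burden the text describes.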