This blog was first published in Datacenter Dynamics on Nov. 19, 2019. According to the 2018 Equinix Global Interconnection Index, interconnection bandwidth is expected to grow globally at a 48 percent CAGR, reaching more than 8,200 terabits per second of installed bandwidth. By 2021, enterprises interconnecting to network providers to address latency will account for 66 percent of total interconnection bandwidth. The fastest-growing segment, at a 98 percent CAGR, is enterprises interconnecting to cloud and IT services to manage network complexity.
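To get a feel for what a 48 percent CAGR means in practice, the compound-growth arithmetic can be sketched directly. The 8,200+ figure is from the index; the four-year horizon and the implied baseline below are illustrative assumptions, not numbers from the report.

```python
def project_bandwidth(current_tbps: float, cagr: float, years: int) -> float:
    """Project installed bandwidth assuming constant compound annual growth."""
    return current_tbps * (1 + cagr) ** years

# Working backward from the projected 8,200 Tbps at a 48% CAGR over an
# assumed four-year horizon, the implied baseline is roughly:
baseline = 8200 / (1.48 ** 4)
print(round(baseline))  # → 1709 (≈1,700 Tbps)
```

In other words, growth at that rate nearly quintuples installed bandwidth in four years, which is why fiber counts inside the facility climb so quickly.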
It is easy to understand why interconnection is exploding. As latency requirements for emerging technologies and applications shrink, a mesh-type topology, with its emphasis on redundant east-west connectivity, becomes the best way to deliver the ultra-reliable, low-latency performance those applications need.
When viewed from the 30,000-foot level, a topology rich in interconnections appears somewhat elegant. At ground level in the data center, where the physical connections are made, the view can change dramatically. As data volumes coming into the data center continue to soar, the fiber plant inside the facility grows exponentially. Managing thousands of fiber strands is an ongoing challenge.
To manage their fiber, most data centers use a mixture of direct connect and interconnect cabling. As the name implies, a direct connection runs point-to-point between racks. A data center interconnect—not to be confused with the network interconnects mentioned previously—routes patch cords to a presentation panel. For large projects, this strategy can become difficult to manage as patch cords grow longer and cable pathways become more congested. Once the number of fiber strands exceeds two or three thousand, the scales begin tipping in favor of a cross connect patching strategy.
Cross connects in the data center
A cross connect cabling plant offers a variety of benefits. Most importantly, it provides a dedicated patching area that makes moves, adds and changes easier to manage. The patching area isolates mission-critical active equipment, so there is less risk of disrupting live circuits while servicing the patch panels. Cross connects to carriers and cloud providers can also save money, improve reliability, and add versatility to your network.
At the same time, cross connects require more cabling. The proliferation of patch cords in a cross connect topology becomes a critical concern. As fiber counts rise, a good cable management strategy is imperative.
Importance of a cable management strategy
Many data center managers are diligent when it comes to planning and detailing their network’s evolution and migration needs. In focusing on the network solutions needed, they often overlook planning for the more routine Day 2 requirements, such as moves/adds/changes. This gap in planning leaves them open to problems down the road.
A cable management strategy provides network-wide standards for how the fiber plant is managed. It helps ensure fiber performance, accelerate the speed and accuracy of moves/adds/changes and improve critical metrics such as mean-time-to-resolve.
A well-designed strategy addresses key aspects, including how patch cords should be run within and between cabinets; parameters governing how cable trays and other pathways are to be used; labeling of optical fiber in the cross connects; and best practices for periodically “harvesting,” or decommissioning, unused cords.
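The labeling and harvesting practices above come down to disciplined record-keeping: every cord has two documented ends and a lifecycle state. A minimal sketch of such a record is shown below; the naming scheme (room-rack-panel-port) and field names are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossConnectRecord:
    """One patching record: both ends of a cord plus its lifecycle state."""
    a_end: str       # e.g. "MDA01-RK04-PP02-P12" (room-rack-panel-port; illustrative)
    z_end: str
    circuit_id: str
    in_service: bool

def harvest_candidates(records):
    """Flag cords for harvesting: those whose circuit is out of service."""
    return [r for r in records if not r.in_service]

records = [
    CrossConnectRecord("MDA01-RK04-PP02-P12", "MDA01-RK09-PP01-P03", "CKT-1001", True),
    CrossConnectRecord("MDA01-RK04-PP02-P13", "MDA01-RK09-PP01-P04", "CKT-1002", False),
]
print([r.circuit_id for r in harvest_candidates(records)])  # → ['CKT-1002']
```

Whether the records live in a spreadsheet or a full DCIM system matters less than keeping them current; harvesting only works if the in-service state is trustworthy.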
Power is in the details
As with any operational strategy, the value of a good cable management strategy is in its practical application. It must provide a working blueprint that every tech can understand and follow. The level of detail is important.
For example, a common problem in keeping the fiber plant orderly and manageable is how to handle cable slack. Without a process for storing the slack, the cabling quickly spills out into the aisles. One aspect of the cable management strategy, then, may be to specify patch panels and distribution frames that offer good on-board storage.
Another issue that often compounds congestion throughout the data center is the lack of protocols dictating how overhead cable pathways are used. Best practices suggest segregating larger trunk cables from smaller patch cords: ladder racks for large trunk cables, fiber raceways for the patch cords. Likewise, copper and fiber patch cords should have their own pathways.
These management tactics not only keep the data center’s cable plant accessible and easier to service; they can also have a significant effect on optical performance. A good example is what happens when large trunk cables are run in raceways designed for smaller patching fibers. As the larger cables exit the raceway via so-called “waterfalls,” it is not uncommon for them to exceed the maximum allowable bend radius, degrading optical performance.
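The bend-radius point can be made concrete with a rule-of-thumb check. Fiber cable guidance commonly calls for a minimum unloaded bend radius of roughly 10 times the cable’s outside diameter; that multiplier is a generic assumption here, so the actual figure should always come from the cable’s datasheet.

```python
def min_bend_radius_mm(outside_diameter_mm: float, multiplier: float = 10.0) -> float:
    """Rule-of-thumb minimum bend radius: multiplier x outside diameter."""
    return multiplier * outside_diameter_mm

# A slim patch cord tolerates a much tighter bend than a thick trunk cable,
# which is why trunks do not belong in waterfalls sized for patching fiber.
print(min_bend_radius_mm(2.0))   # → 20.0 mm (typical 2 mm patch cord)
print(min_bend_radius_mm(15.0))  # → 150.0 mm (a large trunk cable)
```

The order-of-magnitude difference between the two results is the whole argument for segregated pathways: hardware sized for one class of cable quietly violates the bend limits of the other.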
In developing a good cable management strategy, the devil is in the details. When done right, however, it not only keeps the current cable plant highly serviceable; it also provides a template that makes network expansions and upgrades faster and more reliable.