Posted By: Kumba Cable Management for High-Density Colo Server - 01/30/09 11:08 PM
I have a rack that I'm rapidly filling up and am planning to migrate to a new hardware base. This presents me with some technical challenges that I need to solve. I am not that up-to-date or familiar with the rack/datacom products offered by vendors, so I figured I would explain my situation here in the hope that I can get some insight.

So, without further ado, here is the problem in a nutshell.

The Backstory:
I have an enclosed rack that is 46-ish U (it's a weird rack, but I only pay for 44U). The front-to-back depth of the rack is up to 30". I have these servers that are 1U 19" rackmount but only 9.8" deep. Other pertinent pieces of the story are four 48-port switches that are 18" deep and four 20-outlet 0U PDUs (power strips) with 3-foot IEC computer cords.

The Plan:
I am going to relocate the rails and set the rail depth to 26" (2" from door to rail in front and back). I am then going to mount my 1U 19" 9.8"-deep servers in the front AND back of this rack. The four 48-port switches will be mounted at the bottom, two facing each way. Since the switches are too long to have anything mounted behind them, they will be mounted with one facing the front, the next U above it facing the back, and so on for all four switches (stacked on top of each other). There will be a 1U horizontal wire management unit bolted behind each switch to cover the hole and provide support for the switch above or below it. The 0U PDUs will be bolted in the 6" or so of air space between the servers, and a 3-foot power cord will plug each server in.


The Problem:
All of these small 1U servers have front-mounted ports, with the exception of power in the back. I am trying to find some sort of vertical cable management for all the network cables this rack will have. Each server will have two Cat5 patch cables, each running to one of the switches. Maybe some kind of offset 1U cable ring that I can mount every 4U or so over the brackets of the servers. Keep in mind that this is in a colo center, so wire management that clips onto the outside of the rack won't work. The doors also need to be able to be shut and locked.

The rack is intentionally populated from bottom to top to make sure that all of those 84 servers have an air column that is a straight shot up (convection principle). There is also going to be a vented tile put under the cabinet, as well as four 120mm high-CFM muffin fans mounted in the top plate of the cabinet. For those who are curious, each one of those servers represents 150 phone lines of capacity. The rack will have 80 amps of service (four 20-amp circuits). The need to keep things neat and serviceable is paramount.

Since a picture is worth a thousand words I drew up something quick and dirty. Here it is: [Linked Image from azrael.crashsys.com]
I had a very similar challenge with some of my cabinets. On some, I was able to move the front rails back about four inches and mount a vertical cable retainer. On the others, where I had less depth to work with, I used a fairly shallow finger duct. I can take some pictures to give you some ideas on Monday if you like.
That would be great. I'm all for any idea that prevents me from just having cables hanging down the face of this thing.

The one thing that looked plausible was the vertical lacing strips from Middle Atlantic Products and their Velcro straps. That seems like it would be flexible enough to move when needed. Anyone have any experience with them and how well they do or don't work? https://www.middleatlantic.com/rackac/cablem/cablem.htm
Don't know about you but I have always kinda assumed that you put your equipment in the front of a rack and had access to it from the back. Did the laws of IT change that?

With equipment located on both sides how do you propose to get at the wiring in the middle? I would also give serious thought to the heat load of that many servers packed into that amount of space. I really don't think any amount of fans is going to cut it.

Why aren't you using at least two racks?

-Hal
Kumba, you didn't state the width of your rack. I'll assume that it is the typical 24" wide.
I would also be concerned with the heat load of those servers. It would seem to me that the switches overlapping at the bottom would restrict any airflow from entering the cabinet efficiently.
Here is a picture of what we have done in the past, but it is a 32" cabinet. This is the solution that we came up with that did not take up any U's.

Todd

[Linked Image from i179.photobucket.com]
I know I'm new here, but Hal and Todd are quite correct. Installing hardware on both sides of the rack is asking for nothing except disaster.

First, heat between the devices will not dissipate effectively (think toaster oven), creating a problem with effective airflow. (Air circulation is your best friend with enclosed racks.)

Secondly, I certainly would not like to be the person responsible for having to swap hardware in this setup. In order to get to the back of a switch or server, you're going to have to reach across one device just to get to the device you're after. Not to mention that if it's a level below, you're now looking at a whole new problem (how to reach it).

Ever accidentally drop a plug end?
As the rack fills up with hardware placed in this manner, bringing a dropped power adapter back to the top will be "fun".

Your best bet (as previously indicated by others) is to place your hardware on one side. This will not only make life easier for future maintenance, it will also provide better airflow and cable routing options.

Kerb
Kumba, please tell me where you found application servers that are only 10" deep?!?! I need those last year!!!

Secondly, I have to agree with Hal et al re: the positioning and heat. The normal practice in datacenters is to have all equipment facing one way. Apart from the wiring convenience and the heat generated in the rack, the proper way is to have "hot" and "cold" aisles between racks, i.e., the heat-venting sides of opposing racks face the same aisle. That way the front of your rack never faces the back of another.
Also, if I may say, the rack positioning is non-standard: switches are usually placed above the servers, and panels (if any) above the switches. You start at the bottom with the heavier equipment and, if the weight difference is negligible, with the equipment that has the largest power requirements. That's in keeping with the normal design of having comm wire on top (usually on a ladder) and power wire on the bottom.
Unless of course you have in-floor wiring, but this is generally frowned upon for equipment rooms, though it's OK for user space.
This is all in a raised-floor data center. The Lieberts use the floor as the plenum, and the return is pulled from the ceiling. Power is fed from under the floor. I've already made plans to have a vented tile installed under the rack, and they have agreed to open the damper on it completely for me. What I sort of mentioned is that these are funny racks in that they have 46-ish U of space, but at the very bottom there is about 2.5" where the rails don't extend down. You open the front or back of the rack and it's dead space at the very bottom. This gives me an area for the cool air to come in and go up the sides, back, and front of the cabinet. The external width of the enclosed rack is 24". I don't remember the external depth of the rack, but it seems like there was about 2" from the rails to the front and rear doors when they were at their full 30" adjustment.

The rack is on the end of an aisle and the side panel is removable with the turn of two bolts. The only cables that will be in the center are the power cables. Everything uses standard IEC-style cords. I hate power bricks and wall warts.

The switches are HP ProCurve 2650s, and they consume more power and are heavier than each of these servers. In my tests the measured power draw of a fully loaded ProCurve was around 1.2 amps (135-ish watts). The measured power draw of a fully loaded server was 1.0 amps (115-ish watts). The ProCurves are actually heavier (about 10 pounds) than the servers (about 8 pounds).

I do have a second rack next to it but that is busy holding traditional 1U servers for the databases, archives, web servers, SIP/Media gateways, etc.

I definitely understand and appreciate all the concern about heat build-up within the rack. This whole model is currently in a proof-of-concept stage, and if I determine it to be stable it will be how I approach high-density applications in the future. I actually have a very good relationship with the colo staff, and they have a high degree of interest in this succeeding for their own hosted business model. If the current plan proves to be unstable we will switch to a forced-cooling option. This involves mounting a plate in the bottom of the cabinet with a 6" duct that attaches to a blower placed under the raised floor. The front and back doors will have the screens replaced with plexiglass, and the plate on the top of the rack with the fans will be removed and replaced with a wire mesh. Basically converting the rack from horizontal (front-back) to vertical (bottom-top) airflow. The switches at the bottom will already act like an air plenum and force the cold air up the sides and the front/back doors of the rack. The other thing to remember is that I'm trying to provision for 100% load. In reality things will typically be running at 60-75% of load.

If I use the formula BTU = (watts * 0.95) * 3.414, I estimate that I'll generate about 33,080 BTU, or rounded up, 3 tons of HVAC. I need to double-check with the facilities people, but I believe the floor I'm on has a much higher thermal allowance per rack. I know electrically they are provisioned to deliver up to 200 amps per rack. I would assume that if they can deliver that amount of power, then they should be able to deliver enough HVAC to cool it as well.
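For anyone who wants to check my math, here is a quick back-of-the-envelope version of it in Python. The server/switch counts and wattages are the ones quoted above, the 0.95 and 3.414 factors come from the formula, 12,000 BTU/hr per ton is the standard conversion, and the 120 V figure used for the amperage estimate is my own assumption:

    # Back-of-the-envelope heat/power math for the fully loaded rack.
    SERVERS = 84            # short-depth 1U nodes, front and back
    SWITCHES = 4            # HP ProCurve 2650s
    WATTS_PER_SERVER = 115  # measured at full load
    WATTS_PER_SWITCH = 135  # measured at full load

    total_watts = SERVERS * WATTS_PER_SERVER + SWITCHES * WATTS_PER_SWITCH
    btu_per_hour = (total_watts * 0.95) * 3.414  # formula from the post
    tons_of_hvac = btu_per_hour / 12000.0        # 12,000 BTU/hr per ton of cooling
    amps_nominal = total_watts / 120.0           # assuming 120 V circuits

    print("Total load:  %d W" % total_watts)                                      # 10200 W
    print("Heat load:   %.0f BTU/hr (%.1f tons)" % (btu_per_hour, tons_of_hvac))  # ~33082 BTU/hr, ~2.8 tons
    print("Amps (nom.): %.1f A vs. 80 A of service" % amps_nominal)               # ~85 A at 100% load

At a theoretical 100% load that nominally exceeds the 80 amps of service, which is exactly why I expect real-world usage to sit in the 60-75% range mentioned above.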

The idea here is to create a RAIC, or Redundant Array of Inexpensive Computers. This method (if it can be made to work) is cost-effective, high-density, and highly efficient. This particular approach, I know, is being utilized by companies such as Google in order to power their search engines.

sph: These are custom-built machines. Nothing from an OEM could meet my requirements. My requirements were two nodes per 1U, 1.5 amps or less per node, at least one dedicated USB port per node (software requirement), and sub-$1000 per node. Nothing (blades, dual-node 1U chassis, etc.) could meet all four things. Given that, I spent a lot of money and time testing and eventually arrived at my current recipe for a server. Each server is an Intel quad-core 2.4GHz machine with 4 gigs of RAM and a RAID-1 160-gig setup.
Well, K, it seems you have done your homework. You mentioned Liebert, I assume they were the facility suppliers, and they are definitely quality datacenter vendors. I wish you all the best.
Thanx for the info on the servers, that is a very good price point. The one thing with custom-built equipment is the time involved in testing the parts and the whole, and in the appropriate burn-in. Most of the premium from "name" manufacturers supposedly reflects just that.
I have done some work with dense high-availability clusters, on Windows Datacenter, Solaris, and (a long time ago...) on Tandem computers. But these utilized pretty expensive nodes to begin with. RAICs I don't have experience with. Please, post your results when you have the time.
Liebert was just the manufacturer of the environmental controls/HVAC. They're pretty much the standard in precision HVAC for data centers, it seems.

And you are right about the time and money spent figuring all this out. I was at their data center looking at their infrastructure, taking down model #'s, measuring, etc, for almost a whole day. Then it took another 3 weeks of buying hardware, testing it, seeing what made it overheat, measuring power consumption under different loads, etc.

In retrospect, the company could have bought me a new 4-door sedan with the money spent on this project. But the thing is that just having a second rack in a data center is a $1000/mo proposition (on average). Having a single 20-amp circuit is a $400/mo deal. It all adds up real quick, especially when you are paying premium beaucoup rates for the facility.
Thanx for all the info, may I ask again that you post your experiences when you find the time.

Going back to the servers, the sticking point for me would be the thermal requirements. One of the reasons 1U server cases are rarely smaller than 18" is to provide adequate airflow and ambient temperature to the components. Some time ago we built a custom-made machine that had a single CPU putting out 85 watts, and I remember Intel's docs specifying a pretty impressive minimum CFM of airflow over the processor. The design was not cost-effective for the project we had in mind, though the learning experience was. In any case our legal-counsel guy killed the idea even though the customer's IT auditors were willing to go along. The reason: we would have no recourse if the machines failed and lawsuits started flying, as they usually do. The additional liability was something we weren't really ready to stomach.
I realize this is an old post, but I just wanted to offer up my experiences in building a high-density colo rack. My hardware recipe for the nodes has changed since my original posting, but that was because of a CPU going EOL. The benefit was that the replacement CPU was more efficient, generated less heat, and was cheaper. The con was that the motherboard was about $30 more (which offset the CPU savings).

There are a few things I will change for the next high-density rack that I do. HP has come out with some new short-depth 48-port switches that are the same depth as the server nodes (10"). This will allow me to have a sort of waterfall going down the side of the cabinet to the servers without having to put the switches in an adjacent space. I am currently the highest-density installation in my colo, which is somewhat impressive considering there are over 10K servers in their environment.

Here are some things I have found to be noteworthy in my experience:

- A standardized hardware BOM is a must!
- Uniform network OS/Platform is very beneficial
- Ability to netboot/PXE-boot diags, utils, and OS installs helps a lot
- Managed power strips are worth their price
- Always grease the colo staff with beer and other assorted goodies
- Management of 100 servers requires elaborate, sometimes custom-written monitoring
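To give a rough idea of what that last bullet can look like in practice, here is a minimal sketch of a poller in Python. The IP range and the probed ports (SSH and the Asterisk manager port) are placeholders for illustration and not my actual setup:

    #!/usr/bin/env python3
    # Minimal sketch of a service poller: try a TCP connect to a couple of ports
    # on every node and report whatever does not answer. All hosts/ports below
    # are placeholders, not real addresses.
    import socket

    NODES = ["10.0.0.%d" % n for n in range(1, 41)]   # hypothetical node IPs
    CHECKS = {"ssh": 22, "asterisk-manager": 5038}    # hypothetical services to probe

    def port_open(host, port, timeout=5):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    failures = [(host, name)
                for host in NODES
                for name, port in CHECKS.items()
                if not port_open(host, port)]

    for host, name in failures:
        # In a real deployment this would page or email instead of printing.
        print("ALERT: %s is not answering on %s" % (host, name))

Run something like that from cron every few minutes and you get roughly the incident response time mentioned below.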

Some of the things I feel I have accomplished that are not so trivial are:
- 40 nodes running off of 40 amps of power, with average usage around 22 amps and peaks around 30
- Standardized high-density nodes allow me to recover from a total failure in 15 minutes using snapshot backups remotely
- Monitoring has a resolution/response time of around 5 minutes to report an incident/failure of various services
- Generates significantly less heat than a similar rack equipped with 20 Dell 2850 v3's (measured!)
- Completely modular expansion, easily distributed amongst distant racks
- All servers are isolated to themselves; no shared backplane or chassis components
- Many magnitudes below the cost of any comparable-density blade system, and lower cost/higher efficiency than equivalent-performance traditional servers

And now that all the typing is out of the way, here are some pics (Let the critique begin):
Front Side Close Up

Front with Second Rack

Back Top

Back Bottom


As you can see, the vertical managed power strips are in the center, and all the I/O ports are front-mounted on the servers. Since I could not find any sort of structured cabling to accomplish what I wanted to do within the confines of an enclosed colo rack, I used the space between the two racks to run cabling vertically. I used Velcro from a sparky supply store to bundle the cables, then zip-tied the Velcro to the horizontal supports. Not the neatest thing going, but it's easy to split the bundle and push/pull slack around when you need it.

The switches are located in the adjacent rack which houses all my standard servers like databases, NAS, etc. The cable bundle you see in the last two pictures goes up to the top then over to this adjacent rack where it terminates into 288 ports of switch.

As you can see from the third picture, these servers will mount back to back, with approximately 8" of clearance between them, for a total density of 78 servers in a standard 42U rack. Each one of those servers is a quad-core 2.33GHz CPU, 4 gigs of ECC (expandable to 8), 160-gig RAID-1, and dual gigabit Ethernet controllers. They use around 50-55 watts at idle and around 125-130 watts at max load. The only custom modification we do to the server build is bolting two 40mm/28CFM fans onto the back to ensure adequate front-to-back airflow. The on-board fan throttling is enabled, and we have never seen these fans spin up in the colo from a server getting too hot. This is attributed to the constant 72 degrees F and 70% humidity that is maintained, as well as the 300CFM blower fans that blow fresh cool air up the door, using it like an air plenum. There is a vented tile under the server rack that blows fresh air up and into the blower intakes (see the bullet about greasing the colo staff).

By far the thing I take the most pride in is the density of the installation and the relative efficiency of it (power/heat). It has been an interesting build so far and I've had to completely strip the rack out and rebuild it once to fix some of my initial deficiencies.

So that's it. I knew a few people were interested in the outcome of this project of mine. Let me know if you have any questions about it. Thanks.
While it certainly isn't a standard configuration, it does look very good. Are you going to try to add more servers to the back side? If so, what happens when you need to access the back of the servers?

What are the USB sticks for? Is it just backup memory, or some sort of dongle?

What kind of servers are these? (You told me before, but I didn't write it down...Sorry.)
Yes, the back side is getting racked up like the front side. The top panel that has four 4" muffin fans is being removed, since the blowers and the servers themselves create enough airflow to move the hot air up through the top of the rack. By the weekend there will be 10 servers on the back side. At my current rate I will have that rack full in 3-6 months.

The only thing that happens on the back side is power getting plugged in. The cords I use are 3- to 5-foot cords (depending upon placement in the rack). Whenever I have to pull a server out, I pull the Ethernet cables out through the rack ears, unbolt it, and just slide the server out. I need to go through and cut all the warning labels off the power cords to prevent a potential snag. So far, on the dozen or so I've pulled out for various reasons, the power cord just slides right out with the server. Maybe it is because it is all new, but they really do seem to have quite a snug fit. The biggest gotcha I have is if a vertical power strip goes bad. In a case like that I have no other choice but to pull one side of the rack down, pull it out of the middle, and put a new one in. The current rack I have pictured there is on the end of an aisle, so I can take the whole side off and reach behind the servers. I can fish the strip out that way and access all the plugs if I have to. So far I haven't needed to do that at all.

The USB sticks are a dedicated 1kHz hardware timer that is used by Asterisk for audio framing and transcoding purposes. It allows me to run the servers at a higher load without audio quality issues, as the dedicated timer is not dependent upon CPU load like a software timer is. It translates into about another 15-20% of usable horsepower before the system gets overloaded in other areas.

The servers are SuperMicro brand. It's the only brand I use or will sell as an enterprise server. The current hardware recipe is as follows:
x1 Intel 2.33GHz Core 2 Quad CPU
x1 SuperMicro Motherboard
x1 Crucial 4GB (2x2GB) ECC Memory
x2 Western Digital 160GB 2.5" Hard Drives
x1 SuperMicro 10" 1U front-IO case
x2 40mmx28mm 28CFM fans (LOUD buggers)

I'm using Linux RAID for ease of portability with the hard drives. I can also add another 4 gigs of memory for a total of 8 if need be. Out of the 100 servers I've built and used, I have only had two failures. One was where the second Ethernet port quit working, and the other was a failed power supply after a few months. For the power supply failure, all I did was pop the hard drives out and throw them into another chassis. It only took about 15 minutes once I got my hands on the chassis.
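On the Linux RAID point: one handy side effect of md is that the mirror state is exposed in /proc/mdstat, so spotting a degraded array only takes a few lines. This is just an illustration of the standard "[UU]" vs "[U_]" status convention in mdstat output, not anything lifted from my actual build:

    # Rough illustration only: flag any Linux software RAID (md) array whose
    # status string in /proc/mdstat shows a missing member, e.g. "[U_]" vs "[UU]".
    import re

    def degraded_md_arrays(mdstat_path="/proc/mdstat"):
        with open(mdstat_path) as f:
            text = f.read()
        degraded = []
        # Each array block looks like "md0 : active raid1 sdb1[1] sda1[0]"
        # followed by a line ending in something like "[2/2] [UU]".
        for array, status in re.findall(r"^(md\d+)\s*:.*?(\[[U_]+\])", text,
                                        re.MULTILINE | re.DOTALL):
            if "_" in status:
                degraded.append(array)
        return degraded

    if __name__ == "__main__":
        bad = degraded_md_arrays()
        if bad:
            print("Degraded arrays: %s" % ", ".join(bad))
        else:
            print("All md arrays healthy")

Fold something like that into a poller and a dropped disk shows up the same way a dead service does.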

If you want the actual hardware BOM just send me a PM. I'll shoot it across to ya along with a few instructions on mounting the fans (requires some decent tin snips).
Nice photos and details. I <3 Supermicro servers. Even their low-end Atom 1U servers are quality. I put together such a server for a LUG that I am a member of. Nice, small, low-powered server for hosting the LUG's web site and e-mail.
SuperMicro is one of VERY few manufacturers that we have found that focus on quality of build and compatibility with Linux. They have no brand-specific quirks (in my experience) that you have to work around like you do with Dell, HP, or IBM.

I just wish I could buy directly from them, because I sometimes run into issues with their suppliers having enough quantity for my orders.
Do those USB sticks ever need to be removed? If not, I was thinking that you might be able to mount them inside the case of the server with a USB tail like this that hooks to one of the USB headers on the motherboard (assuming there is a free header). One less thing to get broken off or removed accidentally. Still a pretty cool setup. Wish I could see that.
Sometimes they get hung because of a driver issue. We went with external because, before we had remote-managed power strips, we would have to reseat them. Since it's not a major ordeal, we stick with them for uniformity.