I realize this is an old post, but I just wanted to offer up my experiences in building a high-density colo rack. My hardware recipe for the nodes has changed since my original posting because a CPU went EOL. The upside: the replacement CPU was more efficient, generated less heat, and was cheaper. The downside: the motherboard was about $30 more, which offset the CPU savings.
There are a few things I will change for the next high-density rack I do. HP has come out with some new short-depth 48-port switches that are the same depth as the server nodes (10"). That will let me run a sort of cable waterfall down the side of the cabinet to the servers without putting the switches in an adjacent space. Mine is currently the highest-density installation in my colo, which is somewhat impressive considering there are over 10K servers in their environment.
Here are some things I have found to be noteworthy in my experience:
- A standardized hardware BOM is a must!
- A uniform network OS/platform is very beneficial
- Ability to netboot/PXE-boot diags, utils, and OS installs helps a lot (see the PXE menu sketch after this list)
- Managed power strips are worth their price (remote power-cycle sketch after this list)
- Always grease the colo staff with beer and other assorted goodies
- Managing 100 servers requires elaborate, sometimes custom-written monitoring (bare-bones poller sketch after this list)
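On the netboot bullet: for anyone who hasn't set this up, a pxelinux.cfg/default along these lines is the general idea. The kernel/initrd paths, preseed URL, and IP below are placeholders, not my actual setup:

```
DEFAULT menu.c32
PROMPT 0
TIMEOUT 300

LABEL diags
  MENU LABEL Memtest86+ (burn-in/diags)
  KERNEL memtest86+.bin

LABEL install
  MENU LABEL Unattended OS install
  KERNEL vmlinuz
  APPEND initrd=initrd.gz auto=true url=http://10.0.0.5/preseed.cfg

LABEL restore
  MENU LABEL Rescue image (snapshot restore)
  KERNEL rescue/vmlinuz
  APPEND initrd=rescue/initrd.gz

LABEL local
  MENU LABEL Boot local disk
  LOCALBOOT 0
```

With a standardized BOM, one menu like this covers every node in the rack, which is part of what makes the 15-minute bare-metal recovery further down practical.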
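On the managed power strips: being able to power-cycle a hung node remotely pays for the strip the first time it saves you a drive to the colo. Here is a rough sketch of cycling one outlet on an APC-style strip over SNMP using net-snmp's snmpset; the IP, community string, and outlet numbering are assumptions, and the OID (PowerNet-MIB's sPDUOutletCtl) should be verified against your own strip's MIB:

```python
#!/usr/bin/env python3
"""Sketch: power-cycle one outlet on an APC-style managed PDU via SNMP.
The PDU address, community, and OID are placeholders; verify the OID
against your PDU's MIB before trusting this with live gear."""
import subprocess
import sys

PDU = "10.0.0.30"        # hypothetical PDU management IP
COMMUNITY = "private"    # SNMP write community (change it!)
# PowerNet-MIB sPDUOutletCtl.<outlet>: 1 = on, 2 = off, 3 = reboot
OUTLET_CTL = ".1.3.6.1.4.1.318.1.1.4.4.2.1.3"

def cycle_outlet(outlet: int) -> None:
    """Ask the PDU to reboot a single outlet using net-snmp's snmpset."""
    subprocess.run(
        ["snmpset", "-v1", "-c", COMMUNITY, PDU,
         f"{OUTLET_CTL}.{outlet}", "i", "3"],
        check=True,
    )

if __name__ == "__main__":
    cycle_outlet(int(sys.argv[1]))  # usage: cycle.py <outlet-number>
```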
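And on the custom monitoring: nothing off the shelf fit exactly, so scripts like the following get written. This is a bare-bones sketch of the idea with made-up IPs, ports, and mail hosts; the real thing keeps state, escalates, and knows about maintenance windows:

```python
#!/usr/bin/env python3
"""Sketch: poll TCP services on every node about every 5 minutes and
mail an alert on failure. Hosts, ports, and the SMTP relay are placeholders."""
import smtplib
import socket
import time
from email.message import EmailMessage

NODES = [f"10.0.1.{n}" for n in range(1, 41)]  # hypothetical node IPs
CHECKS = {"ssh": 22, "http": 80}               # service name -> TCP port
INTERVAL = 300                                 # seconds; ~5 minute resolution

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connect to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def alert(host: str, service: str) -> None:
    """Mail a failure notice through a (placeholder) SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = f"DOWN: {service} on {host}"
    msg["From"] = "monitor@example.com"
    msg["To"] = "ops@example.com"
    msg.set_content(f"{service} check failed on {host} at {time.ctime()}")
    with smtplib.SMTP("mail.example.com") as s:
        s.send_message(msg)

while True:
    for host in NODES:
        for service, port in CHECKS.items():
            if not port_open(host, port):
                alert(host, service)
    time.sleep(INTERVAL)
```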
Some of the less trivial things I feel I have accomplished:
- 40 nodes running off of 40 amps of power, with average usage around 22 amps and peaks around 30 (quick math after this list)
- Standardized high-density nodes let me recover from a total failure in 15 minutes, remotely, using snapshot backups
- Monitoring has a resolution/response time of around 5 minutes to report an incident/failure of the various services
- Generates significantly less heat than a similar rack equipped with 20 Dell 2850 v3s (measured!)
- Completely modular expansion and easily distributed amongst distant racks
- All servers are isolated to themselves, no shared backplane or chassis components
- A fraction of the cost of any comparable-density blade system, and both cheaper and more efficient than traditional servers of equivalent performance
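To put quick numbers behind that first bullet, assuming 120V feeds (an assumption; your colo may differ): 22 amps average is about 2.6kW, or roughly 65 watts per node across 40 nodes, which squares with the per-node figures I give below. The 30-amp peak (about 3.6kW, or 90W per node) also shows the nodes never all hit their 125-130W max at the same time.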
And now that all the typing is out of the way, here are some pics (Let the critique begin):
Pics: Front Side, Close Up, Front with Second Rack, Back Top, Back Bottom.
As you can see, the vertical managed power strips are in the center, and all the I/O ports are front-mounted on the servers. Since I could not find any structured cabling product that would do what I wanted within the confines of an enclosed colo rack, I used the space between the two racks to run cabling vertically. I used velcro from a sparky supply store to bundle the cables, then zip-tied the velcro to the horizontal supports. Not the neatest thing going, but it's easy to split the bundle and push/pull slack around when you need it.
The switches are located in the adjacent rack, which houses all my standard servers (databases, NAS, etc.). The cable bundle you see in the last two pictures goes up to the top, then over to that adjacent rack, where it terminates into 288 ports of switch.
As you can see from the third picture, these servers mount back to back with approximately 8" of clearance between them, for a total of 78 servers in a standard 42U rack. Each server is a quad-core 2.33GHz CPU, 4GB ECC RAM (expandable to 8GB), a 160GB RAID-1, and dual gigabit Ethernet controllers. They draw around 50-55 watts at idle and around 125-130 watts at max load. The only custom modification we make to the server build is bolting two 40mm/28CFM fans onto the back to ensure adequate front-to-back airflow. On-board fan throttling is enabled, and we have never seen these fans spin up in the colo from a server getting too hot. I attribute that to the constant 72 degrees F and 70% humidity that are maintained, as well as the 300CFM blower fans that push fresh cool air up the door, using it like an air plenum. There is a vented tile under the server rack that feeds fresh air into the blower intakes (see the bullet about greasing the colo staff).
By far, the things I take the most pride in are the density of the installation and its relative efficiency (power and heat). It has been an interesting build so far, and I've had to completely strip the rack out and rebuild it once to fix some of my initial deficiencies.
So that's it. I knew a few people were interested in the outcome of this project of mine. Let me know if you have any questions about it. Thanks.