
Business Phone Systems

sph | Member | Joined: Oct 2007 | Posts: 289
Thanks for all the info. May I ask again that you post your experiences when you find the time?

Going back to the servers, the sticking point for me would be the thermal requirements. One of the reasons 1U server cases are rarely shorter than 18" is to provide adequate airflow and ambient temperature to the components. Some time ago we built a custom-made machine with a single CPU putting out 85 watts, and I remember Intel's docs specifying a pretty impressive minimum CFM of airflow over the processor. The design was not cost-effective for the project we had in mind, though the learning experience was. In any case, our legal counsel killed the idea even though the customer's IT auditors were willing to go along. The reason: we would have no recourse if the machines failed and lawsuits started flying, as they usually do. That additional liability was something we weren't really ready to stomach.


Kumba (OP) | Member | Joined: Jun 2007 | Posts: 2,106
I realize this is an old post, but I just wanted to offer up my experiences building a high-density colo rack. My hardware recipe for the nodes has changed since my original posting, but that was because a CPU went EOL. The benefit: the replacement CPU was more efficient, generated less heat, and was cheaper. The con: the motherboard was about $30 more, which offset the CPU savings.

There are a few things I will change for the next high-density rack I do. HP has come out with some new short-depth 48-port switches that are the same depth as the server nodes (10"). This will let me run a sort of waterfall of cabling down the side of the cabinet to the servers without having to put the switches in an adjacent space. Mine is currently the highest-density installation in my colo, which is somewhat impressive considering there are over 10,000 servers in their environment.

Here are some things I have found noteworthy:

- A standardized hardware BOM is a must!
- A uniform network OS/platform is very beneficial
- Ability to netboot/PXE-boot diags, utils, and OS installs helps a lot
- Managed power strips are worth their price
- Always grease the colo staff with beer and other assorted goodies
- Managing 100 servers requires elaborate, sometimes custom-written monitoring (see the sketch after this list)
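
To give a flavor of what that last bullet means in practice, here is a minimal Python sketch of the kind of poller involved. The hostnames and ports are placeholders (not my real inventory), and a real setup would page someone instead of printing:

[code]
#!/usr/bin/env python3
"""Minimal service poller: check that every node answers on its
service ports, and report failures within one polling cycle."""
import socket
import time

# Placeholder inventory -- real hostnames would come from the BOM/inventory list.
NODES = [f"node{i:02d}.example.net" for i in range(1, 41)]
PORTS = [22, 5038]          # e.g. SSH and the Asterisk manager interface (AMI)
POLL_INTERVAL = 300         # seconds; matches the ~5 minute resolution below

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host in NODES:
        down = [p for p in PORTS if not port_open(host, p)]
        if down:
            # A real deployment would alert (page/email) here.
            print(f"{time.ctime()}: {host} not answering on ports {down}")
    time.sleep(POLL_INTERVAL)
[/code]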

Some of the things I feel I have accomplished that are not so trivial are:
- 40 nodes running off 40 amps of power, with average usage around 22 amps and peaks around 30 (the arithmetic is sketched after this list)
- Standardized high-density nodes let me recover from a total failure in 15 minutes using remote snapshot backups
- Monitoring has a resolution/response time of around 5 minutes for reporting incidents/failures of the various services
- The rack generates significantly less heat than a similar rack equipped with 20 Dell 2850 v3s (measured!)
- Expansion is completely modular and easily distributed among distant racks
- All servers are isolated unto themselves, with no shared backplane or chassis components
- The cost is many magnitudes below any comparable-density blade system, and below the cost (at higher efficiency) of traditional servers of equivalent performance
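
And a quick back-of-envelope check of the power figures in the first bullet, assuming 120 V feeds and a ballpark 65 W typical per-node draw (both of these are assumptions; only the idle and max draws are quoted further down):

[code]
# Sanity check of the rack power figures.
# Assumptions: 120 V circuits, ~65 W typical draw per node.
NODES = 40
VOLTS = 120

for label, watts in [("idle", 55), ("typical", 65), ("max", 130)]:
    total_w = NODES * watts
    print(f"{label:>7}: {total_w} W -> {total_w / VOLTS:.1f} A")

# idle   : 2200 W -> 18.3 A
# typical: 2600 W -> 21.7 A   (matches the ~22 A average)
# max    : 5200 W -> 43.3 A   (all 40 nodes never peak at once,
#                              which is why the observed peak is ~30 A)
[/code]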

And now that all the typing is out of the way, here are some pics (Let the critique begin):
Front Side Close Up

Front with Second Rack

Back Top

Back Bottom


As you can see, the vertical managed power strips are in the center, and all the I/O ports are front-mounted on the servers. Since I could not find any structured cabling product that would accomplish what I wanted within the confines of an enclosed colo rack, I used the space between the two racks to run cabling vertically. I used Velcro from an electrician's supply store to bundle the cables, then zip-tied the Velcro to the horizontal supports. Not the neatest thing going, but it's easy to split the bundle and push/pull slack around when you need to.

The switches are located in the adjacent rack, which houses all my standard servers (databases, NAS, etc.). The cable bundle you see in the last two pictures goes up to the top and then over to this adjacent rack, where it terminates into 288 ports of switch.

As you can see from the third picture, these servers mount back to back with approximately 8" of clearance between them, for a total density of 78 servers in a standard 42U rack. Each of those servers has a quad-core 2.33GHz CPU, 4GB of ECC memory (expandable to 8GB), 160GB of RAID-1 storage, and dual gigabit Ethernet controllers. They draw around 50-55 watts at idle and around 125-130 watts at max load. The only custom modification we make to the server build is bolting two 40mm/28CFM fans onto the back to ensure adequate front-to-back airflow. On-board fan throttling is enabled, and we have never seen these fans spin up in the colo from a server getting too hot. I attribute this to the constant 72 degrees F and 70% humidity that is maintained, as well as the 300CFM blower fans that push fresh, cool air up the door, using it like an air plenum. There is a vented tile under the server rack that feeds fresh air up into the blower intakes (see the bullet about greasing the colo staff).

By far the thing I take the most pride in is the density of the installation and the relative efficiency of it (power/heat). It has been an interesting build so far and I've had to completely strip the rack out and rebuild it once to fix some of my initial deficiencies.

So that's it. I knew a few people were interested in the outcome of this project of mine. Let me know if you have any questions about it. Thanks.

Moderator-Vertical, Vodavi | Joined: Aug 2003 | Posts: 5,154 | Likes: 2
While it certainly isn't a standard configuration, it does look very good. Are you going to try to add more servers to the back side? If so, what happens when you need to access the back of the servers?

What are the USB sticks for? Is it just backup memory, or some sort of dongle?

What kind of servers are these? (You told me before, but I didn't write it down...Sorry.)

Kumba (OP) | Member | Joined: Jun 2007 | Posts: 2,106
Yes, the back side is getting racked up like the front side. The top panel with the four 4" muffin fans is being removed, since the blowers and the servers themselves create enough airflow to carry the hot air up through the top of the rack. By the weekend there will be 10 servers on the back side. At my current rate I will have that rack full in 3-6 months.

The only thing that happens on the back side is power getting plugged in. The cords I use are 3- to 5-foot cords, depending on placement in the rack. Whenever I have to pull a server out, I pull the Ethernet cables out through the rack ears, unbolt it, and just slide the server out. I do need to go through and cut all the warning labels off the power cords to prevent a potential snag, but so far, on the dozen or so servers I've pulled for various reasons, the power cord has slid right out with the server. Maybe it's because it is all new, but they really do seem to have quite a snug fit. The biggest gotcha is a vertical power strip going bad. In that case I have no choice but to pull one side of the rack down, pull the strip out of the middle, and put a new one in. The rack pictured is on the end of an aisle, so I can take the whole side off, reach behind the servers, fish the strip out that way, and access all the plugs if I have to. So far I haven't needed to do that at all.

The USB sticks are a dedicated 1kHz hardware timer that Asterisk uses for audio framing and transcoding. It allows me to run the servers at a higher load without audio-quality issues, since the dedicated timer is not dependent on CPU load the way a software timer is. It translates into about another 15-20% of usable horsepower before the system gets overloaded in other areas.
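
To illustrate why independence from CPU load matters, here is a toy sketch (not Asterisk code) of a software 1 kHz timer. Under load, sleep()-based ticks arrive late, and late ticks are exactly what chops up 20 ms audio frames:

[code]
# Toy demonstration of software-timer jitter at 1 kHz (1 ms ticks).
import time

TICK = 0.001                     # 1 ms period, i.e. a 1 kHz timer
worst = 0.0
next_tick = time.monotonic()

for _ in range(5000):            # run for ~5 seconds
    next_tick += TICK
    delay = next_tick - time.monotonic()
    if delay > 0:
        time.sleep(delay)        # the scheduler may oversleep under load
    worst = max(worst, time.monotonic() - next_tick)

print(f"worst tick lateness: {worst * 1000:.3f} ms")
# On a busy box the lateness grows to a full tick or more; a dedicated
# hardware timer keeps delivering its interrupts on schedule regardless.
[/code]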

The servers are SuperMicro brand. It's the only brand I use or will sell as an enterprise server. The current hardware recipe is as follows:
x1 Intel 2.33GHz Core 2 Quad CPU
x1 SuperMicro motherboard
x1 Crucial 4GB (2x2GB) ECC memory kit
x2 Western Digital 160GB 2.5" hard drives
x1 SuperMicro 10" 1U front-I/O case
x2 40mm x 28mm 28CFM fans (LOUD buggers)

I'm using Linux RAID for ease of portability with the hard drives. I can also add another 4GB of memory for a total of 8GB if need be. Out of the 100 servers I've built and used, I have had only two failures: one where the second Ethernet port quit working, and one where a power supply failed after a few months. For the power-supply failure, all I did was pop the hard drives out and throw them in another chassis; it only took about 15 minutes once I got my hands on the chassis.
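
For anyone wondering why the drive swap is that painless: the Linux RAID (md) superblocks live on the disks themselves, so the mirror can be reassembled on any chassis. Roughly, it comes down to this (wrapped in Python just for illustration; device names will vary):

[code]
# Reassembling a Linux software-RAID mirror in a replacement chassis.
# mdadm reads the RAID superblocks off the disks themselves, which is
# what makes the drive pair portable between identical machines.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("mdadm", "--assemble", "--scan")      # find and assemble all arrays
run("mdadm", "--detail", "/dev/md0")      # confirm both mirror members are up
[/code]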

If you want the actual hardware BOM just send me a PM. I'll shoot it across to ya along with a few instructions on mounting the fans (requires some decent tin snips).

Member | Joined: Feb 2009 | Posts: 664
Nice photos and details. I <3 Supermicro servers. Even their low-end Atom 1U servers are quality. I put together one such server for a LUG I am a member of: a nice, small, low-powered server for hosting the LUG's web site and e-mail.

Kumba (OP) | Member | Joined: Jun 2007 | Posts: 2,106
SuperMicro is one of VERY few manufacturers we have found that focus on build quality and compatibility with Linux. In my experience they have no brand-specific quirks that you have to work around, like you do with Dell, HP, or IBM.

I just wish I could buy directly from them, because I sometimes run into issues with their suppliers not having enough quantity for my orders.

Member | Joined: Feb 2009 | Posts: 664
Do those USB sticks ever need to be removed? If not, I was thinking you might be able to mount them inside the case with a USB tail that hooks to one of the USB headers on the motherboard (assuming there is a free header). One less thing to get broken off or removed accidentally. Still a pretty cool setup. Wish I could see it.

Kumba (OP) | Member | Joined: Jun 2007 | Posts: 2,106
Sometimes they hang because of a driver issue. We went with external sticks because, before we had remotely managed power strips, we would have to physically reseat them. Since it's not a major ordeal, we stick with them for uniformity.
