This blog forms part of the website, which focuses on performance tuning UNIX-like operating systems (including Linux).

Wednesday, September 14, 2005


Sun's New Galaxy Servers To Run Linux...

Now that I have your attention: yes, it's sort of true - the embedded daughterboard, designed by Sun, that provides the ILOM facilities runs embedded Linux. Anyway, this gem came from "Sun Fire™ X4100 and X4200 Server Architectures - A Technical White Paper". It's a very worthwhile read, giving an excellent background on these new servers.

It focuses quite closely on an idealised Intel architecture, ignoring the X-Architecture chipsets from IBM, which deal with the northbridge/southbridge side of things. I'm sure this is good for beating up Dell, but it could be somewhat of an over-simplification of modern Intel architectures.

However, the TCO claims Sun has been making in the media are interesting (twice the performance at half the power, which works out to a quarter of the power per unit of work), although I would like to see some data to back it all up. I will go trawl for the data this weekend; I am always somewhat cynical of these large claims concerning performance and power until I see the configurations and the numbers (if anyone knows where these are, please leave a little something in the comments and I will update this post).
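The arithmetic behind that claim is straightforward; here is a quick sketch using made-up baseline figures (the numbers are hypothetical, only the ratio matters):

```python
# Sketch of Sun's "twice the performance at half the power" claim,
# using hypothetical baseline figures purely to illustrate the ratio.
baseline_perf = 100.0   # arbitrary units of work per second (made up)
baseline_watts = 500.0  # hypothetical baseline power draw

galaxy_perf = 2 * baseline_perf    # claimed: twice the performance
galaxy_watts = baseline_watts / 2  # claimed: half the power

baseline_ratio = baseline_watts / baseline_perf  # watts per unit of work
galaxy_ratio = galaxy_watts / galaxy_perf

# Half the power spread over twice the work = a quarter of the power
# per unit of work.
print(galaxy_ratio / baseline_ratio)  # 0.25
```

Of course, the whole question is whether the "twice the performance" half of the claim holds up on real workloads - hence wanting the configurations and numbers.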

After looking at the white papers I was quite impressed with the basic architecture; it seems very solid. The following things impressed:

- use of IPMI 2.0 (assuming it's stable/bug free)
- use of standardised components across the architecture (assuming this continues beyond the first two models)
- availability of IPMI options (although this is fairly standard in most of the current generation of servers)
- the attention to ILOM and the many options available
- support for SMASH/CLP

Things that failed to impress were:

- having to use a dedicated management port (this seemed like a poor decision when we are trying to drive up utilisation - better a VLAN on one of the Intel 82546EB ports)
- placing a card slower than 100 MHz in PCI slot zero will cause the whole bus to clock at that speed, affecting the performance of the LSI SAS1064
- forcing the use of SAS drives and not supporting SATA on the LSI SAS1064
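To put rough numbers on that bus-clocking point, here is a back-of-envelope calculation; I am assuming a 64-bit PCI-X bus (check the white paper for the exact widths and clocks on each slot):

```python
# Back-of-envelope PCI-X peak bandwidth. On a shared parallel bus the
# whole bus clocks down to the speed of the slowest card installed,
# so one slow card in slot zero drags the LSI SAS1064 down with it.
BUS_WIDTH_BITS = 64  # assumption: 64-bit PCI-X bus

def peak_mb_per_s(clock_mhz, width_bits=BUS_WIDTH_BITS):
    """Theoretical peak transfer rate in MB/s for a parallel PCI-style bus."""
    return clock_mhz * 1e6 * width_bits / 8 / 1e6

full_speed = peak_mb_per_s(100)  # bus populated only with 100 MHz cards
degraded = peak_mb_per_s(33)     # same bus after inserting a 33 MHz card

print(full_speed, degraded)  # 800.0 264.0
```

So a single legacy 33 MHz card cuts the theoretical peak of the shared bus to roughly a third, which is a lot of headroom to give up on the slot hosting your SAS controller.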

All in all it seems like a reasonable first crack at the "Standard Server Market", but I'm not sure if it's really plane-flying material. No doubt I will get the chance to benchmark one of these in the near future, so maybe I will be more impressed after I have done this - I'm open to being more impressed!

You were more right than you thought - the new Galaxy servers support Solaris 10 x64, Red Hat Linux, and SuSE Linux, all at a peer level of support, so not only will Sun sell you Linux, they will provide support, all the way up to the critical level of two hours to your doorstep if needed. You can learn more by checking the NC03Q5 video entry and the link within.
Thanks for that James, I was wondering why chapter eight in the white paper that I referred to was about the "Sun Installation Assistant", a Linux installation tool with the additional drivers required for Linux - but that clears it up: it's because they support Linux 8^)...
The Sun Installation Assistant is a Linux meta-installer that loads its own Linux kernel (with the required storage drivers included), starts the SuSE or RedHat installer in a controlled sandbox, and then adds the LSI driver RPMs after the Linux installer has completed. This eliminates the need for driver disks.

The decision between a dedicated management network and a shared management network is always a personal preference, and no matter what choice is made, roughly half the population will be unhappy. Nevertheless, in this case there is a fundamental reason: shared-NIC IPMI management implementations generally send IPMI packets to the BMC over an I2C bus from the NIC. While this is fine (mostly) for IPMI traffic, it does not scale for the out-of-band video and storage requirements (RKVMS) or for remote CLI via SSH.
My understanding was that I2C operating in high-speed mode maxed out at about 3.4 Mbps - I still think that this could easily have been dealt with as a VLAN on a gigabit card and is not justification for an on-board management port, but as you say this is really personal preference.
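Some rough numbers on that bandwidth point (the frame geometry, colour depth and refresh rate are my own assumptions, and real RKVMS implementations compress, so treat this as an upper bound for uncompressed video):

```python
# Rough comparison of high-speed I2C bandwidth against uncompressed
# remote-console video. Resolution, depth and frame rate are assumptions.
I2C_HS_MBPS = 3.4  # I2C high-speed mode, per the I2C-bus specification

width, height = 1024, 768   # assumed console resolution
bits_per_pixel = 8          # assumed colour depth
frames_per_sec = 5          # assumed (deliberately low) refresh rate

video_mbps = width * height * bits_per_pixel * frames_per_sec / 1e6

print(round(video_mbps, 1))                    # 31.5
print(round(video_mbps / I2C_HS_MBPS, 1))      # 9.3 - roughly 9x HS I2C
```

So even a modest uncompressed console feed is an order of magnitude beyond what high-speed I2C can carry - which supports the case against the shared-NIC path for RKVMS, though a VLAN on a gigabit port would also clear that bar comfortably.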

My main objection is simply that it doubles the number of ports required to connect the thing in a minimal network configuration, and this can be expensive when using high-end switching equipment in a Data Center.

Since transitioning to commodity x86 equipment my dislike of doubling up has increased. Network card failures and SCSI card failures are now so rare that they constitute an exceptional event, and I therefore expect to fail over to DR for this (we have a legal requirement to have DR).

We still run most of our network at 100 Mb/s (for cost reasons), so the four gigabit ports are not necessarily useful and perhaps I would rather have the slots; again, this is all personal preference and particular to the rules we use for our DC designs.

Yes, the design of the Sun Installation Assistant is an attempt to bypass the process of having drivers accepted into the initrd images of the various vendors, but again I would say this is of limited value in an Enterprise environment where kickstart is in use and you probably roll your own initrd images to support the exact hardware and driver set (remember, the big pitch here is that these boxes are designed 100% with the DC in mind).

Again, this is personal preference, and I would rather have seen them just get their drivers put into the next/previous quarterly update and provide them separately for those who roll their own.