
I've been curious about SATA Port Multipliers because of my home storage server. SATA is great stuff, and not that bad even when dealing with 10 drives in a single relatively small case. However, when you outgrow that case, or just as likely the power supply, you need to start adding drives externally. But do I really want 5 or 10 normal SATA cables routing out of my case? While it's easy to get 8 internal SATA ports, 8 eSATA ports are quite unusual.

I recently found that SATA II supports Port Multipliers, allowing multiple drives to be connected to a single SATA port. Sounds like just the trick, but how good is the support for them? Read on for more information.

There's some hard math involved in this though. Is it better to have 4x $200 1TB drives that can be mounted internally, or 8x $75 500GB drives that require a $240 external enclosure and an $80 eSATA card? If we're talking just straight storage per dollar, the four 1TB drives are a clear winner.

What if I really want to do RAID-5 or RAID-6 (RAID-5 with two parity drives)? Once we add parity, the 500GB drives move into the lead. Then there's future expansion and power consumption to think about… It's pretty easy to get stuck in analysis paralysis.
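To make the trade-off concrete, here's a quick back-of-the-envelope sketch using just the figures above (the `parity` argument counts drives given up to RAID-5 or RAID-6 parity):

```python
# Back-of-the-envelope: dollars per usable TB for the two options above.
# RAID-5 gives up one drive's capacity to parity, RAID-6 gives up two.

def cost_per_usable_tb(drives, tb_each, cost_each, extra=0, parity=0):
    """Total cost divided by usable (non-parity) capacity in TB."""
    return (drives * cost_each + extra) / ((drives - parity) * tb_each)

for parity, label in [(0, "raw"), (1, "RAID-5"), (2, "RAID-6")]:
    internal = cost_per_usable_tb(4, 1.0, 200, parity=parity)
    # External: 8 drives plus the $240 enclosure and the $80 eSATA card.
    external = cost_per_usable_tb(8, 0.5, 75, extra=240 + 80, parity=parity)
    print("%-6s  4x1TB: $%.0f/TB   8x500GB: $%.0f/TB"
          % (label, internal, external))
```

Raw, the 1TB drives win ($200/TB versus $230/TB); with one parity drive it's nearly a wash (about $267/TB versus $263/TB); and with two parity drives the 500GB array pulls clearly ahead ($400/TB versus about $307/TB).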

This post is about SATA port multipliers, though. Basically, these act like a network switch, allowing one host port to talk to multiple drives.

My test equipment included five 500GB Hitachi SATA II drives in an AMS DS-2350S 5-bay external enclosure, connected to an Addonics ADS3GX4R5-E PCI-X card with 4 eSATA ports. Note that the enclosure comes with a 2-port eSATA PCIe card, but I didn't have any PCIe slots to test it in. I got the Addonics card because I saw reports of it working with the Silicon Image Port Multiplier that is in the AMS enclosure.

This enclosure has one eSATA port on the back, connected to the port multiplier, and 5 drive bays internally. So all five drives share a single eSATA cable back to the controller.

One thing that confused me initially was that the card's BIOS only sees one drive, and only one drive spins up during the initial pre-boot. I took this as a bad sign, but I contacted Addonics and they reported that this was expected… I had also tested by booting CentOS 5, which likewise only saw one drive.

However, CentOS 5 is an Enterprise release with an older kernel. I tried Ubuntu Hardy, which was released only a couple of months ago and so ships a much more recent kernel; it spun up and saw all the drives.

Well, almost… Sometimes when I boot, particularly after a hard power cycle of all the gear involved, some of the drives (randomly) will not be detected, generating an error when Linux tries to find them. When that happens, a reboot has always brought up all the drives on the next boot.
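If you want to catch that case automatically, something like the following minimal sketch could run at boot. It assumes Linux's sysfs layout, that all the drives show up as sd* devices, and a hypothetical EXPECTED_DISKS count you'd set for your own hardware:

```python
#!/usr/bin/env python
# Boot-time sanity check: did Linux detect all the disks we expect?
# Assumes disks appear under /sys/block as sd* entries; EXPECTED_DISKS
# is hypothetical -- set it to whatever your hardware should present.

import glob
import sys

EXPECTED_DISKS = 6  # e.g. 5 drives behind the port multiplier + 1 system disk

disks = sorted(glob.glob("/sys/block/sd*"))
print("Detected %d disk(s): %s" % (len(disks), ", ".join(disks) or "none"))

if len(disks) < EXPECTED_DISKS:
    # In my experience a reboot has always brought the missing drives back.
    sys.stderr.write("WARNING: expected %d disks, found %d\n"
                     % (EXPECTED_DISKS, len(disks)))
    sys.exit(1)
```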

Now, obviously, you are sharing the bandwidth of a single SATA II link across these drives. SATA II is 3Gbps, or around 300MB/sec. 1TB drives can run in full streaming at around 100MB/sec, but with the 500GB drives I'm seeing more like 65MB/sec. So at full streaming speed (writing a single huge file), the five 500GB drives together are just a bit faster than the SATA II link. However, if you write multiple files, the throughput of your drives drops way down: every seek “costs” around half a MB to a MB of transfer, and every file written can take several seeks to accomplish.
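The arithmetic is simple enough to sanity-check (a minimal sketch using the rough figures above):

```python
# Five 500GB drives streaming flat-out vs. one SATA II link.
SATA2_MB_SEC = 300   # 3Gbps line rate, minus 8b/10b encoding overhead
DRIVE_MB_SEC = 65    # observed streaming rate of one 500GB drive
DRIVES = 5

aggregate = DRIVES * DRIVE_MB_SEC
print("Drives can stream %d MB/sec into a %d MB/sec link"
      % (aggregate, SATA2_MB_SEC))
# -> 325 MB/sec into a 300 MB/sec link: the link is (barely) the
#    bottleneck, and only for single-file streaming; seek-heavy
#    workloads never get close to it.
```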

I'm not particularly concerned with the streaming performance, and I'm really not that concerned about performance in this case anyway. I mean, I only have one gigabit link into that box (around 125MB/sec), so that's going to be my limit for streaming in most cases.

So, while they do require a fairly recent kernel, and there are still some small issues, it looks like SATA port multipliers are a good way to connect a backplane or external enclosure to a controller. It's probably a bit early right now (July 2008), but it is definitely doable with some effort.
