Performance measurement of SAS 15K drive

SAS drives are widely used in enterprise environments. To get a better picture of current enterprise-level storage, it helps to know how a single SAS disk performs. There are documents that describe this roughly, but I did not find one detailed enough to show, for example, how a SAS disk performs when different numbers of threads access the data (in other words, at different IO depths).

So, I decided to test it myself.


1 Goal

To see how a SAS disk performs at different IO depths. Here, IO depth may also mean:

  • number of threads doing IO
  • number of outstanding IOs
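
Note that these two are not exactly the same thing. In fio terms they map to two different options: numjobs controls the number of worker threads/processes, while iodepth controls how many asynchronous IOs each worker keeps in flight. A hypothetical pair of runs against /dev/sdb could separate the two like this (the values here are illustrative, not the exact ones used in my tests):

# 64 workers, each with a single outstanding IO
fio --name=many-workers --filename=/dev/sdb --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --numjobs=64 --iodepth=1 \
    --runtime=60 --time_based --group_reporting

# 1 worker with 64 asynchronous IOs outstanding
fio --name=deep-queue --filename=/dev/sdb --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --numjobs=1 --iodepth=64 \
    --runtime=60 --time_based --group_reporting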

2 Environment

I am using Linux with a self-compiled kernel, version 3.6.5.

xz-ws xz # uname -a
Linux xz-ws 3.6.5 #1 SMP Thu Nov 15 15:57:48 CST 2012 x86_64 GNU/Linux

I used one 300GB 15K SAS disk (Seagate, model number ST3300657SS) in the test, connected to my workstation via a SAS HBA controller. It shows up as block device /dev/sdb on my Linux system.
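
For reference, a quick way to double-check what the kernel sees for this disk is something like the commands below (a sketch; it assumes lsscsi and smartmontools are installed, and that the disk really is /dev/sdb):

lsscsi                # list SCSI devices with vendor/model and device node
smartctl -i /dev/sdb  # print vendor, product, capacity and rotation rate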

3 Test result: hdparm

hdparm is a Linux tool to get/set SATA/IDE device parameters. It can also be used for a basic disk benchmark with the -t/-T parameters, without file system overhead. So we can get a rough idea of the SAS disk's performance with this tool, though only for the sequential read IO pattern.

param   usage
-t      perform un-cached sequential reads
-T      perform cached sequential reads

xz-ws xz # hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   17362 MB in  2.00 seconds = 8691.60 MB/sec
 Timing buffered disk reads: 594 MB in  3.00 seconds = 197.69 MB/sec
xz-ws xz # hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   17388 MB in  2.00 seconds = 8705.05 MB/sec
 Timing buffered disk reads: 594 MB in  3.00 seconds = 197.94 MB/sec

The cached case is also shown here, though it mostly measures memory/page-cache throughput and is not too useful for us.

We can see that raw sequential read on a 15K SAS disk can reach about 200 MB/s.

4 Test result: fio

Fio is an IO test tool for Linux (actually it now supports many operating systems, such as Windows, macOS and Android). There are many parameters that may influence the result:

  • direction: read/write
  • access pattern: seq/random
  • block size: 512B, 4KB, 8KB, …

So, I am going to test the performance under all these different conditions. To achieve this, I used the tool fiox to run all the test cases for me automatically (fiox is a Perl wrapper I wrote that adds variable support to .fio configuration files). I also used another tool to generate the graphs, which internally uses Gnuplot as its backend.
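
I will not paste the fiox configuration files here, but for reference, a plain shell loop sweeping a similar parameter space would look roughly like the sketch below (the block sizes, IO depths and runtime are illustrative assumptions, not the exact values driven by fiox):

for bs in 512 4k 8k 64k; do
    for depth in 1 2 4 8 16 32 64 128; do
        fio --name=randread-bs${bs}-qd${depth} --filename=/dev/sdb \
            --direct=1 --ioengine=libaio --rw=randread \
            --bs=${bs} --iodepth=${depth} \
            --runtime=60 --time_based \
            --output=randread-bs${bs}-qd${depth}.log
    done
done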

Here are the results.

4.1 randread (with different block size and IO depth)

So we can see that IO depth has a great impact on the random read IO pattern. According to experienced engineers, this is probably due to optimizations in the SAS controller's read-queue algorithm. For un-cached IO access, most of the time is spent seeking and waiting for the disk to spin to the right position. With a better queueing algorithm, when there are concurrent IO requests on one disk, the controller can compute an optimized ordering that completes all the requests with minimal seek and rotation time. This kind of optimization cannot happen when there is only one worker thread issuing IO, because there is never more than one request to reorder.
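
Related to this, the queue depth the kernel advertises for the device can be read from sysfs (path shown for /dev/sdb; the actual value depends on the HBA and its driver):

cat /sys/block/sdb/device/queue_depth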

Meanwhile, there is some drop-off after the thread count reaches 64.

So we can say that for small IOs, a 15K SAS disk can reach about 350 IOPS with a single thread and easily exceed 1000 IOPS with 64 concurrent threads.

4.2 randwrite (with different block size and IO depth)

We cannot see the same behavior with the write IO pattern. I assume that is the effect of a write cache mechanism. Though I am not sure which layer the cache belongs to, it seems that IO depth has little effect in the random write case, and we can always get more than 700 IOPS for small random writes, even with a single thread.
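
If you want to check whether the drive's own write cache is enabled, sdparm can read the WCE (write cache enable) bit from the SCSI caching mode page; a sketch, assuming sdparm is installed:

sdparm --get=WCE /dev/sdb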

Please let me know if there is anything questionable in the data. :)

Date: [2013-02-08 Fri 13:14]

Author: Peter Xu

