I posted the other day that I decided to give up on the network RAID configuration I did with my first iSCSI SAN setup using Ubuntu, iSCSI Enterprise Target (IET), ZFS and GlusterFS. The problem was with data integrity after the LUN files were replicated to the passive node. If you take out the complication of GlusterFS though, the Ubuntu/IET setup is really easy and rock solid. The only issue with IET is that it currently doesn't support SCSI-3 features like persistent reservations, which VMware likes and which Windows Server 2008 R2 requires for failover clustering. I haven't had an issue with VMware, but I cannot get failover clustering in Windows Server 2008 R2 to work with it. The good news is that according to this thread, SCSI-3 support should be available in the next version of IET.
Anyway, I decided to rebuild both nodes into stand-alone SANs. By themselves they are redundant enough for my liking: I have redundant NICs, the hard drives are in a RAID array, and I have redundant power supplies. Really, the only single points of failure on each node are the motherboard and the RAID controller. To mitigate that risk I am purchasing an additional motherboard and a RAID card to have on standby.
The setup for the individual nodes is actually way easier, so I thought I would post how I have mine set up so you can do the same. I will be referring to the original post on some things to save time, and because I set up the stand-alones slightly differently. Mainly, I decided not to use ZFS in my setup because I'm not using the compression or dedupe features. I decided to go with XFS because you can format the storage partition at install time and be done with it. I didn't go with EXT4 because it has a 16TB partition limit. Also, you can tweak the performance of your XFS partition. I found an interesting guide on how to do that here: (Tweak XFS for RAID Performance)
For instance, after some Google searching I found that the default stripe size used on a 3ware 9750-4i card is 256K. Since mkfs.xfs takes sunit in 512-byte sectors, that makes my sunit 512. With 12 drives in RAID 6 there are 10 data disks, which makes my swidth 5120 (10 disks x sunit), so to optimally run XFS on my storage I formatted it using the following command:
#mkfs.xfs -d sunit=512,swidth=5120 /dev/sdb1
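If you would rather not do the sector math yourself, mkfs.xfs also accepts the stripe geometry directly as su (the stripe unit in bytes) and sw (the number of data disks). As far as I can tell the following is equivalent for a 256K stripe across 10 data disks, but double check the numbers against your own controller and RAID level:
#mkfs.xfs -d su=256k,sw=10 /dev/sdb1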
If you want to go with ZFS instead, just follow the setup instructions from the original post.
So let's get to the meat. Here is an overview of my partition setup:
Device     | Mount Point | Format | Size
/dev/sda1  | /           | ext4   | 9GB
/dev/sda2  | None        | swap   | 1GB
/dev/sdb1  | /data       | xfs    | 18TB
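For reference, an /etc/fstab entry for the storage partition would look something like the line below. This is just a sketch; the inode64 and noatime mount options are common suggestions for very large XFS volumes, not something this setup requires:
/dev/sdb1    /data    xfs    defaults,inode64,noatime    0    0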
If you look at the original post you will see that the storage partition lost some space. That's because I rebuilt the RAID array using RAID 6. The reasoning behind that was a recommendation from Mike McAfferty, the CEO of M5 Hosting in San Diego. He said that with that much storage I should use RAID 6 so the array can sustain two drive failures, because with that much data the chance of a second drive failing while rebuilding the array after a failure is higher. I couldn't argue with that logic at all. Done!
After Ubuntu is installed and your storage is partitioned and configured, run the following to install IET and a few other goodies:
#sudo apt-get install snmpd ifenslave iscsitarget sysstat
I install snmpd so I can monitor the SANs with Zenoss, ifenslave so I can team my NICs, and sysstat so I can monitor I/O performance. I teamed my NICs the same way as in the original post; a rough sketch of that bonding config is below.
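The interface names, the bond mode and the 100.100.10.x address in this /etc/network/interfaces example are just placeholders for illustration, so match them to your own hardware and the original post:
auto bond0
iface bond0 inet static
    address 100.100.10.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-alb
    bond-miimon 100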
After everything is installed you can set up your storage LUNs using the dd command. Change into /data, and if you want a thin provisioned LUN, here is an example of the command you would use for a 1TB LUN:
#sudo dd if=/dev/zero of=LUN0 bs=1 count=0 seek=1T
If you want a 1TB thick provisioned LUN, run the following instead (this one actually writes out the zeros, so it will take a while):
#sudo dd if=/dev/zero of=LUN0 bs=1M count=1048576
Those commands will create a file called LUN0 that your servers will use for storage. You can name that file whatever you want.
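To sanity check the result, compare the file's apparent size with the space it actually takes up on disk; a thin provisioned LUN will show almost no allocated space, while a thick one will show the full 1TB. This is just a quick check, not part of the original setup:
#ls -lsh /data/LUN0
#du -h --apparent-size /data/LUN0
#du -h /data/LUN0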
Next we configure IET. Change into /etc/iet and edit the ietd.conf file with your favorite editor. You can delete all the crap in the original file. Assuming you created your LUN file with the same name as above, enter the following in ietd.conf:
Target iqn.2012-03.BAUER-POWER:iscsi.LUN0
    Lun 0 Path=/data/LUN0,Type=fileio,ScsiSN=random-0001
    Alias LUN0
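If you end up carving out more than one LUN, each one just gets its own Target block. Here is a hypothetical second entry, assuming a second file called LUN1 in /data; the ScsiSN just needs to be unique:
Target iqn.2012-03.BAUER-POWER:iscsi.LUN1
    Lun 0 Path=/data/LUN1,Type=fileio,ScsiSN=random-0002
    Alias LUN1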
You can add CHAP authentication here too if you want, but I'll let you Google that. I don't use CHAP because my iSCSI network is separate and has no access from outside the network. I do, however, lock down connections to LUNs by IP address. To lock down by IP, open the initiators.allow file in /etc/iet, delete all the junk in there, and add the following:
Target iqn.2012-03.BAUER-POWER:iscsi.LUN0 100.100.10.148
That restricts access to LUN0 to only the server with the IP address of 100.100.10.148. After those are configured, restart the IET service by running the following:
#service iscsitarget restart
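If the restart complains or nothing shows up, check /etc/default/iscsitarget; on some Ubuntu releases the package ships with ISCSITARGET_ENABLE=false and the daemon won't start until you flip it to true. Once it is running, you can verify that your LUN is actually being exported by looking at the IET proc interface:
#cat /proc/net/iet/volume
#cat /proc/net/iet/session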
That’s it, now you are ready to store some data!
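As a quick test before pointing VMware or Windows at it, you can make sure the target answers discovery from a Linux box that has open-iscsi installed and an IP allowed in initiators.allow. The address below is just the placeholder SAN IP from the bonding sketch above:
#iscsiadm -m discovery -t sendtargets -p 100.100.10.10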
If this write up doesn't make sense, read the original post and fill in the blanks. It's basically the same setup without ZFS, Heartbeat and GlusterFS. This setup just works, and without the complexity of GlusterFS I think it's safer and simpler. It also still gives you the option of thin provisioning, which is nice if you want to overcommit your storage. You can still do the dedupe and compression stuff if you decide to go with ZFS as well; just make sure you have the proper hardware for it.
This setup gives me 18TB of storage for about $6,200. If you have any questions or comments about this setup, let me know in the comments!