NOTE: Please read my post about installing SCST on Ubuntu 18.04 first...
Many moons ago I wrote about how to configure an Ubuntu Linux based iSCSI SAN. The first iteration used iSCSITarget as the iSCSI solution. The problem with that is that it didn't support SCSI-3 Persistent Reservations. That means it wouldn't work for Windows failover clustering, and you would probably see issues if you were trying to use it with VMware, XenServer or Hyper-V.
The second iteration used SCST as the iSCSI solution, and that did work pretty well, but you had to compile it from source and the config file was kind of a pain in the ass. Still though, it did support SCSI-3 Persistent Reservations, and was VMware ready. It's the solution I've been using since 2012 and it's worked out pretty well.
Well, the other day I decided to rebuild one of the original units I set up from scratch. The first two units I did this setup on were SuperMicro SC826TQs with 4 NICs, 2 quad core CPUs, 4GB of RAM, a 3Ware 9750-4i RAID controller, and twelve 2TB SATA drives. This sucker gave me about 18TB of usable backup storage after I configured the 12 disks in RAID 6.
This time I used Ubuntu 18.04 server because unlike the first time I did this, the latest versions of Ubuntu have native drivers for 3Ware controllers. On top of that, the latest versions of Ubuntu have the iSCSI software I wanted to use in the repositories... More on that later.
I partitioned my disk as follows:
Device    | Mount Point | Format    | Size
/dev/sda1 | N/A         | bios/boot | 1MB
/dev/sda2 | /           | ext4      | 10GB
/dev/sda3 | N/A         | swap      | 4GB
/dev/sda4 | /data       | xfs       | 18TB
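I did the partitioning through the Ubuntu installer, but if you wanted to reproduce roughly the same layout by hand, it would look something like this (the device name /dev/sda and the sizes are from my box, so adjust for yours):

# GPT label plus a tiny BIOS boot partition, then root, swap and the big data partition
sudo parted -s /dev/sda mklabel gpt
sudo parted -s /dev/sda mkpart grub 1MiB 2MiB
sudo parted -s /dev/sda set 1 bios_grub on
sudo parted -s /dev/sda mkpart root ext4 2MiB 10GiB
sudo parted -s /dev/sda mkpart swap linux-swap 10GiB 14GiB
sudo parted -s /dev/sda mkpart data xfs 14GiB 100%
sudo mkfs.ext4 /dev/sda2
sudo mkswap /dev/sda3
sudo mkfs.xfs -f /dev/sda4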
After Ubuntu was installed I needed to set up my network team. Ubuntu 18.04 uses Netplan for network configuration now, which means that NIC bonding or teaming is built in. To set up bonding or teaming, you just need to modify your /etc/netplan/50-cloud-init.yaml file. Here is an example of how I set up my file to team the four NICs I had, as well as use MTU 9000 for jumbo frames:
network:
  version: 2
  ethernets:
    enp6s0:
      dhcp4: no
      dhcp6: no
      mtu: 9000
    enp7s0:
      dhcp4: no
      dhcp6: no
      mtu: 9000
    enp1s0f0:
      dhcp4: no
      dhcp6: no
      mtu: 9000
    enp1s0f1:
      dhcp4: no
      dhcp6: no
      mtu: 9000
  bonds:
    bond0:
      interfaces: [enp6s0, enp7s0, enp1s0f0, enp1s0f1]
      mtu: 9000
      addresses: [100.100.10.15/24]
      gateway4: 100.100.10.1
      parameters:
        mode: balance-rr
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
It's important to note that Netplan is picky about indentation. You must have everything properly indented or you will get errors. If you copy the above config, and modify it for your server, you should be fine though.
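Once the file is saved, you can test and apply it. A quick sanity check I like to run (assuming your bond is named bond0 like mine):

sudo netplan try                 # applies the config and rolls back if you don't confirm
sudo netplan apply               # applies the config for good
cat /proc/net/bonding/bond0      # verify the bond came up with all four slaves
ip addr show bond0               # verify the IP address and MTU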
After setting up my bonded network, I installed my software. I opted to use tgt this time. If you are unfamiliar with it, it's apparently a re-write of iscsitarget, but it supports SCSI-3 Persistent Reservations. I tested it myself using a Windows Failover Cluster Validation test:
Boom! We're in business!
To install tgt simply run the following:
sudo apt-get install tgt

After installing, you will want to create a LUN file in /data. To create a thin provisioned disk, run the following:
sudo dd if=/dev/zero of=/data/lun1 bs=1 count=0 seek=1T

This creates a 1TB thinly provisioned file in /data called lun1 that you can present to iSCSI initiators as a disk.
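If you're curious whether the file really is thin provisioned, you can compare its apparent size to the space it actually uses on disk:

ls -lh /data/lun1   # shows the apparent size (1TB)
du -h /data/lun1    # shows the blocks actually allocated (basically nothing until data is written)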
If you want to create a thick provisioned disk instead, run:

sudo dd if=/dev/zero of=/data/lun1 bs=1M count=1048576

This writes out the full 1TB up front instead of allocating it lazily. Once you have your LUN file, you will want to create a config file for it. You can create separate config files for each LUN you want to make in /etc/tgt/conf.d. Just append .conf to the end of the file name and tgt will see it when the service restarts. For our purposes, I created one called lun1.conf and added the following:
<target iqn.2018-05.bauer-power.net:iscsi.lun1>
    backing-store /data/lun1
    write-cache off
    vendor_id www.Bauer-Power.net
    initiator-address 100.100.10.148
</target>
The above creates an iSCSI target and restricts access to it to the single IP address 100.100.10.148. You can also use initiator-name to restrict access to particular iSCSI initiators, or you can use incominguser to specify CHAP authentication. You can also use a combination of all three if you want. Restricting by IP works for me though.
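For example, if you wanted to lock a target down by initiator name and CHAP credentials in addition to IP address, the stanza might look something like this. The initiator IQN, username and password below are made up, so swap in your own:

<target iqn.2018-05.bauer-power.net:iscsi.lun1>
    backing-store /data/lun1
    write-cache off
    initiator-name iqn.1991-05.com.microsoft:cluster-node1
    incominguser iscsiuser SuperSecretPass123
    initiator-address 100.100.10.148
</target>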
I also opted to disable write-cache because with it enabled I noticed that tgt was pegging my RAM. On top of that, my RAID controller handles write caching on its own, so it actually helped my performance to disable it.
All of this being said, you can find lots of configuration options here: (tgt Config Options)
After you have your file created, all you have to do is restart the tgt daemon and you're ready to serve up your iSCSI LUN!
sudo service tgt restart

After you restart, you can see your active LUNs by running:
sudo tgtadm --op show --mode target

You can also create LUNs on the fly without restarting tgt. This is handy if you need to add a LUN and you don't want to disrupt connections to LUNs you've already created. To do that, create your LUN file like you did before. Obviously, name it something new like lun2.
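For example, another 1TB thin provisioned file for the new target could be created with the same dd trick as before, just with a new file name:

sudo dd if=/dev/zero of=/data/lun2 bs=1 count=0 seek=1T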
Next, make sure to note which targets you already have running by running this command:
sudo tgtadm --op show --mode target

Target 1 = tid 1, Target 2 = tid 2, and so on and so forth. If you only have one target, then your next target will be tid 2. Assuming that, and assuming your new LUN file is called lun2, you would run:
sudo tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2018-05.bauer-power.net:iscsi.lun2
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /data/lun2
sudo tgtadm --lld iscsi --op bind --mode target --tid 2 --initiator-address 100.100.10.148
This creates the new target and makes it available only to 100.100.10.148. If you want to allow other IPs, re-run that last line for each IP address you want to allow.
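If you fat-finger something, you can also tear a target back down on the fly. For example, to remove the target we just created (make sure no initiators are connected to it first):

sudo tgtadm --lld iscsi --op delete --mode target --tid 2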
Now if you want to have this LUN persist after a reboot, you can either manually create a conf file in /etc/tgt/conf.d/ or you can run the following to automatically create one for you:
tgt-admin --dump | sudo tee /etc/tgt/conf.d/lun2.conf

The only issue with the above is that it dumps all running target information into your new file, so you will have to go in there and remove the other targets. In this case, it's just better to manually create the config file... but that's just me. Also, that is not a typo... tgt-admin is a different tool than tgtadm... Weird, right?
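If you go the manual route, the config file for the new target is just a copy of the first one with the names changed. Something like this in /etc/tgt/conf.d/lun2.conf would do it:

<target iqn.2018-05.bauer-power.net:iscsi.lun2>
    backing-store /data/lun2
    write-cache off
    initiator-address 100.100.10.148
</target>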
Anyway, this setup is way easier than SCST ever was. I'm looking forward to replacing all of my SCST SANs with tgt in the upcoming months.
It's important to note that the above hardware is not going to give you high performance. It's suitable for backup storage, and that's about it. If you want to run VMs or databases, I'd recommend getting 10GbE switches for iSCSI. You can get one fairly cheap here (10GbE switches). If you get 10GbE switches, you will need 10GbE NICs as well. You can get one here (10GbE NICs). Finally, you will need faster disks. You can get 15K RPM SAS disks here (15K RPM SAS).
What do you think about this setup? Are you going to try it out? Let us know in the comments!