Introduction
In this series of posts I'll explain how to set up a highly available and redundant NFS cluster using iSCSI with DM-Multipath, and Corosync & Pacemaker to manage the cluster and its associated resources. The objective of this scenario is to build a redundant, fault-tolerant NFS storage with automatic failover, ensuring the highest possible availability of the NFS exports.
For this environment I've used two servers running Ubuntu 14.04.2 LTS, each with two NICs: one to provide the NFS service to the clients and another to connect to the iSCSI SAN network. On the iSCSI SAN storage device I've already set up two physical adapters, with two network interfaces per adapter, to provide redundant network access and two physical paths to the storage system. Both NFS servers will attach the same LUN, each using a different InitiatorName, and will be configured with device mapper multipathing (DM-Multipath), which combines multiple I/O paths between the server nodes and the storage array into a single device. These I/O paths are physical SAN connections that can include separate cables, switches and controllers, so in effect each NFS server sees a single block device.
The cluster software used is Corosync, with Pacemaker as the resource manager. Pacemaker is responsible for assigning a VIP (virtual IP address), mounting the filesystem from the block device and starting the NFS service with the specific exports for the clients on the active node of the cluster. If the active node fails, the resources are migrated to the passive node and the services continue to operate as if nothing had happened.
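As a rough preview of what the second part covers, Pacemaker groups the virtual IP, the filesystem on the multipath device and the NFS server into a single resource group, so they always move together during a failover. The sketch below uses the crm shell with placeholder resource names, IP address and mount point, not the exact values used in this series:
## Minimal sketch with the crm shell -- resource names, IP and paths are placeholders
# crm configure primitive p_vip ocf:heartbeat:IPaddr2 params ip="10.54.60.100" cidr_netmask="24" op monitor interval="30s"
# crm configure primitive p_fs ocf:heartbeat:Filesystem params device="/dev/mapper/nfs-part1" directory="/mnt/nfs" fstype="ext4"
# crm configure primitive p_nfs ocf:heartbeat:nfsserver params nfs_shared_infodir="/mnt/nfs/nfsinfo"
# crm configure group g_nfs p_vip p_fs p_nfs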
This post covers the configuration of the iSCSI initiator on both NFS servers and the device mapper multipathing setup. For the cluster configuration with Corosync and Pacemaker, see the second part: http://opentodo.net/2015/06/high-available-nfs-server-setup-corosync-pacemaker/
So let’s get started with the setup!
iSCSI initiator configuration
– Install dependencies:
# aptitude install multipath-tools open-iscsi
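Optionally, if you want the iSCSI sessions to be re-established automatically after a reboot, you can change the default startup mode in /etc/iscsi/iscsid.conf; this snippet is just an illustration, adapt it to your own policy:
## /etc/iscsi/iscsid.conf -- log in to discovered targets automatically at boot
node.startup = automatic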
Server 1
– Edit configuration file /etc/iscsi/initiatorname.iscsi:
InitiatorName=iqn.1647-03.com.cisco:01.vdsk-nfs1
Server 2
– Edit configuration file /etc/iscsi/initiatorname.iscsi:
InitiatorName=iqn.1647-03.com.cisco:01.vdsk-nfs2
NOTE: the initiator identifiers are different on each server, but both are associated with the same LUN device.
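After changing the InitiatorName on each server, restart the iSCSI service so the new identifier is used for the next sessions (assuming the stock init scripts shipped with Ubuntu 14.04):
# service open-iscsi restart
# cat /etc/iscsi/initiatorname.iscsi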
– Run a discovery against the iSCSI target portals:
# iscsiadm -m discovery -t sendtargets -p 10.54.61.35
# iscsiadm -m discovery -t sendtargets -p 10.54.61.36
# iscsiadm -m discovery -t sendtargets -p 10.54.61.37
# iscsiadm -m discovery -t sendtargets -p 10.54.61.38
– Connect and log in to the iSCSI targets:
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.35 --login
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.36 --login
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b -p 10.54.61.37 --login
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b -p 10.54.61.38 --login
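Optionally, you can verify that all four sessions are logged in by using iscsiadm in session mode; the exact output depends on your array, but you should see one line per portal:
# iscsiadm -m session
## for more detail about each session, increase the print level
# iscsiadm -m session -P 1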
– Check the sessions established with the iSCSI SAN device:
# iscsiadm -m node
10.54.61.35:3260,1 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a
10.54.61.36:3260,2 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a
10.54.61.37:3260,1 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b
10.54.61.38:3260,2 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b
– At this point the block devices should be available on both servers like locally attached disks; you can check it simply by running fdisk:
# fdisk -l

Disk /dev/sdb: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953118439   976559188+  83  Linux

Disk /dev/sdc: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953118439   976559188+  83  Linux
In my case /dev/sda is the local disk of the server, and /dev/sdb and /dev/sdc correspond to the iSCSI block devices (one per adapter). Now we need to set up device mapper multipathing for these two devices, /dev/sdb and /dev/sdc, so that if one of the adapters fails the LUN remains available to the system and multipath transparently switches the I/O to the surviving path.
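If you want to confirm which /dev/sdX device belongs to which iSCSI portal, the persistent names under /dev/disk/by-path are handy (the exact entries will differ in your environment):
# ls -l /dev/disk/by-path/ | grep iscsi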
Multipath configuration
– First we need to retrieve the unique SCSI identifier (WWID) that will be referenced in the multipath configuration, by running the following command against one of the iSCSI devices:
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3600c0ff000d823e5ed6a0a4b01000000
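Running the same command against the second path, /dev/sdc, should return exactly the same identifier, which confirms that both devices are two paths to the same LUN:
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc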
– Create the multipath configuration file /etc/multipath.conf with the following content:
##
## This is a template multipath-tools configuration file
## Uncomment the lines relevant to your environment
##
defaults {
        user_friendly_names yes
        polling_interval 3
        selector "round-robin 0"
        path_grouping_policy multibus
        path_checker directio
        failback immediate
        no_path_retry fail
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
}
multipaths {
        multipath {
                # id retrieved with the utility /lib/udev/scsi_id
                wwid 3600c0ff000d823e5ed6a0a4b01000000
                alias nfs
        }
}
– Restart multipath-tools service:
# service multipath-tools restart
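If the new multipath device does not appear after the restart, it can help to flush the unused maps and let multipath rebuild them with verbose output (use these with care if the system has other multipath devices):
# multipath -F
# multipath -v2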
– Check again the disks available in the system:
# fdisk -l

Disk /dev/sdb: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953118439   976559188+  83  Linux

Disk /dev/sdc: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953118439   976559188+  83  Linux

Disk /dev/mapper/nfs: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
/dev/mapper/nfs1              63  1953118439   976559188+  83  Linux

Disk /dev/mapper/nfs-part1: 1000.0 GB, 999996609024 bytes
255 heads, 63 sectors/track, 121575 cylinders, total 1953118377 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
As you can see, we now have a new block device named after the alias set in the multipath configuration file, /dev/mapper/nfs. The partition I created and formatted with a filesystem shows up as /dev/mapper/nfs-part1, so you can mount it on the system with the mount utility.
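If you are starting from an empty LUN instead, you would partition it and create the filesystem once, from a single node only, because ext4 is not a cluster filesystem and must never be mounted on both servers at the same time. A minimal sketch, assuming an ext4 filesystem and a /mnt/nfs mount point (both placeholders); in the final setup Pacemaker takes care of mounting it, so don't add it to /etc/fstab:
## run on ONE node only -- ext4 is not cluster-aware
# mkfs.ext4 /dev/mapper/nfs-part1
# mkdir -p /mnt/nfs
# mount /dev/mapper/nfs-part1 /mnt/nfs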
– You can check the health of the multipath block device and verify that both paths are operational by running the following command:
# multipath -ll
nfs (3600c0ff000d823e5ed6a0a4b01000000) dm-3 HP,MSA2012i
size=931G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 6:0:0:0 sdb 8:16 active ready running
  `- 5:0:0:0 sdc 8:32 active ready running
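To verify that the multipath layer really survives the loss of a path, you can log out of one of the portals, check that the device stays usable over the remaining path, and then log back in (using the same target and portal as above):
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.35 --logout
# multipath -ll
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.35 --login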
References
https://help.ubuntu.com/14.04/serverguide/device-mapper-multipathing.html
http://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
This post is the first part of the High available NFS server series; you can find the second part here.