Attending "Red Hat Enterprise Clustering and Storage Management" in August. Quite a few of these technologies I haven't touched upon before so probably best to go through them before the course.
Initially I wonder how many of these are Red Hat specific, and how many I can accomplish using the free clones such as CentOS or Scientific Linux. We'll see :) At least a lot of Red Hat's guides will include their Storage Server.
I used the course content summary as a template for this post; my notes are added under each topic below.
For future questions and trolls: this is not a how-to for lazy people who just want to copy and paste. There are plenty of other sites for that. This is just the basics and it might have some pointers so that I know which are the basic steps and names/commands for each task. That way I hope it's possible to figure out how to use the commands and such by RTFM.
Course content summary
Clusters and storage
Get an overview of storage and cluster technologies.
iSCSI configuration
Set up and manage iSCSI.
Step 1: Set up a server that can present iSCSI LUNs: a target.
- CentOS 6.4 - minimal. Set up basic stuff like networking, a user account, yum update and ntp/time sync, then make a clone of the VM.
- Install some useful software like: yum install ntp parted man
- Add a new disk to the VM
Step 2: Make nodes for the cluster.
- yum install iscsi-initiator-utils
Step 3: Set up an iSCSI target on the iSCSI server.
http://www.server-world.info/en/note?os=CentOS_6&p=iscsi
- yum install scsi-target-utils
- allow port 3260
- edit /etc/tgt/targets.conf
- if you comment out the IP range and authentication, access is a free-for-all
http://www.server-world.info/en/note?os=CentOS_6&p=iscsi&f=2
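A minimal target definition in /etc/tgt/targets.conf could look something like this (the IQN, backing device and IP range are made-up examples, not from the course):

```
# /etc/tgt/targets.conf -- example target; IQN, device and network are assumptions
<target iqn.2013-08.com.example:storage.target1>
    backing-store /dev/sdb               # the disk added to the VM in step 1
    initiator-address 192.168.122.0/24   # comment out to allow any initiator
</target>
```

Restart tgtd after editing and check the result with tgt-admin --show.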
Step 4: Log in to the target from at least two nodes by running 'iscsiadm' commands.
Next step would be to put an appropriate file system on the LUN.
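The discovery and login from a node could look like this (the portal IP and IQN are placeholders; these need a live target, so they won't run everywhere):

```shell
# discover targets presented by the server (IP is an example)
iscsiadm -m discovery -t sendtargets -p 192.168.122.10

# log in to a discovered target (IQN is an example)
iscsiadm -m node -T iqn.2013-08.com.example:storage.target1 -p 192.168.122.10 --login

# a new /dev/sd* device should now show up
lsblk
```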
UDEV
Learn basic manipulation and creation of udev rules.
http://www.reactivated.net/writing_udev_rules.html is an old guide, but substitute "udevadm" for the old "udev*" commands and at least the sections I read work the same.
udevadm info -a -n /dev/sdb
Above command helps you find properties which you can build rules from. Only use properties from one parent.
I have a USB key that I can pass through to my VM in VirtualBox, without any modifications it pops up as /dev/sdc.
By looking in the output of the above command I can create /etc/udev/rules.d/10-usb.rules that contains:
SUBSYSTEMS=="usb", ATTRS{serial}=="001CC0EC3450BB40E71401C9", NAME="my_usb_disk"
After "removing" the USB disk from the VM and adding it again the disk (and also all partitions!) will be called /dev/my_usb_disk. This is bad.
By using SYMLINK+="my_usb_disk" instead of NAME="my_usb_disk" all the /dev/sdc devices are kept and /dev/my_usb_disk points to /dev/sdc5. And on next boot it pointed to sdc6 (and before that sg3 and sdc7..). This is also bad.
To make one specific partition with a specific size be symlinked to /dev/my_usb_disk I could set this rule:
SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk"
You could do:
KERNEL=="sd*", SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk%n"
Which will create /dev/my_usb_disk5 !
This would perhaps be acceptable, but if you ever want to re-partition the disk then you'd have to change the udev rules accordingly.
If you want to create symlinks for each partition (based on it being a usb, a disk and have the USB with specified serial number):
SUBSYSTEMS=="usb", KERNEL=="sd*", ATTRS{serial}=="001CC0EC3450BB40E71401C9", SYMLINK+="my_usb_disk%n"
These things can be useful if you have several USB disks but you always want the disk to be called /dev/my_usb_disk and not sometimes /dev/sdb and sometimes /dev/sdc.
For testing one can use "udevadm test /sys/class/block/sdc"
Multipathing
Combine multiple paths to SAN devices into one fault-tolerant virtual device.
Ah, this one I've been in touch with before, with fibre channel; it also works with iSCSI. 'multipath' is the command; be wary of the devices/multipaths sections vs. the default settings. multipathd can be used when there actually are multiple paths to a LUN (the target is perhaps available on two IP addresses/networks), but it can also be used to set a user_friendly name for a disk, based on its WWID.
Some good commands:
- service multipathd status
- yum provides */multipath.conf # device-mapper-multipath is the package
- multipath -ll
Copy the default multipath.conf to /etc, reload multipathd and run multipath -ll to see what it does. After that the fun begins!
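For the user_friendly naming, a sketch of what a multipath.conf fragment might look like (the WWID and alias below are made up; the real WWID is shown in the multipath -ll output):

```
# /etc/multipath.conf fragment -- wwid and alias are example values
multipaths {
    multipath {
        wwid   36001405f1e2d3c4b5a69788776655443
        alias  mydisk
    }
}
```

With that in place the device shows up as /dev/mapper/mydisk instead of a generated mpath name.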
Red Hat high-availability overview
Learn the architecture and component technologies in the Red Hat® High Availability Add-On.
Quorum
Understand quorum and quorum calculations.
Fencing
Understand Fencing and fencing configuration.
Resources and resource groups
Understand rgmanager and the configuration of resources and resource groups.
Advanced resource management
Understand resource dependencies and complex resources.
Two-node cluster issues
Understand the use and limitations of 2-node clusters.
http://en.wikipedia.org/wiki/Split-brain_(computing)
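In the cman-based stack, a two-node cluster can never have a majority once a node dies, so quorum is special-cased with a flag in cluster.conf. A sketch of the relevant bit (cluster name is a made-up example):

```
<!-- /etc/cluster/cluster.conf fragment; name is an example -->
<cluster name="mycluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <!-- clusternode and fence device definitions go here -->
</cluster>
```

Fencing is still mandatory with two_node="1"; on a split, each node will try to fence the other.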
LVM management
Review LVM commands and Clustered LVM (clvm).
Create Normal LVM and make a snapshot
Tutonics has a good "ubuntu" guide for LVMs, but at least the snapshot part works the same.
- yum install lvm2
- parted /dev/vda # create two large primary partitions. With a CentOS 6.4 VM in OpenStack I had to reboot after this step.
- pvcreate /dev/vda3
- pvcreate /dev/vda4
- vgcreate VG1 /dev/vda3 /dev/vda4
- lvcreate -L 1G VG1 # create a smaller logical volume (to give room for snapshot volume)
- mkfs.ext4 /dev/VG1/lvol0
- mount /dev/VG1/lvol0 /mnt
- date >> /mnt/datehere
- lvcreate -L 1G -s -n snap_lvol0 /dev/VG1/lvol0
- date >> /mnt/datehere
- mkdir /snapmount
- mount /dev/VG1/snap_lvol0 /snapmount # mount the snapshot :)
- diff /snapmount/datehere /mnt/datehere
Revert a Logical Volume to the state of the snapshot:
- umount /mnt /snapmount
- lvconvert --merge /dev/VG1/snap_lvol0 # this also removes the snapshot under /dev/VG1/
- mount /dev/VG1/lvol0 /mnt
- cat /mnt/datehere
XFS
Explore the Features of the XFS® file system and tools required for creating, maintaining, and troubleshooting.
yum provides */mkfs.xfs
yum install quota
XFS Quotas:
Mount with uquota for user quotas, or with uqnoenforce to only account usage without enforcing the limits. Use xfs_quota -x (expert mode) to set quotas; 'help limit' inside xfs_quota shows the syntax.
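An fstab line enabling user quotas might look like this (the device and mount point are assumptions for illustration):

```
# /etc/fstab example -- device and mount point are made up
/dev/VG1/xfsvol  /home  xfs  defaults,uquota  0 0
```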
To illustrate the quotas: set a limit for user "user":
xfs_quota -x -c 'limit bsoft=100m bhard=110m user' /home
Then create two 50M files. While writing the third file, the cp command halts when it hits the hard limit:
[user@rhce3 home]$ cp 50M 50M_2
cp: writing `50M_2': Disk quota exceeded
[user@rhce3 home]$ ls -l
total 112636
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M_1
-rw-rw-r-- 1 user user 10477568 Aug 15 09:29 50M_2
Red Hat Storage
Work with Gluster to create and maintain a scale-out storage solution.
http://chauhan-rhce.blogspot.fi/2013/04/gluster-file-system-configuration-steps.html
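The basic flow in guides like that one boils down to something like this (hostnames, volume name and brick paths are placeholders; needs glusterfs-server installed and glusterd running on all nodes):

```shell
# from one node, build the trusted pool (hostname is an example)
gluster peer probe server2

# create a replicated volume with one brick per node (paths are examples)
gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start myvol

# mount it from a client
mount -t glusterfs server1:/myvol /mnt
```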
Comprehensive review
Set up high-availability services and storage.