Step-by-step guide to configure/create software RAID 1 (mdadm)

If you are new to the term RAID, I suggest you read the article below first to get an overview of RAID and all its levels, along with their usage:

RAID levels 0, 1, 2, 3, 4, 5, 6, 0+1, 1+0 features explained in detail

In this article I will stick to the point and explain how you can configure software RAID level 1, i.e. the mirroring feature.

To start off, you need to know the prerequisite of RAID 1: it requires a minimum of 2 hard disks. Since we are configuring software RAID, you could divide a single hard disk into two partitions for testing purposes, but it is always advisable to use two separate hard disks so that you do not suffer data loss if one of them fails.

I am doing this setup on VMware Workstation, where my box already has 2 spare hard disks in place. So next I will start with creating partitions.

Creating Partitions

# ls -l /dev/sd*  
brw-rw----. 1 root disk 8,  0 Sep 19 22:50 /dev/sda  
brw-rw----. 1 root disk 8,  1 Sep 19 22:50 /dev/sda1  
brw-rw----. 1 root disk 8,  2 Sep 19 22:50 /dev/sda2  
brw-rw----. 1 root disk 8, 16 Sep 19 22:54 /dev/sdb  
brw-rw----. 1 root disk 8, 32 Sep 19 22:52 /dev/sdc 

Verify that the devices we are working with aren't already configured for mdraid

# mdadm --examine /dev/sdb /dev/sdc  
mdadm: No md superblock detected on /dev/sdb.  
mdadm: No md superblock detected on /dev/sdc.   

So it says "No md superblock detected", which means no RAID is configured on either disk

Next, start creating the partitions

# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x37a6235e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1):[Leave blank and hit ENTER for default]
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-652, default 652):[Leave blank and hit ENTER for default]
Using default value 652
Command (m for help): p
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x37a6235e
  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         652     5237158+  83  Linux
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x31c93154.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-783, default 1):[Leave blank and hit ENTER for default]
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-783, default 783):[Leave blank and hit ENTER for default]
Using default value 783
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
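The interactive fdisk dialogue above can also be scripted. As a hedged sketch (parted and the exact subcommand syntax here are my addition, not part of the original walkthrough; verify against your parted version), the same layout could be created non-interactively:

```shell
# Sketch: non-interactive equivalent of the fdisk sessions above.
# Assumes parted is installed and /dev/sdb, /dev/sdc are the target disks.
# WARNING: this destroys any existing data on the named disks.
for disk in /dev/sdb /dev/sdc; do
    parted -s "$disk" mklabel msdos          # new DOS partition table
    parted -s "$disk" mkpart primary 1 100%  # one primary partition, whole disk
    parted -s "$disk" set 1 raid on          # equivalent of fdisk type 'fd'
done
```

This is convenient when preparing many disks the same way; the fdisk route above remains the safer choice when you want to review each step interactively.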

Let us inform the OS of the new partitions and the changes

# partprobe  /dev/sdb  
# partprobe  /dev/sdc   

Re-examine the disks for the changes

# mdadm --examine /dev/sdb /dev/sdc  
/dev/sdb:  
   MBR Magic : aa55  
Partition[0] :     10474317 sectors at           63 (type fd)  
/dev/sdc:  
   MBR Magic : aa55  
Partition[0] :     12578832 sectors at           63 (type fd)

Creating the RAID array

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1  
 mdadm: Note: this array has metadata at the start and  
     may not be suitable as a boot device.  If you plan to  
     store '/boot' on this device please ensure that  
     your boot-loader understands md/v1.x metadata, or use  
     --metadata=0.90  
 mdadm: largest drive (/dev/sdc1) exceeds size (5233024K) by more than 1%  
 Continue creating array? y  
 mdadm: Defaulting to version 1.2 metadata  
 mdadm: array /dev/md0 started.
Option            Description
--level           Set the RAID level. When used with --create, the options are: linear,
                  raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6,
                  raid10, 10, multipath, mp, faulty, container.
--raid-devices    Specify the number of active devices in the array.
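One step worth adding here: the array definition is not written to mdadm's configuration file automatically. Recording it there is common practice so the array is assembled with the same name at boot (the file path is /etc/mdadm.conf on CentOS/RHEL; other distributions such as Debian use /etc/mdadm/mdadm.conf):

```shell
# Append the scanned array definition to mdadm's config file (CentOS/RHEL path)
mdadm --detail --scan >> /etc/mdadm.conf
cat /etc/mdadm.conf
# Typically prints a line of the form:
# ARRAY /dev/md0 metadata=1.2 name=test2.example:0 UUID=5a463788:...
```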

The command below shows the active devices in the RAID array

# cat /proc/mdstat  
Personalities : [raid1]  
 md0 : active raid1 sdc1[1] sdb1[0]  
       5233024 blocks super 1.2 [2/2] [UU]

unused devices: <none>   
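On larger disks the initial mirror synchronisation takes a while, and /proc/mdstat shows its progress while it runs. A convenient way to follow it (standard watch usage, not part of the original transcript):

```shell
# Refresh the mdstat output every 2 seconds; during a resync you would
# see a progress line roughly like:
#   [=>..................]  resync =  9.5% (499328/5233024) finish=0.6min
watch -n 2 cat /proc/mdstat
```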

To see more details about our md device

# mdadm --detail /dev/md0
/dev/md0:
       Version : 1.2
  Creation Time : Fri Sep 19 23:02:52 2014
     Raid Level : raid1
     Array Size : 5233024 (4.99 GiB 5.36 GB)
  Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Fri Sep 19 23:07:39 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : test2.example:0  (local to host test2.example)
           UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
         Events : 17
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Format the mdadm device

One thing to keep in mind: we are no longer working with either individual partition; instead, we are working with the RAID device that contains both partitions.

We will format it with the ext4 filesystem, since I am using CentOS 6.4, which supports ext4.

# mkfs.ext4 /dev/md0  
mke2fs 1.41.12 (17-May-2010)  
Filesystem label=  
OS type: Linux  
Block size=4096 (log=2)  
Fragment size=4096 (log=2)  
Stride=0 blocks, Stripe width=0 blocks  
327680 inodes, 1308256 blocks  
65412 blocks (5.00%) reserved for the super user  
First data block=0  
Maximum filesystem blocks=1342177280  
40 block groups  
32768 blocks per group, 32768 fragments per group  
8192 inodes per group  
Superblock backups stored on blocks:  
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done  
Creating journal (32768 blocks): done  
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or  
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount the md device

# mkdir /myraid

# mount /dev/md0 /myraid/   
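The mount above will not survive a reboot. To make it persistent you would add an /etc/fstab entry, preferably keyed by filesystem UUID so the entry still works even if the md device is renumbered. A sketch (the UUID shown is illustrative, and the mount options are assumptions; adjust to your setup):

```shell
# Find the filesystem UUID of the array ...
blkid /dev/md0
# ... then add a line like this to /etc/fstab (UUID is illustrative):
# UUID=<uuid-from-blkid>   /myraid   ext4   defaults   0 0
mount -a   # re-reads fstab and mounts anything not yet mounted
```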

Verify the mounted filesystem

# df -h /myraid/  
Filesystem      Size  Used Avail Use% Mounted on  
/dev/md0        5.0G  138M  4.6G   3% /myraid

Now, what would happen if one of my drives stops working?

To find out, first let us create some files inside our RAID so that we can check for data loss afterwards

I will copy some files from /root to /myraid/ directory

# cp -rvf /root/* /myraid/  
`/root/anaconda-ks.cfg' -> `/myraid/anaconda-ks.cfg'  
`/root/Desktop' -> `/myraid/Desktop'  
`/root/Documents' -> `/myraid/Documents'  
`/root/Downloads' -> `/myraid/Downloads'  
`/root/install.log' -> `/myraid/install.log'  
`/root/install.log.syslog' -> `/myraid/install.log.syslog'  
`/root/log' -> `/myraid/log'  
`/root/Music' -> `/myraid/Music'  
`/root/Pictures' -> `/myraid/Pictures'  
`/root/Public' -> `/myraid/Public'  
`/root/Templates' -> `/myraid/Templates'  
`/root/Videos' -> `/myraid/Videos'   

So now we have files on the RAID device that we need to protect against hard disk failure.

Let us manually fail one of the devices

# mdadm --fail /dev/md0 /dev/sdb1  
 mdadm: set /dev/sdb1 faulty in /dev/md0   

Verify your data

[root@test2 myraid]# ll  
total 120  
-rw-------. 1 root root  1629 Sep 19 23:18 anaconda-ks.cfg  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Desktop  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Documents  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Downloads  
-rw-r--r--. 1 root root 49565 Sep 19 23:18 install.log  
-rw-r--r--. 1 root root 10033 Sep 19 23:18 install.log.syslog  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 log  
drwx------. 2 root root 16384 Sep 19 23:14 lost+found  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Music  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Pictures  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Public  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Templates  
drwxr-xr-x. 2 root root  4096 Sep 19 23:18 Videos   

Everything is still in place even though one mirror has failed. That was wonderful.

IMPORTANT NOTE: Remember that RAID 1 needs a minimum of 2 working hard disks for mirroring. The array is now running on a single disk, so if that remaining disk also fails, you will lose all your data.

Check the active and available devices under your mdraid device

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Sep 19 23:02:52 2014
     Raid Level : raid1
     Array Size : 5233024 (4.99 GiB 5.36 GB)
  Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Fri Sep 19 23:21:54 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
           Name : test2.example:0  (local to host test2.example)
           UUID : 5a463788:9bf2659a:09d1c73a:9adcbbbd
         Events : 19
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
       0       8       17        -      faulty   /dev/sdb1
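To get back to a healthy two-disk mirror, you would remove the faulty member and add a replacement. Here, as a sketch, I re-add the same partition we just failed; after a real failure the replacement would be a freshly partitioned new disk:

```shell
# Remove the faulty member from the array ...
mdadm --remove /dev/md0 /dev/sdb1
# ... then add the replacement partition; mdadm starts rebuilding the mirror
mdadm --add /dev/md0 /dev/sdb1
# Follow the recovery until [UU] is shown again
cat /proc/mdstat
```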

Related Articles:
How to detect new hard disk attached without rebooting in Linux
How to detect new NIC/Ethernet card without rebooting in Linux
Taking Backup of Hard Disk
Disk Attachment Technology FC vs SAS vs iSCSI