[Intro] Software RAID is configured at the operating-system level and can also protect data; in real production environments, however, redundancy is more often provided by a storage array's disk shelves or by hardware RAID.
Environment: VMware Workstation
OS version: RHEL 7.0
Three extra disks are added to the virtual machine; sdb, sdc, and sdd will be used for the array.
[root@rh ~]# fdisk -l|grep sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 41943039 20458496 8e Linux LVM
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Partition each of the three disks:
[root@rh ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Partition the other two disks the same way.
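Rather than repeating the interactive session, the same single-partition layout can be scripted for the remaining disks. A minimal sketch, feeding fdisk the same answers as the session above (assumes /dev/sdc and /dev/sdd are blank):

```shell
# Create one primary partition spanning each remaining disk, answering
# fdisk's prompts exactly as in the interactive session:
# n (new), p (primary), 1, default first sector, default last sector, w (write).
for disk in /dev/sdc /dev/sdd; do
    fdisk "$disk" <<EOF
n
p
1


w
EOF
done
```

Some guides also set the partition type to fd (Linux raid autodetect) with the t command, but arrays with version 1.2 metadata do not require it.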
Create the RAID 5 array:
[root@rh ~]# mdadm -C /dev/md0 -l5 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
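The short options map to long ones that are easier to read; the command above is equivalent to:

```shell
# Long-form equivalent of "mdadm -C /dev/md0 -l5 -n3 ...":
#   --create        (-C)  create a new array
#   --level=5       (-l)  RAID level: striping with distributed parity
#   --raid-devices  (-n)  number of active member devices
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
```

With three 10 GiB members, RAID 5 reserves one member's worth of capacity for parity, which matches the roughly 20 GiB array size reported by mdadm.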
Check the RAID status; the initial sync completes gradually:
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=========>...........] recovery = 47.8% (5010300/10476032) finish=0.6min speed=135439K/sec
unused devices: <none>
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[============>........] recovery = 64.6% (6773352/10476032) finish=0.4min speed=131099K/sec
unused devices: <none>
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=============>.......] recovery = 67.7% (7098984/10476032) finish=0.4min speed=128510K/sec
unused devices: <none>
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
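Re-running cat /proc/mdstat by hand works, but the sync can also be watched or waited on directly; for example:

```shell
# Refresh the status display every second (Ctrl-C to exit):
watch -n1 cat /proc/mdstat

# Or block until any resync/recovery on the array has finished:
mdadm --wait /dev/md0
```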
Query the array with the following command:
[root@rh ~]# mdadm --query /dev/md0
/dev/md0: 19.98GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
View the details with mdadm -D (equivalently, mdadm --detail /dev/md0):
[root@rh ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 2 11:49:09 2014
Raid Level : raid5
Array Size : 20952064 (19.98 GiB 21.45 GB)
Used Dev Size : 10476032 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Aug 2 11:50:32 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : rh.ol.com:0 (local to host rh.ol.com)
UUID : 3c2cfd0d:08646c79:601cde6a:cdf532d7
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
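To make sure the array is assembled under the same name on every boot, its definition can be recorded in mdadm's config file (on RHEL 7, /etc/mdadm.conf); a sketch:

```shell
# Append an ARRAY line (containing the array's UUID) to the config file:
mdadm --detail --scan >> /etc/mdadm.conf
```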
Create an XFS filesystem on /dev/md0:
[root@rh ~]# mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0 isize=256 agcount=16, agsize=327296 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=5236736, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Create a mount point and mount the filesystem:
[root@rh ~]# mkdir /md0
[root@rh ~]# mount /dev/md0 /md0
[root@rh ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 18G 3.5G 15G 20% /
devtmpfs 485M 0 485M 0% /dev
tmpfs 494M 148K 494M 1% /dev/shm
tmpfs 494M 7.2M 487M 2% /run
tmpfs 494M 0 494M 0% /sys/fs/cgroup
/dev/sda1 497M 119M 379M 24% /boot
/dev/sr0 4.0G 4.0G 0 100% /run/media/liuzhen/RHEL-7.0 Server.x86_64
/dev/md0 20G 33M 20G 1% /md0
Add an entry to /etc/fstab so it is mounted automatically:
[root@rh ~]# echo /dev/md0 /md0 xfs defaults 1 2 >> /etc/fstab
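A device-path entry works here, but md device names can change between boots, so keying the entry on the filesystem UUID is more robust; note also that XFS has no fsck, so the sixth fstab field is conventionally 0. A sketch (the UUID shown is a placeholder):

```shell
# Find the filesystem UUID of the array:
blkid /dev/md0

# Then reference it in /etc/fstab instead of the device path, e.g.:
# UUID=<uuid-from-blkid>  /md0  xfs  defaults  0 0
```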
Verify the filesystem type:
[root@rh ~]# findmnt /md0
TARGET SOURCE FSTYPE OPTIONS
/md0 /dev/md0 xfs rw,relatime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota
Failure and recovery test. First create some files under /md0:
[root@rh md0]# ls
testRAID testRAID1
[root@rh md0]# pwd
/md0
Mark the sdb member as faulty (with mdadm /dev/md0 -f /dev/sdb1) and check the status:
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0](F)
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
unused devices: <none>
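The (F) flag comes from mdadm's fail option; the failed member can then be removed from the array before the disk is physically replaced. A sketch:

```shell
# Mark the member as faulty (shown as (F) in /proc/mdstat), then remove it:
mdadm /dev/md0 -f /dev/sdb1
mdadm /dev/md0 -r /dev/sdb1
```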
Shut down, remove the sdb disk from the VM, and add a new disk.
Check the current status:
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
Check the details:
[root@rh md0]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 2 11:49:09 2014
Raid Level : raid5
Array Size : 20952064 (19.98 GiB 21.45 GB)
Used Dev Size : 10476032 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Aug 2 20:26:49 2014
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : rh.ol.com:0 (local to host rh.ol.com)
UUID : 3c2cfd0d:08646c79:601cde6a:cdf532d7
Events : 24
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
Partition the newly added sde:
[root@rh md0]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x93106f1e.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rh md0]# mdadm /dev/md0 -a /dev/sde1
mdadm: added /dev/sde1
Watch the recovery progress:
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=====>...............] recovery = 28.3% (2966160/10476032) finish=0.9min speed=128963K/sec
unused devices: <none>
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[======>..............] recovery = 33.8% (3546676/10476032) finish=0.8min speed=131358K/sec
unused devices: <none>
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=========>...........] recovery = 48.3% (5065596/10476032) finish=0.6min speed=140007K/sec
unused devices: <none>
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
Recovery is complete; query the array again:
[root@rh md0]# mdadm --query /dev/md0
/dev/md0: 19.98GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
The files in the filesystem are still present:
[root@rh md0]# ls
testRAID testRAID1
[root@rh md0]# pwd
/md0
Disable the software RAID
Unmount the filesystem and remove the corresponding line from /etc/fstab:
[root@rh ~]# umount /md0
Stop the array:
[root@rh ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
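Stopping the array leaves md metadata on the member partitions; if the disks will be reused, wipe the superblocks so the array is not auto-assembled on the next boot. A sketch, assuming the members used above:

```shell
# Erase the md superblock from each former member partition:
mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1
```

Also delete any ARRAY line for md0 from /etc/mdadm.conf, if one was added.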
Check the RAID status again; the array is gone:
[root@rh ~]# cat /proc/mdstat
Personalities :
unused devices: <none>