Common RAID Levels on Linux and Software RAID Creation
RAID can dramatically improve disk performance and reliability, which makes it well worth mastering. This article introduces some common RAID levels and how to create software RAID arrays on Linux.
mdadm

- Create a software RAID array

mdadm -C -v /dev/DEVICE -lLEVEL -nCOUNT DISKS [-xCOUNT SPARE_DISKS]

-C: create a new array (--create)
-v: show details (--verbose)
-l: set the RAID level (--level=)
-n: number of active devices in the array (--raid-devices=)
-x: number of initial spare devices (--spare-devices=); a hot spare automatically takes over when a working disk fails
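As a quick illustration of how these options combine, the sketch below assembles the create command for a two-disk RAID 1 with one hot spare. The device names are hypothetical and the command is only printed, not executed; running it for real requires root and actual spare disks.

```shell
# Hypothetical example: 2 active members (-n2), 1 hot spare (-x1).
array=/dev/md1
level=1
active="/dev/sdb1 /dev/sdc1"
spare="/dev/sdd1"
cmd="mdadm -C -v $array -l$level -n2 $active -x1 $spare"
echo "$cmd"
```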
- View detailed information

mdadm -D /dev/DEVICE

-D: print the details of one or more md devices (--detail)
- Check RAID status

cat /proc/mdstat
- Simulate a disk failure

mdadm -f /dev/DEVICE DISK

-f: mark a member disk as faulty (--fail)
- Remove a failed disk

mdadm -r /dev/DEVICE DISK

-r: remove a device (--remove)
- Add a new disk as a hot spare

mdadm -a /dev/DEVICE DISK

-a: add a device (--add)
RAID 0

RAID 0, commonly called striping, combines two or more disks into one logical disk whose capacity is the sum of all members. Because writes are spread across the member disks in parallel, write speed improves, but the data has no redundancy and no fault tolerance: if any one physical disk fails, all data is lost. RAID 0 therefore suits large amounts of data with low safety requirements, such as audio and video file storage.
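The capacity arithmetic is easy to verify against the experiment below. A minimal sketch, using the per-partition size in KiB that mdadm reports in this article:

```shell
# RAID 0: usable capacity is the sum of all members.
disk_kib=20953088   # size of one member partition, from the mdadm output below
n=2                 # number of members
raid0_kib=$(( n * disk_kib ))
echo "$raid0_kib"   # → 41906176, matching the blocks shown in /proc/mdstat
```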
Experiment: create a RAID 0 array, format it, and mount it.

1. Add two 20 GB disks, partition them, and set the partition type ID to fd.
```
[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdc1   2048  41943039  20970496  fd  Linux raid autodetect
```
2. Create the RAID 0 array.
```
[root@localhost ~]# mdadm -C -v /dev/md0 -l0 -n2 /dev/sd{b,c}1
mdadm: chunk size defaults to 512K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
```
3. Check the status in /proc/mdstat.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      41906176 blocks super 1.2 512k chunks

unused devices: <none>
```
4. View the details of the RAID 0 array.
```
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Aug 25 15:28:13 2019
        Raid Level : raid0
        Array Size : 41906176 (39.96 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:28:13 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:0  (local to host localhost)
              UUID : 7ff54c57:b99a59da:6b56c6d5:a4576ccf
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
```
5. Format it.
```
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
6. Mount and use it.
```
[root@localhost ~]# mkdir /mnt/md0
[root@localhost ~]# mount /dev/md0 /mnt/md0/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1013M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md0                xfs        40G   33M   40G   1% /mnt/md0
```
RAID 1

RAID 1, commonly called mirroring, consists of at least two disks that hold identical copies of the data, providing redundancy. Read speed improves somewhat; write speed is in theory the same as a single disk, but since the data must be written to every disk at once it drops slightly in practice. Fault tolerance is the best of all the levels: the array keeps working as long as a single disk survives. Capacity utilization, however, is the lowest at only 50%, which also makes it the most expensive option. RAID 1 suits data with very high safety requirements, such as database files.
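The 50% figure follows directly from the mirroring arithmetic. A minimal sketch, assuming the same 20 GB members as in the experiment below:

```shell
# RAID 1: usable capacity equals one member, regardless of mirror count,
# so utilization is 1/n (50% for a two-disk mirror).
disk_kib=20953088
n=2
raid1_kib=$disk_kib
util=$(( 100 / n ))
echo "usable: ${raid1_kib} KiB, utilization: ${util}%"
```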
Experiment: create a RAID 1 array, format and mount it, simulate a disk failure, and re-add a hot spare.

1. Add three 20 GB disks, partition them, and set the partition type ID to fd.
```
[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdc1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdd1   2048  41943039  20970496  fd  Linux raid autodetect
```
2. Create the RAID 1 array with one hot spare.
```
[root@localhost ~]# mdadm -C -v /dev/md1 -l1 -n2 /dev/sd{b,c}1 -x1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
```
3. Check the status in /proc/mdstat.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [========>............]  resync = 44.6% (9345792/20953088) finish=0.9min speed=203996K/sec

unused devices: <none>
```

```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```
4. View the details of the RAID 1 array.
```
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:39:24 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

     Resync Status : 40% complete

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

       2       8       49        -      spare   /dev/sdd1
```
5. Format it.
```
[root@localhost ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309568 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238272, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
6. Mount and use it.
```
[root@localhost ~]# mkdir /mnt/md1
[root@localhost ~]# mount /dev/md1 /mnt/md1/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md1                xfs        20G   33M   20G   1% /mnt/md1
```
7. Create some test files.
```
[root@localhost ~]# touch /mnt/md1/test{1..9}.txt
[root@localhost ~]# ls /mnt/md1/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
8. Simulate a disk failure.
```
[root@localhost ~]# mdadm -f /dev/md1 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1
```
9. Check the test files.
```
[root@localhost ~]# ls /mnt/md1/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
10. Check the status.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]
      [=====>...............]  recovery = 26.7% (5600384/20953088) finish=1.2min speed=200013K/sec

unused devices: <none>
```

```
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:47:57 2019
             State : active, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 17% complete

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 22

    Number   Major   Minor   RaidDevice State
       2       8       49        0      spare rebuilding   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1
```
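Spotting a degraded array by eye works, but the check can also be scripted. The sketch below matches mdstat-style status fields such as [_U], where an underscore marks a failed or missing member; the sample text is embedded here so the snippet is self-contained, whereas on a real system you would read /proc/mdstat directly.

```shell
# Detect a degraded md array from mdstat-style output.
# A healthy two-disk array shows [UU]; any underscore means a slot is down.
mdstat='md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]'
state=$(printf '%s\n' "$mdstat" | grep -q '\[[U_]*_[U_]*\]' && echo degraded || echo healthy)
echo "$state"   # → degraded
```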
11. Check the status again.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```

```
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:49:28 2019
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1
```
12. Remove the failed disk.
```
[root@localhost ~]# mdadm -r /dev/md1 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1
```

```
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:52:57 2019
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
```
13. Add a new hot spare.
```
[root@localhost ~]# mdadm -a /dev/md1 /dev/sdb1
mdadm: added /dev/sdb1
```

```
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:53:32 2019
             State : active
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 39

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

       3       8       17        -      spare   /dev/sdb1
```
RAID 5

RAID 5 requires at least three disks. Data is striped across every disk in the array along with parity information; the data and parity verify each other, so when one piece is lost the RAID controller can recompute it from the remaining pieces. RAID 5 therefore tolerates the failure of at most one disk. Compared with the other levels it strikes a balance between fault tolerance and cost, which makes it popular with most users; it is the most common choice for ordinary disk arrays.
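Because one disk's worth of space is consumed by parity, the usable capacity is (n - 1) times the smallest member. A minimal sketch with the sizes used in the experiment below:

```shell
# RAID 5: usable = (n - 1) * member size (one disk's worth holds parity).
disk_kib=20953088
n=3
raid5_kib=$(( (n - 1) * disk_kib ))
echo "$raid5_kib"   # → 41906176, matching the array size reported for /dev/md5
```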
Experiment: create a RAID 5 array, format and mount it, simulate a disk failure, and re-add a hot spare.

1. Add four 20 GB disks, partition them, and set the partition type ID to fd.
```
[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdc1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdd1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sde1   2048  41943039  20970496  fd  Linux raid autodetect
```
2. Create the RAID 5 array with one hot spare.
```
[root@localhost ~]# mdadm -C -v /dev/md5 -l5 -n3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
```
3. Check the status in /proc/mdstat.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [====>................]  recovery = 24.1% (5057340/20953088) finish=1.3min speed=202293K/sec

unused devices: <none>
```

```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
```
4. View the details of the RAID 5 array.
```
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:15:29 2019
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1
```
5. Format it.
```
[root@localhost ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
6. Mount and use it.
```
[root@localhost ~]# mkdir /mnt/md5
[root@localhost ~]# mount /dev/md5 /mnt/md5/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md5                xfs        40G   33M   40G   1% /mnt/md5
```
7. Create some test files.
```
[root@localhost ~]# touch /mnt/md5/test{1..9}.txt
[root@localhost ~]# ls /mnt/md5/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
8. Simulate a disk failure.
```
[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5
```
9. Check the test files.
```
[root@localhost ~]# ls /mnt/md5/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
10. Check the status.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [====>................]  recovery = 21.0% (4411136/20953088) finish=1.3min speed=210054K/sec

unused devices: <none>
```

```
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:21:31 2019
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 12% complete

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 23

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1
```
11. Check the status again.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
```

```
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:23:09 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 39

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1
```
12. Remove the failed disk.
```
[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md5
```

```
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:25:01 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 40

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
```
13. Add a new hot spare.
```
[root@localhost ~]# mdadm -a /dev/md5 /dev/sdb1
mdadm: added /dev/sdb1
```

```
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:25:22 2019
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 41

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       5       8       17        -      spare   /dev/sdb1
```
RAID 6

RAID 6 is an improvement on RAID 5: it adds a second, independent parity block, which raises the number of disks that may fail from RAID 5's one to two (and raises the minimum array size to four disks). Because two disks in the same array rarely fail at the same time, RAID 6 trades the cost of one extra disk for higher data safety than RAID 5.
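With double parity, two disks' worth of space is reserved, so usable capacity is (n - 2) times the member size. A minimal sketch with the sizes used in the experiment below:

```shell
# RAID 6: usable = (n - 2) * member size (two disks' worth holds parity).
disk_kib=20953088
n=4
raid6_kib=$(( (n - 2) * disk_kib ))
echo "$raid6_kib"   # → 41906176, matching the array size reported for /dev/md6
```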
Experiment: create a RAID 6 array, format and mount it, simulate disk failures, and re-add hot spares.

1. Add six 20 GB disks, partition them, and set the partition type ID to fd.
```
[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdc1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdd1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sde1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdf1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdg1   2048  41943039  20970496  fd  Linux raid autodetect
```
2. Create the RAID 6 array with two hot spares.
```
[root@localhost ~]# mdadm -C -v /dev/md6 -l6 -n4 /dev/sd[b-e]1 -x2 /dev/sd[f-g]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md6 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
```
3. Check the status in /proc/mdstat.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [===>.................]  resync = 18.9% (3962940/20953088) finish=1.3min speed=208575K/sec

unused devices: <none>
```

```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
```
4. View the details of the RAID 6 array.
```
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:34:43 2019
             State : clean, resyncing
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 10% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
       5       8       97        -      spare   /dev/sdg1
```
5. Format it.
```
[root@localhost ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
6. Mount and use it.
```
[root@localhost ~]# mkdir /mnt/md6
[root@localhost ~]# mount /dev/md6 /mnt/md6/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md6                xfs        40G   33M   40G   1% /mnt/md6
```
7. Create some test files.
```
[root@localhost ~]# touch /mnt/md6/test{1..9}.txt
[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
8. Simulate two disk failures.
```
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md6
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md6
```
9. Check the test files.
```
[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
10. Check the status.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1](F) sdb1[0](F)
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/2] [__UU]
      [====>................]  recovery = 23.8% (4993596/20953088) finish=1.2min speed=208066K/sec

unused devices: <none>
```

```
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:41:09 2019
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 13% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 27

    Number   Major   Minor   RaidDevice State
       5       8       97        0      spare rebuilding   /dev/sdg1
       4       8       81        1      spare rebuilding   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1
```
11. Check the status again.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1](F) sdb1[0](F)
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
```

```
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:42:42 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 46

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1
```
12. Remove the failed disks.
```
[root@localhost ~]# mdadm -r /dev/md6 /dev/sd{b,c}1
mdadm: hot removed /dev/sdb1 from /dev/md6
mdadm: hot removed /dev/sdc1 from /dev/md6
```

```
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:43:43 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 47

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
```
13. Add new hot spares.
```
[root@localhost ~]# mdadm -a /dev/md6 /dev/sd{b,c}1
mdadm: added /dev/sdb1
mdadm: added /dev/sdc1
```

```
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:44:01 2019
             State : clean
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 49

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       6       8       17        -      spare   /dev/sdb1
       7       8       33        -      spare   /dev/sdc1
```
RAID 10

RAID 10 first mirrors the data and then stripes it: the RAID 1 pairs act as redundant backup arrays, while the RAID 0 layer on top handles the striped reads and writes. It needs at least four disks, combined two by two into RAID 1 pairs with RAID 0 across the pairs. Like RAID 1, capacity utilization is only 50%, so half of the raw disk space is sacrificed, but in return you get roughly twice the throughput of a single disk plus tolerance of a single disk failure; in fact the data stays safe as long as the disks that fail are not both in the same RAID 1 pair. RAID 10 offers better performance than RAID 5, but this structure scales poorly and is comparatively expensive.
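The capacity arithmetic mirrors RAID 1: every block exists twice, so the ideal usable space is half the raw total. A minimal sketch with the sizes used in the experiment below; note that the real nested array is reported slightly smaller (41871360 blocks for /dev/md10) because each md layer keeps its own metadata.

```shell
# RAID 10: every block is mirrored once, so ideal usable = n / 2 * member size.
disk_kib=20953088
n=4
raid10_ideal_kib=$(( n / 2 * disk_kib ))
echo "$raid10_ideal_kib"   # → 41906176; the nested md10 reports a bit less
```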
Experiment: create a RAID 10 array, format and mount it, simulate disk failures, and re-add spares.

1. Add four 20 GB disks, partition them, and set the partition type ID to fd.
```
[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdc1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sdd1   2048  41943039  20970496  fd  Linux raid autodetect
/dev/sde1   2048  41943039  20970496  fd  Linux raid autodetect
```
2. Create two RAID 1 arrays, without hot spares.
```
[root@localhost ~]# mdadm -C -v /dev/md101 -l1 -n2 /dev/sd{b,c}1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md101 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md101 started.
```

```
[root@localhost ~]# mdadm -C -v /dev/md102 -l1 -n2 /dev/sd{d,e}1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md102 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md102 started.
```
3. Check the status in /proc/mdstat.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 48.4% (10148224/20953088) finish=0.8min speed=200056K/sec

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=============>.......]  resync = 69.6% (14604672/20953088) finish=0.5min speed=200052K/sec

unused devices: <none>
```

```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```
4. View the details of the two RAID 1 arrays.
```
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:53:58 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 62% complete

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 9

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
```

```
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:54:02 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 42% complete

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
```
5. Create the RAID 10 array (RAID 0 over the two RAID 1 arrays).
```
[root@localhost ~]# mdadm -C -v /dev/md10 -l0 -n2 /dev/md10{1,2}
mdadm: chunk size defaults to 512K
mdadm: Fail create md10 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
```
6. Check the status in /proc/mdstat.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```
7. View the details of the RAID 10 array.
```
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Aug 25 16:56:08 2019
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:56:08 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:10  (local to host localhost)
              UUID : 23c6abac:b131a049:db25cac8:686fb045
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9      101        0      active sync   /dev/md101
       1       9      102        1      active sync   /dev/md102
```
8. Format it.
```
[root@localhost ~]# mkfs.xfs /dev/md10
meta-data=/dev/md10              isize=512    agcount=16, agsize=654208 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10467328, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5112, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
9. Mount and use it.
```
[root@localhost ~]# mkdir /mnt/md10
[root@localhost ~]# mount /dev/md10 /mnt/md10/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md10               xfs        40G   33M   40G   1% /mnt/md10
```
10. Create some test files.
```
[root@localhost ~]# touch /mnt/md10/test{1..9}.txt
[root@localhost ~]# ls /mnt/md10/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
11. Simulate a disk failure in each RAID 1 pair.
```
[root@localhost ~]# mdadm -f /dev/md101 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md101
[root@localhost ~]# mdadm -f /dev/md102 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md102
```
12. Check the test files.
```
[root@localhost ~]# ls /mnt/md10/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
```
13. Check the status.
[root@localhost ~]# cat /proc/mdstat personalities : [raid1] [raid0] md10 : active raid0 md102[1] md101[0] 41871360 blocks super 1.2 512k chunks md102 : active raid1 sde1[1] sdd1[0](f) 20953088 blocks super 1.2 [2/1] [_u] md101 : active raid1 sdc1[1] sdb1[0](f) 20953088 blocks super 1.2 [2/1] [_u] unused devices: <none>
[root@localhost ~]# mdadm -d /dev/md101 /dev/md101: version : 1.2 creation time : sun aug 25 16:53:00 2019 raid level : raid1 array size : 20953088 (19.98 gib 21.46 gb) used dev size : 20953088 (19.98 gib 21.46 gb) raid devices : 2 total devices : 2 persistence : superblock is persistent update time : sun aug 25 17:01:11 2019 state : clean, degraded active devices : 1 working devices : 1 failed devices : 1 spare devices : 0 consistency policy : resync name : localhost:101 (local to host localhost) uuid : 80bb4fc5:1a628936:275ba828:17f23330 events : 23 number major minor raiddevice state - 0 0 0 removed 1 8 33 1 active sync /dev/sdc1 0 8 17 - faulty /dev/sdb1
```
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:00:43 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1

       0       8       49        -      faulty   /dev/sdd1
```
```
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Aug 25 16:56:08 2019
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:56:08 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:10  (local to host localhost)
              UUID : 23c6abac:b131a049:db25cac8:686fb045
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9      101        0      active sync   /dev/md101
       1       9      102        1      active sync   /dev/md102
```
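The `[_U]` device map in `/proc/mdstat` makes degraded arrays easy to spot by script. A minimal sketch (the sample text below is copied from the degraded state above; in real use it would come from `cat /proc/mdstat`):

```shell
# Sample /proc/mdstat content, captured from the failure simulation
mdstat='md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks
md102 : active raid1 sde1[1] sdd1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]
md101 : active raid1 sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]'

# Remember each array name; report it when its [UU]-style device map
# contains "_" (a failed or missing member)
degraded=$(printf '%s\n' "$mdstat" | awk '
  /^md/              { name = $1 }
  /\[[U_]+\]/ && /_/ { print name }')
echo "$degraded"
```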
14. Remove the failed disks.
```
[root@localhost ~]# mdadm -r /dev/md101 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md101
[root@localhost ~]# mdadm -r /dev/md102 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md102
```
```
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:04:59 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 26

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
```
```
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:05:07 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 20

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1
```
15. Re-add the disks; since each mirror is missing a member, rebuilding starts immediately.
```
[root@localhost ~]# mdadm -a /dev/md101 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -a /dev/md102 /dev/sdd1
mdadm: added /dev/sdd1
```
16. Check the status again.
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [====>................]  recovery = 23.8% (5000704/20953088) finish=1.2min speed=208362K/sec

md101 : active raid1 sdb1[2] sdc1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [======>..............]  recovery = 32.0% (6712448/20953088) finish=1.1min speed=203407K/sec

unused devices: <none>
```
```
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdb1[2] sdc1[1]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```
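The rebuild percentage on a `recovery = …%` line can be extracted the same way. A sketch against the recovering output above (one stanza hard-coded for illustration):

```shell
# One md102 stanza from the recovering /proc/mdstat above
sample='md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [====>................]  recovery = 23.8% (5000704/20953088) finish=1.2min speed=208362K/sec'

# On a "recovery = X%" line, print the array name and the percentage
progress=$(printf '%s\n' "$sample" | awk '
  /^md/      { name = $1 }
  /recovery/ { for (i = 1; i <= NF; i++)
                 if ($i == "recovery") print name, $(i + 2) }')
echo "$progress"
```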
```
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:07:28 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 45

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
```
```
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:07:36 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 39

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
```
Comparison of common RAID levels
Name | Disks | Capacity/utilization | Read | Write | Redundancy |
---|---|---|---|---|---|
raid0 | n | sum of all n disks | n× | n× | none: one failed disk loses all data |
raid1 | n (even) | 50% | ↑ | ↓ | every write goes to both devices; tolerates one failure |
raid5 | n ≥ 3 | (n-1)/n | ↑↑ | ↓ | distributed parity; tolerates one failure |
raid6 | n ≥ 4 | (n-2)/n | ↑↑ | ↓↓ | double parity; tolerates two failures |
raid10 | n (even, n ≥ 4) | 50% | (n/2)× | (n/2)× | tolerates one failed disk per mirror pair |
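The capacity column translates directly into arithmetic. A sketch with hypothetical example values (4 disks of 20 GB each; the numbers are for illustration only):

```shell
# Usable capacity for n disks of s GB each, per the table above
n=4
s=20
raid0=$(( n * s ))          # all disks usable
raid1=$(( n * s / 2 ))      # 50%: every block stored twice
raid5=$(( (n - 1) * s ))    # one disk's worth of parity
raid6=$(( (n - 2) * s ))    # two disks' worth of parity
raid10=$(( n / 2 * s ))     # 50%: striped mirrors
printf 'raid0=%d raid1=%d raid5=%d raid6=%d raid10=%d (GB)\n' \
       "$raid0" "$raid1" "$raid5" "$raid6" "$raid10"
```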
A few words
The operations in this article are simple, but the many status checks take up most of the space. Focus on the key points; the process is the same pattern repeated throughout.