RAID 5 (software RAID): create the array, simulate a disk failure, remove the disk, add a new one


1. Check the disk layout; RAID 5 needs at least three disks

[root@localhost ~]# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   127G  0 disk
├─sda1          8:1    0   600M  0 part /boot/efi
├─sda2          8:2    0     1G  0 part /boot
└─sda3          8:3    0 125.4G  0 part
  ├─cs-root   253:0    0    70G  0 lvm  /
  ├─cs-swap   253:1    0   7.9G  0 lvm  [SWAP]
  └─cs-home   253:2    0  47.5G  0 lvm  /home
sdb             8:16   0    16G  0 disk
sdc             8:32   0    16G  0 disk
sdd             8:48   0    16G  0 disk
sde             8:64   0    16G  0 disk
sdf             8:80   0    16G  0 disk
sdg             8:96   0    16G  0 disk
sdh             8:112  0    16G  0 disk
sdi             8:128  0    16G  0 disk
sdj             8:144  0    16G  0 disk
sdk             8:160  0    16G  0 disk
sdl             8:176  0    16G  0 disk
sdm             8:192  0    16G  0 disk
sdn             8:208  0    16G  0 disk
sdo             8:224  0    16G  0 disk
sdp             8:240  0    16G  0 disk
sdq            65:0    0    16G  0 disk
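If the disk list is long, lsblk can be limited to whole disks so the unused ones are easier to spot; a small optional sketch:

lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINTS    # -d hides partitions and shows only whole disks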

 

2. Create the RAID 5 array with the mdadm command

Command used: mdadm --create /dev/md0 --level=5 --raid-devices=16 /dev/sd[b-q]

--create /dev/md0: name of the array device to create
--level=5: RAID level
--raid-devices=16: number of member disks
/dev/sd[b-q]: the disks to include (a smaller example follows this list)
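For reference, the same options scale down; a minimal sketch of a 3-disk RAID 5 with one hot spare (the device names here are only an example):

mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]

mdadm will warn and ask for confirmation if any of the chosen disks already carry a filesystem or RAID signature.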

 

[root@localhost ~]# mdadm --create /dev/md0 --level=5 --raid-devices=16 /dev/sd[b-q]
To optimalize recovery speed, it is recommended to enable write-indent bitmap, do you want to enable it now? [y/N]? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

 

3. Check the array details; the RAID 5 array has been created

[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov  2 19:01:28 2025
        Raid Level : raid5
        Array Size : 251397120 (239.75 GiB 257.43 GB)
     Used Dev Size : 16759808 (15.98 GiB 17.16 GB)
      Raid Devices : 16
     Total Devices : 16
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  2 19:02:35 2025
             State : clean, degraded, recovering
    Active Devices : 15
   Working Devices : 16
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 80% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : b45e6239:f4321e20:c0058af8:2d9f7133
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg
       6       8      112        6      active sync   /dev/sdh
       7       8      128        7      active sync   /dev/sdi
       8       8      144        8      active sync   /dev/sdj
       9       8      160        9      active sync   /dev/sdk
      10       8      176       10      active sync   /dev/sdl
      11       8      192       11      active sync   /dev/sdm
      12       8      208       12      active sync   /dev/sdn
      13       8      224       13      active sync   /dev/sdo
      14       8      240       14      active sync   /dev/sdp
      16      65        0       15      spare rebuilding   /dev/sdq

Once every member's state changes from spare rebuilding to active sync, the RAID 5 creation is complete.
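The initial build can also be followed from /proc/mdstat, which shows a progress bar and an estimated finish time; a quick sketch:

cat /proc/mdstat             # one-off snapshot of all md arrays and any resync in progress
watch -n 5 cat /proc/mdstat  # refresh every 5 seconds until the recovery completes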

 

4. Generate the mdadm configuration file

mdadm -Ds > /etc/mdadm.conf

Without this step, /dev/md0 will come back as /dev/md127 after a reboot.

dracut -f

(On CentOS/RHEL) rebuild the initramfs so the new configuration takes effect at boot.
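To double-check this step, the generated file should now contain an ARRAY line for /dev/md0, and it should match what the running array reports; a small sketch:

cat /etc/mdadm.conf      # should contain an ARRAY line with the array's UUID
mdadm --detail --scan    # prints the equivalent ARRAY line(s) from the running arrays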

 

5. Create a filesystem on the array (I chose XFS here) before mounting it

[root@localhost ~]# mkfs.xfs -f /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=3928192 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=1
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=1
         =                       exchange=0
data     =                       bsize=4096   blocks=62849280, imaxpct=25
         =                       sunit=128    swidth=1920 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1, parent=0
log      =internal log           bsize=4096   blocks=30688, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
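The mount point used in the next step has to exist before mounting; a one-line sketch with the same path used below:

mkdir -p /mnt/md0    # create the mount point if it does not already exist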

 

6. Mount the array and check the mount status

[root@localhost ~]# mount /dev/md0 /mnt/md0/
[root@localhost ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cs-root xfs        70G  2.6G   68G   4% /
devtmpfs            devtmpfs  7.7G     0  7.7G   0% /dev
tmpfs               tmpfs     7.7G     0  7.7G   0% /dev/shm
efivarfs            efivarfs  128M   34K  128M   1% /sys/firmware/efi/efivars
tmpfs               tmpfs     3.1G   11M  3.1G   1% /run
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/sdb2           xfs       960M  229M  732M  24% /boot
/dev/mapper/cs-home xfs        48G  964M   47G   2% /home
/dev/sdb1           vfat      599M  8.4M  591M   2% /boot/efi
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs               tmpfs     1.6G  4.0K  1.6G   1% /run/user/1000
/dev/md0            xfs       240G  4.7G  236G   2% /mnt/md0

 

7. Mount automatically at boot

First, find the UUID of the new filesystem:

[root@localhost ~]# ls -all /dev/disk/by-uuid/
total 0
drwxr-xr-x. 2 root root 160 Nov  2 19:33 .
drwxr-xr-x. 8 root root 160 Nov  2 19:31 ..
lrwxrwxrwx. 1 root root  10 Nov  2 19:31 4de6582e-75ce-41cd-b462-f45d7ed9e4c5 -> ../../dm-1
lrwxrwxrwx. 1 root root  10 Nov  2 19:31 62904add-ac26-4681-9f56-58262ed9ce5c -> ../../dm-0
lrwxrwxrwx. 1 root root  10 Nov  2 19:31 7a099979-69c5-4a64-a5d1-102fcb527d6d -> ../../dm-2
lrwxrwxrwx. 1 root root   9 Nov  2 19:33 8d4bd631-43fd-497a-a24c-63350fb03a73 -> ../../md0
lrwxrwxrwx. 1 root root  10 Nov  2 19:31 aeaca083-477c-4b98-948b-4270cf7431ec -> ../../sdb2
lrwxrwxrwx. 1 root root  10 Nov  2 19:31 D110-1DEC -> ../../sdb1

The entry pointing to ../../md0 is the array we just formatted; add its UUID to /etc/fstab:

UUID=8d4bd631-43fd-497a-a24c-63350fb03a73 /data xfs defaults 1 2

/data is the mount point; change it to whatever path you prefer. Reboot and verify that the mount comes up (a way to test it without rebooting is sketched after the output below).

[root@localhost ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cs-root xfs        70G  2.6G   68G   4% /
devtmpfs            devtmpfs  7.7G     0  7.7G   0% /dev
tmpfs               tmpfs     7.7G     0  7.7G   0% /dev/shm
efivarfs            efivarfs  128M   34K  128M   1% /sys/firmware/efi/efivars
tmpfs               tmpfs     3.1G   11M  3.1G   1% /run
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/sda2           xfs       960M  229M  732M  24% /boot
/dev/sda1           vfat      599M  8.4M  591M   2% /boot/efi
/dev/mapper/cs-home xfs        48G  964M   47G   2% /home
/dev/md0            xfs       240G  4.7G  236G   2% /data
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs               tmpfs     1.6G  4.0K  1.6G   1% /run/user/1000
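If you would rather test the fstab entry without a full reboot, mount -a applies it immediately; a small sketch (it assumes the /data directory has already been created):

mkdir -p /data     # mount point referenced in fstab
mount -a           # mount every fstab entry that is not already mounted
findmnt /data      # confirm that /dev/md0 is mounted at /data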

 

8. Simulate a disk failure

First, mark the target disk as failed so the array stops using it and the data stays consistent:

[root@localhost ~]# mdadm --manage /dev/md0 --fail /dev/sdc
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov  2 19:26:11 2025
        Raid Level : raid5
        Array Size : 251397120 (239.75 GiB 257.43 GB)
     Used Dev Size : 16759808 (15.98 GiB 17.16 GB)
      Raid Devices : 16
     Total Devices : 16
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  2 19:40:45 2025
             State : clean, degraded
    Active Devices : 15
   Working Devices : 15
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : ca39bc78:c9f823e7:91cf7d03:2f2d5856
            Events : 31

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8      176        3      active sync   /dev/sdl
       4       8       96        4      active sync   /dev/sdg
       5       8       64        5      active sync   /dev/sde
       6       8       80        6      active sync   /dev/sdf
       7       8      224        7      active sync   /dev/sdo
       8       8      144        8      active sync   /dev/sdj
       9       8      112        9      active sync   /dev/sdh
      10       8      128       10      active sync   /dev/sdi
      11       8      192       11      active sync   /dev/sdm
      12       8      160       12      active sync   /dev/sdk
      13       8      208       13      active sync   /dev/sdn
      14       8      240       14      active sync   /dev/sdp
      16      65        0       15      active sync   /dev/sdq

       1       8       32        -      faulty   /dev/sdc

You can see that /dev/sdc is now in the faulty state.
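The failure also shows up in /proc/mdstat, where the failed member is flagged with (F); a quick check:

grep -A 3 md0 /proc/mdstat    # the sdc entry should carry an (F) suffix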

 

9. Remove the failed disk

[root@localhost ~]# mdadm --manage /dev/md0 --remove /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov  2 19:26:11 2025
        Raid Level : raid5
        Array Size : 251397120 (239.75 GiB 257.43 GB)
     Used Dev Size : 16759808 (15.98 GiB 17.16 GB)
      Raid Devices : 16
     Total Devices : 15
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  2 19:42:05 2025
             State : clean, degraded
    Active Devices : 15
   Working Devices : 15
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : ca39bc78:c9f823e7:91cf7d03:2f2d5856
            Events : 32

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8      176        3      active sync   /dev/sdl
       4       8       96        4      active sync   /dev/sdg
       5       8       64        5      active sync   /dev/sde
       6       8       80        6      active sync   /dev/sdf
       7       8      224        7      active sync   /dev/sdo
       8       8      144        8      active sync   /dev/sdj
       9       8      112        9      active sync   /dev/sdh
      10       8      128       10      active sync   /dev/sdi
      11       8      192       11      active sync   /dev/sdm
      12       8      160       12      active sync   /dev/sdk
      13       8      208       13      active sync   /dev/sdn
      14       8      240       14      active sync   /dev/sdp
      16      65        0       15      active sync   /dev/sdq

You can see that the Number for /dev/sdc has changed to -, its state is removed, and it has been taken out of the array.
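Failing and removing can also be done in a single call, which is convenient when scripting a replacement; a sketch with the same device names:

mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc    # mark as faulty and hot-remove in one step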

 

10. Add a new disk to replace the failed one

[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sdc
mdadm: re-added /dev/sdc
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov  2 19:26:11 2025
        Raid Level : raid5
        Array Size : 251397120 (239.75 GiB 257.43 GB)
     Used Dev Size : 16759808 (15.98 GiB 17.16 GB)
      Raid Devices : 16
     Total Devices : 16
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  2 19:53:50 2025
             State : clean
    Active Devices : 16
   Working Devices : 16
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : ca39bc78:c9f823e7:91cf7d03:2f2d5856
            Events : 50

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       80        3      active sync   /dev/sdf
       4       8       64        4      active sync   /dev/sde
       5       8       96        5      active sync   /dev/sdg
       6       8      128        6      active sync   /dev/sdi
       7       8      160        7      active sync   /dev/sdk
       8       8      176        8      active sync   /dev/sdl
       9       8      112        9      active sync   /dev/sdh
      10       8      144       10      active sync   /dev/sdj
      11       8      208       11      active sync   /dev/sdn
      12       8      192       12      active sync   /dev/sdm
      13       8      224       13      active sync   /dev/sdo
      14      65        0       14      active sync   /dev/sdq
      16       8      240       15      active sync   /dev/sdp

After the new disk is added, the array resynchronizes the data automatically.
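If the array already holds data, the added disk is rebuilt from parity rather than appearing in sync immediately; progress can be watched the same way as during creation, for example:

mdadm -D /dev/md0 | grep -iE 'state|rebuild'    # shows the array State and, while rebuilding, the Rebuild Status percentage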
