Table of Contents
- Check if the kernel supports quotas
- Check if the mount properties of the specified partition meet the conditions
- quotacheck generates configuration files for users and groups
- edquota edits the quota file to set specified limit sizes
- Start quota management
- Stop quota management
- quota view quota information for specified users and groups
- repquota view disk quotas for specified partitions
- setquota non-interactive command to set disk quotas
- Small experiment with disk quotas
- PV physical volume creation and removal
- VG volume group creation and removal
- LV logical volume creation and removal
- Increase LV capacity (add 5G space to LV)
- Decrease LV capacity (reduce LV capacity by 5G)
- LV snapshot function
- Mdadm command analysis
- Build a RAID 5
- RAID emulation rescue mode
If your Linux server has multiple users who frequently access data, disk quotas are a very useful tool for keeping disk usage fair among all of them. And if your users often complain about running out of disk space, you will also need more advanced storage management. In this chapter we introduce disk arrays (RAID) and Logical Volume Management (LVM), which help you manage and maintain the disk capacity available to users.
Quota: Disk Quota Configuration
The term “quota” literally means how much “limit” there is. If applied to pocket money, it is similar to “how much pocket money is available in a month”. If applied to the disk usage on a computer host, in Linux, it means how much capacity limit there is. We can use quota to make disk capacity usage more fair. Below, we will introduce what quota is and provide a complete example of how to use it.
Since Linux is a multi-user operating system, by default it does not limit how much disk space each user may use. If a user carelessly or maliciously fills up the disk, the system disk can become unwritable and the system may even crash. To make sure the system disk always has enough free space, we need to impose disk usage limits on users and groups.
Types of disk quota limits: a soft limit (which may be exceeded temporarily, triggering a warning), a hard limit (which can never be exceeded), and a grace period during which usage may stay above the soft limit. Quotas can restrict both disk capacity (blocks) and the number of files (inodes).
Levels of disk quota limits: per-user quotas and per-group quotas.
In a minimal installation this command is not available; run yum install -y quota to install it.
Check if the kernel supports quotas
[root@localhost ~]# cat /boot/config-3.10.0-693.el7.x86_64 |grep "CONFIG_QUOTA"
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
Check if the mount properties of the specified partition meet the conditions
[root@localhost ~]# dumpe2fs -h /dev/vdb |grep "Default mount options"
dumpe2fs 1.42.9 (28-Dec-2013)
Default mount options: user_xattr acl
# Check if the result contains the mount properties usrquota and grpquota
quotacheck generates configuration files for users and groups
[root@localhost ~]# quotacheck --help
Utility for checking and repairing quota files.
quotacheck [-gucbfinvdmMR] [-F <quota-format>] filesystem|-a
Syntax: [ quotacheck [options] [partition name] ]
-a # Scan all partitions in /etc/mtab that have disk quota functionality enabled. If this parameter is added, the partition name does not need to be included after the command.
-u # Create user quota configuration files, i.e., generate aquota.user
-g # Create group quota configuration files, i.e., aquota.group
-v # Display the scanning process
-c # Clear existing configuration files and create new ones
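For example, to scan the partition used in the experiment below and create both quota files from scratch (a sketch; it assumes /dev/sdb1 is already mounted with the usrquota,grpquota options):
[root@localhost ~]# quotacheck -cugv /dev/sdb1 # Scan /dev/sdb1 and create new aquota.user and aquota.group files
[root@localhost ~]# ls /sdb1/aquota.* # The generated quota files live in the root of the mounted partition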
edquota edits the quota file to set specified limit sizes
[root@localhost ~]# edquota --help
edquota: Usage:
edquota [-rm] [-u] [-F formatname] [-p username] [-f filesystem] username ...
edquota [-rm] -g [-F formatname] [-p groupname] [-f filesystem] groupname ...
edquota [-u|g] [-F formatname] [-f filesystem] -t
edquota [-u|g] [-F formatname] [-f filesystem] -T username|groupname ...
Syntax: [ edquota [options] [username or group name] ]
-u # Username
-g # Group name
-t # Set grace period
-p # Copy disk quota rules, no need to set each user or group manually
# edquota -p template_user -u target_user
# Note: The sizes written in the configuration file are in KB by default
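A quick sketch of how these options are usually combined; lyshark and temp come from the experiment below, while tom is only a hypothetical target user:
[root@localhost ~]# edquota -u lyshark # Interactively edit the quota rules for user lyshark
[root@localhost ~]# edquota -g temp # Interactively edit the quota rules for group temp
[root@localhost ~]# edquota -t # Edit the grace period allowed after a soft limit is exceeded
[root@localhost ~]# edquota -p lyshark -u tom # Copy lyshark's quota rules to user tom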
Start quota management
[root@localhost ~]# quotaon --help
quotaon: Usage:
quotaon [-guvp] [-F quotaformat] [-x state] -a
quotaon [-guvp] [-F quotaformat] [-x state] filesys ...
Syntax: [ quotaon [options] [partition name] ]
-a # Start disk quotas for all partitions based on /etc/mtab (no partition name needed)
-u # Start user disk quotas
-g # Start group disk quotas
-v # Display startup process information
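A minimal example, assuming quotas have been prepared on the /sdb1 partition as in the experiment below:
[root@localhost ~]# quotaon -ugv /sdb1 # Turn on user and group quotas for the /sdb1 mount point only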
Stop quota management
[root@localhost ~]# quotaoff --help
quotaoff: Usage:
quotaoff [-guvp] [-F quotaformat] [-x state] -a
quotaoff [-guvp] [-F quotaformat] [-x state] filesys ...
Syntax: [ quotaoff [options] [partition name] ]
-a # Stop disk quotas for all partitions based on /etc/mtab (no partition name needed)
-u # Stop user disk quotas
-g # Stop group disk quotas
-v # Display shutdown process information
quota view quota information for specified users and groups
[root@localhost ~]# quota --help
quota: unrecognized option '--help'
quota: Usage: quota [-guqvswim] [-l | [-Q | -A]] [-F quotaformat]
quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -u username ...
quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -g groupname ...
quota [-qvswugQm] [-F quotaformat] -f filesystem ...
Syntax: [ quota [options] [username] ]
-u # Username
-g # Group name
-v # Display detailed information
-s # Display size in common units
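For example, assuming the user lyshark and group temp from the experiment below already have quotas configured:
[root@localhost ~]# quota -uvs lyshark # Show lyshark's usage and limits in human-readable units
[root@localhost ~]# quota -gvs temp # Show the temp group's usage and limits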
repquota view disk quotas for specified partitions
[root@localhost ~]# repquota --help
repquota: Utility for reporting quotas.
Usage:
repquota [-vugsi] [-c|C] [-t|n] [-F quotaformat] (-a | mntpoint)
Syntax: [ repquota [options] [partition name] ]
-u # Query user quotas
-g # Query group quotas
-v # Display details
-s # Display in common units
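A typical report over every quota-enabled partition might look like this (a sketch; -a picks up the partitions listed in /etc/mtab):
[root@localhost ~]# repquota -augvs # Report user and group quotas for all quota-enabled partitions, in common units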
setquota non-interactive command to set disk quotas
[root@localhost ~]# setquota --help
setquota: Usage:
setquota [-u|-g] [-rm] [-F quotaformat] <user|group>
  <block-softlimit> <block-hardlimit> <inode-softlimit> <inode-hardlimit> -a|<filesystem>...
setquota [-u|-g] [-rm] [-F quotaformat] <-p protouser|protogroup> <user|group> -a|<filesystem>...
setquota [-u|-g] [-rm] [-F quotaformat] -b [-c] -a|<filesystem>...
setquota [-u|-g] [-F quotaformat] -t <blockgrace> <inodegrace> -a|<filesystem>...
setquota [-u|-g] [-F quotaformat] <user|group> -T <blockgrace> <inodegrace> -a|<filesystem>...
Syntax: [ setquota -u username block-soft block-hard inode-soft inode-hard partition ]
Note: This non-interactive command is well suited to scripts, and when many users need the same disk quota configuration it can also be applied by copying rules with -p.
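As a sketch, the limits used in the experiment below could be applied non-interactively like this (values are in KB, and /sdb1 is the mount point assumed throughout this chapter):
[root@localhost ~]# setquota -u lyshark 204800 512000 0 0 /sdb1 # User lyshark: block soft=200M, hard=500M, no inode limits
[root@localhost ~]# setquota -g temp 102400 204800 0 0 /sdb1 # Group temp: block soft=100M, hard=200M, no inode limits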
Small experiment with disk quotas
⦁ An unpartitioned disk /dev/sdb is available; partition and format it manually.
⦁ Enable disk quotas on it and add it to the boot-time mount list.
⦁ Create user lyshark and group temp.
⦁ Configure a soft limit of 200M and a hard limit of 500M for lyshark; configure a soft limit of 100M and a hard limit of 200M for the temp group.
1. Check if the system supports quotas
[root@localhost ~]# cat /boot/config-3.10.0-862.el7.x86_64 |grep "CONFIG_QUOTA"
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
2. View disk information
[root@localhost ~]# ll /dev/sd*
brw-rw---- 1 root disk 8, 0 Jun 24 09:14 /dev/sda
brw-rw---- 1 root disk 8, 1 Jun 24 09:14 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jun 24 09:14 /dev/sda2
brw-rw---- 1 root disk 8, 16 Jun 24 09:14 /dev/sdb
3. Partition /dev/sdb and format it to ext4 format
[root@localhost ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart
Partition name? []? sdb1
File system type? [ext2]? ext2
Start? 1M
End? 10000M
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 10.0GB 9999MB ext4 sdb1
(parted) q
Information: You may need to update /etc/fstab.
[root@localhost ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
610800 inodes, 2441216 blocks
122060 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
75 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
4. Create a mount point and mount the device
[root@localhost ~]# mkdir /sdb1
[root@localhost ~]# mount /dev/sdb1 /sdb1/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.4G 6.7G 17% /
devtmpfs 98M 0 98M 0% /dev
tmpfs 110M 0 110M 0% /dev/shm
tmpfs 110M 5.5M 104M 6% /run
tmpfs 110M 0 110M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 22M 0 22M 0% /run/user/0
/dev/sr0 4.2G 4.2G 0 100% /mnt
/dev/sdb1 9.1G 37M 8.6G 1% /sdb1
5. Check if the partition supports quotas (mainly check for usrquota, grpquota)
[root@localhost ~]# dumpe2fs -h /dev/sdb1 |grep "Default mount options"
dumpe2fs 1.42.9 (28-Dec-2013)
Default mount options: user_xattr acl
[root@localhost ~]# cat /proc/mounts |grep "/dev/sdb1"
/dev/sdb1 /sdb1 ext4 rw,relatime,data=ordered 0 0
# If the above does not show the relevant permissions, we need to remount the disk with the permissions
[root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1
[root@localhost ~]# cat /proc/mounts |grep "/dev/sdb1"
/dev/sdb1 /sdb1 ext4 rw,relatime,quota,usrquota,grpquota,data=ordered 0 0
6. Set the partition to automatically mount at boot and enable quotas
[root@localhost ~]# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Sep 21 20:07 13d5ccc2-52db-4aec-963a-f88e8edcf01c -> ../../sda1
lrwxrwxrwx 1 root root 9 Sep 21 20:07 2018-05-03-20-55-23-00 -> ../../sr0
lrwxrwxrwx 1 root root 10 Sep 21 20:07 4604dcf2-da39-455a-9719-e7c5833e566c -> ../../dm-0
lrwxrwxrwx 1 root root 10 Sep 21 20:07 939cbeb8-bc88-44aa-9221-50672111e123 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Sep 21 20:07 f6a4b420-aa6a-4e66-bbb3-c8e8280a099f -> ../../dm-1
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Sep 18 09:05:06 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=13d5ccc2-52db-4aec-963a-f88e8edcf01c /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
UUID=7d7f22ed-466e-4205-8efe-1b6184dc5e1b swap swap defaults 0 0
UUID=939cbeb8-bc88-44aa-9221-50672111e123 /sdb1 ext4 defaults,usrquota,grpquota 0 0
[root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1
7. Generate quota files quotacheck -ugv [partition name]
[root@localhost ~]# quotacheck -ugv /dev/sdb1
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb1 [/sdb1] done
quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Checked 3 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
8. Edit limits, edquota -ugtp [username/group name]
Configure soft limit of 200M and hard limit of 500M for lyshark
[root@localhost ~]# edquota -u lyshark
Disk quotas for user lyshark (uid 1000):
# Columns: Filesystem, blocks (used), soft (capacity limit), hard (capacity limit), inodes (used), soft (file-count limit), hard (file-count limit)
Filesystem blocks soft hard inodes soft hard
/dev/sdb1 0 200M 500M 0 0 0
Configure soft limit of 100M and hard limit of 200M for temp group.
[root@localhost ~]# edquota -g temp
Disk quotas for group temp (gid 1001):
Filesystem blocks soft hard inodes soft hard
/dev/sdb1 0 102400 204800 0 0 0
9. Enable quotas: quotaon -augv (quotaoff turns them off)
[root@localhost ~]# quotaon -augv
/dev/sdb1 [/sdb1]: group quotas turned on
/dev/sdb1 [/sdb1]: user quotas turned on
10. View quota for specified user or group, quota -ugvs
[root@localhost ~]# quota -ugvs
Disk quotas for user root (uid 0):
Filesystem space quota limit grace files quota limit grace
/dev/sdb1 20K 0K 0K 2 0 0
Disk quotas for group root (gid 0):
Filesystem space quota limit grace files quota limit grace
/dev/sdb1 20K 0K 0K 2 0 0
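11. (Optional) Verify the limits. A sketch of a quick test, assuming the user lyshark exists and has write permission on /sdb1:
[root@localhost ~]# chmod 777 /sdb1 # For testing only, let ordinary users write to the partition
[root@localhost ~]# su - lyshark
[lyshark@localhost ~]$ dd if=/dev/zero of=/sdb1/test1 bs=1M count=300 # Crosses the 200M soft limit: a quota warning is expected
[lyshark@localhost ~]$ dd if=/dev/zero of=/sdb1/test2 bs=1M count=300 # Crossing the 500M hard limit should fail with "Disk quota exceeded"
[lyshark@localhost ~]$ quota -uvs # Check usage against the limits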
LVM Logical Volume Manager
LVM (Logical Volume Manager) is a mechanism for managing disk partitions in a Linux environment. With ordinary partition management, a partition's size cannot be changed once it has been created. When a partition can no longer hold a file, the usual workarounds are symbolic links or partition-resizing tools, but these are only stopgaps and do not solve the underlying problem. In simple terms, LVM merges physical disks into one or more large virtual storage pools and lets us allocate space from a pool as needed. Because the pool is virtual, the allocated space can be resized freely, as follows:
Components of LVM:
- PV (Physical Volume): a whole disk or partition that has been initialized for LVM use.
- VG (Volume Group): a storage pool made up of one or more PVs.
- PE (Physical Extent): the fixed-size allocation unit within a VG (4M by default).
- LV (Logical Volume): the usable "partition" carved out of a VG, which is then formatted and mounted.
Prepare 4 hard disks, no need to partition or format.
[root@localhost ~]# ll /dev/sd[b-z]
brw-rw---- 1 root disk 8, 16 Sep 21 22:04 /dev/sdb
brw-rw---- 1 root disk 8, 32 Sep 21 22:04 /dev/sdc
brw-rw---- 1 root disk 8, 48 Sep 21 22:04 /dev/sdd
brw-rw---- 1 root disk 8, 64 Sep 21 22:04 /dev/sde
PV physical volume creation and removal
Creating PV
pvcreate [partition path],[partition path][.......]
[root@localhost ~]# ll /dev/sd[b-z]
brw-rw---- 1 root disk 8, 16 Sep 21 22:04 /dev/sdb
brw-rw---- 1 root disk 8, 32 Sep 21 22:04 /dev/sdc
brw-rw---- 1 root disk 8, 48 Sep 21 22:04 /dev/sdd
brw-rw---- 1 root disk 8, 64 Sep 21 22:04 /dev/sde
[root@localhost ~]# pvcreate /dev/sdb /dev/sdc /dev/sdd # Create PVs on three disks
Physical volume "/dev/sdb" successfully created.
Physical volume "/dev/sdc" successfully created.
Physical volume "/dev/sdd" successfully created.
[root@localhost ~]# pvs # Query created hard disks
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb lvm2 --- 10.00g 10.00g
/dev/sdc lvm2 --- 10.00g 10.00g
/dev/sdd lvm2 --- 10.00g 10.00g
Removing PV
pvremove [partition path]
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb lvm2 --- 10.00g 10.00g
/dev/sdc lvm2 --- 10.00g 10.00g
/dev/sdd lvm2 --- 10.00g 10.00g
[root@localhost ~]# pvremove /dev/sdd # Remove /dev/sdd
Labels on physical volume "/dev/sdd" successfully wiped.
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb lvm2 --- 10.00g 10.00g
/dev/sdc lvm2 --- 10.00g 10.00g
VG volume group creation and removal
Creating a VG volume group; a VG must be assembled from existing PVs
vgcreate -s [specified PE size] [VG volume group name] [partition path] [partition path][.....]
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb lvm2 --- 10.00g 10.00g
/dev/sdc lvm2 --- 10.00g 10.00g
[root@localhost ~]# vgcreate -s 4M my_vg /dev/sdb /dev/sdc # Create a VG volume group here
Volume group "my_vg" successfully created
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- <9.00g 0
my_vg 2 0 0 wz--n- 19.99g 19.99g # This is the VG volume group, named my_vg
Adding a new PV to the existing my_vg volume group, i.e., extending the volume group
vgextend [volume group name] [physical volume partition]
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb my_vg lvm2 a-- <10.00g <10.00g
/dev/sdc my_vg lvm2 a-- <10.00g <10.00g
/dev/sdd lvm2 --- 10.00g 10.00g # This physical volume has not been assigned to a volume group
[root@localhost ~]# vgextend my_vg /dev/sdd # Add a PV to the specified volume group
Volume group "my_vg" successfully extended
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb my_vg lvm2 a-- <10.00g <10.00g
/dev/sdc my_vg lvm2 a-- <10.00g <10.00g
/dev/sdd my_vg lvm2 a-- <10.00g <10.00g # This physical volume has been assigned to the my_vg volume group
Removing a PV from the VG volume group (removing a single PV)
vgreduce [volume group name] [physical volume partition]
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb my_vg lvm2 a-- <10.00g <10.00g
/dev/sdc my_vg lvm2 a-- <10.00g <10.00g
/dev/sdd my_vg lvm2 a-- <10.00g <10.00g
[root@localhost ~]# vgreduce my_vg /dev/sdd # Remove /dev/sdd from the my_vg volume group
Removed "/dev/sdd" from volume group "my_vg"
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- <9.00g 0
/dev/sdb my_vg lvm2 a-- <10.00g <10.00g
/dev/sdc my_vg lvm2 a-- <10.00g <10.00g
Removing an entire VG volume group
vgremove [volume group name]
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- <9.00g 0
my_vg 2 0 0 wz--n- 19.99g 19.99g
[root@localhost ~]# vgremove my_vg # Remove the entire volume group
Volume group "my_vg" successfully removed
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- <9.00g 0
[root@localhost ~]#
Removing all empty physical volumes (PVs with no allocated extents) from a VG
vgreduce -a [volume group name]
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- <9.00g 0
my_vg 3 0 0 wz--n- <29.99g <29.99g
[root@localhost ~]# vgreduce -a my_vg # Remove only the PVs that hold no data
Removed "/dev/sdb" from volume group "my_vg"
Removed "/dev/sdc" from volume group "my_vg"
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- <9.00g 0
my_vg 1 0 0 wz--n- <10.00g <10.00g
LV logical volume creation and removal
Creating an LV logical volume
lvcreate -L [specified size] -n [LV name] [VG volume group: from which volume group to allocate]
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- <8.00g
swap centos -wi-ao---- 1.00g
[root@localhost ~]# lvcreate -L 10G -n my_lv my_vg # Create LVM logical volume
Logical volume "my_lv" created.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- <8.00g
swap centos -wi-ao---- 1.00g
my_lv my_vg -wi-a----- 10.00g
Format and mount for use
[root@localhost ~]# mkdir /LVM # First create a mount point
[root@localhost ~]#
[root@localhost ~]# mkfs.ext4 /dev/my_vg/my_lv # Format the LVM partition
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mount /dev/my_vg/my_lv /LVM/ # Mount the LVM
[root@localhost ~]#
[root@localhost ~]# df -h # Check the result
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.2G 6.9G 15% /
devtmpfs 98M 0 98M 0% /dev
tmpfs 110M 0 110M 0% /dev/shm
tmpfs 110M 5.5M 104M 5% /run
tmpfs 110M 0 110M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 22M 0 22M 0% /run/user/0
/dev/mapper/my_vg-my_lv 9.8G 37M 9.2G 1% /LVM ← Mounted successfully
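If the logical volume should come back after a reboot, an /etc/fstab entry can be added; a minimal sketch using the device-mapper path shown in the df output above:
/dev/mapper/my_vg-my_lv /LVM ext4 defaults 0 0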
LV capacity increase (add 5G space to LV)
Note: to extend, first extend the LV, then extend the file system
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.2G 6.9G 15% /
devtmpfs 98M 0 98M 0% /dev
tmpfs 110M 0 110M 0% /dev/shm
tmpfs 110M 5.5M 104M 5% /run
tmpfs 110M 0 110M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 22M 0 22M 0% /run/user/0
/dev/mapper/my_vg-my_lv 9.8G 37M 9.2G 1% /LVM ← This shows 10G
[root@localhost ~]# lvextend -L +5G /dev/my_vg/my_lv # Execute the increase command, allocate 5G from the VG volume group
Size of logical volume my_vg/my_lv changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840).
Logical volume my_vg/my_lv successfully resized.
[root@localhost ~]# resize2fs -f /dev/my_vg/my_lv # Extend the file system
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/my_vg/my_lv is mounted on /LVM; on-line resizing required
old_desc_blocks =2, new_desc_blocks =2
The filesystem on /dev/my_vg/my_lv is now 3932160 blocks long.
[root@localhost ~]# df -h # Verify the extension result
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.2G 6.9G 15% /
devtmpfs 98M 0 98M 0% /dev
tmpfs 110M 0 110M 0% /dev/shm
tmpfs 110M 5.5M 104M 5% /run
tmpfs 110M 0 110M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 22M 0 22M 0% /run/user/0
/dev/mapper/my_vg-my_lv 15G 41M 14G 1% /LVM ← This has increased from 10G to 15G
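Note: on current LVM releases the two steps can usually be combined; lvextend's -r (--resizefs) option calls the matching file system resize tool itself. A sketch of the one-step form:
[root@localhost ~]# lvextend -r -L +5G /dev/my_vg/my_lv # Extend the LV and grow the ext4 file system in one step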
LV capacity decrease (reduce LV capacity by 5G)
Note: to shrink, first unmount the file system, check it, then shrink the file system, and finally shrink the LV
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.2G 6.9G 15% /
devtmpfs 98M 0 98M 0% /dev
tmpfs 110M 0 110M 0% /dev/shm
tmpfs 110M 5.5M 104M 5% /run
tmpfs 110M 0 110M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 22M 0 22M 0% /run/user/0
/dev/mapper/my_vg-my_lv 15G 41M 14G 1% /LVM ← This shows 15G space
[root@localhost ~]# umount /dev/my_vg/my_lv # Unmount the LVM volume group
[root@localhost ~]# e2fsck -f /dev/my_vg/my_lv # Check the file system
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/my_vg/my_lv: 11/983040 files (0.0% non-contiguous), 104724/3932160 blocks
[root@localhost ~]# resize2fs -f /dev/my_vg/my_lv 10G # Shrink the file system to the reduced size (10G)
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/my_vg/my_lv to 2621440 (4k) blocks.
The filesystem on /dev/my_vg/my_lv is now 2621440 blocks long.
[root@localhost ~]# lvreduce -L 10G /dev/my_vg/my_lv # Reduce LVM
WARNING: Reducing active logical volume to 10.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce my_vg/my_lv? [y/n]: y # Input y
Size of logical volume my_vg/my_lv changed from 15.00 GiB (3840 extents) to 10.00 GiB (2560).
Logical volume my_vg/my_lv successfully resized.
[root@localhost ~]# mount /dev/my_vg/my_lv /LVM/ # Mount
[root@localhost ~]# df -h # Check the partition changes again
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.2G 6.9G 15% /
devtmpfs 98M 0 98M 0% /dev
tmpfs 110M 0 110M 0% /dev/shm
tmpfs 110M 5.5M 104M 5% /run
tmpfs 110M 0 110M 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
tmpfs 22M 0 22M 0% /run/user/0
/dev/mapper/my_vg-my_lv 9.8G 37M 9.2G 1% /LVM ← This has decreased from 15G to 10G
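The 5G released by the reduction goes back to the volume group; a quick way to confirm this (a sketch):
[root@localhost ~]# vgs my_vg # VFree should have grown by the 5G returned from my_lv
[root@localhost ~]# lvs my_vg # my_lv is back to 10.00g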
LV snapshot function
Taking a snapshot
lvcreate -s -n [snapshot name] -L [snapshot size] [LV to snapshot]
[root@localhost LVM]# ls # The directory holds 100 test files named 1 through 100 (listing condensed)
1 2 3 4 5 ... 96 97 98 99 100
[root@localhost LVM]# lvcreate -s -n mylv_back -L 200M /dev/my_vg/my_lv # Take a snapshot of the /LVM directory
Logical volume "mylv_back" created.
[root@localhost LVM]# lvs # View the snapshot
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- <8.00g
swap centos -wi-ao---- 1.00g
my_lv my_vg owi-aos--- 10.00g
mylv_back my_vg swi-a-s--- 200.00m my_lv 0.01 ← This is the snapshot
Restoring from snapshot
[root@localhost LVM]# ls # The 100 test files are still present before the simulated deletion (listing condensed)
1 2 3 4 5 ... 96 97 98 99 100
[root@localhost LVM]# rm -fr * # Simulate deletion
[root@localhost LVM]# mkdir /back # Create a mount point
[root@localhost LVM]# mount /dev/my_vg/mylv_back /back/ # Mount the backup file
[root@localhost LVM]# cp -a /back/* ./ # Copy backup files
[root@localhost LVM]# ls # All 100 test files are back after restoring from the snapshot (listing condensed)
1 2 3 4 5 ... 96 97 98 99 100
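Once the data has been copied back, the snapshot is usually unmounted and removed so it no longer consumes space in the volume group (a sketch):
[root@localhost LVM]# umount /back # Unmount the snapshot
[root@localhost LVM]# lvremove /dev/my_vg/mylv_back # Remove the snapshot volume (lvremove asks for confirmation)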
RAID Redundant Array of Independent Disks
Definition: An array of independent disks with redundancy capabilities
Types of disk arrays: external disk array enclosures, internal disk array (RAID) cards, and software emulation.
Functions of a disk array:
1. Organize multiple disks into one logical volume to provide disk-spanning capacity.
2. Split data into blocks and read/write them in parallel across multiple disks to improve access speed.
3. Provide fault tolerance through mirroring or parity (checksum) operations.
Note: A RAID array mainly ensures that service is not interrupted by a hardware failure; it does not protect against operator error.
Classification of disk arrays: hardware RAID (dedicated RAID controller cards or external array enclosures) and software RAID (implemented by the operating system, e.g. Linux md managed with mdadm).
Introduction to common RAID levels:
- RAID 0 (striping): data is split across at least 2 disks; best performance and full capacity, but no redundancy, so one failed disk loses all data.
- RAID 1 (mirroring): data is duplicated on 2 disks; usable capacity is halved, and the array survives the loss of one disk.
- RAID 5 (striping with distributed parity): requires at least 3 disks, sacrifices one disk's worth of capacity for parity, and tolerates the failure of one disk.
- RAID 10 (mirroring + striping): requires at least 4 disks and combines the speed of RAID 0 with the redundancy of RAID 1.
Mdadm command analysis
[root@localhost ~]# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm
mdadm --create --auto=yes /dev/md[0-9] --raid-devices=[0-n] \
--level=[015] --spare-devices=[0-n] /dev/sd[a-z]
--create # New RAID parameters
--auto=yes # Default configuration
--raid-devices=N # Number of disks in the array
--spare-devices=N # Number of backup disks
--level [015] # Array level
mdadm --detail # Query array information
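Two other forms that are often used alongside the ones above (a sketch): recording the array so it is assembled automatically at boot, and checking the kernel's view of it:
[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf # Save the array definition for assembly at boot
[root@localhost ~]# cat /proc/mdstat # Kernel status of all md arrays, including rebuild progress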
Build a RAID 5
Note: In a minimal installation this command is not installed; run yum install -y mdadm to install it
[root@localhost ~]# ls -l /dev/sd[b-z]
brw-rw---- 1 root disk 8, 16 Sep 21 23:06 /dev/sdb
brw-rw---- 1 root disk 8, 32 Sep 21 23:06 /dev/sdc
brw-rw---- 1 root disk 8, 48 Sep 21 23:06 /dev/sdd
brw-rw---- 1 root disk 8, 64 Sep 21 23:04 /dev/sde
[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 \
--raid-devices=3 --spare-devices=1 /dev/sd{b,c,d,e}
# Create a RAID array: device /dev/md0, level RAID 5, 3 active disks and 1 spare, using disks sd{b,c,d,e}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm --detail /dev/md0 # View array information
/dev/md0:                                   ← Device file name
        Version : 1.2
  Creation Time : Fri Sep 21 23:19:09 2018  ← Creation date
     Raid Level : raid5                     ← RAID level
     Array Size : 20953088 (19.98 GiB 21.46 GB)   ← Usable space
  Used Dev Size : 10476544 (9.99 GiB 10.73 GB)    ← Usable space per device
   Raid Devices : 3                         ← Number of RAID devices
  Total Devices : 4                         ← Total number of devices
    Persistence : Superblock is persistent

    Update Time : Fri Sep 21 23:19:26 2018
          State : clean, degraded, recovering
 Active Devices : 3                         ← Active disks
Working Devices : 4                         ← Working disks
 Failed Devices : 0                         ← Failed disks
  Spare Devices : 1                         ← Spare disks

         Layout : left-symmetric
     Chunk Size : 512K

Consistency Policy : resync

 Rebuild Status : 34% complete

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8   ← This device's UUID
         Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync        /dev/sdb
       1       8       32        1      active sync        /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd

       3       8       64        -      spare              /dev/sde
Format /dev/md0 and mount for use
[root@localhost ~]# mkfs -t ext4 /dev/md0 # Format
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
1310720 inodes, 5238272 blocks
261913 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mkdir /RAID # Create a new mount directory
[root@localhost ~]# mount /dev/md0 /RAID/ # Mount the device
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.0G 1.2G 6.9G 15% /
devtmpfs 98M 0 98M 0% /dev
/dev/sr0 4.2G 4.2G 0 100% /mnt
/dev/md0 20G 45M 19G 1% /RAID ← This shows successful mounting
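To keep the mount across reboots, an /etc/fstab entry can be added; referencing the array by UUID is safer than /dev/md0 because md device names may change. A sketch (the UUID must be taken from the blkid output):
[root@localhost ~]# blkid /dev/md0 # Show the file system UUID of the array
# Then append a line of this form to /etc/fstab:
# UUID=<uuid-from-blkid> /RAID ext4 defaults 0 0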
RAID emulation rescue mode
mdadm --manage /dev/md[0-9] [--add device] [--remove device] [--fail device]
--add # Add the given device to the md array
--remove # Remove the given device from the array
--fail # Mark the given device as faulty
------------------------------------------------------------
[Experiment]
[root@localhost /]# mdadm --manage /dev/md0 --fail /dev/sdb # Mark /dev/sdb as faulty
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost /]# mdadm --detail /dev/md0 # Check the status
/dev/md0:
        Version : 1.2
  Creation Time : Fri Sep 21 23:19:09 2018
     Raid Level : raid5
     Array Size : 20953088 (19.98 GiB 21.46 GB)
  Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Sep 21 23:50:12 2018
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1                          ← One faulty disk
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

Consistency Policy : resync

 Rebuild Status : 5% complete                ← Data is being rebuilt; the array returns to normal once this reaches 100%

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8
         Events : 20

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync        /dev/sdc
       4       8       48        2      active sync        /dev/sdd

       0       8       16        -      faulty             /dev/sdb   ← Faulty disk
[root@localhost /]# mdadm --manage /dev/md0 --remove /dev/sdb # Remove the faulty disk
mdadm: hot removed /dev/sdb from /dev/md0
[root@localhost /]# mdadm --manage /dev/md0 --add /dev/sdb # Add a new disk
mdadm: added /dev/sdb
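The rebuild onto the newly added disk can be watched until it finishes (a sketch):
[root@localhost /]# cat /proc/mdstat # Shows the recovery progress for /dev/md0
[root@localhost /]# mdadm --detail /dev/md0 | grep -E "State|Rebuild" # Summarized array state and rebuild percentage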
Link: https://www.cnblogs.com/LyShark/p/10221799.html
(Copyright belongs to the original author; the post will be removed upon request.)