Redirection
Standard input (stdin): file descriptor 0, redirected with < or <<
Standard output (stdout): file descriptor 1, redirected with > or >>
Standard error (stderr): file descriptor 2, redirected with 2> or 2>>
Special syntax: to write both stdout and stderr to the same file, use 2>&1
# Redirect the result of ll to out.txt file, overwriting if the file has content
ll /home > out.txt
# Append the result of ll to out.txt file
ll /etc >> out.txt
# Write stdout and stderr to the same file
find /home -name .bashrc > out.txt 2>&1 # Note that 2>&1 is at the end
find /home -name .bashrc &> out.txt # Or use &>
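The examples above only cover output redirection; the < and << forms from the list above can be sketched like this (file names are illustrative):
# Feed a file to a command through stdin
wc -l < /etc/passwd
# Use a here-document (<<) as stdin; EOF is just an arbitrary end marker
cat << EOF > out.txt
first line
second line
EOF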
Pipeline
Use command A | command B | command C to take the standard output of command A as the standard input of command B (note that a pipe only passes along the standard output of the previous command). Each pipe must be followed by a command that can read stdin, such as less, more, head, tail, etc.; commands like ls, cp, mv cannot be used this way. If you also want the stderr of the previous command to go through the pipe, use 2>&1 first to merge stderr into stdout. A couple of examples follow.
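A minimal sketch of chained pipes (paths are illustrative):
# Page through a long directory listing
ls -l /etc | less
# Count how many entries in /etc end in .conf
ls /etc | grep "\.conf$" | wc -l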
tee Command
tee [OPTION]... [FILE]... reads stdin and writes it to both stdout and the given file(s). Combined with the pipeline above:
# Display the result of ll on the screen and record it to a file
ll /home | tee list_home.out
# Display the find result (normal and error) on the screen and record it to a file
find /home -name .bashrc 2>&1 | tee find.out
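By default tee overwrites the target file; the -a option appends instead (file name illustrative):
# Append each run's output to the same log file while still showing it on screen
date | tee -a run.log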
xargs Command
xargs [options] [command [initial-arguments]] reads stdin, splits it into arguments using spaces or newlines as delimiters, and passes those arguments to the given command.
# Pass the result of find as arguments to the ls -lh command
find /usr/sbin -perm /7000 | xargs ls -lh
# Pass the result of find as arguments to the du command
find /home -name "*.go" | xargs du -cb
Text Processing – vim, grep, awk, sed, sort, wc, uniq, cut, tr
grep
grep [OPTION...] PATTERNS [FILE...] searches the given text for lines that match a pattern.
# Find lines in list.out that contain the rvs character
[leadcom@localhost test]$ grep rvs list.out
drwx------  4 rvs  rvs  127 12月 16 18:41 rvs
drwxrwxrwx 16 root root 285 8月   4 10:03 rvslocal
drwxrwxrwx  2 root root   6 5月  10 2021  rvsremote
# Use pipeline to find lines containing a certain character in the previous command
ps -ef | grep postgres
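A few commonly used grep options, for reference (file and directory names are illustrative):
# -i ignore case, -n show line numbers
grep -in "linux" list.out
# -v invert the match (print lines that do NOT match)
grep -v "root" list.out
# -r search a directory tree recursively
grep -r "rvs" /home/leadcom/test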
cut
cut OPTION... [FILE]... cuts bytes (-b), characters (-c), or fields (-f) from each line of a file and writes them to standard output. If no FILE is given, cut reads from standard input. One of the -b, -c, or -f flags must be specified.
# Use : as a delimiter to take the first element
gw1@gw1-PC:~$ echo $PATH
/home/gw1/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/sbin:/usr/sbin
gw1@gw1-PC:~$ echo $PATH | cut -d ":" -f 1
/home/gw1/.local/bin
gw1@gw1-PC:~$ export
declare -x DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
declare -x DISPLAY="localhost:10.0"
declare -x HOME="/home/gw1"
declare -x LANG="zh_CN.UTF-8"
declare -x LANGUAGE="zh_CN"
declare -x LOGNAME="gw1"
...
# Keep only the content from the 12th character onward, stripping the leading "declare -x "
gw1@gw1-PC:~$ export | cut -c 12-
DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
DISPLAY="localhost:10.0"
HOME="/home/gw1"
LANG="zh_CN.UTF-8"
LANGUAGE="zh_CN"
LOGNAME="gw1"
...
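The -b flag and multi-field selections are not shown above; two more illustrative uses:
# Take bytes 1-10 of each line
cut -b 1-10 /etc/passwd
# Take fields 1 and 7 (login name and shell), using : as the delimiter
cut -d ":" -f 1,7 /etc/passwd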
awk
gawk [ POSIX or GNU style options ] -f program-file [ -- ] file ...
gawk [ POSIX or GNU style options ] [ -- ] program-text file ...
Usage One
awk '{[pattern] action}' {filenames}  # line matching statement; the awk '' program must be in single quotes
# Split each line by space or TAB, and output the 1st and 4th fields
[leadcom@localhost test]$ cat log.txt
2 this is a test
3 Are you like awk
This's a test
10 There are orange,apple,mongo
[leadcom@localhost test]$ awk '{print $1,$4}' log.txt
2 a
3 like
This's
10 orange,apple,mongo
Usage Two
awk -F  # -F is equivalent to the built-in variable FS and specifies the field delimiter
[leadcom@localhost test]$ awk -F, '{print $1,$4}' log.txt
2 this is a test
3 Are you like awk
This's a test
10 There are orange
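awk also provides built-in variables such as NR (current line number) and NF (number of fields), and a pattern can be placed before the action; a small illustrative sketch on the same log.txt:
# Print the line number, the field count, and the last field of each line
awk '{print NR, NF, $NF}' log.txt
# Only act on lines whose first field is numerically greater than 2
awk '$1 > 2 {print $0}' log.txt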
sed
sed [OPTION]... {script-only-if-no-other-script} [input-file]...
sed [-hnV] [-e <script>] [-f <script file>] [text file]
The Linux sed command processes text files with an editing script: sed edits the text according to the instructions in the script. It is mainly used to automate editing of one or more files, to simplify repetitive operations on files, and to write conversion programs.
Parameter description:
-e <script> or --expression=<script>: process the input text with the script given in the option.
-f <script file> or --file=<script file>: process the input text with the script file given in the option.
-n or --quiet or --silent: only display the result after script processing.
Action description:
a : append; a is followed by a string, which is inserted as a new line after the current line.
c : change; c is followed by a string, which replaces the lines between n1 and n2.
d : delete; d usually takes no arguments.
i : insert; i is followed by a string, which is inserted as a new line before the current line.
p : print the selected data; usually used together with sed -n.
s : substitute; performs replacement directly and is usually combined with regular expressions, e.g. 1,20s/old/new/g.
[leadcom@localhost test]$ cat testfile
HELLO LINUX!
Linux is a free unix-type operating system.
This is a linux testfile!
Linux test
# Add a line after the fourth line in standard output
[leadcom@localhost test]$ sed -e 4a"\naaaa" testfile
HELLO LINUX!
Linux is a free unix-type operating system.
This is a linux testfile!
Linux test
naaaa
# Delete lines 2 to 5 in standard output
[leadcom@localhost test]$ nl testfile | sed '2,5d'
1 HELLO LINUX!
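Only a and d are demonstrated above; the other actions from the list work the same way. A few illustrative one-liners on the same testfile (GNU sed syntax):
# s: replace every occurrence of linux with Linux
sed 's/linux/Linux/g' testfile
# i: insert a line before line 2
sed '2i inserted line' testfile
# c: replace lines 2 to 3 with a single line
sed '2,3c replaced' testfile
# p: print only line 3 (-n suppresses the default output)
sed -n '3p' testfile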
sort
<span>sort [OPTION]... [FILE]...</span>
The Linux sort command sorts the contents of text files line by line.
gw1@gw1-PC:~$ cat testfile
test 30
Hello 95
Linux 85
gw1@gw1-PC:~$ sort testfile
Hello 95
Linux 85
test 30
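By default sort compares whole lines as strings; -n sorts numerically, -r reverses the order, and -k selects the key field. For example, sorting the same file by its 2nd field in descending numeric order:
gw1@gw1-PC:~$ sort -nrk 2 testfile
Hello 95
Linux 85
test 30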
wc
<span>wc [OPTION]... [FILE]...</span>
The Linux wc command is used to count words. Using the wc command, we can count the number of bytes, words, or lines in a file. If no file name is specified, or if the given file name is "-", the wc command will read data from standard input.
[leadcom@localhost test]$ ps -ef | grep postgres | wc -l
16
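Besides -l for lines, wc also takes -w for words and -c for bytes (with no options it prints all three); for example:
# Count words and bytes in testfile
wc -w -c testfile
# With no options: lines, words, and bytes
wc testfile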
uniq
<span>uniq [OPTION]... [INPUT [OUTPUT]]</span>
The Linux uniq command is used to report or remove duplicate lines in a text file. Note that uniq only collapses adjacent duplicate lines, which is why it is generally used together with the sort command.
gw1@gw1-PC:~$ cat testfile
test 30
test 30
test 30
Hello 95
Hello 95
Hello 95
Hello 95
Linux 85
Linux 85
gw1@gw1-PC:~$ uniq testfile
test 30
Hello 95
Linux 85
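uniq -c prefixes each line with its number of occurrences; combined with sort it gives a quick frequency count of the same file:
gw1@gw1-PC:~$ sort testfile | uniq -c | sort -nr
      4 Hello 95
      3 test 30
      2 Linux 85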
tr
tr [OPTION]... SET1 [SET2] translates or deletes characters. tr reads data from standard input, translates the characters, and writes the result to standard output.
gw1@gw1-PC:~$ cat testfile
It uses a mix of theory and practical techniques to
teach administrators how to install and
use security applications, as well as how the
applications work and why they are necessary.
# Convert lowercase to uppercase
gw1@gw1-PC:~$ cat testfile | tr a-z A-Z
IT USES A MIX OF THEORY AND PRACTICAL TECHNIQUES TO
TEACH ADMINISTRATORS HOW TO INSTALL AND
USE SECURITY APPLICATIONS, AS WELL AS HOW THE
APPLICATIONS WORK AND WHY THEY ARE NECESSARY.
gw1@gw1-PC:~$
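tr can also delete (-d) or squeeze repeated (-s) characters; two illustrative uses:
# Delete all digits from the input
echo "abc123def456" | tr -d '0-9'
# Squeeze runs of spaces into a single space
echo "too   many   spaces" | tr -s ' '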
Link: https://www.cnblogs.com/maseus/p/17122690.html