Learning Shell Commands with Sanxian (Part 2)
程序员文章站
2022-05-12 19:07:08
(1) The rm command deletes files or directories: rm -rf source. To delete everything under a directory, use rm -rf *. This is an operation that demands great caution: once deleted, the files cannot be recovered, so avoid running this command as root whenever possible.
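As a sketch of the safer habits described above (the paths here are made up for illustration):

```shell
# Work in a throwaway directory so a typo cannot touch real data.
mkdir -p /tmp/rm-demo/sub
touch /tmp/rm-demo/sub/a.txt

# -i asks before each removal; answering "n" keeps the file.
echo n | rm -i /tmp/rm-demo/sub/a.txt
ls /tmp/rm-demo/sub              # a.txt is still there

# -rf removes the whole tree without prompting -- irreversible.
rm -rf /tmp/rm-demo
ls -d /tmp/rm-demo 2>/dev/null || echo "gone"
```

Getting into the habit of rm -i (or at least rm -I) on important trees is cheap insurance against the unrecoverable rm -rf.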
[root@h1 ~]# man rm
RM(1)                        User Commands                        RM(1)

NAME
       rm - remove files or directories

SYNOPSIS
       rm [OPTION]... FILE...

DESCRIPTION
       This manual page documents the GNU version of rm.  rm removes each
       specified file.  By default, it does not remove directories.

       If the -I or --interactive=once option is given, and there are more
       than three files or the -r, -R, or --recursive are given, then rm
       prompts the user for whether to proceed with the entire operation.
       If the response is not affirmative, the entire command is aborted.

       Otherwise, if a file is unwritable, standard input is a terminal,
       and the -f or --force option is not given, or the -i or
       --interactive=always option is given, rm prompts the user for
       whether to remove the file.  If the response is not affirmative,
       the file is skipped.

OPTIONS
       Remove (unlink) the FILE(s).

       -f, --force
              ignore nonexistent files, never prompt

       -i     prompt before every removal

       -I     prompt once before removing more than three files, or when
              removing recursively.  Less intrusive than -i, while still
              giving protection against most mistakes

       --interactive[=WHEN]
              prompt according to WHEN: never, once (-I), or always (-i).
              Without WHEN, prompt always

       --one-file-system
              when removing a hierarchy recursively, skip any directory
              that is on a file system different from that of the
              corresponding command line argument

       --no-preserve-root
:
(2) The mkdir command creates a directory. Usage: mkdir directory-name
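A small sketch (directory names are illustrative); the -p option additionally creates any missing parent directories in one call:

```shell
mkdir /tmp/mkdir-demo                 # create a single directory
mkdir -p /tmp/mkdir-demo/a/b/c        # -p creates the whole chain at once
ls -d /tmp/mkdir-demo/a/b/c           # prints the path, confirming it exists
rm -rf /tmp/mkdir-demo                # clean up
```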
(3) The stat command shows a file's statistics. Usage: stat filename-or-directory. For example:
[root@h1 ~]# stat count.txt
  File: "count.txt"
  Size: 52          Blocks: 8          IO Block: 4096   regular file
Device: fd00h/64768d    Inode: 674329      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-31 19:47:19.826696827 +0800
Modify: 2014-07-31 19:46:57.271696962 +0800
Change: 2014-07-31 19:47:11.123698114 +0800
[root@h1 ~]# stat login/
  File: "login/"
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: fd00h/64768d    Inode: 663800      Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-31 19:42:37.128697676 +0800
Modify: 2014-07-09 04:08:01.337714013 +0800
Change: 2014-07-09 04:08:01.337714013 +0800
[root@h1 ~]#
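Besides the full listing, GNU stat can print just the fields you care about with -c (a GNU coreutils option; in the format string below, %s is the size in bytes, %n the file name, %a the octal permissions, and %U the owner):

```shell
printf 'hello\n' > /tmp/stat-demo.txt   # a 6-byte file
stat -c '%s %n' /tmp/stat-demo.txt      # size and name: 6 /tmp/stat-demo.txt
stat -c '%a %U' /tmp/stat-demo.txt      # octal permissions and owner
rm -f /tmp/stat-demo.txt
```

This form is handy in scripts, where parsing the full human-readable output would be fragile.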
(4) The file command reports a file's type. It distinguishes three broad categories: text files, executable files, and data files. For example:
[root@h1 ~]# file hh.txt
hh.txt: very short file (no magic)
[root@h1 ~]# file login/
login/: directory
[root@h1 ~]# file setlimit.sh
setlimit.sh: ASCII text
[root@h1 ~]#
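A reproducible sketch (the exact wording of file's output can vary between versions and platforms):

```shell
printf 'just some text\n' > /tmp/file-demo.txt
file /tmp/file-demo.txt        # typically reports "ASCII text"
file -b /tmp/file-demo.txt     # -b omits the leading file name
file /bin/ls                   # an executable, reported as ELF on Linux
rm -f /tmp/file-demo.txt
```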
(5) Viewing a file with cat. Usage: cat filename
[root@h1 ~]# cat count.txt
中国#23
美国#90
中国#100
中国#10
法国#20
[root@h1 ~]#
cat -n filename numbers every line of the output:
[root@h1 ~]# cat -n count.txt
     1  中国#23
     2  美国#90
     3  中国#100
     4  中国#10
     5  法国#20
     6
[root@h1 ~]#
In the example above, even the trailing blank line got a number. Blank lines like that usually should not count, so we should use cat -b instead:
[root@h1 ~]# cat -b count.txt
     1  中国#23
     2  美国#90
     3  中国#100
     4  中国#10
     5  法国#20
[root@h1 ~]#
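The difference can be reproduced with a throwaway file (the path is made up): -n numbers every line, while -b numbers only the non-blank ones.

```shell
printf 'first\n\nsecond\n' > /tmp/cat-demo.txt   # one blank line in the middle
cat -n /tmp/cat-demo.txt   # numbers lines 1, 2, 3 (blank line included)
cat -b /tmp/cat-demo.txt   # numbers only "first" and "second"
rm -f /tmp/cat-demo.txt
```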
(6) The more command is suited to viewing large text files, while cat works best for small ones. For example, a Hadoop log file is too big to view comfortably with cat:
[search@h1 logs]$ more hadoop-search-namenode-h1.log 2014-07-31 22:07:30,494 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = h1/192.168.46.32 STARTUP_MSG: args = [] STARTUP_MSG: version = 2.2.0 STARTUP_MSG: classpath = /home/search/hadoop/etc/hadoop/:/home/search/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/home/search/hadoop/share/hadoop/common/lib/netty-3.6.2. Final.jar:/home/search/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/search/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/search/hadoop/share/h adoop/common/lib/commons-httpclient-3.1.jar:/home/search/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/search/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.j ar:/home/search/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/search/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/home/search/hadoop/share/hadoop/common/lib/com mons-net-3.1.jar:/home/search/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/search/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/search/hadoop/share/hado op/common/lib/xmlenc-0.52.jar:/home/search/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/search/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/se arch/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/home/search/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/home/search/hadoop/share/hadoop/common/lib/paranamer-2.3 .jar:/home/search/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/home/search/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/search/hadoop/share/hadoop/c ommon/lib/hadoop-annotations-2.2.0.jar:/home/search/hadoop/share/hadoop/common/lib/xz-1.0.jar:/home/search/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/home/search/hadoo 
p/share/hadoop/common/lib/log4j-1.2.17.jar:/home/search/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/home/search/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar :/home/search/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/search/hadoop/share/hadoop/common/lib/activation-1.1.jar:/home/search/hadoop/share/hadoop/common/lib/comm ons-digester-1.8.jar:/home/search/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/search/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/search/hadoop/share/ha doop/common/lib/jackson-xc-1.8.8.jar:/home/search/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/h ome/search/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/search/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/home/search/hadoop/share/hadoop/common/lib/commons-c onfiguration-1.6.jar:/home/search/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/home/search/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/search/hadoop/ share/hadoop/common/lib/jersey-server-1.9.jar:/home/search/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/common/lib/commons-cli-1 .2.jar:/home/search/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/home/search/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/home/search/hadoop/share/hadoop/common/lib/ asm-3.2.jar:/home/search/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/search/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/search/hadoop/share/hadoop/ common/lib/avro-1.7.4.jar:/home/search/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/home/search/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/search/had oop/share/hadoop/common/lib/commons-lang-2.5.jar:/home/search/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/search/hadoop/share/hadoop/common/lib/commons-codec-1.4 
.jar:/home/search/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/home/search/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/search/hadoop/share/hadoop/common/hadoop -common-2.2.0-tests.jar:/home/search/hadoop/share/hadoop/hdfs:/home/search/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/search/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.F inal.jar:/home/search/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/search/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/search/hadoop/share/hadoop/hdfs/lib/c ommons-daemon-1.0.13.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/search/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/search/hadoop/sh are/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/search/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/search/hadoop /share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/searc h/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jersey-s erver-1.9.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/search/hadoop/share/had oop/hdfs/lib/servlet-api-2.5.jar:/home/search/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/search/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/home/search/hadoop/share/hado op/hdfs/lib/commons-lang-2.5.jar:/home/search/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/search/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/search/ha doop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/search/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/search/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/hom 
e/search/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/search/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/search/hadoop/share/hadoop/yarn/lib/commons- compress-1.4.1.jar:/home/search/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/home/search/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/home/search/hadoop/share/hadoop/yarn/lib/co mmons-io-2.1.jar:/home/search/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/search/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/search/hadoop/share/ha doop/yarn/lib/xz-1.0.jar:/home/search/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/search/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/search/hadoop/share/hado op/yarn/lib/guice-servlet-3.0.jar:/home/search/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/sear ch/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/search/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/yarn/lib/asm-3.2.j ar:/home/search/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/search/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/search/hadoop/share/hadoop/yarn/lib/ham crest-core-1.1.jar:/home/search/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/search/hado op/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-applicati ons-distributedshell-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/ search/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/had 
oop-yarn-server-nodemanager-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/search/hadoop/share/hadoop/yarn/hadoop-yarn-applications-un managed-am-launcher-2.2.0.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/searc h/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/search/hadoop/share/hadoop/mapreduce /lib/guice-3.0.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/search/hadoop/share/had oop/mapreduce/lib/paranamer-2.3.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/ search/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/guic e-servlet-3.0.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/search/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/search/had --More--(9%)
Note the percentage at the bottom: press Enter to scroll down line by line, press z to page forward a screen at a time, and press q to quit.
(7) The less command embodies the idea that less is more: it provides extra information, such as the file's total line count and the current line range, and it supports all of more's commands.
Its usage and options are as follows:
[search@h1 logs]$ man less
LESS(1)                                                          LESS(1)

NAME
       less - opposite of more

SYNOPSIS
       less -?
       less --help
       less -V
       less --version
       less [-[+]aBcCdeEfFgGiIJKLmMnNqQrRsSuUVwWX~] [-b space] [-h lines]
            [-j line] [-k keyfile] [-{oO} logfile] [-p pattern]
            [-P prompt] [-t tag] [-T tagsfile] [-x tab,...] [-y lines]
            [-[z] lines] [-# shift] [+[+]cmd] [--] [filename]...
       (See the OPTIONS section for alternate option syntax with long
       option names.)

DESCRIPTION
       Less is a program similar to more (1), but which allows backward
       movement in the file as well as forward movement.  Also, less does
       not have to read the entire input file before starting, so with
       large input files it starts up faster than text editors like vi
       (1).  Less uses termcap (or terminfo on some systems), so it can
       run on a variety of terminals.  There is even limited support for
       hardcopy terminals.  (On a hardcopy terminal, lines which should be
       printed at the top of the screen are prefixed with a caret.)

       Commands are based on both more and vi.  Commands may be preceded
       by a decimal number, called N in the descriptions below.  The
       number is used by some commands, as indicated.

COMMANDS
       In the following descriptions, ^X means control-X.  ESC stands for
       the ESCAPE key; for example ESC-v means the two character sequence
       "ESCAPE", then "v".

       h or H Help: display a summary of these commands.  If you forget
              all the other commands, remember this one.

       SPACE or ^V or f or ^F
              Scroll forward N lines, default one window (see option -z
              below).  If N is more than the screen size, only the final
              screenful is displayed.  Warning: some systems use ^V as a
              special literalization character.

       z      Like SPACE, but if N is specified, it becomes the new
              window size.

       ESC-SPACE
              Like SPACE, but scrolls a full screenful, even if it
              reaches end-of-file in the process.

       RETURN or ^N or e or ^E or j or ^J
              Scroll forward N lines, default 1.  The entire N lines are
              displayed, even if N is more than the screen size.

       d or ^D
              Scroll forward N lines, default one half of the screen
              size.  If N is specified, it becomes the new default for
              subsequent d and u commands.

       b or ^B or ESC-v
              Scroll backward N lines, default one window (see option -z
              below).  If N is more than the screen size, only the final
              screenful is displayed.

       w      Like ESC-v, but if N is specified, it becomes the new
              window size.

       y or ^Y or ^P or k or ^K
              Scroll backward N lines, default 1.  The entire N lines are
              displayed, even if N is more than the screen size.
              Warning: some systems use ^Y as
(8) The tail command is extremely common in day-to-day development. In Sanxian's experience, virtually every log file can be watched with tail, because it can monitor the file dynamically: while you are watching, anything the system appends to the log is printed to the screen in real time, which makes it a superb live-monitoring tool. Usage:
tail -f filename
The detailed options:
[search@h1 logs]$ man tail
TAIL(1)                      User Commands                      TAIL(1)

NAME
       tail - output the last part of files

SYNOPSIS
       tail [OPTION]... [FILE]...

DESCRIPTION
       Print the last 10 lines of each FILE to standard output.  With
       more than one FILE, precede each with a header giving the file
       name.  With no FILE, or when FILE is -, read standard input.

       Mandatory arguments to long options are mandatory for short
       options too.

       -c, --bytes=K
              output the last K bytes; alternatively, use -c +K to output
              bytes starting with the Kth of each file

       -f, --follow[={name|descriptor}]
              output appended data as the file grows; -f, --follow, and
              --follow=descriptor are equivalent

       -F     same as --follow=name --retry

       -n, --lines=K
              output the last K lines, instead of the last 10; or use
              -n +K to output lines starting with the Kth

       --max-unchanged-stats=N
              with --follow=name, reopen a FILE which has not changed
              size after N (default 5) iterations to see if it has been
              unlinked or renamed (this is the usual case of rotated log
              files).  With inotify, this option is rarely useful.

       --pid=PID
              with -f, terminate after process ID, PID dies

       -q, --quiet, --silent
              never output headers giving file names

       --retry
              keep trying to open a file even when it is or becomes
              inaccessible; useful when following by name, i.e., with
              --follow=name

       -s, --sleep-interval=N
              with -f, sleep for approximately N seconds (default 1.0)
              between iterations.  With inotify and --pid=P, check
              process P at least once every N seconds.

       -v, --verbose
              always output headers giving file names

       --help display this help and exit

       --version
              output version information and exit

       If the first character of K (the number of bytes or lines) is a
       '+', print beginning with the Kth item from the start of each
       file; otherwise, print the last K items in the file.  K may have a
       multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M
       1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T,
       P, E, Z, Y.
:
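A non-interactive sketch of the options above (the file name is illustrative; tail -f itself runs until interrupted, so it appears only as a comment):

```shell
seq 1 20 > /tmp/tail-demo.log    # 20 numbered lines standing in for a log

tail /tmp/tail-demo.log          # last 10 lines (the default)
tail -n 3 /tmp/tail-demo.log     # last 3 lines: 18 19 20
tail -n +19 /tmp/tail-demo.log   # the "+K" form: from line 19 to the end
tail -c 6 /tmp/tail-demo.log     # last 6 bytes

# tail -f /tmp/tail-demo.log     # follow new lines as they are appended (Ctrl-C to stop)
rm -f /tmp/tail-demo.log
```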
(9) The head command displays the beginning of a file, the first 10 lines by default. For a large file, it is very handy when you just want a rough idea of the contents.
Usage: head filename
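head mirrors tail at the other end of the file; a quick sketch (the file name is illustrative):

```shell
seq 1 20 > /tmp/head-demo.log

head /tmp/head-demo.log         # first 10 lines (the default)
head -n 3 /tmp/head-demo.log    # first 3 lines: 1 2 3
head -c 4 /tmp/head-demo.log    # first 4 bytes
rm -f /tmp/head-demo.log
```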